OpenCV 3.0 Neural Network Class: MLP (Multi-Layer Perceptron) [cv::ml::ANN_MLP Class Reference]

Source: Internet · Editor: 程序博客网 · Date: 2024/05/16

The inheritance hierarchy is:

cv::ml::ANN_MLP -> cv::ml::StatModel -> cv::Algorithm

Public Enumeration Types

enum ActivationFunctions {
    IDENTITY    = 0,
    SIGMOID_SYM = 1,
    GAUSSIAN    = 2
};

enum TrainFlags {
    UPDATE_WEIGHTS  = 1,
    NO_INPUT_SCALE  = 2,
    NO_OUTPUT_SCALE = 4
};

enum TrainingMethods {
    BACKPROP = 0,
    RPROP    = 1
};

Public types (enums) inherited from cv::ml::StatModel

enum Flags {
    UPDATE_MODEL       = 1,
    RAW_OUTPUT         = 1,
    COMPRESSED_INPUT   = 2,
    PREPROCESSED_INPUT = 4
};

Public Member Functions

Getters

virtual double getBackpropMomentumScale() const = 0;
virtual double getBackpropWeightScale() const = 0;
virtual cv::Mat getLayerSizes() const = 0;
virtual double getRpropDW0() const = 0;
virtual double getRpropDWMax() const = 0;
virtual double getRpropDWMin() const = 0;
virtual double getRpropDWMinus() const = 0;
virtual double getRpropDWPlus() const = 0;
virtual TermCriteria getTermCriteria() const = 0;  // returns the termination criteria of the training algorithm
virtual int getTrainMethod() const = 0;
virtual Mat getWeights(int layerIdx) const = 0;

Setters

virtual void setActivationFunction(int type, double param1 = 0, double param2 = 0) = 0;
virtual void setBackpropMomentumScale(double val) = 0;
virtual void setBackpropWeightScale(double val) = 0;
virtual void setLayerSizes(InputArray _layer_sizes) = 0;
virtual void setRpropDW0(double val) = 0;
virtual void setRpropDWMax(double val) = 0;
virtual void setRpropDWMin(double val) = 0;
virtual void setRpropDWMinus(double val) = 0;
virtual void setRpropDWPlus(double val) = 0;
virtual void setTermCriteria(TermCriteria val) = 0;
virtual void setTrainMethod(int method, double param1 = 0, double param2 = 0) = 0;

Public member functions inherited from cv::ml::StatModel

virtual float calcError(const Ptr<TrainData>& data, bool test, OutputArray resp) const
    Computes error on the training or the test dataset.
virtual bool empty() const
    Returns true if the Algorithm is empty (e.g. in the very beginning or after an unsuccessful read).
virtual int getVarCount() const = 0
    Returns the number of variables in training samples.
virtual bool isClassifier() const = 0
    Returns true if the model is a classifier.
virtual bool isTrained() const = 0
    Returns true if the model is trained.
virtual float predict(InputArray samples, OutputArray results = noArray(), int flags = 0) const = 0
    Predicts response(s) for the provided sample(s).
virtual bool train(const Ptr<TrainData>& trainData, int flags = 0)
    Trains the statistical model.
virtual bool train(InputArray samples, int layout, InputArray responses)
    Trains the statistical model.

Public member functions inherited from cv::Algorithm

Algorithm()
virtual ~Algorithm()
virtual void clear()
    Clears the algorithm state.
virtual String getDefaultName() const
virtual void read(const FileNode& fn)
    Reads algorithm parameters from a file storage.
virtual void save(const String& filename) const
virtual void write(FileStorage& fs) const
    Stores algorithm parameters in a file storage.

Static Public Member Functions

static Ptr<ANN_MLP> create()
    Creates an empty model.

Static public member functions inherited from cv::ml::StatModel

template<typename _Tp>
static Ptr<_Tp> train(const Ptr<TrainData>& data, int flags = 0)
    Creates and trains a model with default parameters.

Static public member functions inherited from cv::Algorithm

template<typename _Tp>
static Ptr<_Tp> load(const String& filename, const String& objname = String())
    Loads an algorithm from a file.
template<typename _Tp>
static Ptr<_Tp> loadFromString(const String& strModel, const String& objname = String())
    Loads an algorithm from a String.
template<typename _Tp>
static Ptr<_Tp> read(const FileNode& fn)
    Reads an algorithm from a file node.

Detailed Description

Artificial Neural Networks - Multi-Layer Perceptrons. Unlike many other models in ML, which are constructed and trained in one step, for this model these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method ANN_MLP::create; all the weights are set to zeros. Then, the network is trained using a set of input and output vectors. The training procedure can be repeated more than once; that is, the weights can be adjusted based on new training data. Additional flags for StatModel::train are available: ANN_MLP::TrainFlags.

See also: Neural Networks

Member Enumeration Documentation

enum cv::ml::ANN_MLP::ActivationFunctions

possible activation functions

IDENTITY

Identity function: f(x)=x

SIGMOID_SYM 

Symmetrical sigmoid: f(x) = β·(1 − e^(−αx)) / (1 + e^(−αx))

Note
If you are using the default sigmoid activation function with the default parameter values fparam1=0 and fparam2=0 then the function used is y = 1.7159*tanh(2/3 * x), so the output will range from [-1.7159, 1.7159], instead of [0,1].
GAUSSIAN 

Gaussian function: f(x) = β·e^(−αx²)
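The note above about the default sigmoid can be checked numerically: with β = 1.7159, the β·(1 − e^(−αx))/(1 + e^(−αx)) form equals β·tanh(αx/2), so α = 4/3 reproduces 1.7159·tanh(2x/3) (the α value is inferred from this tanh identity, not stated explicitly in the reference). A plain-C++ sketch:

```cpp
#include <cmath>

// Default symmetric sigmoid used by ANN_MLP when param1 == param2 == 0:
// f(x) = 1.7159 * tanh((2/3) * x), with outputs in (-1.7159, 1.7159).
double defaultSigmoid(double x)
{
    return 1.7159 * std::tanh((2.0 / 3.0) * x);
}

// The same value written in the beta/alpha form from the reference,
// beta * (1 - exp(-alpha*x)) / (1 + exp(-alpha*x)),
// with beta = 1.7159 and alpha = 4/3 (assumed, via the tanh identity).
double betaAlphaForm(double x)
{
    const double beta = 1.7159, alpha = 4.0 / 3.0;
    return beta * (1.0 - std::exp(-alpha * x)) / (1.0 + std::exp(-alpha * x));
}
```

Evaluating either form at large |x| shows the output saturating near ±1.7159 rather than [0, 1], as the note states.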


enum cv::ml::ANN_MLP::TrainFlags

Train options

UPDATE_WEIGHTS

Update the network weights, rather than compute them from scratch. In the latter case the weights are initialized using the Nguyen-Widrow algorithm.

NO_INPUT_SCALE 

Do not normalize the input vectors. If this flag is not set, the training algorithm normalizes each input feature independently, shifting its mean value to 0 and making the standard deviation equal to 1. If the network is assumed to be updated frequently, the new training data could be much different from original one. In this case, you should take care of proper normalization.

NO_OUTPUT_SCALE 

Do not normalize the output vectors. If the flag is not set, the training algorithm normalizes each output feature independently, by transforming it to the certain range depending on the used activation function.


enum cv::ml::ANN_MLP::TrainingMethods

Available training methods

BACKPROP

The back-propagation algorithm.

RPROP 

The RPROP algorithm. See [101] for details.


Member Function Documentation

I'm worn out and can't keep translating. I'll pick this back up when I have a long stretch of free time.

