How to Save Models in Keras

Source: Internet · Editor: 程序博客网 · Date: 2024/04/28 10:07

Use model.save(filepath) to save a Keras model and its weights in a single HDF5 file, which will contain:

- the model's architecture, so the model can be reconstructed
- the model's weights
- the training configuration (loss function, optimizer, etc.)
- the optimizer's state, so training can resume from where it was interrupted

Use keras.models.load_model(filepath) to re-instantiate your model. If the file stores the training configuration, this function also compiles the model for you.
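A minimal save/load round trip might look like the following sketch (the tiny model and the file name are illustrative, not from the original article):

```python
import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense

# build and compile a small model
model = Sequential()
model.add(Dense(4, input_dim=3, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')

# one HDF5 file holds architecture, weights, training config, and optimizer state
model.save('my_model.h5')

# because the training config was saved, the restored model is already compiled
restored = load_model('my_model.h5')

x = np.random.rand(2, 3).astype('float32')
# the restored model reproduces the original model's predictions
assert np.allclose(model.predict(x, verbose=0), restored.predict(x, verbose=0))
```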


Saving only the model architecture, without its weights or configuration


# save as JSON
json_string = model.to_json()
open('my_model_architecture.json', 'w').write(json_string)

from keras.models import model_from_json
model = model_from_json(open('my_model_architecture.json').read())

# save as YAML
yaml_string = model.to_yaml()
open('my_model_architecture.yaml', 'w').write(yaml_string)

from keras.models import model_from_yaml
model = model_from_yaml(open('my_model_architecture.yaml').read())

These operations serialize the model to JSON or YAML text, which is also human-readable; if needed, you can even open these files and edit them by hand. You can likewise rebuild a model directly from a saved string:

# model reconstruction from JSON
from keras.models import model_from_json
model = model_from_json(json_string)

# model reconstruction from YAML
from keras.models import model_from_yaml
model = model_from_yaml(yaml_string)

Saving only the model's weights


model.save_weights('my_model_weights.h5')

# to restore, first initialize an identical model in code, then:
model.load_weights('my_model_weights.h5')

# to load weights into a different architecture that shares some layers
# (e.g. for fine-tuning or transfer learning), load by layer name:
model.load_weights('my_model_weights.h5', by_name=True)

# saving and restoring the architecture and weights together:
from keras.models import model_from_json

json_string = model.to_json()
open('my_model_architecture.json', 'w').write(json_string)
model.save_weights('my_model_weights.h5')

model = model_from_json(open('my_model_architecture.json').read())
model.load_weights('my_model_weights.h5')

Saving the architecture, trained weights, and optimizer state during training, and reloading them


Keras callbacks are invoked at the appropriate points of the training loop; the ModelCheckpoint callback uses this to save the model and its training state in real time as training proceeds.

keras.callbacks.ModelCheckpoint(
    filepath,
    monitor='val_loss',
    verbose=0,
    save_best_only=False,
    save_weights_only=False,
    mode='auto',
    period=1)

1. filepath: string, path to save the model to
2. monitor: quantity to monitor
3. verbose: verbosity mode, 0 or 1
4. save_best_only: if True, only the model that performs best on the validation set is saved
5. mode: one of 'auto', 'min', 'max'. With save_best_only=True, this decides how the best model is judged: when monitoring val_acc the mode should be max, when monitoring val_loss it should be min. In auto mode, the criterion is inferred from the name of the monitored quantity.
6. save_weights_only: if True, only the model's weights are saved; otherwise the whole model (architecture, configuration, etc.) is saved
7. period: number of epochs between checkpoints
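As a small sketch of these parameters in use (the toy data and file name are made up for illustration): keep only the single best set of weights, judged by training loss.

```python
import os
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint

# toy data, for illustration only
X = np.random.rand(32, 3).astype('float32')
Y = (X.sum(axis=1) > 1.5).astype('float32')

model = Sequential()
model.add(Dense(4, input_dim=3, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')

# save weights only, and only when the monitored loss improves
checkpoint = ModelCheckpoint('best.weights.h5', monitor='loss', mode='min',
                             save_best_only=True, save_weights_only=True)
model.fit(X, Y, epochs=3, batch_size=8, callbacks=[checkpoint], verbose=0)

assert os.path.exists('best.weights.h5')
```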

Example


"""假如原模型为:    model = Sequential()    model.add(Dense(2, input_dim=3, name="dense_1"))    model.add(Dense(3, name="dense_2"))    ...    model.save_weights(fname)"""# new modelmodel = Sequential()model.add(Dense(2, input_dim=3, name="dense_1"))  # will be loadedmodel.add(Dense(10, name="new_dense"))  # will not be loaded# load weights from first model; will only affect the first layer, dense_1.model.load_weights(fname, by_name=True)

How to Check-Point Deep Learning Models in Keras


Checkpoint Neural Network Model Improvements

# Checkpoint the weights when validation accuracy improves
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
import matplotlib.pyplot as plt
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# checkpoint
filepath = "weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
# Fit the model
model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10, callbacks=callbacks_list, verbose=0)

Running the example produces the following output (truncated for brevity):

...
Epoch 00134: val_acc did not improve
Epoch 00135: val_acc did not improve
Epoch 00136: val_acc did not improve
Epoch 00137: val_acc did not improve
Epoch 00138: val_acc did not improve
Epoch 00139: val_acc did not improve
Epoch 00140: val_acc improved from 0.83465 to 0.83858, saving model to weights-improvement-140-0.84.hdf5
Epoch 00141: val_acc did not improve
Epoch 00142: val_acc did not improve
Epoch 00143: val_acc did not improve
Epoch 00144: val_acc did not improve
Epoch 00145: val_acc did not improve
Epoch 00146: val_acc improved from 0.83858 to 0.84252, saving model to weights-improvement-146-0.84.hdf5
Epoch 00147: val_acc did not improve
Epoch 00148: val_acc improved from 0.84252 to 0.84252, saving model to weights-improvement-148-0.84.hdf5
Epoch 00149: val_acc did not improve

You will see a number of files in your working directory containing the network weights in HDF5 format. For example:

...
weights-improvement-53-0.76.hdf5
weights-improvement-71-0.76.hdf5
weights-improvement-77-0.78.hdf5
weights-improvement-99-0.78.hdf5

Checkpoint Best Neural Network Model Only

# Checkpoint the weights for best model on validation accuracy
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
import matplotlib.pyplot as plt
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# checkpoint
filepath = "weights.best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
# Fit the model
model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10, callbacks=callbacks_list, verbose=0)

Running this example provides the following output (truncated for brevity):

...
Epoch 00139: val_acc improved from 0.79134 to 0.79134, saving model to weights.best.hdf5
Epoch 00140: val_acc did not improve
Epoch 00141: val_acc did not improve
Epoch 00142: val_acc did not improve
Epoch 00143: val_acc did not improve
Epoch 00144: val_acc improved from 0.79134 to 0.79528, saving model to weights.best.hdf5
Epoch 00145: val_acc improved from 0.79528 to 0.79528, saving model to weights.best.hdf5
Epoch 00146: val_acc did not improve
Epoch 00147: val_acc did not improve
Epoch 00148: val_acc did not improve
Epoch 00149: val_acc did not improve

You should see the weight file in your local directory.

weights.best.hdf5

Loading a Check-Pointed Neural Network Model

# How to load and use weights from a checkpoint
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
import matplotlib.pyplot as plt
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# load weights
model.load_weights("weights.best.hdf5")
# Compile model (required to make predictions)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print("Created model and loaded weights from file")
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# estimate accuracy on whole dataset using loaded weights
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

Running the example produces the following output:

Created model and loaded weights from file
acc: 77.73%
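Because a full model.save checkpoint also captures the optimizer state, an interrupted run can pick up where it left off. A sketch under assumed toy data (the dataset and file name are illustrative, not part of the original example):

```python
import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense

# toy data standing in for a real dataset
X = np.random.rand(32, 3).astype('float32')
Y = (X.sum(axis=1) > 1.5).astype('float32')

model = Sequential()
model.add(Dense(4, input_dim=3, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(X, Y, epochs=2, batch_size=8, verbose=0)

# checkpoint the full model: architecture + weights + optimizer state
model.save('checkpoint.h5')

# later, possibly in a fresh process: reload and continue training
model = load_model('checkpoint.h5')
model.fit(X, Y, epochs=2, batch_size=8, verbose=0)
```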

References


How to Check-Point Deep Learning Models in Keras

http://blog.csdn.net/u010159842/article/details/54602217

Building a reading-comprehension bot with Keras
Keras documentation (Chinese)
How to save a Keras model
Artificial neural networks (3): saving and using Keras models
