Machine Learning Experiment (11): Indoor Localization from WiFi Fingerprints with Autoencoders and a Neural Network, Part 2 (Keras version)



Notice: All rights reserved. To reproduce this post, please contact the author and credit the source: http://blog.csdn.net/u013719780?viewmode=contents


The previous experiment, Machine Learning Experiment (10): Indoor Localization from WiFi Fingerprints with Autoencoders and a Neural Network, Part 1 (TensorFlow version), was implemented in TensorFlow. This time the same model is implemented in Keras. The full code follows:



```python
import pandas as pd
import numpy as np
from sklearn.preprocessing import scale

# The first 520 columns are WAP signal strengths; the label is the
# concatenation of BUILDINGID and FLOOR.
# (.ix is the old pandas indexer; this post targets Python 2 / pandas 0.x.)
dataset = pd.read_csv("trainingData.csv", header=0)
features = scale(np.asarray(dataset.ix[:, 0:520]))
labels = np.asarray(dataset["BUILDINGID"].map(str) + dataset["FLOOR"].map(str))
labels = np.asarray(pd.get_dummies(labels))

# Random ~70/30 train/validation split via a boolean mask
train_val_split = np.random.rand(len(features)) < 0.70
train_x = features[train_val_split]
train_y = labels[train_val_split]
val_x = features[~train_val_split]
val_y = labels[~train_val_split]

test_dataset = pd.read_csv("validationData.csv", header=0)
test_features = scale(np.asarray(test_dataset.ix[:, 0:520]))
test_labels = np.asarray(test_dataset["BUILDINGID"].map(str) + test_dataset["FLOOR"].map(str))
test_labels = np.asarray(pd.get_dummies(test_labels))
```
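The label construction and mask-based split above can be illustrated on a tiny synthetic frame (the column names and values here are toy stand-ins, not the real UJIIndoorLoc data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# toy stand-in for trainingData.csv: 2 WAP columns plus building/floor ids
df = pd.DataFrame({
    "WAP001": rng.integers(-100, 0, size=6),
    "WAP002": rng.integers(-100, 0, size=6),
    "BUILDINGID": [0, 0, 1, 1, 2, 2],
    "FLOOR": [0, 1, 0, 1, 0, 1],
})

# combined "building + floor" string label, then one-hot encode it
labels = df["BUILDINGID"].map(str) + df["FLOOR"].map(str)
onehot = np.asarray(pd.get_dummies(labels))   # one column per distinct combo

# random ~70/30 split with a boolean mask, as in the post
mask = rng.random(len(df)) < 0.70
features = df.iloc[:, 0:2].to_numpy()
train_x, val_x = features[mask], features[~mask]
```

Each distinct building/floor combination becomes one one-hot column, which is where the 13 output classes in the post come from.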
```
/Applications/anaconda/lib/python2.7/site-packages/sklearn/utils/validation.py:420: DataConversionWarning: Data with input dtype int64 was converted to float64 by the scale function.
  warnings.warn(msg, DataConversionWarning)
/Applications/anaconda/lib/python2.7/site-packages/sklearn/utils/validation.py:420: DataConversionWarning: Data with input dtype int64 was converted to float64 by the scale function.
  warnings.warn(msg, DataConversionWarning)
```
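The warning is harmless: the raw WAP columns are integers, and `scale()` standardizes each column to zero mean and unit variance, which requires converting them to float64. The same transform written by hand (toy RSSI values, not the real data):

```python
import numpy as np

rssi = np.array([[-60, -80],
                 [-70, -90],
                 [-50, -100]])   # int64, like the raw WAP columns

# column-wise standardization, equivalent to sklearn's scale():
# subtract each column's mean, divide by its standard deviation
scaled = (rssi - rssi.mean(axis=0)) / rssi.std(axis=0)
```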
```python
from keras.models import Sequential
from keras.layers import Dense
import time

nb_epochs = 20
batch_size = 10
input_size = 520
num_classes = 13

def encoder():
    model = Sequential()
    model.add(Dense(256, input_dim=input_size, activation='tanh', bias=True))
    model.add(Dense(128, activation='tanh', bias=True))
    model.add(Dense(64, activation='tanh', bias=True))
    return model

def decoder(e):
    e.add(Dense(128, input_dim=64, activation='tanh', bias=True))
    e.add(Dense(256, activation='tanh', bias=True))
    e.add(Dense(input_size, activation='tanh', bias=True))
    e.compile(optimizer='adam', loss='mse')
    return e

# Train the autoencoder to reconstruct its own input
e = encoder()
d = decoder(e)
d.fit(train_x, train_x, nb_epoch=nb_epochs, batch_size=batch_size, verbose=2)
time.sleep(0.1)

def classifier(d):
    num_to_remove = 3
    for i in range(num_to_remove):
        d.pop()   # drop the three decoder layers, keeping the trained encoder
    d.add(Dense(128, input_dim=64, activation='tanh', bias=True))
    d.add(Dense(128, activation='tanh', bias=True))
    d.add(Dense(num_classes, activation='softmax', bias=True))
    d.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return d

c = classifier(d)
c.fit(train_x, train_y, validation_data=(val_x, val_y), nb_epoch=nb_epochs, batch_size=batch_size, verbose=2)
time.sleep(0.1)

loss, acc = c.evaluate(test_features, test_labels, verbose=0)
time.sleep(0.1)
print loss, acc
```

(Note: `nb_epoch` and `bias=True` are the Keras 1 API used throughout this post; Keras 2 renames them to `epochs` and `use_bias`.)
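The encoder-reuse trick above (train an autoencoder, pop the decoder layers, then attach a fresh classifier head on the bottleneck) can be sketched framework-free in NumPy. The weights and sizes here are toy values chosen for illustration, not trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights):
    # stacked tanh layers, like the Dense(..., activation='tanh') blocks
    for W in weights:
        x = np.tanh(x @ W)
    return x

# toy "trained" weights: encoder 8 -> 6 -> 4, decoder 4 -> 6 -> 8
enc_W = [rng.normal(size=(8, 6)), rng.normal(size=(6, 4))]
dec_W = [rng.normal(size=(4, 6)), rng.normal(size=(6, 8))]

x = rng.normal(size=(5, 8))
recon = forward(x, enc_W + dec_W)   # full autoencoder: reconstructs the input

# "pop" the decoder: keep only the encoder as a feature extractor
codes = forward(x, enc_W)           # bottleneck features, shape (5, 4)

# attach a fresh classifier head on the 4-dim codes (3 toy classes)
head_W = rng.normal(size=(4, 3))
logits = codes @ head_W
```

This is exactly what `d.pop()` does in the Keras code: the classifier inherits the encoder's already-trained weights and only the new head starts from scratch.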
Using Theano backend.
```
Epoch 1/20
6s - loss: 0.7049
Epoch 2/20
6s - loss: 0.6808
Epoch 3/20
5s - loss: 0.6752
Epoch 4/20
5s - loss: 0.6724
Epoch 5/20
5s - loss: 0.6703
Epoch 6/20
5s - loss: 0.6685
Epoch 7/20
5s - loss: 0.6670
Epoch 8/20
5s - loss: 0.6656
Epoch 9/20
5s - loss: 0.6641
Epoch 10/20
5s - loss: 0.6630
Epoch 11/20
5s - loss: 0.6619
Epoch 12/20
5s - loss: 0.6610
Epoch 13/20
5s - loss: 0.6599
Epoch 14/20
5s - loss: 0.6593
Epoch 15/20
5s - loss: 0.6584
Epoch 16/20
5s - loss: 0.6578
Epoch 17/20
5s - loss: 0.6571
Epoch 18/20
5s - loss: 0.6565
Epoch 19/20
5s - loss: 0.6560
Epoch 20/20
5s - loss: 0.6555
Train on 13945 samples, validate on 5992 samples
Epoch 1/20
3s - loss: 0.3205 - acc: 0.8881 - val_loss: 0.1862 - val_acc: 0.9356
Epoch 2/20
3s - loss: 0.1333 - acc: 0.9558 - val_loss: 0.1674 - val_acc: 0.9513
Epoch 3/20
4s - loss: 0.1072 - acc: 0.9645 - val_loss: 0.1554 - val_acc: 0.9449
Epoch 4/20
3s - loss: 0.0860 - acc: 0.9717 - val_loss: 0.1836 - val_acc: 0.9383
Epoch 5/20
3s - loss: 0.0752 - acc: 0.9750 - val_loss: 0.1699 - val_acc: 0.9534
Epoch 6/20
3s - loss: 0.0691 - acc: 0.9770 - val_loss: 0.1610 - val_acc: 0.9554
Epoch 7/20
3s - loss: 0.0637 - acc: 0.9796 - val_loss: 0.1886 - val_acc: 0.9489
Epoch 8/20
4s - loss: 0.0601 - acc: 0.9814 - val_loss: 0.1604 - val_acc: 0.9569
Epoch 9/20
4s - loss: 0.0589 - acc: 0.9812 - val_loss: 0.1312 - val_acc: 0.9606
Epoch 10/20
4s - loss: 0.0496 - acc: 0.9826 - val_loss: 0.1882 - val_acc: 0.9488
Epoch 11/20
4s - loss: 0.0489 - acc: 0.9823 - val_loss: 0.1662 - val_acc: 0.9541
Epoch 12/20
3s - loss: 0.0517 - acc: 0.9833 - val_loss: 0.1311 - val_acc: 0.9613
Epoch 13/20
3s - loss: 0.0470 - acc: 0.9841 - val_loss: 0.2039 - val_acc: 0.9474
Epoch 14/20
4s - loss: 0.0495 - acc: 0.9835 - val_loss: 0.1874 - val_acc: 0.9543
Epoch 15/20
3s - loss: 0.0401 - acc: 0.9872 - val_loss: 0.1639 - val_acc: 0.9503
Epoch 16/20
4s - loss: 0.0466 - acc: 0.9857 - val_loss: 0.1649 - val_acc: 0.9593
Epoch 17/20
3s - loss: 0.0423 - acc: 0.9859 - val_loss: 0.1639 - val_acc: 0.9574
Epoch 18/20
3s - loss: 0.0369 - acc: 0.9875 - val_loss: 0.1536 - val_acc: 0.9608
Epoch 19/20
3s - loss: 0.0416 - acc: 0.9863 - val_loss: 0.1624 - val_acc: 0.9619
Epoch 20/20
3s - loss: 0.0362 - acc: 0.9882 - val_loss: 0.1545 - val_acc: 0.9609
1.25824190562 0.747974797694
```
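The last line is the printed test loss and accuracy (about 1.258 and 0.748) from `c.evaluate`, noticeably worse than the ~0.96 validation accuracy because the test CSV comes from a separate collection campaign. For reference, these two metrics are computed from the softmax outputs and one-hot labels as below (toy probabilities, not the model's real predictions):

```python
import numpy as np

# softmax outputs for 3 toy samples over 3 classes, plus one-hot targets
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.4, 0.3]])
onehot = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 0]])

# accuracy: argmax of the prediction vs argmax of the one-hot label
acc = np.mean(probs.argmax(axis=1) == onehot.argmax(axis=1))

# categorical crossentropy: -mean(log probability assigned to the true class)
loss = -np.mean(np.log((probs * onehot).sum(axis=1)))
```

Here the third sample is misclassified (argmax picks class 1 while the label is class 0), so accuracy is 2/3 even though that sample still contributes only its true-class probability (0.3) to the loss.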

Note: calling `fit` initially raised `ValueError: I/O operation on closed file`. This appears to be an I/O bug in IPython Notebook, and the fix is simple: set `verbose=0` or `verbose=2` (the default is 1). `verbose` controls logging: 0 prints nothing to standard output, 1 prints a progress bar, and 2 prints one line per epoch.

