python-recsys 4 Algorithms


pyrecsys provides, out of the box, some basic matrix-factorization-based algorithms.

4.0 SVD

To factorize the input data (a matrix), pyrecsys uses the SVD algorithm. Once the matrix has been reduced to a lower-dimensional space, pyrecsys can provide predictions, recommendations, and similarity between 'elements' (users or items, depending on how you loaded the data).
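As a compact sketch of that workflow (each call is detailed in the subsections below; the file path and the ITEMID/USERID placeholders are just examples):

from recsys.algorithm.factorize import SVD

svd = SVD()
svd.load_data(filename='./data/movielens/ratings.dat', sep='::',
              format={'col':0, 'row':1, 'value':2, 'ids': int})
svd.compute(k=100, min_values=10, mean_center=True, post_normalize=True)

ITEMID, USERID = 1, 1  # placeholder ids, assumed to exist in the dataset
svd.predict(ITEMID, USERID, MIN_RATING=0.0, MAX_RATING=5.0)   # prediction
svd.recommend(USERID, n=10, only_unknowns=True, is_row=False) # recommendations
svd.similar(ITEMID)                                           # similar 'elements'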

4.0.0 Loading data

How do you load a dataset (Movielens 10M, in this example)?

from recsys.algorithm.factorize import SVD

filename = './data/movielens/ratings.dat'
svd = SVD()
svd.load_data(filename=filename, sep='::', format={'col':0, 'row':1, 'value':2, 'ids': int})
# About the format parameter:
#   'row': 1 -> Rows in matrix come from second column in ratings.dat file
#   'col': 0 -> Cols in matrix come from first column in ratings.dat file
#   'value': 2 -> Values (Mij) in matrix come from third column in ratings.dat file
#   'ids': int -> Ids (row and col ids) are integers (not strings)
Splitting the dataset (train and test):

from recsys.datamodel.data import Data
from recsys.algorithm.factorize import SVD

filename = './data/movielens/ratings.dat'
data = Data()
format = {'col':0, 'row':1, 'value':2, 'ids': int}
data.load(filename, sep='::', format=format)
train, test = data.split_train_test(percent=80) # 80% train, 20% test

svd = SVD()
svd.set_data(train)
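Once the model has been computed on the train split, the test split can be evaluated with svd.predict(). A minimal sketch, assuming that test.get() yields (rating, item_id, user_id) tuples in the order implied by the format dict above, and that RMSE/MAE helpers with add()/compute() methods live in recsys.evaluation.prediction (both assumptions of this sketch):

from recsys.evaluation.prediction import RMSE, MAE

svd.compute(k=100, min_values=10, mean_center=True, post_normalize=True)

rmse = RMSE()
mae = MAE()
for rating, item_id, user_id in test.get():   # assumed tuple order: (value, row, col)
    try:
        pred_rating = svd.predict(item_id, user_id)
        rmse.add(rating, pred_rating)
        mae.add(rating, pred_rating)
    except KeyError:
        continue   # item/user not present in the train split (or removed by min_values)

print('RMSE=%s' % rmse.compute())
print('MAE=%s' % mae.compute())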

Loading a file, running the external SVDLIBC project, and creating an SVD instance from it:

from recsys.utils.svdlibc import SVDLIBC

svdlibc = SVDLIBC('./data/movielens/ratings.dat')
svdlibc.to_sparse_matrix(sep='::', format={'col':0, 'row':1, 'value':2, 'ids': int}) # Convert to sparse matrix format [http://tedlab.mit.edu/~dr/SVDLIBC/SVD_F_ST.html]
svdlibc.compute(k=100)
svd = svdlibc.export()

4.0.1 Computing

>>> K=100
>>> svd.compute(k=K, min_values=10, pre_normalize=None, mean_center=True, post_normalize=True, savefile=None)

Parameters:

min_values: remove from the input matrix any row or column that has fewer than min_values non-zero values

pre_normalize: how to normalize the input matrix before factorizing it; possible values are tfidf, rows, cols and all

mean_center: mean-center the input matrix (i.e. subtract the mean) before factorizing it; a toy illustration follows after this list

post_normalize: normalize every row of the factorization U·Σ to be a unit vector, so that row similarities (cosine) fall in [-1.0, 1.0]

savefile: output file in which to save the computed model, so it can be reloaded later instead of recomputed
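To give an idea of what mean_center does, here is a toy numpy illustration of mean subtraction on a small ratings matrix (a sketch of the concept only, not of pyrecsys internals):

import numpy as np

# Toy items-by-users matrix; 0 marks an unrated cell.
M = np.array([[5., 4., 0.],
              [3., 0., 1.],
              [4., 5., 2.]])

rated = M != 0
mean = M[rated].mean()                       # mean of the observed ratings
M_centered = np.where(rated, M - mean, 0.0)  # factorize deviations from the mean
print(M_centered)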

4.0.2 Predictions

To predict a rating, \hat{r}_{ui}, the SVD class reconstructs the original matrix: M^\prime = U \Sigma_k V^T

>>> svd.predict(ITEMID, USERID, MIN_RATING=0.0, MAX_RATING=5.0)
which is equivalent to:

\hat{r}_{ui} = M^\prime_{ij}
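A worked toy example of that reconstruction with numpy (outside pyrecsys), showing that the prediction is simply the (i, j) entry of the truncated product U Σ_k V^T:

import numpy as np

M = np.array([[5., 4., 1.],
              [4., 5., 1.],
              [1., 1., 5.]])

U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2                                              # keep the k largest singular values
M_prime = U[:, :k].dot(np.diag(s[:k])).dot(Vt[:k, :])

i, j = 0, 2                                        # item (row) i, user (column) j
r_hat = M_prime[i, j]                              # \hat{r}_{ui} = M'_{ij}
print(r_hat)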

Here are the RMSE and MAE values for the Movielens 10M dataset (train: 8,000,043 ratings; test: 2,000,011 ratings), using 5-fold cross validation and different values of k (the number of factors: 10, 20, 50, 100) for SVD:

K       10        20        50        100
RMSE    0.87224   0.86774   0.86557   0.86628
MAE     0.67114   0.66719   0.66484   0.66513

4.0.3 Recommendations

Recommendations are also derived from M^\prime = U \Sigma_k V^T.

>>> svd.recommend(USERID, n=10, only_unknowns=True, is_row=False)
returns the highest values of M^\prime_{i \cdot} \forall_j{M_{ij}=\emptyset} (i.e. only for elements the user has not rated yet), whilst:

>>> svd.recommend(USERID, n=10, only_unknowns=False, is_row=False)
returns the best items for the user.
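A small usage sketch, assuming each returned entry is an (element id, predicted value) pair sorted best first (an assumption of this sketch, not stated above):

# USERID as in the previous examples; is_row=False means USERID is a column (a user).
for item_id, predicted_value in svd.recommend(USERID, n=10, only_unknowns=True, is_row=False):
    print('%s -> %.3f' % (item_id, predicted_value))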

4.1 Neighbourhood SVD

The classic neighbourhood algorithm uses the ratings of similar users (or items) to predict the values of the matrix M.

import sys

from recsys.algorithm.factorize import SVDNeighbourhood

svd = SVDNeighbourhood()
svd.load_data(filename=sys.argv[1], sep='::', format={'col':0, 'row':1, 'value':2, 'ids': int})
K=100
svd.compute(k=K, min_values=5, pre_normalize=None, mean_center=True, post_normalize=True)
4.1.0 Predictions

The only difference from plain SVD is how the predicted value \hat{r}_{ui} is computed:

>>> svd.predict(ITEMID, USERID, weighted=True, MIN_VALUE=0.0, MAX_VALUE=5.0)
To compute the prediction, the following equation is used (u = USERID, i = ITEMID):

\hat{r}_{ui} = \frac{\sum_{j \in S^{k}(i;u)} s_{ij} r_{uj}}{\sum_{j \in S^{k}(i;u)} s_{ij}}

S^k(i; u) denotes the set of the k items most similar to i that have been rated by user u.
To find the k items most similar to i, the svd.similar(i) method is used; from those, only the items that user u has already rated are kept.

s_{ij} is the similarity between i and j, computed using svd.similarity(i, j).
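The same equation can be written out by hand on top of those calls. A sketch, assuming svd.similar(i, n) returns (element id, similarity) pairs and that the ratings user u has already given are available as a plain dict user_ratings (both hypothetical conveniences for this illustration):

def predict_neighbourhood(svd, i, u, user_ratings, k=10):
    # S^k(i; u): the k items most similar to i that user u has rated.
    # Assumption: svd.similar(i, n) returns (item_id, similarity) pairs.
    candidates = [(j, s_ij) for j, s_ij in svd.similar(i, 10 * k)
                  if j in user_ratings and j != i]
    neighbours = candidates[:k]

    numerator = sum(s_ij * user_ratings[j] for j, s_ij in neighbours)
    denominator = sum(s_ij for j, s_ij in neighbours)
    return numerator / denominator if denominator else None

# user_ratings is a hypothetical {item_id: rating} dict for user u,
# e.g. built while reading ratings.dat.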

4.1.2 Comparison

For those who love RMSE, MAE and the like, here are some numbers comparing both SVD approaches. The evaluation uses the Movielens 1M ratings dataset, splitting train and test at roughly 80%-20%.

Note

Computing svd with k=100, min_values=5, pre_normalize=None, mean_center=True, post_normalize=True

Warning

Because of min_values=5, some rows (movies) or columns (users) of the input matrix are removed: movies rated by fewer than 5 users, and users who rated fewer than 5 movies.
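A sketch of how such a comparison could be reproduced, reusing only calls shown earlier (split_train_test, set_data, compute, predict); the ratings path is a placeholder for the Movielens 1M file, and the (rating, item, user) tuple order from test.get() is again an assumption:

import math
from recsys.datamodel.data import Data
from recsys.algorithm.factorize import SVD, SVDNeighbourhood

data = Data()
data.load('./data/movielens-1m/ratings.dat', sep='::',
          format={'col':0, 'row':1, 'value':2, 'ids': int})
train, test = data.split_train_test(percent=80)

for label, model in (('SVD', SVD()), ('SVD Neigh.', SVDNeighbourhood())):
    model.set_data(train)
    model.compute(k=100, min_values=5, pre_normalize=None,
                  mean_center=True, post_normalize=True)
    abs_errors, sq_errors = [], []
    for rating, item_id, user_id in test.get():   # assumed tuple order
        try:
            pred = model.predict(item_id, user_id)
        except KeyError:
            continue   # removed by min_values or unseen in train
        abs_errors.append(abs(rating - pred))
        sq_errors.append((rating - pred) ** 2)
    print('%s  RMSE=%.5f  MAE=%.5f' % (label,
          math.sqrt(sum(sq_errors) / len(sq_errors)),
          sum(abs_errors) / len(abs_errors)))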

4.1.3 Results

Movielens 1M dataset (number of ratings in the Test dataset: 209,908):

        SVD       SVD Neigh.
RMSE    0.91811   0.875496
MAE     0.71703   0.684173





