Deep Learning in Customer Churn Prediction (Part 3) (Preliminary Feature Construction in Practice and Basic Model Experiments)


For the reasoning behind the feature construction, see Deep Learning in Customer Churn Prediction (Part 1) (Improved Balanced Random Forests and Feature Construction).

This post mainly follows that paper's feature-processing approach for mobile telecom data, which proposes binning call times into segments and accumulating statistics per segment.

The basic data preparation is to separate the records of different customers and store each customer's records as an np.ndarray. Assume the first column holds the time of each customer action (as pd.Timestamp) and the remaining columns hold categorical codes for the action together with the amounts of money moved.

The time gaps between consecutive customer actions can then be binned, and the count accumulated in each bin used as a feature, supplemented by counts of the categorical variables and the means of the monetary amounts. This flattens each customer's (two-dimensional) data into a single vector.


To decide whether a given customer has churned (i.e. to attach labels), clip the existing data along the time axis at some earlier cutoff point; customers who still have actions within a period after that cutoff are treated as not having churned. The time difference between that cutoff and the customer's most recent action before it is also used as a feature of the customer. (When picking this variable as a feature, I compared the model results with and without it: adding it improves accuracy to some extent, but it is not decisive; the gain is below 0.1.)


The Python example script below takes as input the dictionary saved in "user_dict.pkl", which maps customer identifiers to their information ndarrays, and outputs the X and y produced by the flattening described above.

from __future__ import division
import pandas as pd
import numpy as np
import pickle
from sklearn.preprocessing import StandardScaler
import os

# Preparing data and preprocess
user_dict = None
with open("user_dict.pkl", "rb") as f:
    user_dict = pickle.load(f)

one_category_set = set([])
second_category_set = set([])
for k, v in user_dict.items():
    one_category_set = one_category_set.union(set(v[:, 2]))
    second_category_set = second_category_set.union(set(v[:, -1]))
one_category_list = list(one_category_set)
second_category_list = list(second_category_set)

time_gap_dict = dict()

def generate_dict(input_dict, end_time):
    global time_gap_dict
    for k, v in input_dict.items():
        time_gap_dict[k] = [element.total_seconds() for element in v[1:, 0] - v[:-1, 0]]

def linspace(start, end, num, series, desp=False):
    List = np.linspace(start, end, num)
    List_require = []
    for i in range(len(List) - 1):
        element = ((series > List[i]) * (series <= List[i + 1])).sum()
        List_require.append(r"{}_{}_{}".format(int(List[i]), int(List[i + 1]), int(element)))
    if desp:
        for element in List_require:
            print(element)
    return List_require

def generate_clip_dict(input_dict, end_time):
    new_dict = dict()
    for k, v in input_dict.items():
        if v[:, 0].min() < end_time:
            sub_list = [None, 1]
            index = np.sum(v[:, 0] < end_time)
            sub_list[0] = v[:index, :]
            if v[:, 0].max() > end_time:
                sub_list[1] = 0
            new_dict[k] = sub_list
    return new_dict

def transform_to_1d_array(input_array, start, end, num, end_time):
    global one_category_list
    global second_category_list
    temp_require = np.zeros([len(one_category_list) + len(second_category_list)])
    for (index, category) in enumerate(one_category_list):
        temp_require[index] = np.sum(input_array[:, 2] == category)
    for (index, category) in enumerate(second_category_list):
        temp_require[len(one_category_list) + index] = np.sum(input_array[:, -1] == category)
    # now temp_list has category information
    temp_list = list(temp_require)
    # now add the final time gap
    temp_list.append((end_time - input_array[:, 0].max()).total_seconds())
    # now add two mean
    temp_list.append(np.mean(input_array[:, 1]))
    temp_list.append(np.mean(input_array[:, 3]))
    time_gap_array = [element.total_seconds() for element in (input_array[1:, 0] - input_array[:-1, 0])]
    series = pd.Series(time_gap_array)
    times_list = [int(element.split("_")[-1]) for element in linspace(start, end, num, series)]
    # now extend the times of different gap event
    # must have identical length
    temp_list.extend(times_list)
    return np.asarray(temp_list)

# end time is the split time for setting label y and a feature of X
end_time = pd.Timestamp('20XX-XX-XX XX:XX:XX')
generate_dict(user_dict, end_time)
all_gap_list = []
for k, v in time_gap_dict.items():
    all_gap_list.extend(list(v))
all_gap_series = pd.Series(all_gap_list)

clip_dict = generate_clip_dict(user_dict, end_time)
X = None
y = np.zeros(len(clip_dict))
average_split_num = 50
for (index, (k, v)) in enumerate(clip_dict.items()):
    temp_transform_array = transform_to_1d_array(v[0], all_gap_series.min(), all_gap_series.max(), average_split_num, end_time)
    if X is None:
        X = np.zeros([len(clip_dict), len(temp_transform_array)])
    X[index] = temp_transform_array
    y[index] = v[1]

transform = True
if transform:
    scaler = StandardScaler()
    X = scaler.fit(X).transform(X)

with open("X.pkl", "wb") as f:
    pickle.dump(X, f)
with open("y.pkl", "wb") as f:
    pickle.dump(y, f)

In general, as mentioned in the earlier posts and in the paper, the data processing and feature construction above tend to take more time than the model estimation itself, so serializing the intermediate results is important.


Next, sklearn is used to give example prediction results for several commonly used classifiers. sklearn 0.18 provides an implementation of the multi-layer perceptron (MLP) neural network, which in the fully connected case is equivalent to a deep neural network (DNN); the neural network part uses this implementation (100 hidden neurons gave relatively good results here). The remaining classifiers are fairly traditional.


The example code follows:

from __future__ import division
import numpy as np
import pickle
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn import tree
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neural_network import BernoulliRBM
import os

X = None
with open("X.pkl", "rb") as f:
    X = pickle.load(f)

RBM_reduce = False
if RBM_reduce:
    clf = BernoulliRBM(n_components=10)
    X = clf.fit_transform(X)

y = None
with open("y.pkl", "rb") as f:
    y = pickle.load(f)

def predict_model(classifier, classifier_name):
    print("{} classifier :".format(classifier_name))
    clf = classifier
    clf.fit(X[:5000], y[:5000])
    equal_num = np.sum(clf.predict(X[5000:]) == y[5000:])
    print("equal_num :")
    print(equal_num)
    print("equal_ratio :")
    print(equal_num / len(y[5000:]))
    print()

if __name__ == "__main__":
    print(X.shape)
    predict_model(tree.DecisionTreeClassifier(), "decision tree")
    predict_model(LogisticRegression(), "logistic")
    predict_model(SVC(), "svc")
    predict_model(BaggingClassifier(n_estimators=100), "bagging")
    predict_model(MLPClassifier(hidden_layer_sizes=(100,)), "MLP")
    predict_model(BaggingClassifier(MLPClassifier(hidden_layer_sizes=(100,)), n_estimators=100, n_jobs=1), "bagging MLP")

A brief note on whether to use a restricted Boltzmann machine (RBM) to reduce the dimensionality (from a few hundred dimensions down to 10). Preprocessing with an RBM generally makes the accuracy estimates of the different classifiers much closer to one another, as the output below shows (and this effect is not sensitive to the target number of dimensions). This is handy when, during preprocessing and feature construction, you want to compare candidate feature construction schemes by model accuracy: the RBM makes the data more robust with respect to the choice of model. Of course, the effect of RBM preprocessing is not uniform across models.


First, the results of the different estimators without RBM dimensionality reduction:

(6185, 362)
decision tree classifier : equal_num : 944, equal_ratio : 0.796624472574
logistic classifier : equal_num : 985, equal_ratio : 0.831223628692
svc classifier : equal_num : 998, equal_ratio : 0.842194092827
bagging classifier : equal_num : 1031, equal_ratio : 0.870042194093
MLP classifier : equal_num : 990, equal_ratio : 0.835443037975
bagging MLP classifier : equal_num : 991, equal_ratio : 0.836286919831

And the results with RBM dimensionality reduction:

(6185, 10)
decision tree classifier : equal_num : 973, equal_ratio : 0.821097046414
logistic classifier : equal_num : 974, equal_ratio : 0.82194092827
svc classifier : equal_num : 974, equal_ratio : 0.82194092827
bagging classifier : equal_num : 973, equal_ratio : 0.821097046414
MLP classifier : equal_num : 974, equal_ratio : 0.82194092827
bagging MLP classifier : equal_num : 974, equal_ratio : 0.82194092827


It should be pointed out that this is only an example, since the whole user_dict has just over 6,000 entries. For large data volumes the code (meaning the first data-processing script) has to be restructured, otherwise it runs far too slowly.


In practice, the splitting of the raw log data into per-user data had already been done beforehand, but a few points along the way are worth noting.

Regarding the storage container: the usual habit of converting SQL tables directly into pandas DataFrames wastes a lot of space (and when the waste is large it drags down Python's efficiency). After finishing the processing of such object-heavy data, call gc.collect() to force memory cleanup.

For large data volumes the container should be a matrix (a numpy ndarray), and not just any matrix: use strongly typed matrices wherever possible. When storing variables in strongly typed matrices, take care to avoid reallocating memory; it is best to allocate all the memory the matrices will need right at the start with np.empty.
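Below is a minimal sketch of this preallocate-then-fill pattern (the chunk sizes and dtype are made up for illustration; the same idea appears in reduce_to_different_type.py further down):

import numpy as np

# assumed input: a few chunks of at most 5 rows each, one int32 column
chunks = [np.arange(5, dtype="int32"), np.arange(3, dtype="int32"), np.arange(4, dtype="int32")]
max_rows_per_chunk = 5

# allocate the worst-case size once, then fill in place
out = np.empty(shape=(max_rows_per_chunk * len(chunks),), dtype="int32")
start = 0
for chunk in chunks:
    out[start: start + len(chunk)] = chunk
    start += len(chunk)
out = out[:start]  # trim the unused tail, as the scripts below do
print(out.shape)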

A DataFrame can also be split by data type into matrices of different types. As a concrete example, a DataFrame whose serialized file was about 8.46 GB shrank to 5.43 GB, roughly 64% of the original size, when split into a tuple consisting of an int32 ndarray and an object ndarray.
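A minimal sketch of such a type-based split on a made-up DataFrame (the column names are purely illustrative), combined with the gc.collect() cleanup mentioned above:

import gc
import numpy as np
import pandas as pd

# illustrative DataFrame: two string columns plus three integer columns
df = pd.DataFrame({
    "user_id": ["a", "b", "c"],
    "user_idx": ["1", "2", "3"],
    "amount": [10, 20, 30],
    "code": [1, 2, 1],
    "flag": [0, 1, 0],
})

# split into an object ndarray and a strongly typed int32 ndarray
object_part = df[["user_id", "user_idx"]].values
int_part = np.asarray(df[["amount", "code", "flag"]].values, dtype="int32")

# drop the DataFrame and force a collection once it is no longer needed
del df
gc.collect()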

When splitting the data, collect the indices in a plain Python list first and cast them to a strongly typed array just before the final save; when using the indices to extract data, sort them first. This not only saves time and space when storing the indices, it also makes the extraction itself fast.
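A small sketch of this list-then-cast-then-sort index pattern (the array sizes are arbitrary):

import numpy as np

data = np.random.rand(1000000, 5)

# indices accumulated incrementally in a plain list...
index_list = [917, 3, 500000, 42000, 12]
# ...then cast to a compact strong type before being stored
index_array = np.asarray(index_list, dtype="int32")

# sorting the indices gives largely sequential access when extracting rows
rows = data[np.sort(index_array)]
print(rows.shape)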

Even as a matrix, string data takes up a lot of space. If all that is needed is deduplication-style work on the strings, it is better to call __hash__() on them and cast the results into an int64 numpy ndarray.
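A minimal sketch of this hashing trick, in the spirit of generate_name_array in reduce_to_different_type.py below (the pairs are made up; note that Python 3 salts string hashes per process, so the hashes are only comparable within a single run unless PYTHONHASHSEED is fixed):

import numpy as np

# illustrative (user id, sub id) string pairs; only identity matters, not the text
pairs = [("u1", "a"), ("u2", "b"), ("u1", "a")]

hash_name_dict = {}
hash_list = []
for user_id, sub_id in pairs:
    h = (str(user_id) + str(sub_id)).__hash__()
    if h not in hash_name_dict:
        hash_name_dict[h] = (user_id, sub_id)  # keep a way back to the original strings
    hash_list.append(h)

# the compact int64 array replaces the string columns from here on
hash_array = np.asarray(hash_list, dtype="int64")
print(len(hash_name_dict), hash_array.dtype)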

Use Python's joblib Parallel to take advantage of the server's many cores. When handling large objects, split them at every opportunity and persist the pieces: this prevents a long-running job from being wiped out by a single exception, and if the operations on the pieces are independent they can be run in parallel directly. Admittedly, most of this amounts to reinventing the wheel, but that is still better than having no wheel and getting nothing done.


A brief implementation of the above process follows, including a refactoring of the first script above; it has a distinct MapReduce flavor.

load_and_dump.py

#coding: utf-8
from __future__ import division

def serial_dataframe(index, mobile_frame):
    with open(frame_dir + "//" + r"appmob_frame_{}.pkl".format(index), "wb") as f:
        pickle.dump(mobile_frame, f)

def manipulate_dataframe(dataframe):
    import numpy as np
    # fill None with 0 simply
    dataframe = dataframe.fillna(0)
    name_time_array = np.append(np.array(dataframe.as_matrix()[:, :2]), np.array([dataframe.TradeDate]).reshape(-1, 1), axis=1)
    others_array = np.asarray(dataframe.as_matrix()[:, 3:], dtype="int32")
    return (name_time_array, others_array)

def parallel_transform(pkl_dataframe, frame_dir, transform_dir):
    import pickle
    print("pkl_dataframe name :")
    print(pkl_dataframe)
    dataframe = None
    with open(frame_dir + "\\" + pkl_dataframe, "rb") as f:
        dataframe = pickle.load(f)
    num_pkl = pkl_dataframe.split("_")[-1]
    name_time_array, others_array = manipulate_dataframe(dataframe)
    with open(transform_dir + "\\" + "name_time_{}".format(num_pkl), "wb") as f:
        pickle.dump(name_time_array, f)
    with open(transform_dir + "\\" + "others_{}".format(num_pkl), "wb") as f:
        pickle.dump(others_array, f)

if __name__ == "__main__":
    set_chunksize = 5000000
    frame_dir = "frame_dir"
    transform_dir = "transform_dir"
    import pickle
    import pyodbc
    import os
    import pandas as pd
    import numpy as np
    from time import time
    from joblib import Parallel, delayed
    dump_end = False
    if not dump_end:
        conn = pyodbc.connect(r'')
        SQL_String = '''
                '''
        mobile_frame_iter = pd.read_sql(SQL_String, conn, chunksize=set_chunksize)
        if not os.path.exists(frame_dir):
            os.mkdir(frame_dir)
        if not os.path.exists(transform_dir):
            os.mkdir(transform_dir)
        start_time = time()
        for (index, mobile_frame) in enumerate(mobile_frame_iter):
            serial_dataframe(index, mobile_frame)
            print("call index {}".format(index))
        print("time cost :")
        print(time() - start_time)
    pkl_dataframe_set = set([])
    for file_name in os.listdir(frame_dir):
        pkl_dataframe_set.add(file_name)
    start_time = time()
    Parallel(n_jobs=10)(delayed(parallel_transform)(pkl_dataframe, frame_dir, transform_dir) for pkl_dataframe in pkl_dataframe_set)
    print("time cost :")
    print(time() - start_time)
Data export and initial strongly typed serialization (parallel)
reduce_to_different_type.py
#coding: utf-8
from __future__ import division

def generate_1d_array(input_array, name, index_range):
    global transform_dir
    print("start 1dlize of {}".format(name))
    file_format = transform_dir + "\\" + "name_time_{}.pkl"
    start_index = 0
    for i in index_range:
        print("times: {}".format(i))
        with open(file_format.format(i), "rb") as f:
            load_array = pickle.load(f)[:, 2]
            load_array = np.asarray(load_array / 1e9, dtype="int32")
            input_array[start_index: start_index + len(load_array)] = load_array
            start_index += len(load_array)
    input_array = input_array[:start_index]
    return input_array

def generate_name_array(input_array, name, index_range):
    global transform_dir
    global hash_name_dict
    print("start 1dlize of {}".format(name))
    file_format = transform_dir + "\\" + "name_time_{}.pkl"
    start_index = 0
    for i in index_range:
        print("times: {}".format(i))
        with open(file_format.format(i), "rb") as f:
            load_array = pickle.load(f)[:, :2]
            temp_list = []
            for one_d_array in load_array:
                SourceUserId, SourceUserIdx = one_d_array
                hash = (str(SourceUserId) + str(SourceUserIdx)).__hash__()
                if hash not in hash_name_dict:
                    hash_name_dict[hash] = (SourceUserId, SourceUserIdx)
                temp_list.append(hash)
            load_array = np.asarray(temp_list, dtype="int64")
            input_array[start_index: start_index + len(load_array)] = load_array
            start_index += len(load_array)
    input_array = input_array[:start_index]
    return input_array

def generate_2d_array(input_array, name, index_range):
    global transform_dir
    print("start 1dlize of {}".format(name))
    file_format = transform_dir + "\\" + "others_{}.pkl"
    start_index = 0
    for i in index_range:
        print("times: {}".format(i))
        with open(file_format.format(i), "rb") as f:
            load_array = pickle.load(f)
            input_array[start_index: start_index + len(load_array), :] = load_array
            start_index += len(load_array)
    input_array = input_array[:start_index, :]
    return input_array

def serialize_and_flush(input_array, name, require_index, index_range):
    global hash_name_dict
    if require_index is None:
        array_require = generate_2d_array(input_array, name, index_range)
    elif require_index == 0:
        array_require = generate_name_array(input_array, name, index_range)
    else:
        array_require = generate_1d_array(input_array, name, index_range)
    with open(name + r".pkl", "wb") as f:
        pickle.dump(array_require, f, protocol=4)
    if name == "name":
        with open("hash_name_dict.pkl", "wb") as f:
            pickle.dump(hash_name_dict, f, protocol=4)
        del(hash_name_dict)
    del(array_require)
    gc.collect()

if __name__ == "__main__":
    frame_dir = "frame_dir"
    transform_dir = "transform_dir"
    set_chunksize = 5000000
    import pickle
    import numpy as np
    import os
    import gc
    range_index = len(os.listdir(frame_dir))
    hash_name_dict = dict()
    name_array = np.empty(shape=(set_chunksize * range_index,), dtype="int64")
    time_array = np.empty(shape=(set_chunksize * range_index,), dtype="int32")
    others_array = np.empty(shape=(set_chunksize * range_index, 5), dtype="int32")
    serialize_and_flush(name_array, "name", 0, range(range_index))
    serialize_and_flush(time_array, "time", 1, range(range_index))
    serialize_and_flush(others_array, "others", None, range(range_index))
Data concatenation and further strongly typed serialization

sort_and_index.py
#coding: utf-8
from __future__ import division

if __name__ == "__main__":
    import pickle
    import numpy as np
    import gc
    import os
    time_array = None
    with open("time.pkl", "rb") as f:
        time_array = pickle.load(f)
    argsort_array = np.argsort(time_array)
    name_array = None
    with open("name.pkl", "rb") as f:
        name_array = pickle.load(f)
    order_name_array = name_array[argsort_array]
    with open("order_name_array.pkl", "wb") as f:
        pickle.dump(order_name_array, f, protocol=4)
    del(name_array)
    del(order_name_array)
    gc.collect()
    print("end of serial order_name")
    order_time_array = time_array[argsort_array]
    with open("order_time_array.pkl", "wb") as f:
        pickle.dump(order_time_array, f, protocol=4)
    del(time_array)
    del(order_time_array)
    gc.collect()
    print("end of serial order_time")
    others_array = None
    with open("others.pkl", "rb") as f:
        others_array = pickle.load(f)
    others_array = others_array[argsort_array, :]
    with open("order_others_array.pkl", "wb") as f:
        pickle.dump(others_array, f, protocol=4)
    print("end of serial order_others_array")
    name_index_dict = dict()
    name_array = None
    with open("order_name_array.pkl", "rb") as f:
        name_array = pickle.load(f)
    for (index, name) in enumerate(name_array):
        if index % 100000 == 0:
            print(index)
        if name not in name_index_dict:
            name_index_dict[name] = [index]
        else:
            name_index_dict[name].append(index)
    for key in name_index_dict.keys():
        name_index_dict[key] = np.asarray(name_index_dict[key], dtype="int32")
    with open("name_index_dict.pkl", "wb") as f:
        pickle.dump(name_index_dict, f, protocol=4)
Sorting and building the per-user index

map_to_users.py
#coding: utf-8
from __future__ import division

if __name__ == "__main__":
    user_dir = "user_dir"
    import numpy as np
    import pickle
    import os
    name_index_dict = None
    with open("name_index_dict.pkl", "rb") as f:
        name_index_dict = pickle.load(f)
    print("size of name_index_dict :")
    print(len(name_index_dict.keys()))
    order_time_array = None
    with open("order_time_array.pkl", "rb") as f:
        order_time_array = pickle.load(f)
    order_others_array = None
    with open("order_others_array.pkl", "rb") as f:
        order_others_array = pickle.load(f)

    def manipulate_index(name):
        global name_index_dict
        global order_time_array
        global order_others_array
        order_index = np.sort(name_index_dict[name])
        return (order_time_array[order_index], order_others_array[order_index])

    if not os.path.exists(user_dir):
        os.mkdir(user_dir)
    temp_dict = dict()
    dict_count = 1000
    user_count = 0
    file_count = 0
    for name in name_index_dict.keys():
        temp_dict[name] = manipulate_index(name)
        user_count += 1
        if user_count >= dict_count:
            user_count = 0
            with open(user_dir + "\\" + "user_dict_{}.pkl".format(file_count), "wb") as f:
                pickle.dump(temp_dict, f)
            temp_dict = dict()
            file_count += 1
            print("file_count :")
            print(file_count)
Exporting the indexed per-user data

clip_and_map.py
#coding: utf-8
from __future__ import division

def generate_clip_dict(input_dict, start_time, end_time, out_index):
    new_dict = dict()
    for k, v in input_dict.items():
        if v[0].min() < end_time:
            start_index = np.sum(v[0] < start_time)
            sub_list = [None, 1, end_time]
            index = np.sum(v[0] < end_time)
            sub_list[0] = (v[0][start_index:index], v[1][start_index:index, :])
            if v[0].max() > end_time:
                sub_list[1] = 0
            new_dict[str(k) + "_{}".format(out_index)] = sub_list
    return new_dict

if __name__ == "__main__":
    batch_size = 3000
    user_dir = "user_dir"
    new_user_dir = "new_user_dir"
    import pandas as pd
    import numpy as np
    import pickle
    import datetime
    import gc
    import os
    factor_index_list = [0, 2, 3]
    num_index_list = [1, 4]
    factor_dict = dict()
    for factor_index in factor_index_list:
        factor_dict[factor_index] = set([])
    user_dict = dict()
    range_index = len(os.listdir(user_dir))
    for i in range(range_index):
        file_name = user_dir + "\\" + "user_dict_{}.pkl".format(i)
        with open(file_name, "rb") as f:
            for k, v in pickle.load(f).items():
                user_dict[k] = v
    print("first end ")
    time_gap_min_max_list = [None, None]
    for k, v in user_dict.items():
        temp_time_gap_array = v[0][1:] - v[0][:-1]
        if len(temp_time_gap_array) > 0:
            temp_min, temp_max = np.min(temp_time_gap_array), np.max(temp_time_gap_array)
            if time_gap_min_max_list == [None, None]:
                time_gap_min_max_list = [temp_min, temp_max]
            else:
                if temp_min < time_gap_min_max_list[0]:
                    time_gap_min_max_list[0] = temp_min
                if temp_max > time_gap_min_max_list[1]:
                    time_gap_min_max_list[1] = temp_max
        for factor_index in factor_dict.keys():
            factor_dict[factor_index] = factor_dict[factor_index].union(set(v[1][:, factor_index]))
    # transform set in factor_dict to list
    factor_length = 0
    for name in factor_dict.keys():
        factor_list = list(factor_dict[name])
        factor_length += len(factor_list)
        factor_dict[name] = factor_list
    print("second end ")
    # serialize the factor_dict to local
    with open("factor_dict.pkl", "wb") as f:
        pickle.dump(factor_dict, f)
    all_gap_series = pd.Series(time_gap_min_max_list)
    # serialize all_gap_series to local
    with open("all_gap_series.pkl", "wb") as f:
        pickle.dump(all_gap_series, f)
    print("third end")
    start_time_range = pd.date_range(datetime.datetime(20XX, X, X), periods=X)
    end_time_range = pd.date_range(datetime.datetime(20XX, X, X), periods=X)
    all_clip_dict = dict()
    for (index, start_timestamp) in enumerate(start_time_range):
        start_time = int(start_timestamp.timestamp())
        end_time = int(end_time_range[index].timestamp())
        clip_dict = generate_clip_dict(user_dict, start_time, end_time, index)
        for k, v in clip_dict.items():
            all_clip_dict[k] = v
        print("length of clip dict")
        print(len(clip_dict))
        print("length of all clip dict :")
        print(len(all_clip_dict))
        del(clip_dict)
    del(start_time_range)
    del(end_time_range)
    gc.collect()
    print("length of all_clip_dict :")
    print(len(all_clip_dict))
    if not os.path.exists(new_user_dir):
        os.mkdir(new_user_dir)
    keys = list(all_clip_dict.keys())
    temp_dict = dict()
    for (index, key) in enumerate(keys):
        temp_dict[key] = all_clip_dict.pop(key)
        first, second = divmod(index, batch_size)
        if (first + second) != 0:
            if second == 0:
                with open(new_user_dir + "\\" + "user_dict_{}.pkl".format(first - 1), "wb") as f:
                    pickle.dump(temp_dict, f, protocol=4)
                temp_dict = dict()
Clipping the user data as required and attaching labels

map_to_X_y.py
#coding: utf-8
from __future__ import division

def linspace(series, input_linspace):
    import numpy as np
    array_require = np.empty(shape=(len(input_linspace),))
    for i in range(len(array_require) - 1):
        array_require[i] = ((series > input_linspace[i]) * (series <= input_linspace[i + 1])).sum()
    return array_require

def transform_to_1d_array(input_tuple, end_time, num_index_list, factor_length, input_linspace, factor_dict):
    import numpy as np
    time_array, input_array = input_tuple
    if len(time_array) == 0:
        return None
    temp_require = np.zeros([factor_length])
    before_length = 0
    for category_index, factor_list in factor_dict.items():
        for (index, category) in enumerate(factor_list):
            temp_require[before_length + index] = np.sum(input_array[:, category_index] == category)
        before_length += len(factor_list)
    # now temp_list has category information
    temp_list = list(temp_require)
    # now add the final time gap
    temp_list.append(end_time - time_array.max())
    # now means
    for num_index in num_index_list:
        temp_list.append(np.mean(input_array[:, num_index]))
    time_gap_array = time_array[1:] - time_array[:-1]
    times_list = list(linspace(time_gap_array, input_linspace))
    # now extend the times of different gap event
    # must have identical length
    temp_list.extend(times_list)
    return np.asarray(temp_list)

def generate_X_y(keys_of_clip_dict, all_clip_dict, num_index_list, factor_length, input_linspace, factor_dict, input_index):
    import numpy as np
    import pickle
    X = None
    y = None
    start_index = 0
    index = 0
    for (index, k) in enumerate(keys_of_clip_dict):
        v = all_clip_dict.pop(k)
        temp_transform_array = transform_to_1d_array(v[0], v[2], num_index_list, factor_length, input_linspace, factor_dict)
        if temp_transform_array is None:
            continue
        if X is None:
            X = np.empty(shape=(len(keys_of_clip_dict), len(temp_transform_array)), dtype="int32")
            y = np.empty(shape=(len(keys_of_clip_dict),), dtype="int32")
        X[start_index] = temp_transform_array
        y[start_index] = v[1]
        start_index += 1
        if index % 100 == 0:
            print(start_index)
    with open(r"X_y_dir/X_sub_{}.pkl".format(input_index), "wb") as f:
        pickle.dump(X[:start_index, :], f, protocol=4)
    with open(r"X_y_dir/y_sub_{}.pkl".format(input_index), "wb") as f:
        pickle.dump(y[:start_index], f, protocol=4)

def load_and_dump(pkl_file, num_index_list, factor_length, input_linspace, factor_dict, input_index):
    import pickle
    all_clip_dict = None
    with open(pkl_file, "rb") as f:
        all_clip_dict = pickle.load(f)
    keys_of_clip_dict = list(all_clip_dict.keys())
    generate_X_y(keys_of_clip_dict, all_clip_dict, num_index_list, factor_length, input_linspace, factor_dict, input_index)

if __name__ == "__main__":
    new_user_dir = "new_user_dir"
    X_y_dir = "X_y_dir"
    import numpy as np
    import pickle
    from time import time
    from joblib import Parallel, delayed
    import os
    user_dir = new_user_dir
    if not os.path.exists(X_y_dir):
        os.mkdir(X_y_dir)
    factor_index_list = [0, 2, 3]
    num_index_list = [1, 4]
    with open("factor_dict.pkl", "rb") as f:
        factor_dict = pickle.load(f)
    with open("all_gap_series.pkl", "rb") as f:
        all_gap_series = pickle.load(f)
    average_split_num = 50
    min_gap = all_gap_series.min()
    max_gap = all_gap_series.max()
    input_linspace = np.linspace(min_gap, max_gap, average_split_num)
    pkl_file_set = set([])
    for file_name in os.listdir(user_dir):
        pkl_file_set.add(user_dir + "\\" + file_name)
    factor_length = 0
    for v in factor_dict.values():
        factor_length += len(v)
    start_time = time()
    Parallel(n_jobs=20)(delayed(load_and_dump)(file_name, num_index_list, factor_length, input_linspace, factor_dict, input_index) for (input_index, file_name) in enumerate(pkl_file_set))
    print("time cost :")
    print(time() - start_time)
Generating the feature matrix X and labels y from the clipped data (parallel)

reduce_to_X_y.py
#coding: utf-8
from __future__ import division

if __name__ == "__main__":
    batch_size = 3000
    X_y_dir = "X_y_dir"
    import pickle
    import numpy as np
    import os
    range_index = int(len(os.listdir(X_y_dir)) / 2)
    X = None
    y = None
    start_index = 0
    for index in range(range_index):
        load_X, load_y = None, None
        with open(X_y_dir + "\\" + "X_sub_{}.pkl".format(index), "rb") as f:
            load_X = pickle.load(f)
        with open(X_y_dir + "\\" + "y_sub_{}.pkl".format(index), "rb") as f:
            load_y = pickle.load(f)
        if X is None:
            X = np.empty(shape=(batch_size * range_index, load_X.shape[1]), dtype="int32")
            y = np.empty(shape=(batch_size * range_index,), dtype="int32")
        X[start_index: start_index + load_X.shape[0], :] = load_X
        y[start_index: start_index + load_X.shape[0]] = load_y
        start_index += load_X.shape[0]
    X = X[:start_index, :]
    y = y[:start_index]
    with open("X.pkl", "wb") as f:
        pickle.dump(X, f, protocol=4)
    with open("y.pkl", "wb") as f:
        pickle.dump(y, f, protocol=4)
Merging the data


Of course, for the persistence steps above one could also try the corresponding functions in joblib; one claim is that they are faster than pickle, and joblib's Memory provides a caching mechanism for intermediate results. Both are worth considering as further optimizations.
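A minimal sketch of both ideas (the file and cache directory names are arbitrary):

import numpy as np
from joblib import Memory, dump, load

# joblib.dump / joblib.load as a drop-in replacement for pickle on large arrays
X = np.random.rand(1000, 50)
dump(X, "X.joblib")
X_loaded = load("X.joblib")

# joblib.Memory caches the output of an expensive step on disk,
# so rerunning the pipeline skips work that has already been done
memory = Memory("cache_dir", verbose=0)

@memory.cache
def expensive_feature_step(seed):
    rng = np.random.RandomState(seed)
    return rng.rand(1000, 50).sum(axis=1)

features = expensive_feature_step(0)        # computed and written to the cache
features_again = expensive_feature_step(0)  # loaded straight from the cache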

A similar Spark Scala RDD version of the above process is given in Deep Learning in Customer Churn Prediction (Part 5) (A Feature Construction Attempt with Spark RDDs).

A similar Spark Scala SQL version is given in Deep Learning in Customer Churn Prediction (Part 6) (Feature Construction with Spark SQL).


In addition, the data volume to be handled is so large that initializing and fitting a model in one shot is unrealistic (it generally exceeds a 32 GB memory limit), so streaming computation is unavoidable. On the modeling side this is already supported by sklearn as online (out-of-core) learning:

http://scikit-learn.org/stable/modules/scaling_strategies.html
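A minimal sketch of mini-batch training with partial_fit, reusing the MLPClassifier configuration from above; the batch generator is only a stand-in for reading the serialized X/y chunks from disk, and the shapes are illustrative:

import numpy as np
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(hidden_layer_sizes=(100,))
classes = np.array([0, 1])  # all classes must be declared on the first partial_fit call

def batch_iter(n_batches=10, batch_size=1000, n_features=362):
    # stand-in for loading X/y batches from disk
    rng = np.random.RandomState(0)
    for _ in range(n_batches):
        X_batch = rng.rand(batch_size, n_features)
        y_batch = rng.randint(0, 2, size=batch_size)
        yield X_batch, y_batch

for X_batch, y_batch in batch_iter():
    clf.partial_fit(X_batch, y_batch, classes=classes)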


The improved balanced random forest from Deep Learning in Customer Churn Prediction (Part 1) (Improved Balanced Random Forests and Feature Construction) was also tried on the data above, and the results were not good: a comparably defined equal_ratio (for example, the proportion of true negatives among the top-ranked predicted labels, taking as many predictions as there are true negatives) comes out between 0.6 and 0.7, which is worse than the results above.

Sampling methods for the class-imbalance problem still need further investigation; a Python implementation of the usual remedies already exists, see

http://contrib.scikit-learn.org/imbalanced-learn/
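As one possible direction, here is a minimal sketch using imbalanced-learn on synthetic data (in versions before 0.4 the resampling method was called fit_sample rather than fit_resample):

import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

# illustrative imbalanced data: roughly 90% negatives, 10% positives
rng = np.random.RandomState(0)
X = rng.rand(2000, 20)
y = (rng.rand(2000) < 0.1).astype(int)

# oversample the minority class with SMOTE...
X_smote, y_smote = SMOTE(random_state=0).fit_resample(X, y)
# ...or undersample the majority class instead
X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)

print(np.bincount(y), np.bincount(y_smote), np.bincount(y_under))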







