【TensorFlow C++, Part 1】Engineering a Beginner Model


0 After the model is trained

Once the model has finally converged after a few million iterations and performs well on the validation set, it is time to turn it into a real service. Enter C++...

The following code saves the simplest possible graph model, y = a * x + b (scalars only):

```python
import tensorflow as tf


def save():
    # Scalar input placeholder
    x = tf.placeholder(dtype=tf.float32, shape=[], name='x')
    with tf.variable_scope('beginner'):
        a = tf.Variable(initial_value=.1, name='a', dtype=tf.float32)
        b = tf.Variable(initial_value=0., name='b', dtype=tf.float32)
    y = tf.add(tf.multiply(x, a), b, name='y')
    variables = tf.trainable_variables()
    for v in variables:
        print(v.name)
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        # Freeze the variables into constants so the graph is self-contained
        output_graph = tf.graph_util.convert_variables_to_constants(
            sess=sess,
            input_graph_def=sess.graph_def,
            output_node_names=['y'])
        # as_text=False -> write a binary .pb
        tf.train.write_graph(output_graph, ".", "beginner.pb", False)


if __name__ == "__main__":
    save()
```
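Before touching C++, it is worth a quick sanity check that the frozen graph really evaluates y = a * x + b. The loader below is a sketch of my own (not part of the original post), using the same TF 1.x API; for x = 1 it should print 0.1:

```python
import tensorflow as tf


def load_and_check():
    # Parse the binary GraphDef written by save()
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("beginner.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as g:
        # Import with an empty name prefix so the nodes keep the names 'x' and 'y'
        tf.import_graph_def(graph_def, name="")
        x = g.get_tensor_by_name("x:0")
        y = g.get_tensor_by_name("y:0")
        with tf.Session(graph=g) as sess:
            print(sess.run(y, feed_dict={x: 1.0}))  # expect 0.1


if __name__ == "__main__":
    load_and_check()
```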

1 Engineering

Before thinking about engineering, I had spent a long time iterating on models in Python and had never considered how to serve a trained model. Using TensorFlow from C++ requires the TF headers and the corresponding library files. Building with bazel, however, produces a large number of headers, and the original headers are not gathered in one place, so collecting them is tedious and time-consuming. Why can't TF keep all of its headers under a single include directory the way Caffe does?

1.1 Can't find the headers

After searching high and low, it turned out the C++ headers were all sitting under the Python directory:

site-packages/tensorflow/include

So a pip install already ships a copy of the C++ headers. Let the journey begin!
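If you'd rather not hunt for the directory by hand, reasonably recent TF 1.x pip packages can report it themselves through tf.sysconfig. This is a small convenience I'm adding here, not something from the original workflow:

```python
import tensorflow as tf

# Directory containing the C++ headers shipped in the pip package
# (typically .../site-packages/tensorflow/include)
print(tf.sysconfig.get_include())
# Directory containing the TensorFlow framework shared library
print(tf.sysconfig.get_lib())
```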

1.2 A first attempt

```cpp
/*
 * inference4beginer.cpp
 * Copyright (C) 2017 fisherman
 */
#include <tensorflow/core/public/session.h>
#include <tensorflow/core/platform/env.h>
#include <tensorflow/core/framework/tensor.h>
#include <tensorflow/core/framework/graph.pb.h>
#include <glog/logging.h>
#include <string>
#include <vector>
#include <memory>

int main() {
  tensorflow::GraphDef graph;
  std::string graph_file = "beginner.pb";
  // 1 Read the graph; for a text-format pb use ReadTextProto instead
  tensorflow::Status status = tensorflow::ReadBinaryProto(tensorflow::Env::Default(), graph_file, &graph);
  if (!status.ok()) {
    LOG(FATAL) << status.ToString() << " with graph_file:" << graph_file;
    return -1;
  }
  // 2 Create the Session
  std::unique_ptr<tensorflow::Session> sess;
  sess.reset(tensorflow::NewSession({}));
  status = sess->Create(graph);
  if (!status.ok()) {
    LOG(FATAL) << status.ToString();
    return -1;
  }
  // 3 Construct the input Tensor and fill it
  // 3.1 Create tensorflow::Tensor x (a scalar: empty shape)
  tensorflow::Tensor x(tensorflow::DT_FLOAT, tensorflow::TensorShape({}));
  // 3.2 Get x's Eigen::TensorMap
  auto x_map = x.tensor<float, 0>();  // == x.scalar<float>()
  // auto -> Eigen::TensorMap<Eigen::Tensor<T, NDIMS, Eigen::RowMajor, IndexType>, Eigen::Aligned>
  // 3.3 Get the raw pointer and write the value directly
  float* data = x_map.data();
  *data = 1.0f;

  std::vector<std::pair<std::string, tensorflow::Tensor>> inputs;
  inputs.emplace_back(std::make_pair("x", x));
  std::vector<std::string> output_tensor_names;
  output_tensor_names.emplace_back("y");
  std::vector<tensorflow::Tensor> y;
  // 4 Run the session
  status = sess->Run(inputs, output_tensor_names, {}, &y);
  if (!status.ok()) {
    LOG(ERROR) << status.ToString();
    return -1;
  }
  // 5 Print the result
  float y_value = *(y[0].tensor<float, 0>().data());
  LOG(INFO) << "y = a*x+b\n" << " a(0.1) b(0) in graph, x(1), y(0.1):" << y_value;
  return 0;
}
```

1.3 Results

I built and ran it with scons:

```
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: Tesla M40
major: 5 minor: 2 memoryClockRate (GHz) 1.112
pciBusID 0000:02:00.0
Total memory: 11.17GiB
Free memory: 346.75MiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla M40, pci bus id: 0000:02:00.0)
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0615 00:39:15.321010 19296 inference4beginer.cpp:57] y = a*x+b
 a(0.1) b(0) in graph, x(1), y(0.1):0.1
```
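The original post does not show the scons build file itself, but a minimal SConstruct along the following lines should reproduce the build. The include path, library directory, and library names (tensorflow_cc, tensorflow_framework, glog) are assumptions about a local setup; adjust them to wherever your headers and TensorFlow C++ shared libraries actually live:

```python
# SConstruct -- a minimal sketch, not the exact build file used above.
# TF_INCLUDE and TF_LIB are placeholder paths: point them at your own
# site-packages/tensorflow/include and at the directory holding the
# TensorFlow C++ shared libraries.
TF_INCLUDE = '/usr/lib/python2.7/site-packages/tensorflow/include'
TF_LIB = '/usr/local/lib'

env = Environment(
    CXXFLAGS=['-std=c++11', '-O2'],
    CPPPATH=[TF_INCLUDE],
    LIBPATH=[TF_LIB],
    LIBS=['tensorflow_cc', 'tensorflow_framework', 'glog'],
)
# Builds the ./inference binary invoked by run.sh in the logs above
env.Program('inference', ['inference4beginer.cpp'])
```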

1 If you forget to copy the graph file next to the binary, the error message is clear enough that nothing more needs to be said:

```
F0615 00:42:42.124217 19389 inference4beginer.cpp:24] Not found: beginner.pb with graph_file:beginner.pb
*** Check failure stack trace: ***
./run.sh: line 3: 19389 Aborted                 (core dumped) CUDA_VISIBLE_DEVICES=0 ./inference
```

2 If you mistakenly write the scalar as a vector,

```cpp
  tensorflow::Tensor x(tensorflow::DT_FLOAT, tensorflow::TensorShape({1}));
```

then the following error appears:

```
F tensorflow/core/framework/tensor_shape.cc:36] Check failed: NDIMS == dims() (0 vs. 1) Asking for tensor of 0 dimensions from a tensor of 1 dimensions
./run.sh: line 3: 19490 Aborted                 (core dumped) CUDA_VISIBLE_DEVICES=0 ./inference
```