CS 20SI | Lecture 1: Introduction to TensorFlow


This post is a summary of the course CS 20SI: TensorFlow for Deep Learning Research.

Course Goals

  • Understand TF's computation-graph approach
  • Explore TF's built-in functions
  • Learn how to build structured models suited to deep learning projects

Reference Books

  • TensorFlow for Machine Intelligence (TFFMI)
  • Hands-On Machine Learning with Scikit-Learn and TensorFlow. Chapter 9: Up and running with TensorFlow
  • Fundamentals of Deep Learning. Chapter 3: Implementing Neural Networks in TensorFlow (FODL)
    TensorFlow is evolving quickly, so the books above may become outdated; refer directly to the official website.

Getting Started

High-level APIs

  1. TF Learn
  2. TF Slim
  3. Keras (officially announced as a supported high-level API)
  4. TensorLayer

Graphs and Sessions

Data Flow Graphs

TensorFlow separates the definition of the computation graph from its execution. Working with TF has two phases:
Phase 1: define the computation graph
Phase 2: use a session to execute the operations in the graph

Tensor

A tensor is an n-dimensional array, the equivalent of NumPy's ndarray.
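
For instance (a minimal sketch of my own, not from the lecture), tensors of different ranks can be created with tf.constant, and NumPy arrays convert to tensors directly:

import numpy as np
import tensorflow as tf

# rank-0 (scalar), rank-1 (vector), and rank-2 (matrix) tensors
s = tf.constant(3)
v = tf.constant([1.0, 2.0, 3.0])
m = tf.constant(np.ones((2, 2)))  # a NumPy ndarray becomes a rank-2 tensor

print(s.shape, v.shape, m.shape)  # (), (3,), (2, 2)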

import tensorflow as tf
a = tf.add(3, 5)

[Figure: data flow graph for the code above]
The code above builds the data flow graph shown in the figure. In a data flow graph, nodes are operators, variables, and constants, and edges are tensors.
The name TensorFlow refers to tensors (data) flowing through the graph.

import tensorflow as tf
a = tf.add(3, 5)
print(a)
>> Tensor("Add:0", shape=(), dtype=int32)

Note that print(a) does not output the value 8. The next section shows how to get the value of a.

Session

A Session object encapsulates the environment in which Operation objects are executed and Tensor objects are evaluated.

import tensorflow as tf
a = tf.add(3, 5)
sess = tf.Session()
print(sess.run(a))
sess.close()
>> 8

The session looks at the graph, works out how to obtain the value of a, and computes the nodes along the way.
Using a context manager saves you from having to close the session explicitly every time.

import tensorflow as tf
a = tf.add(3, 5)
with tf.Session() as sess:
    print(sess.run(a))

More (sub)graphs

tf.Session.run(fetches, feed_dict=None, options=None, run_metadata=None)
Pass the variables you want to compute to fetches as a list.

import tensorflow as tf
x = 2
y = 3
op1 = tf.add(x, y)
op2 = tf.multiply(x, y)      # tf.mul was renamed tf.multiply in TF 1.0
useless = tf.multiply(x, op1)
op3 = tf.pow(op2, op1)
with tf.Session() as sess:
    op3, not_useless = sess.run([op3, useless])  # fetch several tensors at once
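
The feed_dict argument in the signature above is not used in this lecture; as a rough sketch (my own illustration), it supplies values to placeholder nodes when the graph is run:

import tensorflow as tf

# a placeholder is a node whose value is fed in at run time
x = tf.placeholder(tf.float32, shape=())
y = x * 3

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: 2.0}))  # 6.0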

Distributed Computation

Partitioning
Splitting the computation graph into subgraphs makes it well suited to parallel computation across multiple machines, CPUs, and GPUs. A device-placement sketch follows.
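
As an illustration (a sketch of my own, assuming the machine actually exposes a '/gpu:0' device), parts of the graph can be pinned to specific devices with tf.device:

import tensorflow as tf

# pin these ops to a specific device ('/gpu:0' is assumed to exist)
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0], name='a')
    b = tf.constant([4.0, 5.0, 6.0], name='b')
    c = tf.multiply(a, b)

# allow_soft_placement falls back to another device if '/gpu:0' is unavailable
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(c))  # [ 4. 10. 18.]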

Another Graph

  • Multiple graphs require multiple sessions, each will try to use all available resources by default
  • Can’t pass data between them without passing them through python/numpy, which doesn’t work in distributed
  • It’s better to have disconnected subgraphs within one graph

import tensorflow as tf
g = tf.Graph()
# to add operators to a graph, set it as default:
with g.as_default():
    x = tf.add(3, 5)
# run the session on graph g rather than the default graph
with tf.Session(graph=g) as sess:
    sess.run(x)

Be careful not to mix up the default graph with user-created graphs: before adding operations to any graph, set that graph as the default.

import tensorflow as tf
g1 = tf.get_default_graph()
g2 = tf.Graph()
# add ops to the default graph
with g1.as_default():
    a = tf.constant(3)
# add ops to the user created graph
with g2.as_default():
    b = tf.constant(5)
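
Continuing the snippet above (a quick sanity check of my own, not from the lecture), every tensor records the graph it was created in via its .graph attribute:

# each tensor remembers which graph it belongs to
assert a.graph is g1
assert b.graph is g2
assert g1 is tf.get_default_graph()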

Why graphs

  1. Save computation (only run subgraphs that lead to the values you want to fetch)
  2. Break computation into small, differentiable pieces to facilitate auto-differentiation
  3. Facilitate distributed computation, spread the work across multiple CPUs, GPUs, or devices
  4. Many common machine learning models are commonly taught and visualized as directed graphs already