Installing CUDA 8.0 + cuDNN 5.1 + TensorFlow 1.2 + OpenCV 3.2 on the NVIDIA TX2
Tools
- Host machine: Ubuntu 14.04 with at least 50 GB of free disk space
- JetPack 3.0: software download page, documentation page
JetPack 3.0 is required because it is the only release that bundles CUDA 8.0 and cuDNN 5.1, and JetPack 3.0 itself only runs on Ubuntu 14.04 (tested personally: 16.04 has problems that make the process fail).
Install JetPack 3.0, flash the TX2, and install CUDA and the other components
Make sure the host and the TX2 are on the same LAN, then just follow the official documentation and the JetPack prompts; it is straightforward.
Notes:
(1) If installing the CUDA toolkit on the host fails (errors such as "held packages"), switching Ubuntu's package mirror (the 163 mirror is recommended) fixes it.
(2) If the host gets stuck for a long time on a "determining IP ..." prompt, close it and start over, choosing "no action" for the "flash system image" step; you will then be offered options to configure the board's IP, username, and password manually.
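The steps above assume the host and the board share a LAN. A minimal same-subnet sketch, using hypothetical addresses (substitute the ones your network actually assigned) and assuming a 255.255.255.0 netmask:

```shell
# Hypothetical example addresses; replace with the real host/TX2 IPs.
host_ip=192.168.1.10
tx2_ip=192.168.1.42
# Compare the /24 network prefixes (everything before the last dot).
if [ "${host_ip%.*}" = "${tx2_ip%.*}" ]; then
  result="same /24 subnet"
else
  result="different subnets"
fi
echo "$result"
```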
Installing TensorFlow 1.2
My prebuilt whl file -- Baidu netdisk, extraction code: n1sz
You can try my prebuilt whl directly (skip ahead to the "Install the whl file" step).
It is not guaranteed to work; to be safe, if you have the time, it is best to follow the steps below.
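Whether you use the prebuilt wheel or build your own, the filename tags should match the board: the `cp` tag must match the board's Python (the wheel here targets CPython 3.5) and the platform tag must be `linux_aarch64`. A small sketch that splits the tags out of the filename:

```shell
# Wheel filenames follow: name-version-pythontag-abitag-platform.whl
whl=tensorflow-1.2.0-cp35-cp35m-linux_aarch64.whl
pytag=$(echo "$whl" | cut -d- -f3)   # CPython version the wheel targets
plat=$(echo "$whl" | cut -d- -f5)
plat=${plat%.whl}                    # platform/architecture tag
echo "$pytag $plat"
```

On the board, `python3 --version` should agree with the `cp` tag, and `uname -m` should print `aarch64`.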
Install dependencies

```shell
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer -y
sudo apt-get install zip unzip autoconf automake libtool curl zlib1g-dev maven -y
sudo apt install python3-numpy python3-dev python3-pip python3-wheel
```
Install bazel

```shell
bazel_version=0.5.1
wget https://github.com/bazelbuild/bazel/releases/download/$bazel_version/bazel-$bazel_version-dist.zip
unzip bazel-$bazel_version-dist.zip -d bazel-dist
sudo chmod -R ug+rwx bazel-dist
cd bazel-dist
# Compiling is time-consuming, roughly 20-30 minutes
./compile.sh
sudo cp output/bazel /usr/local/bin
```
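After the copy, it is worth confirming that the bazel on PATH is the one just built. A sketch; the expected output line is simulated here (on the device, fill `got` from `bazel version | grep 'Build label'` instead):

```shell
expected=0.5.1
# On the TX2: got=$(bazel version | grep 'Build label')
got="Build label: 0.5.1"   # stand-in value for this sketch
case "$got" in
  *"$expected"*) status="bazel $expected OK" ;;
  *)             status="unexpected bazel version: $got" ;;
esac
echo "$status"
```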
Download the TensorFlow source

```shell
cd ~
git clone --recursive https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout v1.2.0
```
Edit tensorflow/workspace.bzl and change it to the following:

```python
# Note: this is the eigen_archive rule
native.new_http_archive(
    name = "eigen_archive",
    urls = [
        "http://mirror.bazel.build/bitbucket.org/eigen/eigen/get/d781c1de9834.tar.gz",
        "https://bitbucket.org/eigen/eigen/get/d781c1de9834.tar.gz",
    ],
    sha256 = "a34b208da6ec18fa8da963369e166e4a368612c14d956dd2f9d7072904675d9b",
    strip_prefix = "eigen-eigen-d781c1de9834",
    build_file = str(Label("//third_party:eigen.BUILD")),
)
```
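If you prefer not to hand-edit `workspace.bzl`, the same change can be scripted with `sed`. This is only a sketch on a stand-in file with placeholder old values; check the actual commit hash and sha256 present in your v1.2.0 checkout before adapting it to the real file:

```shell
# Stand-in for the eigen_archive rule in tensorflow/workspace.bzl.
cat > workspace_snippet.bzl <<'EOF'
native.new_http_archive(
    name = "eigen_archive",
    sha256 = "OLD_SHA256_PLACEHOLDER",
    strip_prefix = "eigen-eigen-OLD_COMMIT",
)
EOF
# Swap in the commit and checksum used in this article.
sed -i \
    -e 's/OLD_SHA256_PLACEHOLDER/a34b208da6ec18fa8da963369e166e4a368612c14d956dd2f9d7072904675d9b/' \
    -e 's/OLD_COMMIT/d781c1de9834/' \
    workspace_snippet.bzl
grep -c 'd781c1de9834' workspace_snippet.bzl
```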
Set up the build variables

```shell
export PYTHON_BIN_PATH=$(which python3)
# No Google Cloud Platform support
export TF_NEED_GCP=0
# No Hadoop file system support
export TF_NEED_HDFS=0
# Use CUDA
export TF_NEED_CUDA=1
# Set up gcc; just use the default
export GCC_HOST_COMPILER_PATH=$(which gcc)
# TF CUDA version
export TF_CUDA_VERSION=8.0
# CUDA path
export CUDA_TOOLKIT_PATH=/usr/local/cuda
# cuDNN
export TF_CUDNN_VERSION=5.1.10
export CUDNN_INSTALL_PATH=/usr/lib/aarch64-linux-gnu
# CUDA compute capability
export TF_CUDA_COMPUTE_CAPABILITIES=6.2
export CC_OPT_FLAGS=-march=native
export TF_NEED_JEMALLOC=1
export TF_NEED_OPENCL=0
export TF_ENABLE_XLA=1
```
Configure, build, and package

```shell
# If ./configure prompts for anything, just press Enter to accept the defaults
./configure
# This build is extremely time-consuming -- roughly 3 hours
bazel build -c opt --verbose_failures --config=cuda //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
mv /tmp/tensorflow_pkg/tensorflow-1.2.0*-linux_aarch64.whl ~
```
Install the whl file

```shell
# pip3 may complain about a version problem; adjust per the error message
# This step is also slow, so be patient
pip3 install ~/tensorflow-1.2.0-cp35-cp35m-linux_aarch64.whl
```

The install hangs for a long time at "Running setup.py bdist_wheel for numpy ..."; a little patience is needed.
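Once pip finishes, you can confirm the package was registered. A sketch; the listing line is simulated here (on the board, take `line` from `pip3 list | grep -i tensorflow` instead):

```shell
# On the TX2: line=$(pip3 list 2>/dev/null | grep -i '^tensorflow')
line="tensorflow (1.2.0)"   # stand-in value for this sketch
# Strip the parentheses and pick out the version field.
version=$(echo "$line" | tr -d '()' | awk '{print $2}')
echo "installed tensorflow version: $version"
```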
Test script testtf.py

```python
#!/usr/bin/env python
import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
```
Run it:

```shell
python3 testtf.py
```
Output:
```
nvidia@tegra-ubuntu:~$ python3 testtf.py
2017-12-26 02:30:00.977979: E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:879] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2017-12-26 02:30:00.978096: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GP10B
major: 6 minor: 2 memoryClockRate (GHz) 1.3005
pciBusID 0000:00:00.0
Total memory: 7.67GiB
Free memory: 3.97GiB
2017-12-26 02:30:00.978144: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2017-12-26 02:30:00.978174: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   Y
2017-12-26 02:30:00.978204: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GP10B, pci bus id: 0000:00:00.0)
2017-12-26 02:30:00.978237: I tensorflow/core/common_runtime/gpu/gpu_device.cc:642] Could not identify NUMA node of /job:localhost/replica:0/task:0/gpu:0, defaulting to 0. Your kernel may not have been built with NUMA support.
2017-12-26 02:30:02.406679: I tensorflow/compiler/xla/service/platform_util.cc:58] platform CUDA present with 1 visible devices
2017-12-26 02:30:02.406746: I tensorflow/compiler/xla/service/platform_util.cc:58] platform Host present with 4 visible devices
2017-12-26 02:30:02.407489: I tensorflow/compiler/xla/service/service.cc:198] XLA service 0x29c9470 executing computations on platform Host. Devices:
2017-12-26 02:30:02.407540: I tensorflow/compiler/xla/service/service.cc:206]   StreamExecutor device (0): <undefined>, <undefined>
2017-12-26 02:30:02.408398: I tensorflow/compiler/xla/service/platform_util.cc:58] platform CUDA present with 1 visible devices
2017-12-26 02:30:02.408446: I tensorflow/compiler/xla/service/platform_util.cc:58] platform Host present with 4 visible devices
2017-12-26 02:30:02.409154: I tensorflow/compiler/xla/service/service.cc:198] XLA service 0x2a193b0 executing computations on platform CUDA. Devices:
2017-12-26 02:30:02.409199: I tensorflow/compiler/xla/service/service.cc:206]   StreamExecutor device (0): GP10B, Compute Capability 6.2
b'Hello, TensorFlow!'
```
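Buried in the log above, the line that matters is the one naming the device. A small sketch that extracts it (one log line is inlined here; on the board, redirect the script's stderr to a file and grep that instead):

```shell
# A single line from the TensorFlow startup log, inlined for this sketch.
logline='2017-12-26 02:30:00.978096: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties: name: GP10B'
# Strip everything up to and including "name: " to leave the device name.
device=$(echo "$logline" | sed -n 's/.*name: //p')
echo "TensorFlow sees GPU: $device"
```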
OpenCV 3.2
Reference: https://docs.opencv.org/3.2.0/d6/d15/tutorial_building_tegra_cuda.html
Download the OpenCV 3.2 source

```shell
wget https://github.com/opencv/opencv/archive/3.2.0.zip
unzip 3.2.0.zip
cd opencv-3.2.0/
```
Install dependencies

```shell
sudo apt-get install \
    cmake \
    libglew-dev \
    libtiff5-dev \
    zlib1g-dev \
    libjpeg-dev \
    libpng12-dev \
    libjasper-dev \
    libavcodec-dev \
    libavformat-dev \
    libavutil-dev \
    libpostproc-dev \
    libswscale-dev \
    libeigen3-dev \
    libtbb-dev \
    libgtk2.0-dev \
    pkg-config
```
Configure, build, and install

```shell
mkdir build
cd build
cmake \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_INSTALL_PREFIX=/usr \
    -DBUILD_PNG=OFF \
    -DBUILD_TIFF=OFF \
    -DBUILD_TBB=OFF \
    -DBUILD_JPEG=OFF \
    -DBUILD_JASPER=OFF \
    -DBUILD_ZLIB=OFF \
    -DBUILD_EXAMPLES=ON \
    -DBUILD_opencv_java=OFF \
    -DBUILD_opencv_python2=ON \
    -DBUILD_opencv_python3=ON \
    -DBUILD_PYTHON_SUPPORT=ON \
    -DENABLE_PRECOMPILED_HEADERS=OFF \
    -DWITH_OPENCL=OFF \
    -DWITH_OPENMP=OFF \
    -DWITH_FFMPEG=ON \
    -DWITH_GSTREAMER=OFF \
    -DWITH_GSTREAMER_0_10=OFF \
    -DWITH_CUDA=ON \
    -DWITH_GTK=ON \
    -DWITH_VTK=OFF \
    -DWITH_TBB=ON \
    -DWITH_1394=OFF \
    -DWITH_OPENEXR=OFF \
    -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-8.0 \
    -DCUDA_ARCH_BIN=6.2 \
    -DCUDA_ARCH_PTX="" \
    -DINSTALL_C_EXAMPLES=ON \
    -DINSTALL_TESTS=OFF \
    -DOPENCV_TEST_DATA_PATH=../opencv_extra/testdata \
    ..
# The build takes about an hour
make -j4
sudo make install
```
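After `sudo make install`, check that Python picks up the new build. A sketch; the version string is simulated here (on the board, set it with `ver=$(python3 -c "import cv2; print(cv2.__version__)")` instead):

```shell
ver=3.2.0   # stand-in value; on the TX2, read it from the cv2 module as noted above
case "$ver" in
  3.2.*) status="OpenCV $ver OK" ;;
  *)     status="unexpected OpenCV version: $ver" ;;
esac
echo "$status"
```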
Author: yanjie
2017.12.26