TensorRT Acceleration Resource Roundup

Recently I have wanted to use TensorRT to speed up some object-detection models. I had read some material on this before, but it was all scattered and a bookmark folder is awkward to work with, so I am collecting the links in this post for reference.

marvis/pytorch-caffe-darknet-convert: convert between pytorch, caffe prototxt/weights and darknet cfg/weights

fengbingchun's blog posts on TensorRT, which walk through some of the samples

dusty-nv/ros_deep_learning: Deep-learning nodes for ROS with support for NVIDIA Jetson TX1/TX2 and TensorRT. Can be dropped into ROS as a node.

juliebernauer/tx1-lab2 A GTC 2016 lab that runs Caffe-based object detection on the TX1; possibly useful.

dkorobchenko-nv/tensorrt-demo: TensorRT demo. A demo built with TensorRT 3.

JungmoKoo/Caffe_TensorRT Not entirely clear what it does, but withGIE.cpp under its src directory shows the concrete TensorRT steps.

NVIDIA-Jetson/redtail: AI framework for autonomous mobile robotics. The ROS node page of its wiki mentions YOLO.

JetPack 3.1 Doubles Jetson’s Low-Latency Inference Performance | Parallel Forall Mentions that YOLO can be accelerated.

AastaNV/Face-Recognition: Demonstrate Plugin API for TensorRT2.1 A worked example worth referring to; an acceleration demo provided officially.

TensorRT YOLO inference error - NVIDIA Developer Forums A successful implementation, though with some errors.

Trying out TensorRT on Jetson TX2

TensorRT 2 初探秘 (一) (A First Look at TensorRT 2, Part 1) - CSDN blog

NVidia TensorRT 运行 Caffe 模型 (Running a Caffe Model with NVIDIA TensorRT) - CSDN blog. Lays out a fairly clear workflow.

Error with Concatenate Layer in TensorRT2 - NVIDIA Developer Forums Describes a way to set the network inputs to kHALF.

In computational mode FP16, TensorRT can accept input or output data in either FP32 or FP16.
You can use any of the following combinations for input and output:
• Input FP32, output FP32
• Input FP16, output FP32
• Input FP16, output FP16
• Input FP32, output FP16

static void setAllNetworkInputsToHalf(INetworkDefinition* network)
{
    for (int i = 0; i < network->getNbInputs(); i++)
        network->getInput(i)->setType(DataType::kHALF);
}

// Call this on the parsed network before building the engine:
setAllNetworkInputsToHalf(network);
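For context, here is a minimal sketch of the builder-side setup that usually goes with this, assuming the TensorRT 2/3-era C++ API (setHalf2Mode, buildCudaEngine). The output-side helper below simply mirrors the input helper above and is my own illustration, not taken from the forum post:

// Minimal sketch, assuming the TensorRT 2/3-era C++ API.
#include "NvInfer.h"

using namespace nvinfer1;

// Analogous to setAllNetworkInputsToHalf() above, but for the network outputs.
static void setAllNetworkOutputsToHalf(INetworkDefinition* network)
{
    for (int i = 0; i < network->getNbOutputs(); i++)
        network->getOutput(i)->setType(DataType::kHALF);
}

// After the network has been defined or parsed, any of the four input/output
// combinations listed above can be selected before building the engine:
//   setAllNetworkInputsToHalf(network);    // FP16 input  (optional)
//   setAllNetworkOutputsToHalf(network);   // FP16 output (optional)
//   builder->setHalf2Mode(true);           // run the engine itself in FP16
//   ICudaEngine* engine = builder->buildCudaEngine(*network);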

Where the samples live on Jetson:
You can refer to our tensorRT sample which is located at ‘/usr/src/gie_samples/’.
Custom layers can be handled with the following approach.
For example, split the network into: input -> networkA -> networkSelf -> networkB -> output

NetworkA and networkB can run inference directly via TensorRT.
NetworkSelf needs to be implemented via CUDA.

So, the flow will be:

IExecutionContext *contextA = engineA->createExecutionContext(); // create networkA context
IExecutionContext *contextB = engineB->createExecutionContext(); // create networkB context
<...>
contextA->enqueue(batchSize, buffersA, stream, nullptr);  // inference networkA
myLayer(outputFromA, inputToB, stream);                   // inference networkSelf, your CUDA code is here!
contextB->enqueue(batchSize, buffersB, stream, nullptr);  // inference networkB
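Below is a slightly fuller sketch of the same flow, assuming engineA/engineB were built separately beforehand and that myLayer() is your own CUDA routine (a placeholder name carried over from the snippet above); device buffer allocation is left out:

// Sketch only: engineA/engineB, buffersA/buffersB, outputFromA/inputToB and
// myLayer() are placeholders from the snippet above, not a real TensorRT API.
#include <cuda_runtime.h>
#include "NvInfer.h"

using namespace nvinfer1;

// Your own CUDA implementation of the middle part of the network (hypothetical).
void myLayer(void* outputFromA, void* inputToB, cudaStream_t stream);

void runSplitNetwork(ICudaEngine* engineA, ICudaEngine* engineB,
                     void** buffersA, void** buffersB,
                     void* outputFromA, void* inputToB, int batchSize)
{
    IExecutionContext* contextA = engineA->createExecutionContext();
    IExecutionContext* contextB = engineB->createExecutionContext();

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    contextA->enqueue(batchSize, buffersA, stream, nullptr); // networkA on TensorRT
    myLayer(outputFromA, inputToB, stream);                  // networkSelf in your own CUDA code
    contextB->enqueue(batchSize, buffersB, stream, nullptr); // networkB on TensorRT

    cudaStreamSynchronize(stream);                           // wait for the whole pipeline

    cudaStreamDestroy(stream);
    contextA->destroy();
    contextB->destroy();
}

Because everything is enqueued on the same stream, the three stages run back to back on the GPU without any intermediate host synchronization.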