Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
Updated May 27, 2021 - C++
TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.
PyTorch, ONNX, and TensorRT implementation of YOLOv4
Implementation of popular deep learning networks with TensorRT network definition API
An easy-to-use PyTorch to TensorRT converter
Deep Learning API and server in C++14, with support for Caffe, Caffe2, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost, and TSNE
YOLOv4, YOLOv4-tiny, YOLOv3, and YOLOv3-tiny implemented in TensorFlow 2.0 and Android. Convert YOLOv4 .weights to TensorFlow, TensorRT, and TFLite.
Deep learning gateway on Raspberry Pi and other edge devices
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b)/ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusing for quantization. Deployment: TensorRT, FP32/FP16/INT8 (PTQ calibration), op adaptation (upsample), dynamic shape.
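The post-training INT8 path described above can be sketched in plain Python: calibrate a symmetric scale from observed activation ranges, then quantize and dequantize through that scale. This is a minimal illustration of the arithmetic only; the function names are hypothetical and not micronet's or TensorRT's actual API.

```python
# Minimal sketch of symmetric post-training INT8 quantization (PTQ).
# Illustrative only -- not micronet's or TensorRT's real interface.

def calibrate_scale(samples):
    """Pick a symmetric scale so the largest |value| maps to code 127."""
    max_abs = max(abs(v) for v in samples)
    return max_abs / 127.0 if max_abs else 1.0

def quantize(x, scale):
    """Map a float to an int8 code clamped to [-127, 127]."""
    q = round(x / scale)
    return max(-127, min(127, q))

def dequantize(q, scale):
    """Recover an approximate float from an int8 code."""
    return q * scale

acts = [-1.5, -0.2, 0.0, 0.7, 3.0]    # pretend calibration activations
scale = calibrate_scale(acts)         # 3.0 / 127
codes = [quantize(v, scale) for v in acts]
recon = [dequantize(q, scale) for q in codes]
```

The reconstruction error per value is bounded by about half the scale, which is why calibration (choosing the clipping range) dominates INT8 accuracy in practice.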
HyperPose: Fast and Flexible Human Pose Estimation
Something like `trtorch.runtime.execute_engine` vs. `runtime.RunCudaEngine`
Fast and accurate object detection with end-to-end GPU optimization
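Detection pipelines like the one above typically finish with non-maximum suppression (NMS) to drop duplicate boxes. A minimal pure-Python version for reference (helper names are illustrative, not this repo's code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

GPU-optimized runtimes fuse this step into the engine (TensorRT ships an NMS plugin), but the logic is the same greedy suppression shown here.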
TensorFlow models accelerated with NVIDIA TensorRT
Real-time pose estimation accelerated with NVIDIA TensorRT
This repo is based on Detectron2 and CenterNet
High-performance multiple object tracking based on YOLO, Deep SORT, and KLT
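The tracking-by-detection pattern behind the entry above associates each frame's detections with existing tracks, commonly by IoU overlap (Deep SORT adds appearance features on top). A greedy IoU-only association sketch, with hypothetical names rather than the repo's actual code:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def associate(tracks, detections, min_iou=0.3):
    """Greedily match track boxes to new detections by descending IoU.
    Returns (matches, unmatched_track_ids, unmatched_detection_ids)."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    used_t, used_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score < min_iou or ti in used_t or di in used_d:
            continue
        matches.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    unmatched_t = [i for i in range(len(tracks)) if i not in used_t]
    unmatched_d = [i for i in range(len(detections)) if i not in used_d]
    return matches, unmatched_t, unmatched_d
```

Unmatched detections typically spawn new tracks and unmatched tracks age out after a few frames; production trackers replace the greedy pass with Hungarian assignment.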
[ICLR 2020] "FasterSeg: Searching for Faster Real-time Semantic Segmentation" by Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, Zhangyang Wang
Image classification with NVIDIA TensorRT from TensorFlow models.
A simple, efficient, easy-to-use NVIDIA TensorRT wrapper for CNNs, supporting C++ and Python.
YOLOv3 implementation in TensorFlow 2.3.1
Convert MMDetection models to TensorRT; supports FP16, INT8, batched input, dynamic shape, etc.
A library for high performance deep learning inference on NVIDIA GPUs.
Adlik: Toolkit for Accelerating Deep Learning Inference
A Wide Range of Custom Functions for YOLOv4, YOLOv4-tiny, YOLOv3, and YOLOv3-tiny Implemented in TensorFlow, TFLite, and TensorRT.
GPU-accelerated deep learning inference applications for Raspberry Pi / Jetson Nano / Linux PC using the TensorFlow Lite GPU delegate or TensorRT
Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics.
Can a tmfile be produced directly from training? The tengine-convert-tool conversion fails with an error.
tengine-lite library version: 1.4-dev

Get input tensor failed
Or is there an example of how to train a model and produce the tmfile shown below?
![Screenshot from 2021-05-27 07-01-46](https://user-images.githubusercontent.com/40915044/11