Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
TensorRT is a C++ library for high-performance inference on NVIDIA GPUs and deep learning accelerators.
Deep learning gateway on Raspberry Pi and other edge devices
An easy-to-use PyTorch to TensorRT converter
PyTorch, ONNX and TensorRT implementation of YOLOv4
HyperPose: A Flexible Library for Real-time Human Pose Estimation
YOLOv4, YOLOv4-tiny, YOLOv3 and YOLOv3-tiny implemented in TensorFlow 2.0 and on Android. Converts YOLOv4 .weights to TensorFlow, TensorRT and TFLite
TensorFlow models accelerated with NVIDIA TensorRT
Implementation of popular deep learning networks with TensorRT network definition APIs
Fast and accurate object detection with end-to-end GPU optimization
Image classification with NVIDIA TensorRT from TensorFlow models.
[ICLR 2020] "FasterSeg: Searching for Faster Real-time Semantic Segmentation" by Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, Zhangyang Wang
Real-time pose estimation accelerated with NVIDIA TensorRT
Bonnet: An Open-Source Training and Deployment Framework for Semantic Segmentation in Robotics.
Explore the Capabilities of the TensorRT Platform
Reimplementation of RetinaFace using C++ and TensorRT
Deep Learning Benchmarking Suite
Reference TensorFlow code for named entity tagging
Optimized inference engine for OpenNMT models
Darknet -> TensorRT. TensorRT 7 YOLOv3/YOLOv4 using raw Darknet *.weights and *.cfg files. If the wrapper is useful to you, please star it.
Fast Object Detector for the Jetson Nano