Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
Updated
Aug 20, 2020 - C++
TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.
Awesome Vulkan ecosystem
GameStream client for PCs (Windows, Mac, Linux, and Steam Link)
A community-run, 5-day PyTorch Deep Learning Bootcamp
GameStream client for Android
AMD & NVIDIA eGPUs for all Thunderbolt Macs.
Unofficial implementation of "Image Inpainting for Irregular Holes Using Partial Convolutions". Try at: www.fixmyphoto.ai
OpenCL integration for Python, plus shiny features
GameStream client for ChromeOS
TensorFlow models accelerated with NVIDIA TensorRT
Check for NVIDIA GPU driver updates!
bash script for managing NVIDIA web drivers on macOS
A high-performance, cross-platform inference engine; Anakin runs on x86 CPU, ARM, NVIDIA GPU, AMD GPU, Bitmain, and Cambricon devices.
OpenCL for Rust
All-in-one AI container for rapid prototyping
Display-agnostic acceleration of macOS applications using external GPUs.
Report needed documentation
We do not have documentation specifying the different treelite Operator values that FIL supports. (https://github.com/dmlc/treelite/blob/46c8390aed4491ea97a017d447f921efef9f03ef/include/treelite/base.h#L40)
Report needed documentation
https://github.com/rapidsai/cuml/blob/branch-0.15/cpp/test/sg/fil_test.cu
There are multiple places in the fil_test.cu file