inference
Here are 804 public repositories matching this topic...
ncnn is a high-performance neural network inference framework optimized for the mobile platform
Updated Jul 7, 2022 - C++
Example
Updated Jul 7, 2022 - Jupyter Notebook
Hi,
I have tried out both loss.backward() and model_engine.backward(loss) in my code, and I have observed several subtle differences. For one, retain_graph=True does not work with model_engine.backward(loss). This is creating a problem for me, since for some reason the buffers are not retained between runs.
Please look into this if you could.
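For context, the retain_graph behavior being compared can be reproduced in plain PyTorch. A minimal sketch, assuming PyTorch is installed (DeepSpeed's model_engine.backward may manage graph retention differently, which is the point of the report):

```python
# Minimal PyTorch sketch of why retain_graph matters: a second backward
# pass through the same graph only works if the first call kept the
# intermediate buffers alive.
import torch

x = torch.tensor([2.0], requires_grad=True)
y = x * x                      # graph: y = x^2, so dy/dx = 2x

y.backward(retain_graph=True)  # keep buffers; a second pass is allowed
first_grad = x.grad.item()     # 2 * 2.0 = 4.0
x.grad.zero_()
y.backward()                   # succeeds only because of retain_graph above
second_grad = x.grad.item()
```

Without retain_graph=True on the first call, the second y.backward() raises a RuntimeError because the saved buffers have been freed.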
Updated Feb 10, 2022 - Python
Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
Updated Jul 7, 2022 - C++
Runtime type system for IO decoding/encoding
Updated Apr 19, 2022 - TypeScript
Dear Colossal-AI team,
There are a few features in mind that I think would be helpful to the project, and I wanted to ask which of them might be most useful so I can start implementing it.
Loki-Promtail is a tool for monitoring distributed logs with Grafana. Connecting the Distributed Logger to it and extracting labels from the log structure would be a user-friendly system.
Description
If the Triton server build fails for any reason, I have to delete the /tmp/citritonbuild/<backend> folders to prevent the next rebuild from failing with a "git repo already exists" error.
Triton Information
r21.05
I am building the Triton server myself.
To Reproduce
1. Uninstall one of the dependencies needed by a backend.
2. Run build.py with all the backends enabled.
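One workaround for the stale-checkout error described above is to clear the scratch directory before re-running build.py. A minimal sketch in Python; the /tmp/citritonbuild path is taken from the report, and whether your build uses a different scratch location is worth checking:

```python
# Remove the leftover scratch checkouts from a failed Triton build so the
# next run of build.py does not hit "git repo already exists".
# The path comes from the report above; adjust it if your build differs.
import pathlib
import shutil

scratch = pathlib.Path("/tmp/citritonbuild")
if scratch.exists():
    shutil.rmtree(scratch)  # deletes every <backend> subfolder in one go
```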
TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including its cross-platform capability, high performance, model compression, and code pruning. Based on ncnn and Rapidnet, TNN further strengthens support and performance optimization for mobile devices, and also draws on the extensibility and high performance of existing open-source efforts. TNN has been deployed in multiple apps from Tencent, such as Mobile QQ, Weishi, and Pitu. Contributions are welcome: collaborate with us and make TNN a better framework.
Updated Jul 7, 2022 - C++
OpenVINO™ Toolkit repository
Updated Jul 7, 2022 - C++
An easy to use PyTorch to TensorRT converter
Updated Jun 22, 2022 - Python
TypeDB: a strongly-typed database
Updated Jul 7, 2022 - Java
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Updated Jul 7, 2022 - Python
Updated Jul 6, 2022 - TypeScript
LightSeq: A High Performance Library for Sequence Processing and Generation
Updated Jul 6, 2022 - Cuda
TensorFlow template application for deep learning
Updated Jan 7, 2022 - Python
Acceleration package for neural networks on multi-core CPUs
Updated Jul 16, 2021 - C
DELTA is a deep learning based natural language and speech processing platform.
Updated May 26, 2022 - Python
Deploy an ML inference service on a budget in less than 10 lines of code.
Updated Aug 23, 2021 - Python
Auto-Installer is currently not supported on Windows platforms. TVM and TensorRT in particular would need special care.
High-efficiency floating-point neural network inference operators for mobile, server, and Web
Updated Jul 7, 2022 - C
Hi, I am very interested in your project and wonder whether you need contributors and how I could make my own contribution.
Pytorch-Named-Entity-Recognition-with-BERT
Updated May 6, 2021 - Python
Efficient, scalable and enterprise-grade CPU/GPU inference server for
Updated Jul 6, 2022 - Python
Context
The user-provided node_name_mapping is not used when running TF SavedModels; it is only used when running TF frozen graphs.
TF SavedModels have SignatureDefs, which are quite similar to what node_name_mapping is in Neuropod.
Since SavedModel already has this functionality (and other projects like TF Serving use it), it made sense to use that data as the node name mapping.
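To illustrate the parallel: a SignatureDef and Neuropod's node_name_mapping both map user-facing I/O names to concrete graph tensor names. A hypothetical sketch (the tensor and key names below are made up for illustration, not taken from either project):

```python
# Illustrative only: a TF SignatureDef and Neuropod's node_name_mapping
# play the same role -- mapping logical input/output names to the
# underlying graph tensor names. All concrete names are hypothetical.
signature_def_like = {
    "inputs": {"image": "serving_default_image:0"},
    "outputs": {"logits": "StatefulPartitionedCall:0"},
}

def resolve(kind, logical_name, mapping=signature_def_like):
    """Return the concrete tensor name for a logical input/output name."""
    return mapping[kind][logical_name]
```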
'max_request_size' seems to refer to bytes, not MB.
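The practical impact of that units mismatch: a caller who sets the limit believing it is megabytes ends up with a cap roughly a million times smaller than intended. A tiny illustration (the variable names are hypothetical, not from the project's API):

```python
# Hypothetical illustration of the bytes-vs-MB mismatch: a caller who
# believes max_request_size is in MB sets a cap ~10^6 times too small.
MB = 1024 * 1024
intended_limit_mb = 10
misread_limit_bytes = intended_limit_mb        # value interpreted as bytes
correct_limit_bytes = intended_limit_mb * MB   # what the caller meant
undershoot_factor = correct_limit_bytes // misread_limit_bytes
```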
Neural network inference engine that delivers GPU-class performance for sparsified models on CPUs
Updated Jul 7, 2022 - Python
I previously figured out a way to get the (x, y, z) data points for each frame from one hand, but I'm not sure how to do that for the new holistic model they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data and print it to a text file, or at least give me some directions as to how to approach this?
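A possible starting point, assuming the mediapipe package is installed; the result attributes used below (left_hand_landmarks, right_hand_landmarks, pose_landmarks, face_landmarks) follow MediaPipe's Holistic solution, but verify them against the version you have:

```python
# Sketch for extracting MediaPipe Holistic landmarks to a text file.

def landmarks_to_rows(landmark_list, label):
    """Flatten one landmark list into 'label,index,x,y,z' text rows."""
    if landmark_list is None:  # holistic outputs can be absent per frame
        return []
    return [f"{label},{i},{lm.x:.5f},{lm.y:.5f},{lm.z:.5f}"
            for i, lm in enumerate(landmark_list.landmark)]

def dump_holistic_landmarks(rgb_image, out_path="landmarks.txt"):
    """Run Holistic on one RGB frame and write all landmarks to a file."""
    import mediapipe as mp  # deferred so the helper above works standalone

    with mp.solutions.holistic.Holistic(static_image_mode=True) as holistic:
        results = holistic.process(rgb_image)
        rows = (landmarks_to_rows(results.left_hand_landmarks, "left_hand")
                + landmarks_to_rows(results.right_hand_landmarks, "right_hand")
                + landmarks_to_rows(results.pose_landmarks, "pose")
                + landmarks_to_rows(results.face_landmarks, "face"))
    with open(out_path, "w") as f:
        f.write("\n".join(rows))
```

The x, y values are normalized to the image width/height and z is a relative depth, so scale them yourself if you need pixel coordinates.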