Pinned issues
Please help us improve ONNX Runtime by participating in our survey
#5726, opened Nov 6, 2020 by natke
Open issues
index: 1 Got: 3 Expected: 416 index: 3 Got: 416 Expected: 3 Please fix either the inputs or the model.
#5819, opened Nov 16, 2020 by MuhammadAsadJaved
Memory usage with CUDA ExecutionProvider [ep:CUDA, type:performance, type:support]
#5801, opened Nov 13, 2020 by radikalliberal
Different output when running on CUDA (compared to CPU and Keras) [type:bug]
#5798, opened Nov 13, 2020 by bazukas
Memory keeps increasing with dynamic input shape of network [ep:DNNL, type:support]
#5796, opened Nov 13, 2020 by yflv-yanxia
Building with "--minimal_build" fails [component:build, feature: mobile, type:bug]
#5786, opened Nov 12, 2020 by Linux13524
Any suggestions to speed up ONNX model inference? [type:performance, type:support]
#5784, opened Nov 12, 2020 by quant-science
Build with DNNL execution provider failing on macOS, but working on Linux [component:build, ep:DNNL, type:bug]
#5783, opened Nov 12, 2020 by j-paulus
Any support for double-type tensors when loading a PyTorch ONNX model? [type:support]
#5782, opened Nov 12, 2020 by quant-science
How to load a PyTorch model with input shape (None, 32) using the C# inference API? [api:C#, type:support]
#5781, opened Nov 12, 2020 by quant-science
IR_VERSION and opset version support via the C API [api:Java, type:enhancement]
#5780, opened Nov 12, 2020 by jji2019
With the same input, the output is sometimes different [ep:CUDA, type:bug]
#5769, opened Nov 11, 2020 by feipxyz
Complete example of using the yolov3.onnx model for inference [status:duplicate, type:support]
#5765, opened Nov 11, 2020 by MuhammadAsadJaved
U8S8 QLinearMatMul NOT_IMPLEMENTED [component:operator, feature:quantization, type:support]
#5754, opened Nov 10, 2020 by volcacius
OpenVINO build failed (NuGet) [component:build, ep:OpenVINO, type:support]
#5749, opened Nov 10, 2020 by connordouglas1
Quantization for LSTM [feature: mobile, feature:quantization, type:enhancement]
#5747, opened Nov 10, 2020 by slevental
Op in first training step is much slower [component:training-core, type:performance, type:support]
#5715, opened Nov 5, 2020 by jupvfranco
TRT FP16 compilation fails on SSD model: "could not build engine for fused node" [ep:TensorRT, type:bug]
#5709, opened Nov 5, 2020 by joba01
Runtime error when converting T5 decoder with multiple dynamic axes inputs [type:bug]
#5646, opened Oct 30, 2020 by amanpreet692