ncnn is a high-performance neural network inference framework optimized for the mobile platform
Updated Oct 23, 2020 - C++
The firrtl dialect does some fancy things with SSA names: it pulls the 'name' attribute into the result name of the operation, and correctly round-trips this through the parser. The RTL dialect should do something similar, but limited to rtl.wire specifically. Instead of printing:
%42 = rtl.wire { name = "foo" } : i8
We should print:
%foo = rtl.wire : i8
The real pay
Since #384 we can make use of the IR verification features of MLIR.
Our BConv op has a few parameter combinations where it can throw during Init or Prepare. In the converter we try to make sure that none of those cases will ev
Currently Tracy assumes that users will be posting screenshots of frames via the tracy::Profiler::SendFrameImage API. Because of this, Tracy starts an additional thread and includes code for performing on-the-fly DXT1 texture compression. Since we never need this, it'd be nice to be able to toggle all of that off to reduce the code-size impact and avoid the extra runtime thread.