ncnn is a high-performance neural network inference framework optimized for the mobile platform
It needs to check that the contracting dimensions are the same size.
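A minimal sketch of such a shape check, in Python (the function name and rank-2 restriction are assumptions for illustration, not any real MLIR API):

```python
def verify_matmul_shapes(lhs_shape, rhs_shape):
    """Verify that the contracting dimensions of a matmul agree.

    For lhs of shape (m, k) and rhs of shape (k2, n), the contracting
    dimensions are k and k2; they must be the same size.
    Returns the result shape (m, n) on success.
    """
    if len(lhs_shape) != 2 or len(rhs_shape) != 2:
        raise ValueError("expected rank-2 operands")
    m, k = lhs_shape
    k2, n = rhs_shape
    if k != k2:
        raise ValueError(f"contracting dimensions differ: {k} vs {k2}")
    return (m, n)
```

For example, `verify_matmul_shapes((4, 8), (8, 3))` returns `(4, 3)`, while `(4, 8)` against `(7, 3)` is rejected.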
In test/create-cores/test_dma1.mlir, -aie-lower-memcpy converts
AIE.memcpy @token0(1, 2) (%t11 : <%buf0, 0, 256>, %t22 : <%buf1, 0, 256>) : (memref<256xi32>, memref<256xi32>)
AIE.memcpy @token1(1, 2) (%t11 : <%buf0, 0, 256>, %t33 : <%buf2, 0, 256>) : (memref<256xi32>, memref<256xi32>)
into the following (only the %t11 side is shown):
%2 = AIE.mem(%0) {
%15 = AIE.dmaStart(MM2S0, ^bb1
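The lowering above can be modeled roughly as follows: each memcpy is split into an MM2S ("memory-map to stream") DMA start on the source buffer's side and an S2MM ("stream to memory-map") start on the destination side, with transfers grouped per buffer. This is a hedged sketch of that idea only; the grouping strategy and channel names are assumptions, not the actual -aie-lower-memcpy implementation:

```python
from collections import defaultdict

def lower_memcpys(memcpys):
    """Toy model of a memcpy-to-DMA lowering.

    Each (src_buf, dst_buf, offset, length) memcpy becomes one MM2S
    transfer on the source buffer and one S2MM transfer on the
    destination buffer; transfers are grouped per buffer so each side
    ends up with a single DMA program.
    """
    programs = defaultdict(list)
    for src_buf, dst_buf, offset, length in memcpys:
        programs[src_buf].append(("MM2S0", offset, length))
        programs[dst_buf].append(("S2MM0", offset, length))
    return dict(programs)

# The two memcpys from the example, as (src, dst, offset, len) tuples.
progs = lower_memcpys([
    ("buf0", "buf1", 0, 256),
    ("buf0", "buf2", 0, 256),
])
# buf0 (the %t11 side) carries two outgoing MM2S transfers.
```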
A Halide-to-MLIR compiler.
A tree-walking interpreter, virtual machine, and JIT for the Lox language
MLIR mainline added the ability to synthesize getters/setters with properly camelCased, get/set-prefixed names; see this patch and this announcement.
We should merge from MLIR mainline to pick this up, and then adopt it in the CIRCT dialects.
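The naming convention involved can be illustrated with a small helper that derives camelCased, prefixed accessor names from a snake_case attribute name (illustrative only; MLIR's actual synthesis happens in TableGen/ODS, not in Python):

```python
def accessor_names(attr_name):
    """Derive get/set accessor names from a snake_case attribute name.

    For example, "result_type" yields ("getResultType", "setResultType").
    """
    parts = attr_name.split("_")
    camel = "".join(p.capitalize() for p in parts if p)
    return f"get{camel}", f"set{camel}"

# accessor_names("result_type") -> ("getResultType", "setResultType")
```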