gpu
Here are 2,607 public repositories matching this topic...
The fastai deep learning library
Updated Dec 23, 2021 - Jupyter Notebook
When drawing particles in a 3D scene using the GGUI system:
scene.particles(vertices, radius, color, per_vertex_color)
it can currently only draw a group of particles that all share the same radius, but I want to draw a bunch of particles where each particle has a different radius.
I wish a per_vertex_radius parameter could be added so that we can specify the radius of each individual particle.
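For context, a minimal GGUI sketch of the current call, with the requested parameter shown commented out (per_vertex_radius and the radii/colors fields are hypothetical; this assumes a recent Taichi with the ti.ui module):

```python
import taichi as ti

ti.init(arch=ti.gpu)

N = 256
centers = ti.Vector.field(3, dtype=ti.f32, shape=N)

@ti.kernel
def init_particles():
    for i in centers:
        centers[i] = ti.Vector([ti.random(), ti.random(), ti.random()])

init_particles()

window = ti.ui.Window("particles", (800, 600))
canvas = window.get_canvas()
scene = ti.ui.Scene()
camera = ti.ui.make_camera()
camera.position(0.5, 0.5, 2.5)
camera.lookat(0.5, 0.5, 0.0)

while window.running:
    scene.set_camera(camera)
    scene.ambient_light((0.7, 0.7, 0.7))
    scene.point_light(pos=(0.5, 1.5, 1.5), color=(1.0, 1.0, 1.0))
    # Today: one radius shared by the whole batch of particles.
    scene.particles(centers, radius=0.01, color=(0.4, 0.6, 0.9))
    # Requested (hypothetical) extension, mirroring per_vertex_color:
    # scene.particles(centers, per_vertex_radius=radii, per_vertex_color=colors)
    canvas.scene(scene)
    window.show()
```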
Build and run Docker containers leveraging NVIDIA GPUs
Updated Dec 9, 2021 - Makefile
At the moment the relu_layer op doesn't allow threshold configuration, while the legacy RELU op does.
We should add a configuration option to relu_layer.
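For reference, a thresholded ReLU zeroes everything at or below a configurable cutoff and passes larger values through unchanged. A minimal NumPy sketch of that semantics (illustrative only, not the op's actual API):

```python
import numpy as np

def relu(x, threshold=0.0):
    # Standard ReLU when threshold == 0; with a threshold t, values <= t
    # are zeroed and values > t pass through unchanged.
    return np.where(x > threshold, x, 0.0)

x = np.array([-2.0, 0.5, 1.0, 3.0])
print(relu(x))                 # [0.  0.5 1.  3. ]
print(relu(x, threshold=1.0))  # [0. 0. 0. 3.]
```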
Play with fluids in your browser (works even on mobile)
Updated Nov 11, 2021 - JavaScript
Open deep learning compiler stack for cpu, gpu and specialized accelerators
Updated Dec 28, 2021 - Python
A python library built to empower developers to build applications and systems with self-contained Computer Vision capabilities
Updated Dec 4, 2021 - Python
I am working on creating a WandbCallback for Weights and Biases. I am glad that CatBoost has a callback system in place, but it would be great if we could extend the interface.
The current callback only supports after_iteration, which takes info. Taking inspiration from the XGBoost callback system, it would be great if we could also have before_iteration (taking info), before_training, and after_training.
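A sketch of what that might look like. The after_iteration hook and the callbacks argument to fit exist in CatBoost today (assuming a recent version); before_training, before_iteration, and after_training are the proposed, hypothetical extensions:

```python
from catboost import CatBoostClassifier

class WandbCallback:
    # Supported today: called after every boosting iteration; returning
    # False stops training early. info is assumed to expose the iteration
    # index and the current metric values.
    def after_iteration(self, info):
        print(info.iteration, info.metrics)
        return True

    # Proposed (hypothetical) hooks, mirroring the XGBoost callback API:
    # def before_training(self, info): ...
    # def before_iteration(self, info): ...
    # def after_training(self, info): ...

# X_train and y_train are assumed to exist.
model = CatBoostClassifier(iterations=10, verbose=False)
model.fit(X_train, y_train, callbacks=[WandbCallback()])
```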
Hi,
I have tried out both loss.backward() and model_engine.backward(loss) for my code. There are several subtle differences that I have observed; for one, retain_graph=True does not work with model_engine.backward(loss). This is creating a problem, since the buffers are not being retained every time I run the code, for some reason.
Please look into this if you could.
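A minimal sketch of the two code paths being compared, assuming a model, data loader, and DeepSpeed config dict already exist:

```python
import torch
import deepspeed

# model, loader, and ds_config (a DeepSpeed config dict) are assumed to exist.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for batch, labels in loader:
    outputs = model_engine(batch)
    loss = torch.nn.functional.cross_entropy(outputs, labels)

    # Plain PyTorch: the autograd graph can be kept for a second backward pass.
    # loss.backward(retain_graph=True)

    # DeepSpeed: backward goes through the engine so loss scaling, ZeRO
    # partitioning, etc. are applied; the report above is that there is no
    # equivalent of retain_graph=True on this path.
    model_engine.backward(loss)
    model_engine.step()
```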
Open3D: A Modern Library for 3D Data Processing
Updated Dec 28, 2021 - C++
I want to preemptively start this thread to survey for suggestions. A cursory search led me to this promising repository: https://github.com/enigo-rs/enigo
Since closing the window is a common point of failure, that will be the focus for the first pass of testing as I learn how to use the library.
Components for testing:
- bridge
- editor
- renderer
- settings
- window
Real-Time and Accurate Full-Body Multi-Person Pose Estimation&Tracking System
Updated Dec 22, 2021 - Python
Somehow some of these names start with fmt_ocl_ while most start with fmt_opencl_. Is this intentional? It causes the fmt_ocl_ to be listed/tested/benchmarked first. Then there's fmt_opencl_1otus5 with a 1 (one) in there. So the first OpenCL formats become:
john_register_one(&fmt_ocl_cryptosafe);
john_register_one(&fmt_ocl_cryptsha1);
john_register_one(&fmt_ocl_KeePass);
john
H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.
Updated Dec 28, 2021 - Jupyter Notebook
A flexible framework of neural networks for deep learning
Updated Jun 10, 2021 - Python
Description
Change the signature of cupy.{percentile,quantile} to provide exactly the same API as NumPy.
I think it's OK to implement overwrite_input as a no-op (just ignore the option).
Additional Information
No response
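A rough sketch of the kind of change being asked for, shown here as a wrapper rather than the real cupy internals: the extra overwrite_input argument is accepted for NumPy compatibility and simply ignored.

```python
import cupy

def percentile(a, q, axis=None, out=None, overwrite_input=False,
               interpolation="linear", keepdims=False):
    # overwrite_input is accepted purely for NumPy API compatibility and is
    # treated as a no-op: the input buffer is never reused here.
    return cupy.percentile(a, q, axis=axis, out=out,
                           interpolation=interpolation, keepdims=keepdims)

x = cupy.arange(10, dtype=cupy.float64)
print(percentile(x, 50, overwrite_input=True))  # 4.5, input left untouched
```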
a language for fast, portable data-parallel computation
Updated Dec 28, 2021 - C++
Based on @karthikeyann's work in rapidsai/cudf#9767, I'm wondering whether it makes sense to consider removing the defaults for the stream parameters in various detail functions. It is pretty surprising how often these are getting missed.
The most common case seems to be in factory functions and various ::create functions. Maybe just do it for those?
PipelineAI Kubeflow Distribution
Updated Apr 24, 2020 - Jsonnet
Environment
1. System environment:
2. MegEngine version: 1.6.0rc1
3. Python version: Python 3.8.10
The program gets stuck at net.load when I try to use MegFlow. I waited for more than 10 minutes and there was no sign of it finishing.
Deep Learning GPU Training System
Updated Jun 13, 2020 - HTML
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
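For context, the user-facing operation in question, shown via the ArrayFire Python bindings (assuming the arrayfire package is installed); the proposed change is purely internal to how the backend launches kernels:

```python
import arrayfire as af

a = af.randu(3, 2)
b = af.randu(3, 2)
c = af.randu(3, 2)

# Joining several arrays along dimension 0; today this can lower to multiple
# backend kernel launches, and the proposal is to fuse them into one call.
out = af.join(0, a, b, c)
print(out.dims())  # (9, 2)
```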
There may be a typo in the error message in pytorch/aten/src/THC/THCGenerateByteTypes.h: on line 2, #error "You must define THC_GENERIC_FILE before including THGenerateByteTypes.h", the "THGenerateByteTypes.h" should be "THCGenerateByteTypes.h".
By the way, this file is a general file for generating code for different scalar_t types, so I think a better name would be THGenerateTypes.h.