gpu
Here are 2,284 public repositories matching this topic...
The fastai deep learning library
Updated Jul 27, 2021 - Jupyter Notebook
Build and run Docker containers leveraging NVIDIA GPUs
Updated Jun 23, 2021 - Makefile
At the moment the relu_layer op doesn't allow threshold configuration, while the legacy RELU op does.
We should add a threshold configuration option to relu_layer.
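For reference, a thresholded ReLU is a one-line change from the standard op; the sketch below uses plain NumPy and a hypothetical threshold parameter purely to illustrate the requested configuration, not the op's actual interface:

import numpy as np

def relu(x, threshold=0.0):
    # Thresholded ReLU: keep values strictly above `threshold`, zero out the rest.
    # `threshold` here stands in for the configuration option requested above.
    return np.where(x > threshold, x, 0.0)

print(relu(np.array([-1.0, 0.5, 2.0])))                 # [0.  0.5 2. ]
print(relu(np.array([-1.0, 0.5, 2.0]), threshold=1.0))  # [0. 0. 2.]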
Play with fluids in your browser (works even on mobile)
Updated Jun 9, 2021 - JavaScript
Open deep learning compiler stack for cpu, gpu and specialized accelerators
Updated Jul 28, 2021 - Python
A python library built to empower developers to build applications and systems with self-contained Computer Vision capabilities
Updated Jun 14, 2021 - Python
New Metric Request
It would be great to have FBeta, F2, or F0.5 metrics implemented without the need for a custom metric class defined by the user.
catboost version: 0.26
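Until an F-beta metric ships natively, the quantity itself is easy to compute outside the training loop; the scikit-learn snippet below only illustrates the requested metric, it is not CatBoost's custom-metric API:

from sklearn.metrics import fbeta_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# beta > 1 weights recall more heavily, beta < 1 weights precision more heavily.
print(fbeta_score(y_true, y_pred, beta=2))    # F2
print(fbeta_score(y_true, y_pred, beta=0.5))  # F0.5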
A flexible framework of neural networks for deep learning
Updated Jun 10, 2021 - Python
H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.
Updated Jul 28, 2021 - Jupyter Notebook
Real-Time and Accurate Full-Body Multi-Person Pose Estimation & Tracking System
Updated Jun 16, 2021 - Python
Hi,
I have tried out both loss.backward() and model_engine.backward(loss) for my code. There are several subtle differences I have observed; for one, retain_graph=True does not work with model_engine.backward(loss). This is creating a problem, since buffers are not being retained each time I run the code.
Please look into this if you can.
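For context, the behavior being asked for is what retain_graph=True provides in plain PyTorch; this minimal sketch shows only the stock PyTorch side, not the DeepSpeed model_engine.backward call or its supported keyword arguments, since that is exactly what the report is about:

import torch

x = torch.randn(4, requires_grad=True)
y = (x * 2).sum()

# retain_graph=True keeps the autograd graph alive so the same graph
# can be backpropagated through more than once.
y.backward(retain_graph=True)
y.backward()  # only works because the first call retained the graph
print(x.grad)  # gradients accumulate across the two backward passes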
Our users are often confused when the output from programs such as zip2john is very large (multi-gigabyte). Maybe we should identify and enhance these programs to print a message to stderr explaining that very large output is normal - either always, or only when the output size is above a threshold (e.g., 1 million bytes).
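The check itself is simple; since the *2john programs are written in C, the Python below is only a sketch of the suggested logic, with the 1,000,000-byte figure reused from above as the example threshold:

import sys

LARGE_OUTPUT_THRESHOLD = 1_000_000  # bytes; example threshold from above

def maybe_warn_large_output(num_bytes_written: int) -> None:
    # Warn on stderr so the note never mixes with the hash output on stdout.
    if num_bytes_written > LARGE_OUTPUT_THRESHOLD:
        print("Note: very large output is normal for this input.", file=sys.stderr)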
The depth configuration defined on this line:
is not compatible with the RealSense L515 camera. The following exception is raised:
Traceback (most recent call last):
File "./examples/python/reconstruction_system/sensors/realsense_recorder.py", line 126, in <mNeovide should remember the last window dimension and open the new window with the same dimensions.
Could be a setting like g:neovide_remember_dimensions
a language for fast, portable data-parallel computation
Updated Jul 27, 2021 - C++
PipelineAI Kubeflow Distribution
Updated Apr 24, 2020 - Jsonnet
For feature engineering tasks, I'd like to be able to determine whether a datetime is the beginning or end of a year, like I can in pandas.
import pandas as pd
s = pd.Series(["2021-02-27", "2020-03-31"], dtype="datetime64[ms]")
s.dt.is_year_end
0 False
1 False
dtype: bool

import pandas as pd
s = pd.Series(["2021-01-01", "2020-04-01"], dtype="datetime64[ms]")
s.dt.is_year_start
Deep Learning GPU Training System
Updated Jun 13, 2020 - HTML
Describe the Problem
plot_model currently has a save argument which can be used to save plots. It does not provide a way to choose where to save the plot or what to name it; right now it saves the plot with a predefined name in the current working directory.
Describe the solution you'd like
We can have another argument, save_path, which is used whenever the save argument is set.
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
Next-generation HTML renderer for apps and games
Updated Jul 15, 2021 - CMake
A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning training and inference applications.
Updated Jul 27, 2021 - C++
Support pickling a jitted function (or at least throw a TypeError when using protocol 0 and 1).
Motivation
Trying to pickle a jitted function either raises TypeError: cannot pickle 'torch._C.ScriptFunction' object when protocol > 1, or, far worse, when using protocol=0 or protocol=1, Python 3.9.5 dies with:
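A minimal sketch of what a reproduction of the protocol >= 2 case above might look like (the interpreter crash with protocol 0/1 is intentionally not triggered here):

import pickle
import torch

@torch.jit.script
def add_one(x):
    return x + 1

try:
    # Default pickle protocol in Python 3.9 is > 1, so this raises:
    # TypeError: cannot pickle 'torch._C.ScriptFunction' object
    pickle.dumps(add_one)
except TypeError as exc:
    print(exc)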