gpu
Here are 1,791 public repositories matching this topic...
The fastai deep learning library, plus lessons and tutorials (Updated Aug 28, 2020 - Jupyter Notebook)
At the moment the relu_layer op doesn't allow threshold configuration, while the legacy RELU op does.
We should add a threshold configuration option to relu_layer.
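For illustration, the thresholded ReLU being requested can be sketched in plain Python. This is only a sketch of the intended semantics (matching the common convention where values at or below the threshold are zeroed), not the op's actual implementation:

```python
def relu(x, threshold=0.0):
    """ReLU with a configurable threshold: passes x through only when it
    exceeds `threshold`, otherwise returns 0.0. With the default
    threshold of 0.0 this is the plain ReLU."""
    return x if x > threshold else 0.0

# Default threshold behaves like ordinary ReLU.
print([relu(v) for v in [-1.0, 0.5, 2.0]])                  # [0.0, 0.5, 2.0]
# A nonzero threshold also zeroes out small positive activations.
print([relu(v, threshold=1.0) for v in [-1.0, 0.5, 2.0]])   # [0.0, 0.0, 2.0]
```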
Build and run Docker containers leveraging NVIDIA GPUs (Updated Aug 21, 2020 - Makefile)
Play with fluids in your browser (works even on mobile) (Updated May 29, 2020 - JavaScript)
Open deep learning compiler stack for CPU, GPU and specialized accelerators (Updated Aug 29, 2020 - Python)
A flexible framework of neural networks for deep learning (Updated Aug 17, 2020 - Python)
Problem:
catboost version: 0.23.2
Operating System: all
Tutorial: https://github.com/catboost/tutorials/blob/master/custom_loss/custom_metric_tutorial.md
It is impossible to use a custom metric written in C++.
Code example:
from catboost import CatBoost
train_data = [[1, 4, 5, 6],
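For comparison, CatBoost's documented Python custom-metric protocol (the object passed as eval_metric, covered by the linked tutorial) looks like the sketch below; the issue itself is about metrics written in C++. The MAE metric here is an illustrative example, not the code from the report:

```python
class MAEMetric:
    """Sketch of CatBoost's Python custom-metric interface: the library
    calls evaluate() per batch and get_final_error() to aggregate."""

    def is_max_optimal(self):
        # Lower MAE is better, so this metric is minimized.
        return False

    def evaluate(self, approxes, target, weight):
        # approxes is a list of prediction lists (one per dimension);
        # a single-dimensional metric uses approxes[0].
        preds = approxes[0]
        w = weight if weight is not None else [1.0] * len(preds)
        error_sum = sum(wi * abs(p - t) for wi, p, t in zip(w, preds, target))
        weight_sum = sum(w)
        return error_sum, weight_sum

    def get_final_error(self, error, weight):
        # Weighted mean absolute error.
        return error / weight if weight != 0 else 0.0
```

In Python the metric is simply passed as eval_metric=MAEMetric() at training time; the report is that no equivalent path exists for a C++ metric.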
A Python library built to empower developers to build applications and systems with self-contained Computer Vision capabilities (Updated Aug 27, 2020 - Python)
Open Source Fast Scalable Machine Learning Platform For Smarter Applications: Deep Learning, Gradient Boosting & XGBoost, Random Forest, Generalized Linear Modeling (Logistic Regression, Elastic Net), K-Means, PCA, Stacked Ensembles, Automatic Machine Learning (AutoML), etc. (Updated Aug 29, 2020 - Jupyter Notebook)
Real-Time and Accurate Full-Body Multi-Person Pose Estimation & Tracking System (Updated Aug 28, 2020 - Python)
PipelineAI Kubeflow Distribution (Updated Apr 24, 2020 - Jsonnet)
Deep Learning GPU Training System (Updated Jun 13, 2020 - HTML)
A language for fast, portable data-parallel computation (Updated Aug 29, 2020 - C++)
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
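The shape of the suggested change can be illustrated at a high level in plain Python (this is not ArrayFire code; each function stands in for how the work maps onto backend calls):

```python
def join_multi_call(arrays):
    """Join via repeated concatenation: each `out + a` conceptually
    corresponds to a separate backend call, re-copying everything
    accumulated so far."""
    out = []
    for a in arrays:
        out = out + a  # new allocation and full copy every iteration
    return out

def join_single_call(arrays):
    """Join in one pass: size the output up front and copy each input
    exactly once -- the shape of a single fused kernel call."""
    total = sum(len(a) for a in arrays)
    out = [None] * total           # single allocation
    pos = 0
    for a in arrays:
        out[pos:pos + len(a)] = a  # one copy per input, no re-copying
        pos += len(a)
    return out
```

Both produce the same result; the second form does it with one allocation and one pass over each input, which is the point of the proposed single-kernel join.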
We would like to forward a particular 'key' column, which is part of the features, so that it appears alongside the predictions; this makes it possible to identify which set of features a particular prediction belongs to. Here is an example of the predictions output using tensorflow.contrib.estimator.multi_class_head:
{"classes": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
"scores": [0.068196
There are (at least) two ways in which reinterpret_cast is misused:
- Used instead of static_cast:
https://github.com/rapidsai/cudf/blob/f7fbc1160b17b969db2708e0cf6033d3db9fc1cf/cpp/src/io/avro/avro_gpu.cu#L95
static_cast should be used to cast from void*.
- The use causes undefined behavior:
https://github.com/rapidsai/cudf/blob/f7fbc1160b17b969db2708e0cf6033d3db9fc1cf/cpp/src/io
Hi,
I have tried both loss.backward() and model_engine.backward(loss) in my code. There are several subtle differences I have observed; for one, retain_graph=True does not work with model_engine.backward(loss). This is creating a problem, since for some reason buffers are not being retained each time I run the code.
Please look into this if you can.
A library containing both highly optimized building blocks and an execution engine for data pre-processing in deep learning applications (Updated Aug 28, 2020 - C++)
Fast, lightweight HTML UI engine for apps and games (Updated Aug 28, 2020 - CMake)
Hey everyone!
mapd-core-cpu is already available on conda-forge (https://anaconda.org/conda-forge/omniscidb-cpu), so now we should add some instructions to the documentation.
At the moment it is available for Linux and OSX.
Some additional information about the configuration:
- for now, always install omniscidb-cpu inside a conda environment (which is also good practice), e.g.:
The Cross Platform Game Engine (Updated Jul 13, 2020 - ActionScript)