Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://nervanasystems.github.io/distiller
Python code for the book Deep Learning (the "flower book"): mathematical derivations, analysis of the underlying principles, and source-level implementations.
Training neural models with structured signals.
Official PyTorch implementation of the CutMix regularizer
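The CutMix idea behind this repository can be sketched in a few lines of NumPy: paste a random rectangular patch from one image into another and mix the labels in proportion to the patch area. This is an illustrative sketch, not the official implementation; the function name and signature are invented here.

```python
import numpy as np

def cutmix(x1, y1, x2, y2, alpha=1.0, rng=np.random.default_rng(0)):
    """Paste a random patch of x2 into x1; mix labels by the kept area.
    Hypothetical sketch of CutMix (Yun et al., 2019), not the repo's code."""
    h, w = x1.shape[:2]
    lam = rng.beta(alpha, alpha)             # target mixing ratio
    cut_h = int(h * np.sqrt(1 - lam))        # patch size so area fraction ~ 1-lam
    cut_w = int(w * np.sqrt(1 - lam))
    cy, cx = rng.integers(h), rng.integers(w)            # random patch center
    r0, r1 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    c0, c1 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    out = x1.copy()
    out[r0:r1, c0:c1] = x2[r0:r1, c0:c1]     # paste the patch
    lam_adj = 1 - (r1 - r0) * (c1 - c0) / (h * w)        # actual kept fraction
    return out, lam_adj * y1 + (1 - lam_adj) * y2
```

Because the patch may be clipped at the image border, the label weight uses the actual pasted area (`lam_adj`) rather than the sampled `lam`.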
Early stopping for PyTorch
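The early-stopping pattern such a repository provides can be sketched framework-agnostically: track the best validation loss and stop after `patience` epochs without improvement. The class name and API below are illustrative assumptions, not this repository's interface.

```python
class EarlyStopping:
    """Minimal early-stopping sketch (hypothetical API, framework-agnostic)."""
    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait after the last improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best = float("inf")
        self.counter = 0
        self.should_stop = False

    def step(self, val_loss):
        if val_loss < self.best - self.min_delta:
            self.best = val_loss      # improvement: reset the counter
            self.counter = 0
        else:
            self.counter += 1         # no improvement this epoch
            if self.counter >= self.patience:
                self.should_stop = True
        return self.should_stop
```

In a training loop one would call `step(val_loss)` once per epoch and break when it returns `True`, typically restoring the checkpoint saved at the best epoch.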
Implementation of DropBlock: A regularization method for convolutional networks in PyTorch.
Code for reproducing Manifold Mixup results (ICML 2019)
Simple implementations of many GAN models in PyTorch.
Image-processing software for cryo-electron microscopy
Starter code of Prof. Andrew Ng's machine learning MOOC in R statistical language
Deep Learning Specialization courses by Andrew Ng, deeplearning.ai
Generalized Linear Models in Sklearn Style
Machine Learning (Coursera, Andrew Ng): Python and MATLAB code implementations.
A ready-to-use PyTorch extension of unofficial CutMix implementations, with improved performance.
The tools and syntax you need to code neural networks from day one.
Efficient Algorithms for L0 Regularized Learning
Ordered Weighted L1 regularization for classification and regression in Python
AI Learning Hub for Machine Learning, Deep Learning, Computer Vision and Statistics
Implementations of key neural network concepts using NumPy.
MATLAB package of iterative regularization methods and large-scale test problems. This software is described in the paper "IR Tools: A MATLAB Package of Iterative Regularization Methods and Large-Scale Test Problems," published in Numerical Algorithms, 2018.
Software for learning sparse Bayesian networks
A C++ toolkit for convex optimization objectives (logistic loss, SVM, SVR, least squares, etc.), convex optimization algorithms (L-BFGS, TRON, SGD, AdaGrad, CG, Nesterov, etc.), and classifiers/regressors (logistic regression, SVMs, least squares regression, etc.)
A set of machine learning experiments in Clojure
An implementation of DropConnect Layer in Keras
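DropConnect differs from standard dropout in that it randomly zeroes *weights* rather than activations during training. A minimal NumPy sketch of a DropConnect dense layer, assuming inverted scaling by 1/(1-p); the function name is invented here and this is not the Keras layer itself:

```python
import numpy as np

def dropconnect_dense(x, W, b, p=0.5, rng=np.random.default_rng(0), training=True):
    """Dense layer with DropConnect: mask individual weights at train time.
    Illustrative sketch, not the repository's Keras implementation."""
    if training:
        mask = rng.random(W.shape) >= p   # keep each weight with prob 1-p
        W = W * mask / (1.0 - p)          # inverted scaling keeps expectations equal
    return x @ W + b
```

At inference time (`training=False`) the layer reduces to an ordinary affine transform, so no rescaling is needed at test time.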
Statistical Models with Regularization in Pure Julia
Deep Learning Specialization course by Coursera. Neural networks, deep learning, hyperparameter tuning, regularization, optimization, data processing, convolutional NNs, and sequence models are included in this course.
Now on CRAN, bigKRLS combines bigmemory and RcppArmadillo (C++) for speed in a new Kernel Regularized Least Squares algorithm. Slides:
Optimization and Regularization variants of NMF
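The baseline that NMF variants build on is the classic Lee–Seung multiplicative update, which keeps both factors nonnegative by construction. A minimal NumPy sketch (function name and small-epsilon stabilization are assumptions, not this repository's code):

```python
import numpy as np

def nmf(V, k, iters=200, eps=1e-9, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: V ~ W @ H, W,H >= 0.
    Illustrative sketch; regularized variants add penalty terms to the updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H
```

Because the updates multiply by nonnegative ratios, nonnegativity of `W` and `H` is preserved automatically; regularized variants modify the denominators with penalty terms.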