Machine Learning, Deep Learning, PostgreSQL, Distributed Systems, Node.js, Golang
Updated Sep 5, 2020
Proto Actor - Ultra fast distributed actors for Go, C# and Java/Kotlin
AutoGluon: AutoML Toolkit for Deep Learning
Fast, efficient, and scalable distributed map/reduce system with DAG execution, in memory or on disk, written in pure Go; runs standalone or distributed.
Bare bone examples of machine learning in TensorFlow
Distributed Deep learning with Keras & Spark
Open-source software for volunteer computing and grid computing.
In our API docs we currently use

.. autosummary::

   Client
   Client.call_stack
   Client.cancel
   ...

to generate a table of Client methods at the top of the page. Later on we use

.. autoclass:: Client
   :members:
to display the docstrings for all the public methods on Client (here an example for
PySpark + Scikit-learn = Sparkit-learn
MapReduce, Spark, Java, and Scala for Data Algorithms Book
MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System
Distributed Computing for AI Made Simple
LizardFS is an Open Source Distributed File System licensed under GPLv3.
A Hashcat wrapper for distributed hashcracking
SmartSql = MyBatis in C# + .NET Core + Cache (Memory | Redis) + R/W Splitting + PropertyChangedTrack + Dynamic Repository + InvokeSync + Diagnostics
Framework for large distributed pipelines
A long list of academic papers on the topic of distributed consensus
A full stack, reactive architecture for general purpose programming. Algebraic and monadically composable primitives for concurrency, parallelism, event handling, transactions, multithreading, Web, and distributed computing with complete de-inversion of control (No callbacks, no blocking, pure state)
A light-weight library for building distributed applications such as microservices
Fast Raft framework using the Redis protocol for Go
Thrill - An EXPERIMENTAL Algorithmic Distributed Big Data Batch Processing Framework in C++
If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.
We should add a column that stores a hash of the actual data. When a Dataset is about to be created and both the metadata and the data hash exactly match an existing Dataset, nothing should be added to the ModelHub database and the existing Dataset should be reused.
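A minimal sketch of the proposed dedupe check, using an in-memory dict in place of the real ModelHub database; `get_or_create_dataset`, `_registry`, and the `Dataset` fields shown here are illustrative assumptions, not the project's actual API:

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class Dataset:
    train_path: str
    data_hash: str  # the proposed new column: hash of the file contents


# Stand-in for the ModelHub database table (illustrative only).
_registry = {}


def file_hash(path):
    """Hash the raw bytes of the training data in fixed-size chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def get_or_create_dataset(train_path):
    """Reuse the existing Dataset when path and data hash both match."""
    digest = file_hash(train_path)
    key = (train_path, digest)
    if key in _registry:
        return _registry[key]   # data unchanged: reuse, add nothing
    ds = Dataset(train_path=train_path, data_hash=digest)
    _registry[key] = ds         # new or changed data: store a new row
    return ds
```

Calling `get_or_create_dataset` twice on an unchanged file returns the same Dataset; editing the file changes the hash, so a new entry is created.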
distributed dataflows with functional list operations for data processing with C++14
Awesome list of distributed systems resources
An HTTP Ruby API for Consul
Distributed training framework with parameter server
If you try to run an experiment in a system or Docker container where git is missing, your code will crash at this line:
https://github.com/catalyst-team/catalyst/blob/master/catalyst/utils/pipelines.py#L4
with a message like this:
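One defensive fix, sketched under the assumption that the pipeline shells out to git via `subprocess`; the helper name `get_git_revision` is illustrative, not catalyst's actual function:

```python
import subprocess


def get_git_revision():
    """Return the current git commit hash, or None when git is unavailable.

    Guards against both a missing git binary (FileNotFoundError) and
    running outside a repository (non-zero exit -> CalledProcessError).
    """
    try:
        out = subprocess.check_output(
            ["git", "rev-parse", "HEAD"],
            stderr=subprocess.DEVNULL,
        )
        return out.decode("ascii").strip()
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None
```

With a guard like this, the experiment falls back to `None` instead of crashing when the container image ships without git.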