A Flexible and Powerful Parameter Server for large-scale machine learning
酷玩 Spark (CoolplaySpark): Spark source-code walkthroughs, Spark libraries, and more
A Spark-based movie recommendation system, including a web crawler, a web site, an admin back end, and the Spark recommendation engine
A massive-data computation product for production environments; documentation at:
A collection of test cases and related materials gathered while using Scala and Spark
Wormhole is a SPaaS (Stream Processing as a Service) Platform
C# and F# language binding and extensions to Apache Spark
A reading list of papers on streaming systems
An open source framework for building data analytic applications.
Scala examples for learning to use Spark
Is your feature request related to a problem? Please describe.
Today the user needs to deploy UDF jars and reference-data CSVs manually to the blob location.
Describe the solution you'd like
Enable the user to choose a file on local disk, which the web portal will then upload to the right location.
These files belong to the Gimel Discovery Service, which is still work in progress at PayPal and not yet open sourced. In addition, the logic in these files is outdated, so it does not make sense to keep them in the repo.
https://github.com/paypal/gimel/search?l=Shell
Remove --> gimel-dataapi/gimel-core/src/main/scripts/tools/bin/hbase/hbase_ddl_creator.sh
Spark, Spark Streaming and Spark SQL unit testing strategies
Schema Registry
Simple yet powerful live data computation framework
A complete example of a big data application using: Kubernetes (kops/AWS), Apache Spark SQL/Streaming/MLlib, Apache Flink, Scala, Python, Apache Kafka, Apache HBase, Apache Parquet, Apache Avro, Apache Storm, Twitter API, MongoDB, NodeJS, Angular, GraphQL
Self-contained examples of Apache Spark streaming integrated with Apache Kafka.
Enabling Continuous Data Processing with Apache Spark and Azure Event Hubs
StreamLine - Streaming Analytics
Bitnami Docker Image for Apache Spark
A prototype big data platform: the source code of the book Big Data Platform Architecture and Prototype
I am able to consume the Kinesis stream using this jar as a standard consumer. When I updated the consumer to use enhanced fan-out, I was unable to access the stream.
Is there any way to access the stream as an enhanced fan-out consumer?
Apache Spark and Apache Kafka integration example
Custom state store providers for Apache Spark
A movie recommendation system and recommendation engine built with Spark
This issue tracks implementation of the ML features listed at https://spark.apache.org/docs/latest/ml-features
Bucketizer has been implemented in dotnet/spark#378, but more of the listed features remain to be implemented.
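For context on what such a binding wraps, here is a minimal Python sketch of Bucketizer's bucketing semantics as documented for Spark ML (a `splits` array of n+1 strictly increasing values defines n buckets; each bucket is half-open except the last, which is closed on the right). This is an illustration of the semantics only, not the dotnet/spark or Spark API itself:

```python
import bisect

def bucketize(value, splits):
    """Map a continuous value to the index of its bucket.

    splits must be strictly increasing; bucket i covers
    [splits[i], splits[i+1]), with the last bucket also including
    its right endpoint, mirroring Spark ML Bucketizer semantics.
    """
    if value < splits[0] or value > splits[-1]:
        raise ValueError(f"value {value} is outside the splits range")
    if value == splits[-1]:
        # Right endpoint of the last bucket is included.
        return len(splits) - 2
    # bisect_right finds the insertion point after equal elements,
    # giving the half-open [lower, upper) behavior per bucket.
    return bisect.bisect_right(splits, value) - 1

splits = [float("-inf"), -0.5, 0.0, 0.5, float("inf")]
print([bucketize(v, splits) for v in [-999.9, -0.5, -0.3, 0.0, 0.2]])
# → [0, 1, 1, 2, 2]
```

Using open-ended `-inf`/`+inf` boundary splits, as above, is the usual way to avoid range errors when the data's bounds are unknown.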