Apache Spark
Apache Spark is an open-source, distributed, general-purpose cluster-computing framework. It provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
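For readers new to the project, a minimal word-count job in Scala gives a feel for that programming model (a sketch using the public DataFrame/Dataset API; the input path is a placeholder, not taken from any repository below):

    import org.apache.spark.sql.SparkSession

    object WordCount {
      def main(args: Array[String]): Unit = {
        // Build (or reuse) a SparkSession -- the entry point for the Dataset API.
        val spark = SparkSession.builder()
          .appName("WordCount")
          .getOrCreate()
        import spark.implicits._

        // Read a text file into a Dataset[String]; the path is a placeholder.
        val lines = spark.read.textFile("hdfs:///data/input.txt")

        // Split into words and count occurrences. Spark parallelizes these
        // transformations across the cluster and re-runs lost partitions on failure.
        val counts = lines
          .flatMap(_.split("\\s+"))
          .filter(_.nonEmpty)
          .groupByKey(identity)
          .count()

        counts.show(20)
        spark.stop()
      }
    }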
Here are 5,819 public repositories matching this topic...
Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.
Updated May 13, 2021 - Python
Make Your Company Data Driven. Connect to any data source, easily visualize, dashboard and share your data.
Updated Jul 24, 2021 - Python
Learn and understand Docker technologies, with real DevOps practice!
Updated Jul 4, 2021 - Go
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
Updated Jul 23, 2021 - Python
Describe the bug
If I pass 'Next 7 days' into the dateRange of a query, the response is not a range covering the next 7 days but only the single day that falls 7 days from now.
To Reproduce
Steps to reproduce the behavior:
- Build a query like this:
const query: Query = {
  measures: [
    measure
  ],
  timeDimensions: [
    {
      dimension: 'Your.Dimension',
      dateRange: 'Next 7 days'
    }
  ]
};
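For clarity, the expectation described above — a full seven-day window rather than a single day — can be sketched as a plain date calculation (an illustration in Scala with java.time, not Cube.js code; whether the window starts today or tomorrow is an assumption):

    import java.time.LocalDate

    // Expectation for 'Next 7 days': a range [tomorrow, today + 7 days],
    // not just the one day that is 7 days from now.
    val today = LocalDate.now()
    val expectedRange  = (today.plusDays(1), today.plusDays(7)) // full window
    val reportedResult = today.plusDays(7)                      // only the last day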
Flink learning blog. http://www.54tianzhisheng.cn/ Covers Flink basics, concepts, principles, hands-on practice, performance tuning, and source-code analysis. Includes learning examples for Flink Connectors, Metrics, Libraries, the DataStream API, and the Table API & SQL, plus large production Flink case studies (PV/UV, log storage, real-time deduplication of tens of billions of records, monitoring and alerting). Support for my column "Big Data Real-Time Computing Engine Flink in Practice and Performance Optimization" is welcome.
Updated Jun 24, 2021 - Java
List of Data Science Cheatsheets to rule the world
Updated Oct 31, 2019
Open-source IoT Platform - Device management, data collection, processing and visualization.
Updated Jul 24, 2021 - Java
Programming e-books: includes C, C#, Docker, Elasticsearch, Git, Hadoop, HeadFirst, Java, JavaScript, JVM, Kafka, Linux, Maven, MongoDB, MyBatis, MySQL, Netty, Nginx, Python, RabbitMQ, Redis, Scala, Solr, Spark, Spring, SpringBoot, SpringCloud, TCP/IP, Tomcat, Zookeeper, artificial intelligence, big data, concurrent programming, databases, data mining, recent interview questions, architecture design, algorithms, computer science, design patterns, software testing, refactoring and optimization, and more categories.
Updated Jun 3, 2021
A Flexible and Powerful Parameter Server for large-scale machine learning
Updated Jul 26, 2021 - Java
macOS development environment setup: Easy-to-understand instructions with automated setup scripts for developer tools like Vim, Sublime Text, Bash, iTerm, Python data analysis, Spark, Hadoop MapReduce, AWS, Heroku, JavaScript web development, Android development, common data stores, and dev-based OS X defaults.
Updated Dec 23, 2020 - Python
H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.
Updated Jul 26, 2021 - Jupyter Notebook
Alluxio, data orchestration for analytics and machine learning in the cloud
Updated Jul 26, 2021 - Java
PipelineAI Kubeflow Distribution
Updated Apr 24, 2020 - Jsonnet
BigDL: Distributed Deep Learning Framework for Apache Spark
Updated Jul 12, 2021 - Scala
TensorFlowOnSpark brings TensorFlow programs to Apache Spark clusters.
Updated Jul 14, 2021 - Python
- Delta Lake 1.0.0
- Spark 3.1.2
- Scala 2.12
- AdoptOpenJDK-11.0.11+9 (build 11.0.11+9)
The following code gives a NullPointerException. This is for a directory-based delta table that does not exist and uses a generated column.
import io.delta.tables.DeltaTable
// The builder chain was truncated in the report; everything after .nullable(true)
// (column type, location path, .build() and .execute()) is an assumed completion.
DeltaTable.create
  .addColumn(
    DeltaTable.columnBuilder("value")
      .dataType("BOOLEAN")
      .generatedAlwaysAs("true")
      .nullable(true)
      .build())
  .location("/tmp/delta/generated-column-table")
  .execute()
Coolplay Spark (酷玩 Spark): Spark source-code analysis, Spark libraries, and more.
Updated May 26, 2019 - Scala
Interactive and Reactive Data Science using Scala and Spark.
Updated Mar 31, 2021 - JavaScript
The Hunting ELK
Updated May 12, 2021 - Jupyter Notebook
Used Spark version
Spark Version: 2.4.4
Used Spark Job Server version
SJS version: v0.11.1
Deployed mode
client on Spark Standalone
Actual (wrong) behavior
I can't get the job config when posting a job with 'sync=true'. Instead I get:
http://localhost:8090/jobs/ff99479b-e59c-4215-b17d-4058f8d97d25/config
{"status":"ERROR","result":"No such job ID ff99479b-e59c-4215-b17d-4058f8d97d25"}
I have a simple regression task (using a LightGBMRegressor) where I want to penalize negative predictions more than positive ones. Is there a way to achieve this with the default LightGBM regression objectives (see https://lightgbm.readthedocs.io/en/latest/Parameters.html)? If not, is it somehow possible to define and pass a custom regression objective (there are many examples of this for the default LightGBM model)?
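For illustration, an asymmetric squared-error objective of the kind described above can be sketched as a plain function that returns the gradient and hessian per prediction (a hedged sketch only; how, or whether, it can be passed to LightGBMRegressor is a separate question):

    // Asymmetric squared error: predictions below zero are weighted more heavily.
    // The function name, signature, and default weight are illustrative assumptions.
    def asymmetricObjective(
        preds: Array[Double],
        labels: Array[Double],
        negativeWeight: Double = 4.0): (Array[Double], Array[Double]) = {
      val grads = new Array[Double](preds.length)
      val hess  = new Array[Double](preds.length)
      for (i <- preds.indices) {
        val residual = preds(i) - labels(i)
        // Heavier weight when the prediction is negative.
        val w = if (preds(i) < 0.0) negativeWeight else 1.0
        grads(i) = 2.0 * w * residual // d/dpred of w * residual^2
        hess(i)  = 2.0 * w            // second derivative
      }
      (grads, hess)
    }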
Created by Matei Zaharia
Released May 26, 2014
- Repository: apache/spark
- Website: spark.apache.org
- Wikipedia


At the moment the relu_layer op doesn't allow threshold configuration, while the legacy RELU op does.
We should add a configuration option to relu_layer.
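For context, the requested behaviour is essentially a ReLU with a configurable cut-off; a minimal sketch of the intended semantics (the function name and default here are assumptions, not the op's actual API):

    // Thresholded ReLU: values at or below the threshold are zeroed out;
    // the standard ReLU is the special case threshold = 0.
    def reluWithThreshold(x: Array[Float], threshold: Float = 0.0f): Array[Float] =
      x.map(v => if (v > threshold) v else 0.0f)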