
Apache Spark


Apache Spark is an open-source, distributed, general-purpose cluster-computing framework. It provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
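As a minimal, hedged illustration of that programming model, a PySpark word count on a local cluster might look like the sketch below (the input path is hypothetical):

```python
# Minimal PySpark sketch: distributed word count.
# Assumes pyspark is installed; the input path is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("word-count").getOrCreate()

lines = spark.read.text("data/input.txt")                    # hypothetical path
words = lines.rdd.flatMap(lambda row: row.value.split())
counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)

for word, count in counts.take(10):
    print(word, count)

spark.stop()
```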

Here are 6,036 public repositories matching this topic...

Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command-line tools.

  • Updated May 13, 2021
  • Python
cube.js
leogodin217 commented Sep 17, 2021

Describe the bug
Using a time dimension on a runningTotal measure on Snowflake mixes quoted and unquoted columns in the query. This fails the query because Snowflake has specific rules about quoted columns. Specifically:

  • All unquoted column names are treated as upper case.
  • Quoted column names are case sensitive.

So "date_from" <> date_from
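A short sketch of how that identifier rule plays out, using the snowflake-connector-python package (the connection parameters, table, and column names here are hypothetical):

```python
# Sketch of Snowflake's identifier folding; credentials, table, and column
# names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(user="...", password="...", account="...")
cur = conn.cursor()

# Unquoted identifiers fold to upper case, so this resolves to DATE_FROM.
cur.execute("SELECT date_from FROM events")

# Quoted identifiers are case sensitive, so this only matches a column that
# was created with the exact lower-case name "date_from".
cur.execute('SELECT "date_from" FROM events')
```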

To Reproduce
Steps to reproduce

flink-learning

Flink learning blog. http://www.54tianzhisheng.cn/ Covers Flink fundamentals, concepts, principles, hands-on practice, performance tuning, and source-code analysis. Includes learning examples for Flink Connector, Metrics, Library, DataStream API, and Table API & SQL, plus large-scale production case studies (PV/UV, log storage, real-time deduplication of tens of billions of records, monitoring and alerting). You are welcome to support my column "Big Data Real-Time Computing Engine Flink: Practice and Performance Optimization".

  • Updated Oct 8, 2021
  • Java

Programming e-books: books on C, C#, Docker, Elasticsearch, Git, Hadoop, HeadFirst, Java, JavaScript, JVM, Kafka, Linux, Maven, MongoDB, MyBatis, MySQL, Netty, Nginx, Python, RabbitMQ, Redis, Scala, Solr, Spark, Spring, SpringBoot, SpringCloud, TCP/IP, Tomcat, Zookeeper, artificial intelligence, big data, concurrent programming, databases, data mining, new interview questions, architecture design, algorithms, computer science, design patterns, software testing, refactoring and optimization, and more categories.

  • Updated Oct 8, 2021

H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.

  • Updated Oct 17, 2021
  • Jupyter Notebook
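A minimal, hedged sketch of the H2O Python API described above (the CSV path and column names are hypothetical):

```python
# Minimal h2o sketch: train a GBM on a hypothetical CSV.
# The file path and column names are assumptions.
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()

frame = h2o.import_file("data/train.csv")            # hypothetical path
model = H2OGradientBoostingEstimator(ntrees=50)
model.train(x=["x1", "x2"], y="label", training_frame=frame)

print(model.model_performance(frame))
```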

macOS development environment setup: Easy-to-understand instructions with automated setup scripts for developer tools like Vim, Sublime Text, Bash, iTerm, Python data analysis, Spark, Hadoop MapReduce, AWS, Heroku, JavaScript web development, Android development, common data stores, and dev-based OS X defaults.

  • Updated Dec 23, 2020
  • Python
jovanpop-msft commented Aug 18, 2021

Could we clarify that delta-log files are line-delimited JSON files in https://github.com/delta-io/delta/blob/master/PROTOCOL.md#delta-log-entries?

In the PROTOCOL.md file it is not clear what the JSON format is. Every delta-log entry file is a newline-delimited JSON file, but this is not stated there. The protocol does not explicitly specify that every action is stored as a single-line JSON object.
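For illustration, a hedged sketch of reading one delta-log entry as newline-delimited JSON, one action object per line (the file name follows the usual zero-padded version naming; the path is hypothetical):

```python
# Read a single delta-log entry file as newline-delimited JSON:
# each non-empty line is one action object. The path is hypothetical.
import json

with open("_delta_log/00000000000000000000.json") as f:
    actions = [json.loads(line) for line in f if line.strip()]

for action in actions:
    print(list(action.keys()))   # e.g. ['commitInfo'], ['metaData'], ['add']
```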

wanshicheng commented Jun 23, 2021

Used Spark version
Spark Version: 2.4.4
Used Spark Job Server version
SJS version: v0.11.1

Deployed mode
client on Spark Standalone

Actual (wrong) behavior
I can't get the config when posting a job with 'sync=true'. I get:
http://localhost:8090/jobs/ff99479b-e59c-4215-b17d-4058f8d97d25/config
{"status":"ERROR","result":"No such job ID ff99479b-e59c-4215-b17d-4058f8d97d25"}
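For context, a hedged sketch of the request sequence against the spark-jobserver REST API (appName, classPath, and the job payload are hypothetical, and the exact shape of the submit response may differ by SJS version):

```python
# Sketch of the failing sequence: submit a job with sync=true, then request
# its config. The appName/classPath values and payload are hypothetical.
import requests

base = "http://localhost:8090"

resp = requests.post(
    f"{base}/jobs",
    params={"appName": "my-app", "classPath": "com.example.MyJob", "sync": "true"},
    data="input.string = some data",
)
job_id = resp.json().get("jobId")   # field location is an assumption

# Expected: the job's config. Reported behavior: "No such job ID".
config = requests.get(f"{base}/jobs/{job_id}/config")
print(config.json())
```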

SynapseML
brunocous commented Sep 2, 2020

I have a simple regression task (using a LightGBMRegressor) where I want to penalize negative predictions more than positive ones. Is there a way to achieve this with the default regression LightGBM objectives (see https://lightgbm.readthedocs.io/en/latest/Parameters.html)? If not, is it somehow possible to define and pass a custom regression objective (there are many examples for the default LightGBM models)?
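For reference, a hedged sketch of an asymmetric squared-error objective using the plain Python lightgbm package rather than SynapseML's LightGBMRegressor; the penalty factor of 5.0 and the toy data are arbitrary assumptions:

```python
# Custom LightGBM objective that penalizes negative predictions more than
# positive ones (asymmetric squared error). The factor 5.0 is arbitrary.
import numpy as np
import lightgbm as lgb

def asymmetric_mse(y_true, y_pred):
    residual = y_pred - y_true
    weight = np.where(y_pred < 0, 5.0, 1.0)   # heavier penalty when prediction < 0
    grad = 2.0 * weight * residual
    hess = 2.0 * weight
    return grad, hess

# Toy data, purely illustrative.
X = np.random.rand(200, 3)
y = np.random.rand(200)

model = lgb.LGBMRegressor(objective=asymmetric_mse, n_estimators=50)
model.fit(X, y)
print(model.predict(X[:5]))
```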

Created by Matei Zaharia

Released May 26, 2014

Repository
apache/spark
Website
spark.apache.org
Wikipedia

Related Topics

hadoop scala