stream-processing
Here are 779 public repositories matching this topic...
flink learning blog. http://www.54tianzhisheng.cn/ Covers Flink fundamentals, concepts, principles, hands-on practice, performance tuning, and source-code analysis. Includes worked examples for Flink Connectors, Metrics, Libraries, the DataStream API, and the Table API & SQL, plus large-scale production case studies (PV/UV counting, log storage, real-time deduplication at the scale of tens of billions of records, and monitoring/alerting). You are welcome to support my column, "Hands-On Flink: A Real-Time Big Data Compute Engine, with Performance Optimization".
-
Updated
Mar 23, 2022 - Java
A curated list of awesome big data frameworks, resources, and other awesomeness.
-
Updated
Mar 8, 2022
Python Stream Processing
-
Updated
Jan 29, 2022 - Python
A curated list of awesome System Design (A.K.A. Distributed Systems) resources.
-
Updated
Sep 29, 2021
I have a use case where I need to create a new stream containing the bearing between two consecutive points in a pre-existing lat/lon stream. Normally bearing would be available in a standard library, but in a pinch it can easily be implemented from the sin, cos, and atan2 functions, none of which are currently available in ksql.
Basic trig functions have a range of use cases in geometric and geographic co
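As the issue says, once sin, cos, and atan2 are available, bearing follows directly. A minimal sketch of the standard initial-bearing (forward azimuth) formula, written in Go rather than as a ksql UDF, just to show the math involved:

```go
package main

import (
	"fmt"
	"math"
)

// bearing returns the initial bearing in degrees (0-360) from point 1 to
// point 2, both given in decimal degrees. This is the standard great-circle
// forward-azimuth formula built from only sin, cos, and atan2.
func bearing(lat1, lon1, lat2, lon2 float64) float64 {
	toRad := math.Pi / 180
	phi1 := lat1 * toRad
	phi2 := lat2 * toRad
	dLon := (lon2 - lon1) * toRad
	y := math.Sin(dLon) * math.Cos(phi2)
	x := math.Cos(phi1)*math.Sin(phi2) - math.Sin(phi1)*math.Cos(phi2)*math.Cos(dLon)
	theta := math.Atan2(y, x)
	// normalize from (-180, 180] to [0, 360)
	return math.Mod(theta/toRad+360, 360)
}

func main() {
	// Initial bearing from London (51.5074, -0.1278) to Paris (48.8566, 2.3522).
	fmt.Printf("%.1f\n", bearing(51.5074, -0.1278, 48.8566, 2.3522))
}
```

In a stream processor this would run per record pair, computing the bearing from each point to its successor.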
Add --add-exports jdk.management/com.ibm.lang.management.internal only when OpenJ9 is detected.
Otherwise we get "WARNING: package com.ibm.lang.management.internal not in jdk.management" in the logs.
Under the hood, the Benthos csv input uses the standard encoding/csv package's csv.Reader struct.
The current implementation of the csv input doesn't allow setting the LazyQuotes field.
We have a use case where we need to set LazyQuotes in order to make things work correctly.
This comment says that the message ID is optional, but for the SQL transport it is a mandatory attribute, which causes confusion. Is it possible to fix this, or did I get something wrong?
https://github.com/ThreeDotsLabs/watermill/blob/b9928e750ba673cf93d442db88efc04706f67388/message/message.go#L20
The Fastest Way to Build the Fastest Data Products. Build data-intensive applications and services in SQL — without pipelines or caches — using materialized views that are always up-to-date.
-
Updated
Mar 29, 2022 - Rust
Fast and Lightweight Logs and Metrics processor for Linux, BSD, OSX and Windows
-
Updated
Mar 29, 2022 - C
to_dict() equivalent
I would like to convert a DataFrame to a JSON object the same way that Pandas does with to_dict().
toJSON() treats rows as elements in an array, and ignores the index labels. But to_dict() uses the index as keys.
Here is an example of what I have in mind:
function to_dict(df) {
  const rows = df.toJSON();
  const entries = df.index.map((e, i) => ({ [e]: rows[i] }));
  // merge the per-row objects into a single object keyed by index label
  return Object.assign({}, ...entries);
}
Upserts, Deletes And Incremental Processing on Big Data.
-
Updated
Mar 29, 2022 - Java
High-performance time-series aggregation for PostgreSQL
-
Updated
Feb 20, 2022 - C
fastest JSON encoder/decoder with powerful stream API for Golang
-
Updated
Jan 4, 2022 - Go
a curated list of awesome streaming frameworks, applications, etc
-
Updated
Mar 28, 2022
A Python stream processing engine modeled after Yahoo! Pipes
-
Updated
Dec 28, 2021 - Python
It can be very difficult to piece together a reasonable history of events from the current worker logs, because none of them have timestamps.
So to that end, I think we should add timestamps to the logs.
This has some cons:
- We can't just use @printf like we have been until now. We need to either include a timestamp in every @printf call (laborious and error-prone) or c
Stream Processing and Complex Event Processing Engine
-
Updated
Mar 28, 2022 - Java
For example, given a simple pipeline such as:
Pipeline p = Pipeline.create();
p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
.aggregate(aggregator)
.writeTo(Sinks.logger());
I'd like aggregator to be something requiring a non-serialisable dependency to do its work.
I know I can do this:
Pipeline p = Pipeline.create();
p.readFrom(TestSource
Wormhole is a SPaaS (Stream Processing as a Service) Platform
-
Updated
Dec 14, 2021 - JavaScript
The mapcat function seems to choke if you pass in a mapping function that returns a stream instead of a sequence:
user> (s/stream->seq (s/mapcat (fn [x] (s/->source [x])) (s/->source [1 2 3])))
()
Aug 18, 2019 2:23:39 PM clojure.tools.logging$eval5577$fn__5581 invoke
SEVERE: error in message propagation
java.lang.IllegalArgumentException: Don't know how to create ISeq from: manifold.
A lightweight stream processing library for Go
-
Updated
Feb 11, 2022 - Go
Microservices-based streaming and batch data processing in Cloud Foundry and Kubernetes
-
Updated
Mar 28, 2022 - Java
-
Updated
Mar 26, 2022 - Go
Framework for building Event-Driven Microservices
-
Updated
Mar 28, 2022 - Java
Lightweight real-time big data streaming engine over Akka
-
Updated
Mar 1, 2022 - Scala
A stream processing API for Go (alpha)
-
Updated
Oct 17, 2021 - Go
I previously figured out a way to get the (x, y, z) data points for each frame from one hand, but I'm not sure how to do that for the new Holistic model that they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data and print it to a text file? Or at least give me some directions as to h