stream-processing
Here are 794 public repositories matching this topic...
Flink learning blog (http://www.54tianzhisheng.cn/). Covers Flink fundamentals, concepts, internals, hands-on practice, performance tuning, and source-code analysis, with worked examples for Flink Connectors, Metrics, Libraries, the DataStream API, and the Table API & SQL, plus large real-world project case studies (PV/UV counting, log storage, real-time deduplication at the tens-of-billions scale, monitoring and alerting). Support for my column "Apache Flink in Action: Real-Time Big Data Computing and Performance Tuning" is welcome.
-
Updated
May 8, 2022 - Java
A curated list of awesome big data frameworks, resources and other awesomeness.
-
Updated
May 4, 2022
Python Stream Processing
-
Updated
Apr 23, 2022 - Python
A curated list of awesome System Design (A.K.A. Distributed Systems) resources.
-
Updated
Sep 29, 2021
Is your feature request related to a problem? Please describe.
A user in the community Slack wanted to filter out/select items from a JSON array of objects. Given records in the following format:
'[{"type": "AAA", "timestamp": "2021-09-27"}, {"type": "BBB", "t
Add --add-exports jdk.management/com.ibm.lang.management.internal only when OpenJ9 is detected.
Otherwise we get WARNING: package com.ibm.lang.management.internal not in jdk.management in the logs.
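The JSON-array filtering question above can be sketched in plain Python. This is a standalone illustration of the requested behavior (select only objects whose "type" field matches a value), not the syntax of any particular streaming engine:

```python
import json

# Parse a JSON array of objects like the records in the question,
# then keep only the objects whose "type" is "AAA".
records = json.loads(
    '[{"type": "AAA", "timestamp": "2021-09-27"},'
    ' {"type": "BBB", "timestamp": "2021-09-27"},'
    ' {"type": "AAA", "timestamp": "2021-09-28"}]'
)
aaa_only = [r for r in records if r["type"] == "AAA"]
print(aaa_only)  # the two "AAA" records, "BBB" filtered out
```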
Fancy stream processing made operationally mundane
-
Updated
May 11, 2022 - Go
The Fastest Way to Build the Fastest Data Products. Build data-intensive applications and services in SQL — without pipelines or caches — using materialized views that are always up-to-date.
-
Updated
May 11, 2022 - Rust
This comment says that the message ID is optional,
but for the SQL transport it is a mandatory attribute,
which causes confusion.
Is it possible to fix this, or did I misunderstand something?
https://github.com/ThreeDotsLabs/watermill/blob/b9928e750ba673cf93d442db88efc04706f67388/message/message.go#L20
Fast and Lightweight Logs and Metrics processor for Linux, BSD, OSX and Windows
-
Updated
May 11, 2022 - C
to_dict() equivalent
I would like to convert a DataFrame to a JSON object the same way that Pandas does with to_dict().
toJSON() treats rows as elements in an array, and ignores the index labels. But to_dict() uses the index as keys.
Here is an example of what I have in mind:
function to_dict(df) {
  const rows = df.toJSON();
  const entries = df.index.map((e, i) => ({ [e]: rows[i] }));
  return Object.assign({}, ...entries);
}
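The index-keyed shape being asked for can also be sketched in plain Python. `to_dict_by_index` is a hypothetical helper written for illustration, not a pandas or DataFrame-library API:

```python
# Sketch of the desired conversion: each row dict is keyed by its
# index label rather than appearing positionally in an array.
def to_dict_by_index(index, rows):
    """index: list of row labels; rows: list of dicts, one per row."""
    return {label: row for label, row in zip(index, rows)}

index = ["a", "b"]
rows = [{"type": "AAA"}, {"type": "BBB"}]
result = to_dict_by_index(index, rows)
print(result)  # {'a': {'type': 'AAA'}, 'b': {'type': 'BBB'}}
```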
Upserts, Deletes And Incremental Processing on Big Data.
-
Updated
May 11, 2022 - Java
High-performance time-series aggregation for PostgreSQL
-
Updated
Feb 20, 2022 - C
high performance JSON encoder/decoder with stream API for Golang
-
Updated
Jan 4, 2022 - Go
a curated list of awesome streaming frameworks, applications, etc
-
Updated
May 10, 2022
A Python stream processing engine modeled after Yahoo! Pipes
-
Updated
Dec 28, 2021 - Python
It can be very difficult to piece together a reasonable estimate of the history of events from the current workers' logs because none of them have timestamps.
To that end, I think we should add timestamps to the logs.
This has some cons:
- We can't just use
@printf like we have been until now. We need to either include a timestamp in every @printf call (laborious and error prone) or c
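One way to avoid editing every call site is to wrap the printf-style logger once so the timestamp is prepended centrally. A sketch in Python (the project above is not Python; the `log` helper and its format are illustrative only):

```python
from datetime import datetime, timezone

def log(fmt, *args):
    """Prepend a UTC timestamp so individual call sites stay unchanged."""
    ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
    line = f"{ts} " + (fmt % args)
    print(line)
    return line  # returned to make the helper easy to test

log("worker %d finished batch %s", 3, "b-17")
```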
Stream Processing and Complex Event Processing Engine
-
Updated
Mar 28, 2022 - Java
-
Updated
May 10, 2022 - Go
For example, given a simple pipeline such as:
Pipeline p = Pipeline.create();
p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
.aggregate(aggregator)
.writeTo(Sinks.logger());
I'd like aggregator to be something requiring a non-serialisable dependency to do its work.
I know I can do this:
Pipeline p = Pipeline.create();
p.readFrom(TestSource
Wormhole is a SPaaS (Stream Processing as a Service) Platform
-
Updated
Dec 14, 2021 - JavaScript
The mapcat function seems to choke if you pass in a mapping function that returns a stream instead of a sequence:
user> (s/stream->seq (s/mapcat (fn [x] (s/->source [x])) (s/->source [1 2 3])))
()
Aug 18, 2019 2:23:39 PM clojure.tools.logging$eval5577$fn__5581 invoke
SEVERE: error in message propagation
java.lang.IllegalArgumentException: Don't know how to create ISeq from: manifold.
A microservices-based Streaming and Batch data processing in Cloud Foundry and Kubernetes
-
Updated
May 11, 2022 - Java
A lightweight stream processing library for Go
-
Updated
Feb 11, 2022 - Go
Framework for building Event-Driven Microservices
-
Updated
May 10, 2022 - Java
Lightweight real-time big data streaming engine over Akka
-
Updated
Mar 1, 2022 - Scala
I previously figured out a way to get the (x, y, z) data points for each frame from one hand, but I'm not sure how to do that for the new holistic model that they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data / print it to a text file? Or at least give me some directions as to h