kafka
Here are 7,377 public repositories matching this topic...
A repository covering six tutorial series: Spring Boot 2.X, Spring Cloud, Spring Cloud Alibaba, Dubbo, distributed message queues, and distributed transactions. If you find it useful, a Star in the top-right corner would be appreciated.
Updated Mar 26, 2022 - Java
Flink learning blog. http://www.54tianzhisheng.cn/ Covers Flink fundamentals, concepts, internals, hands-on practice, performance tuning, and source-code analysis. Includes study examples for Flink Connectors, Metrics, Libraries, the DataStream API, and the Table API & SQL, plus case studies of large production Flink projects (PV/UV counting, log storage, real-time deduplication across tens of billions of records, monitoring and alerting). Support for my column "The Big Data Real-Time Compute Engine Flink: Practice and Performance Tuning" is welcome.
Updated Apr 12, 2022 - Java
Open-source IoT Platform - Device management, data collection, processing and visualization.
Updated Apr 26, 2022 - Java
CMAK is a tool for managing Apache Kafka clusters
Updated Mar 17, 2022 - Scala
Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.
Updated Apr 27, 2022 - Java
Python Stream Processing
Updated Apr 23, 2022 - Python
The Apache Kafka C/C++ library
Updated Apr 26, 2022 - C
Is your feature request related to a problem? Please describe.
A user in the community Slack wanted to filter out/select items from a JSON array of objects. Given records in the following format:
'[{"type": "AAA", "timestamp": "2021-09-27"}, {"type": "BBB", "timestamp": "2021-09-27"}, {"type": "AAA", "tKafka library in Go
Updated Apr 23, 2022 - Go
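For the JSON-filtering feature request quoted above, here is a minimal standalone sketch in plain Go of the behavior being asked for. The record shape and the "AAA" filter value come from the issue; the program itself (struct names, the hard-coded filter) is purely illustrative and not the library's actual API.

package main

import (
	"encoding/json"
	"fmt"
)

// Item mirrors the record shape quoted in the feature request.
type Item struct {
	Type      string `json:"type"`
	Timestamp string `json:"timestamp"`
}

func main() {
	raw := `[{"type": "AAA", "timestamp": "2021-09-27"},
	         {"type": "BBB", "timestamp": "2021-09-27"}]`

	var items []Item
	if err := json.Unmarshal([]byte(raw), &items); err != nil {
		panic(err)
	}

	// Keep only the items whose type matches the requested filter.
	var filtered []Item
	for _, it := range items {
		if it.Type == "AAA" {
			filtered = append(filtered, it)
		}
	}
	fmt.Println(filtered) // [{AAA 2021-09-27}]
}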
[Gupao Academy hands-on project] An e-commerce platform built with Spring Boot and Dubbo: microservice architecture, online mall, e-commerce, high concurrency, Kafka, Elasticsearch.
Updated Feb 12, 2022 - Java
Under the hood, the Benthos csv input uses the standard encoding/csv package's csv.Reader struct.
The current implementation of the csv input doesn't allow setting the LazyQuotes field.
We have a use case where we need to set LazyQuotes in order for parsing to work correctly; see the sketch below this entry.
Updated Apr 26, 2022 - JavaScript
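For context on the issue above: LazyQuotes is a plain boolean field on the standard library's csv.Reader, so exposing it is a matter of plumbing one option through. A minimal sketch of what setting it does (the sample input is invented):

package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

func main() {
	// A bare quote inside an unquoted field; the default reader
	// rejects this with a parse error, LazyQuotes accepts it.
	input := `id,note
1,5" pipe fitting`

	r := csv.NewReader(strings.NewReader(input))
	r.LazyQuotes = true // the field the csv input would need to expose

	records, err := r.ReadAll()
	if err != nil {
		panic(err)
	}
	fmt.Println(records) // [[id note] [1 5" pipe fitting]]
}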
PipelineAI Kubeflow Distribution
Updated Apr 24, 2020 - Jsonnet
The Fastest Way to Build the Fastest Data Products. Build data-intensive applications and services in SQL — without pipelines or caches — using materialized views that are always up-to-date.
Updated Apr 26, 2022 - Rust
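As a sketch of the idea only (not the project's official example): Materialize speaks the PostgreSQL wire protocol, so a standard Go database/sql client can define and query such a view. The connection string, table, and view names below are assumptions.

package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // Materialize speaks the Postgres wire protocol
)

func main() {
	// Hypothetical connection string; adjust host/port/user for your setup.
	db, err := sql.Open("postgres", "postgres://materialize@localhost:6875/materialize?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// A view over a hypothetical orders table; the engine keeps the
	// aggregate incrementally up to date as new rows arrive.
	_, err = db.Exec(`CREATE MATERIALIZED VIEW order_totals AS
		SELECT customer_id, sum(amount) AS total
		FROM orders
		GROUP BY customer_id`)
	if err != nil {
		panic(err)
	}

	// Reading the view returns the always-fresh result.
	var id string
	var total float64
	row := db.QueryRow(`SELECT customer_id, total FROM order_totals LIMIT 1`)
	if err := row.Scan(&id, &total); err != nil && err != sql.ErrNoRows {
		panic(err)
	}
	fmt.Println(id, total)
}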
This comment says that the message ID is optional,
but for the SQL transport it is a mandatory attribute, which causes confusion.
Is it possible to fix this, or did I get something wrong? (A short sketch of always supplying a UUID follows below.)
https://github.com/ThreeDotsLabs/watermill/blob/b9928e750ba673cf93d442db88efc04706f67388/message/message.go#L20
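On the mandatory-UUID question, callers can at least always supply one explicitly. A minimal sketch; the in-memory GoChannel Pub/Sub stands in here for the SQL transport, and the topic name and payload are invented.

package main

import (
	"github.com/ThreeDotsLabs/watermill"
	"github.com/ThreeDotsLabs/watermill/message"
	"github.com/ThreeDotsLabs/watermill/pubsub/gochannel"
)

func main() {
	// In-memory Pub/Sub used only for illustration; the point is that
	// every message gets an explicit, non-empty UUID.
	pub := gochannel.NewGoChannel(gochannel.Config{}, watermill.NewStdLogger(false, false))

	msg := message.NewMessage(watermill.NewUUID(), []byte(`{"hello": "world"}`))

	if err := pub.Publish("example.topic", msg); err != nil {
		panic(err)
	}
}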
Version & Environment
Redpanda version: (use rpk version): v21.11.10
Kafka client: franz-go v1.4.0
A one-stop platform for Apache Kafka cluster metrics monitoring and operations management.
Updated Mar 17, 2022 - Java
A Microservice Toolkit from The New York Times
Updated Aug 3, 2021 - Go
Kafka Web UI
Updated Apr 15, 2022 - Java


I have noticed when ingesting backlog (older-timestamped data) that the "Messages per minute" line graph and the "sources" data do not line up.
The "Messages per minute" figure appears to be correct for the ingest rate, but the sources breakdown below it only shows messages whose own timestamps fall within the time window. This means that if, in the last hour, you've ingested logs from 2 days ago, that data is absent from the sources breakdown.
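That mismatch is the usual arrival-time versus event-time distinction. As a toy sketch (all names invented, not this product's actual code): a counter keyed on arrival time counts the backlog, while a breakdown keyed on the message's embedded timestamp drops it.

package main

import (
	"fmt"
	"time"
)

type logMsg struct {
	source    string
	timestamp time.Time // the time embedded in the message itself
}

func main() {
	now := time.Now()
	msgs := []logMsg{
		{"app-a", now},                      // fresh message
		{"app-b", now.Add(-48 * time.Hour)}, // backlog from 2 days ago
	}

	window := time.Hour
	arrived := 0                 // keyed on arrival time: counts everything
	bySource := map[string]int{} // keyed on message timestamp: drops backlog

	for _, m := range msgs {
		arrived++ // both messages arrived just now
		if now.Sub(m.timestamp) <= window {
			bySource[m.source]++
		}
	}
	fmt.Println(arrived, bySource) // 2 map[app-a:1] -- the two graphs disagree
}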