Python Stream Processing
Updated Nov 17, 2020 - Python
Apache Kafka running on Kubernetes
Machine Learning Platform and Recommendation Engine built on Kubernetes
[DEPRECATED] Docker images for Confluent Platform.
Scripts and samples to support Confluent Platform talks. May be rough around the edges. For automated tutorials and QA'd code, see https://github.com/confluentinc/examples/
This project contains examples which demonstrate how to deploy analytic models to mission-critical, scalable production environments leveraging Apache Kafka and its Streams API. Models are built with Python, H2O, TensorFlow, Keras, DeepLearning4 and other technologies.
Learn Kafka Streams with several examples!
equivalent to kafka-streams
Go stream processing library
A list about Apache Kafka
A library that provides an in-memory Kafka instance to run your tests against.
The catch-all handler currently pushes errors to Sentry. We would like to publish the same errors to New Relic as well, and plan to use the noticeError API provided by the New Relic Java agent.
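The fan-out described above can be sketched as a small reporter registry. This is an illustrative pattern, not code from the issue: the ErrorReporters class and the lambdas standing in for the backends are hypothetical, and in production the second reporter would call NewRelic.noticeError from the New Relic Java agent API alongside the existing Sentry capture.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ErrorReporters {
    private final List<Consumer<Throwable>> reporters = new ArrayList<>();

    // Register one backend, e.g. Sentry or New Relic.
    public void register(Consumer<Throwable> reporter) {
        reporters.add(reporter);
    }

    // Called from the catch-all handler: fan the error out to every backend.
    public void report(Throwable t) {
        for (Consumer<Throwable> reporter : reporters) {
            reporter.accept(t);
        }
    }

    public static void main(String[] args) {
        ErrorReporters reporters = new ErrorReporters();
        List<String> seen = new ArrayList<>();
        // Stand-ins for Sentry.captureException(t) and NewRelic.noticeError(t).
        reporters.register(t -> seen.add("sentry:" + t.getMessage()));
        reporters.register(t -> seen.add("newrelic:" + t.getMessage()));
        reporters.report(new RuntimeException("boom"));
        System.out.println(seen); // prints [sentry:boom, newrelic:boom]
    }
}
```

Keeping the backends behind a single registry means the catch-all handler does not change when a new error destination is added.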
Complex Event Processing on top of Kafka Streams
Is your feature request related to a problem? Please describe.
Some areas of the web portal have issues with screen readers. Here are a few examples
Describe the solution you'd like
Improve readability for screen readers across the web portal
Code samples for the Lightbend tutorial on writing microservices with Akka Streams, Kafka Streams, and Kafka
Thin Scala wrapper around Kafka Streams Java API
Scala DSL for Unit-Testing Processing Topologies in Kafka Streams
Real Time Big Data / IoT Machine Learning (Model Training and Inference) with HiveMQ (MQTT), TensorFlow IO and Apache Kafka - no additional data store like S3, HDFS or Spark required
StreamLine - Streaming Analytics
Kafka ecosystem ... but step by step!
Upgrade to .NET 5 (core projects + samples + unit tests + cross projects)
Currently, a StreamsExecutionEnvironment instance can be registered directly through the AzkarraContext#addExecutionEnvironment method or from the configuration.
The following example shows how to define an environment using configuration:
azkarra {
  // Create an environment for running the WordCountTopology
  environments = [
    {
      name: "dev"
      config = {}
      jo
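For context, a syntactically complete environments entry might look like the hedged sketch below. The streams.bootstrap.servers key and its value are illustrative assumptions, not taken from the Azkarra documentation; only the azkarra, environments, name, and config names appear in the original example.

```hocon
azkarra {
  environments = [
    {
      name: "dev"
      config = {
        // assumed: environment-scoped Kafka Streams properties go here
        streams.bootstrap.servers = "localhost:9092"
      }
    }
  ]
}
```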
The sum and count Kafka tutorials instruct the user to create a directory called aggregate-, whereas the actual folders for them on GitHub are called aggregating-.
There is no functional impact on current behavior (it still works), but it would be better to be consistent.
A collection of kafka-resources
Kafka Streams + Java + gRPC + TensorFlow Serving => Stream Processing combined with RPC / Request-Response
Scalable stream processing platform for advanced real-time analytics on top of Kafka and Spark. LogIsland also supports MQTT and Kafka Streams (Flink is on the roadmap). The platform performs complex event processing and is suitable for time-series analysis. A large set of valuable ready-to-use processors, data sources, and sinks are available.
Clojure transducers interface to Kafka Streams
OpenTracing Instrumentation for Apache Kafka Client
This is the central repository for all materials related to Kafka Streams: Real-time Stream Processing! Book by Prashant Pandey.
Add some integration tests, including the use of an Avro schema registry