B's Corner

Let's make a Kinesis consumer

· brandon lee

Our initial AWS Kinesis implementation started modestly with just a few low-volume data streams. While this may have been overengineered for our immediate needs, we anticipated handling high-volume data sets in the near future and wanted to build expertise with the platform early.

The turning point came when we began ingesting real-time flight tracking data from every aircraft in the airport’s airspace. Our existing Python-based synchronous consumer quickly reached its limits. This led me to rebuild the consumer in Go, leveraging its exceptional concurrency capabilities.

Achieving high concurrency in Go is stupid easy: with very little code we can consume each shard of every stream independently, giving us very high consumption throughput.

Since switching to Go, we have steadily onboarded new data streams, and the consumer now handles 5+ million messages per day.

“Simplicity is prerequisite for reliability.” — Edsger W. Dijkstra