Kafka Summit 2023: Key takeaways
Sion — We joined this event, organised by Confluent, one of OSO's key technology partners, as a startup sponsor. We brought our technical Kafka experts along to explain the importance of Apache Kafka best practices, and to outline how OSO professional services can increase the success of Kafka adoption in your organisation. Across two days, more than 1,500 Kafka enthusiasts attended a combination of workshops, presentations and networking sessions. Below, I'll distil the opening keynote, outline what is happening right now, and cover what's on the roadmap.
Keynote: The incredible adoption of Apache Kafka across all industries
Opening Kafka Summit London 2023, Jay Kreps took to the stage to address the business trends driving the need for real-time data. In an ever more competitive world, organisations are looking to data as their differentiating factor, and Kreps cited data-hungry AI models as one reason why streaming has become essential for processing high volumes of data quickly. Machine learning models have proven to be a game changer, from personalised chatbots in e-commerce to intelligent fraud detection in financial services, and Kafka is the engine powering all of them.
Keynote Highlights:
- The Data Streaming Platform is now real – Kafka sits at the core, with further layers built on top: connectors, stream processing, and tools to govern streaming data.
- Queues in Kafka are coming (KIP-932). Is this a play to land-grab more use cases, or will it actually prove useful day to day?
- Managed Flink is finally coming to Confluent Cloud. AWS and Aiven have been supporting the Flink community over the last few years while Confluent lagged behind with ksqlDB. Flink is now fast becoming the de facto standard for stream processing.
- Stream Sharing is Confluent's first step in competing with AWS Data Exchange, where customers can already consume and connect with over 3,500 unique data streams. Confluent Stream Sharing simplifies the process of sharing data outside the organisation – think of the ability to commercialise data that is currently generating no revenue. Companies can now create completely new revenue streams based on data integrated from a combination of organisations.
- Data Quality Rules, which allow inbuilt validation without the need for complex schemas, are also coming to the Confluent platform.
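To make the Data Quality Rules announcement concrete, the idea is that validation conditions can be attached to a schema registered in Schema Registry, so bad records are rejected at produce time rather than by downstream consumers. The fragment below is a rough sketch based on the keynote announcement, not the finalised API – the exact field names and rule types may differ in the released product:

```json
{
  "schema": "{ \"type\": \"record\", \"name\": \"Order\", \"fields\": [ { \"name\": \"total\", \"type\": \"double\" } ] }",
  "ruleSet": {
    "domainRules": [
      {
        "name": "nonNegativeTotal",
        "kind": "CONDITION",
        "type": "CEL",
        "mode": "WRITE",
        "expr": "message.total >= 0.0"
      }
    ]
  }
}
```

The appeal is that a simple condition like this no longer needs to be encoded into an elaborate schema or enforced in every producer's application code; the rule lives alongside the schema and is applied consistently.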
OSO’s Insights from Breakout Sessions & Lightning Talks
A couple of sessions that caught my attention and provided practical insights:
1. Apples and Oranges – Comparing Kafka Streams and Flink
An extremely popular session focused on an interesting comparison between Kafka Streams and Flink, given by Bill Bejeck, DevX Engineer at Confluent. My greatest takeaway: it's nearly impossible to recommend one over the other. It ultimately depends on the use case and the technical skills within the organisation – there is a great deal of overlap between the two technologies. The notable differences have been captured in an easy-to-use Kafka Streams vs Flink checklist, which matches your stream processing requirements against the framework that best meets them.
2. Designing a Data Mesh With Kafka
Another super interesting session, presented by the team at Saxo Bank, covered how they have implemented data mesh principles – and how the term can mean different things to different organisations. Leveraging open source tooling, such as the DataHub project, Confluent Cluster Linking and the Apache Kafka connector ecosystem, they have focused on operational data to deliver embedded analytics and reduce operational overheads, making data more discoverable, addressable, and trustworthy. This also ties into the open source project SpecMesh, which OSO is supporting.
Need Kafka support or advice?
We love helping companies get the most out of Kafka and event streaming. Check out the Kafka Services we offer and get in touch to find out how we can support your business.