Current 2025: Watching Confluent Prepare for Sale in Real Time
Sion Smith · 13 November 2025 · 4 min read
The entrance to Current 2025 in New Orleans had everything: jazz music, smoke machines, Instagram-worthy photo boards. It was vibrant, colourful, energetic! Honestly better than Austin 2024. Walking in at 8am, sessions already underway, I thought: “Right, this is going to be good.”
I was wrong.
What followed was only one and a half days that felt less like a conference and more like watching a company go through the motions whilst waiting for acquisition paperwork to clear. The energy at the entrance? Pure theatre. The substance inside? Concerningly thin.
I’ve attended Confluent events for years. This one felt very different, and not in a good way. This wasn’t just a disappointing conference. It was a signal about where Confluent is heading, and where they’re failing to lead the real-time data ecosystem.
The Keynote Demo That Wasn’t Ready
Jay Kreps remains an exceptional speaker: polished, coherent, telling a clear story. The Real-Time Context Engine announcement makes genuine sense (more on this later).
Then Jay handed over to the Shauns, and it fell apart. Stuttering. Lost in their own presentations. Constantly referring back to their notes. These weren’t demos of battle-tested solutions; this was vapourware presented by people who clearly hadn’t deployed it. No real-world reference points, because there are none. It had clearly been cobbled together weeks before the event.
Keynote Day 2… was worse: a talk show format with children’s toys thrown around the audience. When they asked how many executives were in attendance, one hand went up in a room of 500 people. One.
This is what happens when you’re scrambling to stay relevant but haven’t figured out how. Confluent sees AI happening and knows real-time data should be crucial, but they’re throwing solutions at the wall without doing the hard work of understanding what customers actually need.
No One Wants Flink for Everything
Confluent’s answer to agentic AI is apparently Flink for every workflow. Fraud detection! Anomaly detection! The online discourse around this has been brutal, and rightly so.
Here’s the problem: those pre-built ML functions they announced serve maybe 5% of actual enterprise use cases. Why would I build fraud detection when that’s what Stripe is for? Why build anomaly detection when that’s Datadog’s job?
Customers don’t want another framework to learn. Companies taking their first steps into agentic AI need baby steps and points of reference, not a black box managed service they can’t control.
The Confluent Intelligence Platform might be technically impressive, but it’s asking customers to make a massive leap of faith with no way to test, evaluate, or control things on their own infrastructure. No local development. No proper eval frameworks. No transparency.
That’s not a good developer experience! It’s vendor lock-in dressed up as innovation. Now they’re just confusing customers, shoving Flink down everyone’s throats whilst wrapping everything in proprietary services.
Cutting Corners, Losing Credibility
Let’s talk about what a conference on a budget looks like.
You pay $600 for a ticket and get a packet of crisps (chips for you Americans), a can of Coke, and a turkey wrap that’s been sitting out for hours. One vegan option. Compare that to Austin’s amazing food trucks.
Temporary staff couldn’t tell you where sessions were happening. The after-party required walking over a mile after a full conference day. In Austin, drinks were around the vendor stands and people mingled organically. Now? A forced exodus to a distant bar.
Multiple vendors told me the same thing: “Not worth the money. Hardly any leads. Where is everyone?”
On its own, each of these is minor. Together? This looks exactly like a company trying to squeeze maximum profit whilst cutting every corner. This looks like a company preparing to sell.
The Real-Time Context Engine: Right Idea, Wrong Delivery
Agentic workflows will need access to real-time data. When an agent is making decisions, it needs current contextual information about the business, not stale data from yesterday. Stream processing and real-time events from Kafka topics are the only way to provide that context.
Confluent’s onto something.
But this needs to be open source! Companies need to run it on their own infrastructure, test it locally, build and iterate on prompts, run evaluations against different models, and govern how context is shared.
The concept is sound. The implementation approach (proprietary, closed, managed-only) completely misses where the market is heading.
This is the opportunity. There’s space for an open-source framework that sits on top of Apache Kafka and provides real-time context engine capability. Companies should be able to deploy it themselves, test it properly, and integrate it into existing governance frameworks. Something that ensures Customer A’s data can’t leak to Customer B through an MCP server, with proper evals in place to validate the context being served and to account for the non-deterministic nature of generative AI.
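To make that concrete, here’s a minimal sketch in Python of the shape such a framework could take: a Kafka consumer that keeps a rolling window of recent events per tenant and will only ever hand an agent the requesting tenant’s own context. Everything here (the customer-events topic, keying messages by tenant ID, the 50-event window) is an assumption for illustration, not anything Confluent has shipped.

```python
# Minimal sketch of an open, self-hostable real-time context provider.
# Assumptions (not from any announced product): a "customer-events" topic,
# messages keyed by tenant ID, a 50-event context window per tenant.
from collections import defaultdict, deque

from confluent_kafka import Consumer  # pip install confluent-kafka

WINDOW = 50  # events of context kept per tenant (arbitrary choice)

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # your own broker: local, testable
    "group.id": "context-engine",
    "auto.offset.reset": "latest",          # agents need now, not history
})
consumer.subscribe(["customer-events"])     # hypothetical topic name

# Context is partitioned by tenant, so the isolation boundary is explicit:
# Customer A's events can never be returned as context for Customer B.
context: dict[str, deque] = defaultdict(lambda: deque(maxlen=WINDOW))

def get_context(tenant_id: str) -> list[str]:
    """What an MCP-style tool call would return to an agent: only the
    requesting tenant's recent events, never anyone else's."""
    return list(context[tenant_id])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    tenant = msg.key().decode() if msg.key() else "unknown"
    context[tenant].append(msg.value().decode())
```

None of this needs a managed black box: it runs on your own infrastructure, you can point it at a local broker in tests, and the isolation boundary is a few lines you can read and audit. That’s the developer experience Confluent isn’t offering.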
What This Means for the Future of Apache Kafka
Kafka has become a commodity. It’s the petrol (gas for you Americans again) in your car: you rely on it, but you’re not excited about it anymore. The real-time use cases that used to generate excitement are standard now. The market isn’t growing like it used to.
The mood is shifting on Confluent specifically. Customers don’t see the value for the price. “I can get everything I’m doing here in open source.” Trust is eroding.
Meanwhile, at OSO, we’ve noticed the questions have changed. We used to get loads of queries about how to run, debug, and troubleshoot Kafka. Those have dried up; people are asking ChatGPT instead.
The one standout session was “StreamLink: Real-Time Data Ingestion at OpenAI Scale”. Practical, detailed, showing a deep understanding of running Apache Kafka at scale. But they didn’t take Q&A.
Reading the Balance Sheet
My honest take: I don’t think Current 2026 happens. I think Confluent gets sold within the next 12 months, and this conference dies with the acquisition.
Everything about Current 2025 screamed “shop for sale”: from the cost-cutting, to the lack of coherent vision, to bringing the wrong laptop on stage for demos. These aren’t the mistakes of a confident market leader.
But here’s what matters: real-time data is more relevant than ever because of agentic AI. Confluent’s failure to seize this moment doesn’t mean the opportunity disappears; it means it’s up for grabs.
For OSO, the path forward is clear: help customers understand how real-time data fits into agentic AI, provide frameworks for deploying this capability in secure and auditable ways, and lead with open source rather than black boxes.
Kafka has become boring. Agentic AI makes it exciting again. Confluent’s too busy preparing for exit to notice.
The rest of us should get building.
Sion Smith is the co-founder and CTO of OSO, a consulting company based in the United Kingdom that provides companies with on-demand Kafka experts.