
7 Must-Have CLI Tools Every Apache Kafka Engineer Should Know in 2025

Sion Smith 7 August 2025

The OSO engineers have worked with over 200 Kafka deployments, and there’s one pattern we see repeatedly – teams struggling with the basic kafka-console-consumer and kafka-topics shell scripts, spending hours on tasks that should take minutes. The difference between novice and expert Kafka operations often comes down to knowing the right CLI tools for the job.

While Apache Kafka’s official shell scripts provide basic functionality, a powerful ecosystem of specialised CLI tools has emerged that can transform your Kafka operations from tedious manual work into streamlined, professional workflows. This guide reveals the 7 essential CLI tools that separate expert Kafka engineers from the rest, showing you exactly when and how to use each one for maximum impact.

The Game-Changers: Universal Kafka Operations

Tool #1: kcat – The Swiss Army Knife

Why it’s essential: kcat (formerly kafkacat) is described as “netcat for Kafka” – the most versatile tool for producing, consuming, and inspecting Kafka data with advanced offset management and serialisation support. It’s the first tool every Kafka engineer should master.

Core capabilities:

  • High-performance message production and consumption
  • Advanced offset positioning including timestamp-based consumption
  • Schema Registry integration for Avro messages
  • Mock cluster creation for testing environments
  • Comprehensive metadata inspection

Essential commands:

# Basic consumption from beginning
kcat -C -b localhost:9092 -t my-topic -o beginning

# Consume with custom formatting showing metadata
kcat -C -b localhost:9092 -t my-topic \
     -f 'Topic: %t[%p], Offset: %o, Key: %k, Value: %s\n'

# Produce messages with keys
echo "user123:login_event" | kcat -P -b localhost:9092 -t events -K:

# Query cluster metadata
kcat -L -b localhost:9092

# Consume from specific timestamp (very useful for debugging)
kcat -C -b localhost:9092 -t my-topic -o s@1640995200000

# Create mock cluster for testing
kcat -M 3
# Output: BROKERS=localhost:12345,localhost:46346,localhost:23599

Expert usage: Mock cluster creation for testing, timestamp-based consumption for incident investigation, filtering with Unix pipes, and seamless integration with Schema Registry for Avro messages. The OSO engineers use kcat daily for rapid message inspection and debugging production issues.
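
As a concrete example of pipe-based filtering, here is a minimal sketch that assumes the topic carries JSON values with a type field (a hypothetical schema) and uses jq to select matching events:

# Sketch: filter JSON-valued messages with jq (assumes values are JSON
# objects with a "type" field; adjust the filter to your schema)
kcat -C -b localhost:9092 -t events -o beginning -e -q | \
    jq -c 'select(.type == "login")'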

Tool #2: kaf – The Context Master

Why it’s essential: kaf eliminates repetitive typing with context management, allowing seamless switching between development, staging, and production clusters with interactive selection. It transforms verbose Kafka commands into simple, intuitive operations.

Core capabilities:

  • Interactive cluster context switching
  • Simplified command syntax
  • Consumer group management with visual feedback
  • Built-in auto-completion
  • Consumer group offset manipulation

Essential commands:

# Add cluster configurations
kaf config add-cluster local -b localhost:9092
kaf config add-cluster prod -b prod-kafka:9092

# Interactive cluster selection
kaf config select-cluster
# Use arrow keys to navigate between configured clusters: local, prod

# Simple topic consumption (no need for bootstrap-server!)
kaf consume my-topic

# List topics with clean output
kaf topics

# Describe topic with partition details
kaf topic describe my-topic

# Consumer group operations
kaf groups
kaf group describe my-consumer-group

# Reset consumer group offsets
kaf group commit my-group -t my-topic --offset latest --all-partitions

Expert usage: The OSO engineers configure kaf contexts for all environments (dev, staging, prod) at the start of each project. The interactive cluster selection prevents accidentally running commands against the wrong environment – a common cause of production incidents.
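
For scripted workflows the interactive picker can be skipped entirely; a minimal sketch, assuming the cluster names configured above:

# Switch context non-interactively (handy in scripts and CI)
kaf config use-cluster prod

# Sanity-check which cluster you are pointed at before doing anything destructive
kaf topics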

Tool #3: kafkactl – The Enterprise Powerhouse

Why it’s essential: kafkactl provides comprehensive functionality including ACL management, consumer group operations, protobuf support, and direct Kubernetes integration. It’s designed for enterprise environments with complex security and operational requirements.

Core capabilities:

  • Comprehensive ACL management
  • Advanced consumer group operations
  • Protobuf and Avro message handling
  • Kubernetes native integration
  • Rate-limited message production
  • Dynamic auto-completion

Essential commands:

# Advanced consumption with protobuf deserialization
kafkactl consume my-topic --value-proto-type MyMessageType --proto-file schema.proto

# Produce messages with rate limiting
cat messages.json | kafkactl produce my-topic --rate=100

# Consumer group management with partition details
kafkactl describe consumer-group my-group --partitions

# ACL management (enterprise security)
kafkactl create acl --topic my-topic --operation read --principal User:alice --allow
kafkactl get acl --topic my-topic

# Topic operations with detailed output
kafkactl get topics --output yaml

# Clone consumer group (useful for testing)
kafkactl clone consumer-group source-group target-group

Expert usage: Advanced features like rate limiting for message production, dynamic auto-completion, and cloud provider plugins for AWS and Azure environments. The OSO engineers particularly value the ACL management capabilities for enterprise security compliance.
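
kafkactl reads its contexts from a configuration file (by default ~/.config/kafkactl/config.yml); a minimal sketch with two contexts, where the names and broker addresses are assumptions:

# ~/.config/kafkactl/config.yml (sketch)
contexts:
  default:
    brokers:
      - localhost:9092
  prod:
    brokers:
      - prod-kafka:9092

# Switch between them with:
# kafkactl config use-context prod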

The Specialists: Targeted Solutions

Tool #4: topicctl – The GitOps Champion

Why it’s essential: topicctl enables declarative management of Kafka topics using YAML configurations that integrate with version control and CI/CD pipelines. It brings infrastructure-as-code principles to Kafka topic management.

Core capabilities:

  • Declarative YAML-based topic definitions
  • GitOps workflow integration
  • Idempotent topic operations
  • Dry-run previews and config-vs-cluster checks
  • Advanced partition placement strategies

Essential commands and configurations:

# topic-config.yaml
meta:
  name: user-events
  cluster: production
  environment: prod
spec:
  partitions: 12
  replicationFactor: 3
  retentionMinutes: 10080  # 7 days
  placement:
    strategy: balanced
  settings:
    cleanup.policy: delete
    compression.type: snappy

# Apply topic configuration (GitOps style)
topicctl apply topic-config.yaml

# Dry run to see what changes would be made
topicctl apply --dry-run topic-config.yaml

# Check that the config matches the live cluster state
topicctl check topic-config.yaml

# Interactive REPL for cluster exploration
topicctl repl --cluster-config cluster.yaml

# Get topic information
topicctl get topics --cluster-config cluster.yaml

Expert usage: Infrastructure-as-code approach with guided, idempotent processes that make topic management self-service even for non-Kafka experts. The OSO engineers implement topicctl in CI/CD pipelines to prevent configuration drift and enable proper change management.
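
A minimal sketch of such a pipeline, assuming topic configs live under topics/ in the repository (the stage split and file layout are assumptions):

# CI check stage: flag drift between the repo and the cluster
topicctl check --cluster-config cluster.yaml topics/*.yaml

# Deploy stage (after merge): apply without interactive prompts
topicctl apply --skip-confirm --cluster-config cluster.yaml topics/*.yaml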

Tool #5: kcctl – The Kafka Connect Specialist

Why it’s essential: kcctl provides kubectl-like experience for Kafka Connect operations, built as a native Quarkus application for superior performance. It simplifies the complex world of Kafka Connect management.

Core capabilities:

  • kubectl-inspired command structure
  • Comprehensive connector lifecycle management
  • Configuration context management
  • Native binary performance
  • Intuitive connector inspection

Essential commands:

# Configure connection to Kafka Connect cluster
kcctl config set-context local --cluster http://localhost:8083

# Get cluster information
kcctl info

# List available connector plugins
kcctl get plugins

# Deploy a connector from JSON config
kcctl apply -f debezium-postgres-connector.json

# Check connector status
kcctl get connectors
kcctl describe connector my-postgres-connector

# Manage connector lifecycle
kcctl pause connector my-postgres-connector
kcctl resume connector my-postgres-connector
kcctl restart connector my-postgres-connector

# Delete connector
kcctl delete connector my-postgres-connector

# Patch connector configuration
kcctl patch connector my-postgres-connector --set tasks.max=4
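
The JSON file passed to apply -f above follows the standard Kafka Connect creation payload (a name plus a config map); a minimal sketch for a Debezium Postgres source, with all connection details as placeholders:

# Sketch of debezium-postgres-connector.json (placeholder values)
cat > debezium-postgres-connector.json <<'EOF'
{
  "name": "my-postgres-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "tasks.max": "1",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "debezium",
    "database.password": "secret",
    "database.dbname": "inventory",
    "topic.prefix": "inventory"
  }
}
EOF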

Expert usage: Comprehensive connector lifecycle management with intuitive commands for registering, examining, restarting, and deleting connectors. The OSO engineers use kcctl to standardise Kafka Connect operations across teams, reducing the learning curve for complex connector management.

Tool #6: kaskade – The Visual Explorer

Why it’s essential: kaskade offers a text user interface (TUI) with real-time monitoring, topic filtering, and Schema Registry support for interactive cluster exploration. It provides a visual, real-time view of your Kafka cluster.

Core capabilities:

  • Interactive terminal user interface
  • Real-time message streaming
  • Advanced filtering by key, value, header, and partition
  • Schema Registry integration for Avro and JSON Schema
  • Topic and consumer group management
  • Runtime auto-refresh

Essential commands:

# Basic TUI consumer mode
kaskade consumer -b localhost:9092 -t my-topic

# Consumer with Schema Registry support
kaskade consumer -b localhost:9092 -t avro-topic \
        -k registry -v registry \
        --registry url=http://localhost:8081

# Admin mode for topic management
kaskade admin -b localhost:9092

# Consumer with specific deserialization
kaskade consumer -b localhost:9092 -t my-topic \
        -k string -v json --from-beginning

# Secure connection example
kaskade consumer -b kafka:9093 -t my-topic \
        -c security.protocol=SASL_SSL \
        -c sasl.mechanism=PLAIN \
        -c sasl.username=user \
        -c sasl.password=pass

Configuration file example:

# kaskade.yaml
kaskade:
  debug: off
  refresh: on
  refresh-rate: 5

bootstrap-servers: localhost:9092
client-id: kaskade-client

security:
  protocol: PLAINTEXT

Expert usage: Advanced deserialization support for JSON, Avro, and Protobuf with runtime auto-refresh and comprehensive filtering capabilities. The OSO engineers use kaskade for real-time cluster monitoring during deployments and for training new team members on Kafka concepts through its intuitive visual interface.

Tool #7: kcli – The Minimalist’s Choice

Why it’s essential: kcli is a lightweight Go-based CLI that covers the essential Kafka operations with almost no configuration overhead. It deliberately trades breadth for simplicity, which makes it the easiest of the seven tools to drop into a script.

Core capabilities:

  • Lightweight binary with fast startup
  • Essential Kafka operations without bloat
  • Container-friendly design
  • Minimal configuration requirements
  • Scripting-optimized interface

Essential commands:

# Basic topic operations
kcli -brokers localhost:9092 list topics
kcli -brokers localhost:9092 create topic my-topic --partitions 3

# Simple message production
echo "test message" | kcli -brokers localhost:9092 produce -topic my-topic

# Basic consumption
kcli -brokers localhost:9092 consume -topic my-topic -from-beginning

# Consumer group information
kcli -brokers localhost:9092 describe group my-group

# Container usage
docker run --rm kcli:latest -brokers kafka:9092 list topics

Expert usage: Perfect for containerised environments and automated scripts where minimal dependencies and fast startup times are crucial. The OSO engineers deploy kcli in monitoring containers and CI/CD pipelines where overhead must be minimised.
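
A sketch of the kind of lightweight health check this enables, reusing only the commands shown above (and assuming one topic name per output line):

# Fail a monitoring or CI step if a required topic is missing
kcli -brokers kafka:9092 list topics | grep -q '^my-topic$' || {
    echo "my-topic not found" >&2
    exit 1
}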

The Expert’s Toolkit: When to Use What

Daily Operations Matrix

Message inspection and debugging: Use kcat for its versatility and powerful filtering capabilities. It’s the go-to tool for investigating production issues and understanding message flows.

Multi-environment management: Use kaf for context switching and simplified commands. Its interactive cluster selection prevents costly mistakes across environments.

Enterprise operations: Use kafkactl for comprehensive feature sets and cloud integrations. Essential for organisations with complex security and compliance requirements.

Infrastructure management: Use topicctl for declarative, version-controlled operations. Implement this for any environment where change management and auditability matter.

Connect operations: Use kcctl for streamlined connector management. Simplifies the complexity of Kafka Connect operations significantly.

Visual exploration: Use kaskade for interactive monitoring and real-time insights. Excellent for training and real-time operational visibility.

Lightweight automation: Use kcli for minimal-overhead scripting scenarios. Perfect for containerised monitoring and CI/CD integration.

The Progressive Implementation Strategy

Week 1: Replace basic kafka-console-* usage with kcat. This single change will immediately improve your productivity and debugging capabilities.
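
The swap is largely one-to-one; for example, the stock console consumer and its kcat equivalent:

# Before: bundled shell script
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic my-topic --from-beginning

# After: kcat
kcat -C -b localhost:9092 -t my-topic -o beginning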

Week 2: Implement context management with kaf or kafkactl. Set up configurations for all your environments and stop typing bootstrap servers manually.

Week 3: Establish declarative topic management with topicctl. Move your topic configurations into version control and implement proper change management.

Week 4: Deploy specialised tools (kcctl, kaskade, kcli) based on your specific operational needs and team requirements.

Pro Tips and Advanced Techniques

Tool Combination Strategies

The debugging workflow: Combine kcat + kaskade for comprehensive message investigation. Use kcat for precise queries and kaskade for real-time visual exploration.
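
In practice that pairing might look like this, with the epoch-millisecond timestamp standing in for the start of your incident window:

# Step 1: capture the incident window precisely with kcat
kcat -C -b localhost:9092 -t orders -o s@1754550000000 -e > incident.log

# Step 2: watch the same topic live in kaskade while validating the fix
kaskade consumer -b localhost:9092 -t orders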

The deployment pipeline: Integrate topicctl + kafkactl for infrastructure-as-code topic management with comprehensive operational capabilities.

The multi-cluster strategy: Use kaf + kcctl for seamless environment management, particularly useful for teams managing multiple Kafka Connect clusters.

Configuration Management

Standardise connection configurations across tools by using environment variables and configuration files. The OSO engineers maintain team-wide tool configurations in a shared repository, ensuring consistent setups across all team members.

Implement secure credential management using tools like HashiCorp Vault or AWS Secrets Manager, particularly important when using tools like kafkactl and kcctl in production environments.
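
For example, credentials can be pulled into the environment at invocation time instead of being stored in tool configs; a sketch with a placeholder Vault path and librdkafka-style properties:

# Fetch a SASL password from Vault at run time (path and field are placeholders)
export SASL_PASSWORD=$(vault kv get -field=password secret/kafka/prod)
kcat -L -b kafka:9093 \
     -X security.protocol=SASL_SSL \
     -X sasl.mechanisms=PLAIN \
     -X sasl.username=svc-kafka \
     -X sasl.password="$SASL_PASSWORD"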

Set up shell aliases and completion scripts for frequently used commands to maximise efficiency and reduce typing errors.
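
A starting point for such a shared setup (the alias names are only suggestions):

# ~/.bashrc sketch: aliases and completion for the tools above
alias kmeta='kcat -L -b localhost:9092'   # quick cluster metadata
alias ktail='kaf consume'                 # the kaf context supplies the cluster
source <(kafkactl completion bash)        # kafkactl generates its own completion script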

Actions

The OSO engineers have witnessed teams reduce their Kafka operational overhead by 60% simply by adopting the right CLI tools. What used to require complex shell scripts and manual coordination now becomes streamlined, repeatable processes.

Your next steps: Start with kcat and kaf as your foundation – these two tools alone will transform your daily Kafka operations. Then progressively add specialised tools based on your specific requirements. The investment in learning these tools pays immediate dividends in reduced errors, faster troubleshooting, and improved team productivity.

The competitive advantage: In 2025, knowing these CLI tools isn’t just about professional competency – it’s about operational excellence. Teams using modern Kafka tooling consistently outperform those stuck with basic shell scripts, delivering more reliable systems with less operational overhead.

Ready to revolutionise your Kafka operations? The OSO engineers have put together comprehensive CLI tools comparison guides and can help optimise your Kafka infrastructure. Contact us to learn how we can accelerate your team’s Kafka expertise and operational maturity.
