Messaging Systems Compared: Kafka, RabbitMQ, AWS SQS & SNS
A comprehensive guide to modern messaging systems. Learn the core concepts, best use cases, and hands-on implementation patterns for Apache Kafka, RabbitMQ, AWS SQS, and AWS SNS — and discover which tool to reach for in any situation.
Kafka Tutorial
Interactive Kafka app built with Next.js and KafkaJS. Create topics, produce messages, consume them, and inspect consumer groups.
View on GitHub

RabbitMQ Tutorial
Interactive demos built with Next.js and amqplib covering all four exchange types — Fanout, Direct, Topic, and Headers — plus Dead Letter Queues.
View on GitHub

SQS & SNS Tutorial
Full messaging demo with Next.js 16 and AWS SDK v3. Covers SQS queuing and SNS pub/sub patterns with real AWS integration.
View on GitHub

Introduction
Modern distributed systems rely on messaging infrastructure to decouple components, handle asynchronous workloads, and scale reliably. But not all messaging systems are created equal — Apache Kafka, RabbitMQ, AWS SQS, and AWS SNS each solve different problems with fundamentally different architectures.
This guide consolidates three hands-on tutorials into one comprehensive reference. You will learn the core concepts behind each system, see real code examples, and walk away with a clear mental model for choosing the right tool for any scenario.
Summary Decision Matrix
Before diving into each system, here is a high-level comparison to orient your thinking:
| Feature | Apache Kafka | RabbitMQ | AWS SQS | AWS SNS |
|---|---|---|---|---|
| Primary Use Case | Massive event streaming, log aggregation, event replay | Complex message routing, traditional task queues | Simple serverless task queues, buffering/load-leveling | Broadcasting messages, Fan-out, User notifications |
| Architecture | Distributed Log / Append-only | Message Broker (Exchanges & Queues) | Simple Point-to-Point Queue | Pub/Sub (Push based) |
| Message Retention | Days, weeks, or indefinitely | Deleted immediately after consumption (Ack) | Up to 14 days | No retention (fire-and-forget unless queued) |
| Consumer Model | Pull (via Offsets) | Push (default) or Pull | Pull (Long-polling) | Push (to HTTP, Lambda, SQS, Email, SMS) |
| Replayability | ✅ Yes (Native offset resetting) | ❌ No (Once consumed, it's gone) | ❌ No | ❌ No |
| Routing Logic | Very basic (Topics/Partitions) | Highly Advanced (Regex, Headers, Routing Keys) | None (1 Queue = 1 destination) | Basic (Subscription filters) |
| Overhead/Ops | High (Requires KRaft, clustering, tuning) | Medium (Server management, Erlang VM, clustering) | Zero (Fully managed by AWS) | Zero (Fully managed by AWS) |
Part 1: Apache Kafka — The Event Streamer & Storage
Core Concept
Kafka is a distributed event streaming platform based on an append-only, immutable log. It acts as a highly scalable “dumb broker with smart consumers.” Messages are not deleted when consumed; they are retained for a configured time (e.g., 7 days), allowing multiple consumer groups to read the same data at their own pace using offsets.
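To make offsets concrete, here is a toy append-only log in TypeScript. This is a mental model, not KafkaJS — the `MiniLog` class and its methods are illustrative. Each consumer group commits its own read position, so consuming never deletes data, and resetting an offset replays history:

```typescript
// Toy model of Kafka's core idea: an append-only log plus per-group offsets.
class MiniLog {
  private log: string[] = [];
  private offsets: Record<string, number> = {}; // groupId → committed offset

  append(message: string): number {
    this.log.push(message);
    return this.log.length - 1; // the message's offset
  }

  // Pull every message the group has not yet seen, then commit the new offset.
  poll(groupId: string): string[] {
    const from = this.offsets[groupId] ?? 0;
    const batch = this.log.slice(from);
    this.offsets[groupId] = this.log.length;
    return batch;
  }

  // Replay: reset a group's offset to re-read history.
  seek(groupId: string, offset: number): void {
    this.offsets[groupId] = offset;
  }
}
```

Note that `poll` never removes anything from `log` — retention, not consumption, decides when data disappears. That single design choice is what makes replay possible.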
Best Applications & Scenarios
High-Throughput Telemetry & Log Aggregation
Gathering millions of logs, metrics, or IoT sensor data points per second.
Website Activity Tracking
Tracking real-time user clicks, page views, and searches — the exact reason LinkedIn originally created Kafka.
Stream Processing & Real-Time Analytics
Filtering, aggregating, or joining streams of data on the fly (often using Kafka Streams or Apache Flink) before saving to a database.
Event Sourcing & Audit Logs
Scenarios where the “history of events” is the source of truth, and you need the ability to “replay” messages by resetting consumer offsets to rebuild database state or test new features.
Complex Data Pipelines (ETL)
Moving massive amounts of data from source databases to data warehouses while standardizing schemas.
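The event-sourcing idea above can be sketched in a few lines: state is never stored directly, only derived by replaying the event log from offset 0. The `AccountEvent` type and `replay` helper here are hypothetical illustrations, not part of any tutorial codebase:

```typescript
// Event sourcing in miniature: the event history is the source of truth,
// and current state is rebuilt by folding over the events in order.
type AccountEvent =
  | { type: 'deposited'; amount: number }
  | { type: 'withdrawn'; amount: number };

function replay(events: AccountEvent[]): { balance: number } {
  return events.reduce(
    (state, ev) => ({
      balance:
        ev.type === 'deposited'
          ? state.balance + ev.amount
          : state.balance - ev.amount,
    }),
    { balance: 0 } // initial state before any events
  );
}
```

Because Kafka retains the events, you can rerun `replay` at any time — against a fixed bug, a new schema, or a fresh database — simply by resetting the consumer offset.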
Core Kafka Concepts
| Concept | Key Takeaway |
|---|---|
| Topics & Partitions | A topic is a named stream split into partitions. Each message gets an immutable offset per partition. |
| Producers & Message Keys | No key → round-robin across partitions. With a key → same key always lands on the same partition (ordering guarantee). |
| Consumers & Deserialization | Pull model. Reads messages in offset order within each partition. |
| Consumer Groups & Offsets | A group shares partitions among its consumers. Kafka tracks progress in __consumer_offsets. |
| Brokers & Replication | A cluster of brokers. Each partition has one leader broker. Replication factor > 1 = fault tolerance. |
| KRaft (No ZooKeeper) | Kafka 3.3.1+ runs on the built-in Raft protocol (KRaft); ZooKeeper is removed entirely in Kafka 4.x. Clients connect to brokers — never point them at ZooKeeper. |
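The key-to-partition rule in the table above can be sketched as follows. This is a deliberately simplified stand-in — KafkaJS's default partitioner actually uses a murmur2 hash — but any deterministic hash demonstrates why equal keys always land on the same partition:

```typescript
// Illustrative only: real Kafka clients use murmur2, not this toy 31x hash.
let roundRobin = 0;

function pickPartition(key: string | null, numPartitions: number): number {
  if (key === null) return roundRobin++ % numPartitions; // no key: spread evenly
  let hash = 0;
  for (const ch of key) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % numPartitions; // same key → same hash → same partition
}
```

Since ordering in Kafka is only guaranteed within a partition, this deterministic mapping is exactly what gives you per-key ordering (e.g., all GPS pings from `truck-7` arrive in order).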
Getting Started with Kafka
The tutorial requires Node.js v18+ and Docker. Start a single-broker Kafka instance with Docker Compose:
# docker-compose.yml — minimal single-broker Kafka (KRaft mode)
version: "3"
services:
kafka:
image: confluentinc/cp-kafka:7.6.0
container_name: kafka1
ports:
- "9092:9092"
environment:
KAFKA_NODE_ID: 1
KAFKA_PROCESS_ROLES: broker,controller
KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_LOG_DIRS: /var/lib/kafka/data
CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk
# Start Kafka
docker compose up -d

Alternatively, use Option B for a visual UI: clone the conduktor-kafka-single.yml stack, which adds a Conduktor web UI at http://localhost:8080.
1. Clone the repository:
   git clone https://github.com/audoir/kafka-tutorial.git
2. Install dependencies:
   npm install
3. Start the development server:
   npm run dev
Tutorial Walkthrough — Five Tabs
The Kafka tutorial app has five tabs. Work through them in order for the best learning experience.
📚 Tab 1: Concepts
Read through the concept cards covering Topics & Partitions, Producers & Message Keys, Consumer Groups & Offsets, Brokers & Replication, and KRaft. The bottom of the page shows 8 common Kafka use cases.
📋 Tab 2: Topics
Create a topic (e.g., demo-topic with 3 partitions), then inspect it. More partitions = more parallelism, but you cannot reduce partition count after creation.
📤 Tab 3: Producer
Send messages with and without keys. No key → round-robin across partitions. With a key → same key always lands on the same partition (ordering guarantee). Try the “Send Demo Batch (truck GPS)” button to see key-based routing in action.
📥 Tab 4: Consumer
Read messages back from Kafka. Consume with group-a from beginning, then consume again — you get 0 messages (offsets already committed). Switch to group-b and consume from beginning — you get all messages again. Each consumer group tracks its own offsets independently.
👥 Tab 5: Consumer Groups
Understand partition assignment scenarios (3 partitions / 3 consumers = perfect; 3 partitions / 4 consumers = one idle consumer). Learn about eager vs. cooperative rebalancing and at-most-once vs. at-least-once vs. exactly-once delivery semantics.
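The assignment scenarios in this tab reduce to dealing partitions out across the group's members. A rough sketch — not Kafka's actual range or cooperative-sticky assignors, just the counting argument:

```typescript
// Deal partitions round-robin across a consumer group: each partition goes
// to exactly one consumer; consumers beyond the partition count sit idle.
function assignPartitions(
  partitions: number,
  consumers: string[]
): Map<string, number[]> {
  const assignment = new Map<string, number[]>(
    consumers.map((c): [string, number[]] => [c, []])
  );
  for (let p = 0; p < partitions; p++) {
    assignment.get(consumers[p % consumers.length])!.push(p);
  }
  return assignment;
}
```

Running this with 3 partitions and 4 consumers leaves the fourth consumer with an empty assignment — the "one idle consumer" scenario from the tab.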
How the Code Works — KafkaJS Client
// lib/kafka.ts — The KafkaJS Client singleton
import { Kafka } from 'kafkajs';
const kafka = new Kafka({
clientId: 'kafka-tutorial',
brokers: ['localhost:9092'],
});
// Shared admin client — used for topic management and cluster inspection
export const admin = kafka.admin();
// Shared producer — used to send messages to topics
export const producer = kafka.producer();
// Factory function — creates a new consumer with a given group ID
export function createConsumer(groupId: string) {
return kafka.consumer({ groupId });
}

Why singletons for admin and producer? KafkaJS clients maintain persistent TCP connections to the broker. Reusing a single instance avoids the overhead of creating a new connection on every API request. Each API route calls .connect() before use and .disconnect() after.

Why a factory for consumers? Each consumer must belong to a specific group, and different tutorial experiments use different group IDs.
Kafka Project Structure
kafka-tutorial/
├── app/
│ ├── api/
│ │ ├── health/route.ts # GET — ping Kafka, return broker/cluster info
│ │ ├── topics/route.ts # GET list, POST create, DELETE topic
│ │ ├── produce/route.ts # POST send messages
│ │ ├── consume/route.ts # POST consume messages
│ │ └── consumer-groups/route.ts # GET list groups
│ ├── layout.tsx
│ └── page.tsx # Main tabbed UI + connection status badge
├── components/
│ ├── ConceptsPanel.tsx # Theory reference with diagrams
│ ├── TopicsPanel.tsx # Create/list/delete topics
│ ├── ProducerPanel.tsx # Send messages with/without keys
│ ├── ConsumerPanel.tsx # Consume and inspect messages
│ └── ConsumerGroupsPanel.tsx # Group scenarios + live group list
└── lib/
    └── kafka.ts                     # KafkaJS client singleton

Part 2: RabbitMQ — The Smart Message Router
Core Concept
RabbitMQ is a traditional message broker that implements the Advanced Message Queuing Protocol (AMQP). It is a “smart broker with dumb consumers.” It uses Exchanges (Direct, Fanout, Topic, Headers) to apply complex routing rules before placing messages into Queues. Once a message is consumed and acknowledged, it is permanently deleted from the queue.
Best Applications & Scenarios
Complex Routing Requirements
When you need intricate rules for who gets what message. For example, a Topic exchange routing messages matching eu.payroll.* to one queue, and us.hr.* to another.
Task/Worker Queues (Background Jobs)
Distributing time-consuming tasks (like image processing or PDF generation) among a pool of workers with round-robin load balancing.
Legacy/Multi-Protocol Integration
Because it supports AMQP, MQTT, and STOMP, it is excellent for integrating legacy enterprise systems or IoT devices.
Microservices RPC
If you need an asynchronous Request/Reply pattern (Remote Procedure Call) between two microservices, RabbitMQ handles this natively and elegantly.
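The round-robin load balancing mentioned under Task/Worker Queues can be sketched as follows. This is illustrative only — RabbitMQ's broker performs this dispatching for you whenever several consumers share one queue:

```typescript
// Competing-consumers sketch: tasks from one queue are dealt out to
// workers in rotation, spreading the load evenly.
function dispatch(tasks: string[], workers: string[]): Map<string, string[]> {
  const load = new Map<string, string[]>(
    workers.map((w): [string, string[]] => [w, []])
  );
  tasks.forEach((task, i) => load.get(workers[i % workers.length])!.push(task));
  return load;
}
```

In real RabbitMQ you would also set a prefetch limit (`channel.prefetch(1)`) so a slow worker does not accumulate a backlog while others sit idle.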
The AMQ Model
Exchange Types Overview
| Type | Routing Logic | Routing Key? | Best For |
|---|---|---|---|
| Fanout | All bound queues | ❌ Ignored | Broadcasting, notifications |
| Direct | Exact key match | ✅ Exact match | Log levels, task routing |
| Topic | Pattern matching | ✅ Wildcards (* #) | Flexible routing, microservices |
| Headers | Header attributes | ❌ Ignored | Attribute-based routing |
Getting Started with RabbitMQ
Start RabbitMQ locally with Docker:
docker run -d \
--name rabbitmq \
-p 5672:5672 \
-p 15672:15672 \
rabbitmq:management

Open the Management UI at http://localhost:15672 (username: guest, password: guest).

1. Clone the repository:
   git clone https://github.com/audoir/rabbitmq-tutorial.git
2. Install dependencies:
   npm install
3. Start the development server:
   npm run dev
The Setup → Publish → Consume Pattern
Each exchange type has three dedicated API routes. Always call Setup first — in RabbitMQ, messages are routed to queues at the moment of publishing. If a queue does not exist yet, the message is silently dropped.
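A toy broker model shows why Setup must come first: routing happens at publish time, and a message with no bound queue has nowhere to go. The `MiniExchange` class below is purely illustrative, not amqplib:

```typescript
// Toy fanout-style exchange: publish copies the message into every queue
// bound at that instant; with zero bound queues, the message is dropped.
class MiniExchange {
  private queues = new Map<string, string[]>();

  bindQueue(name: string): void {
    if (!this.queues.has(name)) this.queues.set(name, []);
  }

  publish(message: string): number {
    for (const q of this.queues.values()) q.push(message);
    return this.queues.size; // how many queues received a copy
  }

  read(name: string): string[] {
    return this.queues.get(name) ?? [];
  }
}
```

Publishing before any `bindQueue` call returns 0 — the message is silently lost, exactly the failure mode the Setup step prevents.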
Exchange Type Walkthroughs
Fanout Exchange
A fanout exchange broadcasts every message to all bound queues. The routing key is ignored entirely.
// Fanout: one publish → all queues receive a copy
channel.publish('fanout.exchange', '', Buffer.from(message));
// Routing key is empty string — it's ignored by fanout exchanges

Direct Exchange
A direct exchange routes messages to queues whose binding key exactly matches the routing key.
// Direct: exact routing key match
channel.publish('direct.exchange', 'error', Buffer.from(message));
// Only queues bound with key 'error' receive this message

Topic Exchange
A topic exchange routes using wildcard pattern matching on dot-separated routing keys. * matches exactly one word; # matches zero or more words.
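These two wildcard rules can be captured in a small recursive matcher. This is a sketch of the matching semantics only — the real matching happens inside the broker, not in amqplib client code:

```typescript
// Topic-exchange matching rules: '*' = exactly one word, '#' = zero or
// more words, applied to dot-separated routing keys.
function topicMatches(pattern: string, routingKey: string): boolean {
  const match = (p: string[], k: string[]): boolean => {
    if (p.length === 0) return k.length === 0;
    const [head, ...rest] = p;
    if (head === '#') {
      // '#' can swallow zero words, or consume one word and try again
      return match(rest, k) || (k.length > 0 && match(p, k.slice(1)));
    }
    if (k.length === 0) return false;
    return (head === '*' || head === k[0]) && match(rest, k.slice(1));
  };
  return match(pattern.split('.'), routingKey.split('.'));
}
```

This reproduces the comment in the example below: `logs.*` matches `logs.error` but not `logs.error.critical`, while `logs.#` matches both.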
// Topic: wildcard pattern matching
// Pattern 'logs.*' matches 'logs.error' but NOT 'logs.error.critical'
// Pattern 'logs.#' matches both 'logs.error' and 'logs.error.critical'
channel.bindQueue('queue.star', 'topic.exchange', 'logs.*');
channel.bindQueue('queue.hash', 'topic.exchange', 'logs.#');

Headers Exchange
A headers exchange routes based on message header attributes. Use x-match: all for AND logic or x-match: any for OR logic.
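The two x-match modes amount to AND versus OR over the binding's headers. A sketch of the rule (the matching runs in the broker; this function is illustrative):

```typescript
// Headers-exchange matching: x-match=all → every bound header must be
// present with an equal value; x-match=any → at least one must match.
function headersMatch(
  binding: Record<string, string>,
  xMatch: 'all' | 'any',
  headers: Record<string, string>
): boolean {
  const hits = Object.entries(binding).filter(([k, v]) => headers[k] === v);
  return xMatch === 'all'
    ? hits.length === Object.keys(binding).length
    : hits.length > 0;
}
```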
// Headers: attribute-based routing
channel.publish('headers.exchange', '', Buffer.from(message), {
headers: { format: 'pdf', type: 'report' }
});
// Queue bound with x-match=all, format=pdf, type=report → MATCH ✓
// Queue bound with x-match=all, format=pdf, type=invoice → NO MATCH ✗
// Queue bound with x-match=any, format=pdf, type=invoice → MATCH ✓

Dead Letter Queue (DLQ)
A DLQ captures messages that cannot be processed: rejected by a consumer (with requeue=false), expired via TTL, or dropped when the queue exceeds its length limit.
// Setup: main queue with dead letter exchange configured
channel.assertQueue('main.queue', {
durable: true,
arguments: {
'x-dead-letter-exchange': 'dlx.exchange',
'x-message-ttl': 5000 // optional: auto-expire after 5s
}
});
// Reject a message → routes to DLX → lands in dead letter queue
channel.nack(msg, false, false); // requeue=false → dead letter
// x-death header added by RabbitMQ
{
queue: 'main.queue', // original queue name
reason: 'rejected', // 'rejected', 'expired', or 'maxlen'
time: <timestamp>,
exchange: 'main.exchange',
'routing-keys': ['key']
}

Shared Connection Helper
// lib/rabbitmq.ts
import amqp from 'amqplib';
const RABBITMQ_URL = process.env.RABBITMQ_URL || 'amqp://guest:guest@localhost:5672';
export async function createConnection() {
const connection = await amqp.connect(RABBITMQ_URL);
const channel = await connection.createChannel();
return { connection, channel };
}
export async function closeConnection(
channel: amqp.Channel,
connection: amqp.Connection
) {
await channel.close();
await connection.close();
}

RabbitMQ Project Structure
app/
├── page.tsx # Tutorial home page
├── api/
│ ├── status/route.ts # GET: check RabbitMQ connection
│ ├── fanout/
│ │ ├── setup/route.ts
│ │ ├── publish/route.ts
│ │ └── consume/route.ts
│ ├── direct/
│ │ ├── setup/route.ts
│ │ ├── publish/route.ts
│ │ └── consume/route.ts
│ ├── topic/
│ │ ├── setup/route.ts
│ │ ├── publish/route.ts
│ │ └── consume/route.ts
│ ├── headers/
│ │ ├── setup/route.ts
│ │ ├── publish/route.ts
│ │ └── consume/route.ts
│ └── deadletter/
│ ├── setup/route.ts
│ ├── publish/route.ts
│ ├── reject/route.ts
│ └── consume/route.ts
└── tutorial/
├── fanout/page.tsx
├── direct/page.tsx
├── topic/page.tsx
├── headers/page.tsx
└── deadletter/page.tsx
lib/
└── rabbitmq.ts # Shared: createConnection / closeConnection

Part 3: AWS SQS & SNS — The Serverless Messaging Duo
AWS SQS — Core Concept
Amazon Simple Queue Service (SQS) is a fully managed, point-to-point, pull-based message queue. It requires zero infrastructure maintenance. It is extremely simple compared to RabbitMQ: you put a message in, and a consumer pulls it out.
SQS Best Applications & Scenarios
Cloud-Native Task Queuing
If your infrastructure is already in AWS, SQS is the easiest way to decouple a web application from background workers (e.g., uploading an image to S3, putting a message in SQS, and having a Lambda function resize the image).
Load Leveling / Buffering
Protecting a fragile downstream database or third-party API. If traffic spikes randomly, SQS absorbs the massive influx of requests, allowing the backend to process them at a steady, manageable rate.
Strict Ordering & Exactly-Once Processing
By using SQS FIFO (First-In-First-Out) queues, you can guarantee the exact order of messages and ensure they are processed only once — useful for financial transactions or inventory updates.
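A toy model of FIFO deduplication: within the dedup window, a repeated MessageDeduplicationId is silently dropped, and delivery order matches send order. The `MiniFifoQueue` class is illustrative, not the AWS SDK:

```typescript
// Sketch of SQS FIFO semantics: strict ordering plus exactly-once
// enqueueing via a deduplication id.
class MiniFifoQueue {
  private seen = new Set<string>();
  private messages: string[] = [];

  // Returns false when the dedup id was already accepted (duplicate dropped).
  send(body: string, dedupId: string): boolean {
    if (this.seen.has(dedupId)) return false;
    this.seen.add(dedupId);
    this.messages.push(body);
    return true;
  }

  receiveAll(): string[] {
    return [...this.messages]; // same order as accepted sends
  }
}
```

In the real service the dedup window is 5 minutes and ordering is scoped to a MessageGroupId; this sketch collapses both into a single queue for clarity.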
AWS SNS — Core Concept
Amazon Simple Notification Service (SNS) is a fully managed Pub/Sub (Publish/Subscribe) messaging service. Unlike SQS or Kafka (where consumers pull data), SNS pushes messages directly to subscribers. It has no persistent storage — if a subscriber is unavailable when SNS pushes the message, the message is lost (unless backed by a queue).
SNS Best Applications & Scenarios
Fan-out Architecture (SNS + SQS)
The classic AWS pattern. Publish a single message to an SNS Topic (e.g., “Order Placed”). SNS immediately pushes a copy to 3 different SQS queues (Inventory, Shipping, Billing), which are then consumed independently.
System Alerts & Monitoring
Triggering alarms from AWS CloudWatch to notify DevOps teams via email or SMS.
Direct User Notifications
Sending SMS messages, Push Notifications to mobile apps, or Emails to end-users directly from the backend system.
Triggering Serverless Workflows
Using an SNS topic to directly trigger multiple AWS Lambda functions simultaneously.
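The fan-out pattern above, including SNS-style subscription filter policies (the "Basic (Subscription filters)" row in the decision matrix), can be modeled in a few lines. The `Subscriber` type and `publishToTopic` helper are illustrative, not the AWS SDK:

```typescript
// Fan-out with filter policies: every subscriber whose filter accepts the
// message attributes receives its own copy of the message.
type Attrs = Record<string, string>;

interface Subscriber {
  name: string;
  inbox: string[];
  filter?: Record<string, string[]>; // attribute name → allowed values
}

function publishToTopic(message: string, attrs: Attrs, subs: Subscriber[]): void {
  for (const sub of subs) {
    const matches =
      !sub.filter ||
      Object.entries(sub.filter).every(([key, allowed]) =>
        allowed.includes(attrs[key])
      );
    if (matches) sub.inbox.push(message);
  }
}
```

In the classic SNS + SQS pattern, each `inbox` would be an SQS queue, which adds the durability SNS itself lacks.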
SQS vs SNS Architecture
SQS — Message Queuing
Producer → Queue → Consumer

- One message, one consumer
- Work distribution pattern
- Reliable message delivery with guaranteed processing
SNS — Pub/Sub Notifications
Publisher → Topic → [
Email Subscriber,
SMS Subscriber,
HTTP Endpoint,
SQS Queue
]

One message delivered to multiple subscribers
SQS Code Examples
// AWS SDK v3 Configuration
import { SQSClient } from '@aws-sdk/client-sqs';
import { SNSClient } from '@aws-sdk/client-sns';
const awsConfig = {
region: process.env.AWS_REGION || 'us-east-1',
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
},
};
export const sqsClient = new SQSClient(awsConfig);
export const snsClient = new SNSClient(awsConfig);

// Sending a message to SQS
import { SendMessageCommand } from '@aws-sdk/client-sqs';
const params = {
QueueUrl: process.env.SQS_QUEUE_URL,
MessageBody: JSON.stringify({
orderId: '12345',
customerId: 'user-456',
items: ['product-1', 'product-2'],
timestamp: new Date().toISOString()
}),
MessageAttributes: {
'MessageType': {
DataType: 'String',
StringValue: 'ORDER_CREATED'
}
}
};
const result = await sqsClient.send(new SendMessageCommand(params));

// Receiving messages with long polling
import { ReceiveMessageCommand, DeleteMessageCommand } from '@aws-sdk/client-sqs';
const receiveParams = {
QueueUrl: process.env.SQS_QUEUE_URL,
MaxNumberOfMessages: 10,
WaitTimeSeconds: 20, // Long polling
MessageAttributeNames: ['All']
};
const messages = await sqsClient.send(
new ReceiveMessageCommand(receiveParams)
);
// Process each message
for (const message of messages.Messages || []) {
try {
await processMessage(JSON.parse(message.Body));
// Delete the message after successful processing
await sqsClient.send(new DeleteMessageCommand({
QueueUrl: process.env.SQS_QUEUE_URL,
ReceiptHandle: message.ReceiptHandle
}));
} catch (error) {
console.error('Message processing failed:', error);
// Message will become visible again after visibility timeout
}
}

SNS Code Examples
// Publishing a message to an SNS topic
import { PublishCommand } from '@aws-sdk/client-sns';
const publishParams = {
TopicArn: process.env.SNS_TOPIC_ARN,
Message: JSON.stringify({
event: 'USER_REGISTERED',
userId: 'user-789',
email: 'user@example.com',
timestamp: new Date().toISOString()
}),
MessageAttributes: {
'event_type': {
DataType: 'String',
StringValue: 'USER_REGISTERED'
},
'priority': {
DataType: 'String',
StringValue: 'high'
}
}
};
const result = await snsClient.send(new PublishCommand(publishParams));

// Creating subscriptions for different protocols
import { SubscribeCommand } from '@aws-sdk/client-sns';
const subscriptions = [
{
Protocol: 'email',
Endpoint: 'admin@example.com'
},
{
Protocol: 'sqs',
Endpoint: 'arn:aws:sqs:us-east-1:123456789012:notification-queue'
},
{
Protocol: 'https',
Endpoint: 'https://api.example.com/webhooks/notifications'
}
];
for (const subscription of subscriptions) {
await snsClient.send(new SubscribeCommand({
TopicArn: process.env.SNS_TOPIC_ARN,
Protocol: subscription.Protocol,
Endpoint: subscription.Endpoint
}));
}

Custom Hook for SQS Operations
// Custom hook for SQS operations (React client component)
import { useState } from 'react';

export function useSQS() {
const [messages, setMessages] = useState<SQSMessage[]>([]);
const [loading, setLoading] = useState(false);
const [error, setError] = useState<string | null>(null);
const sendMessage = async (messageBody: string) => {
setLoading(true);
setError(null);
try {
const response = await fetch('/api/sqs/send', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ message: messageBody }),
});
if (!response.ok) throw new Error('Failed to send message');
return await response.json();
} catch (err) {
setError(err instanceof Error ? err.message : 'Unknown error');
throw err;
} finally {
setLoading(false);
}
};
return { messages, loading, error, sendMessage };
}

Getting Started with SQS & SNS
1. Clone the repository:
   git clone https://github.com/audoir/sqs-sns-tutorial.git
2. Install dependencies:
   npm install
3. Configure AWS services:
   Follow AWS_SETUP.md for detailed instructions
4. Set up environment variables:
   Copy .env.example to .env.local and configure
5. Start the development server:
   npm run dev
Learning Outcomes
By completing all three tutorials, you will have gained hands-on experience with:
Apache Kafka
- Starting Kafka with Docker (KRaft mode)
- Creating topics and partitions
- Producing messages with and without keys
- Consumer groups and offset tracking
- Partition assignment and rebalancing
- At-least-once vs. exactly-once delivery
RabbitMQ
- Setting up RabbitMQ with Docker
- All four core exchange types
- The Setup → Publish → Consume workflow
- Dead Letter Queues and TTL expiry
- Reading x-death metadata
- Building interactive demos with Next.js
AWS SQS & SNS
- Configuring SQS queues and SNS topics
- Message sending and receiving with error handling
- Pub/sub patterns with SNS subscriptions
- Integrating AWS SDK v3 with Next.js
- Managing AWS credentials and IAM permissions
- Fan-out architecture (SNS + SQS)
Conclusion
Kafka, RabbitMQ, SQS, and SNS are all excellent tools — but they excel in different scenarios. Use Kafka when you need massive throughput, event replay, or long-term event storage. Use RabbitMQ when you need sophisticated routing logic, multi-protocol support, or traditional task queues. Use SQS when you want zero-ops simplicity in an AWS environment with reliable point-to-point queuing. Use SNS when you need to broadcast a single event to multiple subscribers or trigger serverless workflows.
Often the most powerful architectures combine these tools — for example, the classic SNS + SQS fan-out pattern, or using Kafka for high-throughput ingestion with RabbitMQ for fine-grained task routing downstream.
About the Author
Wayne Cheng is the founder and AI app developer at Audoir, LLC. Prior to founding Audoir, he worked as a hardware design engineer for Silicon Valley startups and an audio engineer for creative organizations. He holds an MSEE from UC Davis and a Music Technology degree from Foothill College.
Further Exploration
Explore the complete tutorial repositories to experiment with advanced features:
- Kafka Tutorial — Add stream processing with Kafka Streams or experiment with consumer group rebalancing
- RabbitMQ Tutorial — Extend the exchange demos with message priorities or RabbitMQ Streams
- SQS & SNS Tutorial — Implement dead letter queues, message filtering, and Lambda integration
For more AI-powered development tools and tutorials, visit Audoir .