NATS 2.12 introduces a powerful new feature called Atomic Batch Publishing that brings all-or-nothing multi-message operations to JetStream. When you need to ensure consistency across multiple related messages—like updating several fields in a user profile or recording a multi-step transaction—atomic batches guarantee that either all messages succeed together or none are persisted at all.
In this post, we’ll explore how atomic batch publishing works, show you how to use it with Go and the orbit.go library, and explain the communication patterns, limitations, and best practices.
Atomic batches provide all-or-nothing semantics: either all messages in the batch are successfully persisted, or none are. This is perfect for scenarios where you need consistency across multiple related messages.
For instance, when recording a transaction that spans several steps—such as order creation, inventory reservation, and payment processing—atomic batch publishing guarantees that all messages in the batch are either committed together or not at all. This prevents scenarios where only some updates persist, eliminating the risk of partial or inconsistent state. Please note that atomic batch publishing works only within a single stream, and cannot be used for messages spanning multiple streams.
The key characteristics are:

- All-or-nothing semantics: every message in the batch is persisted together, or none are
- Batches are scoped to a single stream; messages cannot span multiple streams
- Messages are staged on the server and only become visible once the batch is committed
- The feature is opt-in per stream via the AllowAtomicPublish setting
Before you can use atomic batch publishing, you need to enable it on your JetStream streams.
```shell
# Create a new stream with atomic batch publishing enabled
nats stream add ORDERS \
  --subjects "orders.>" \
  --defaults \
  --allow-batch
```

You can verify atomic batch publishing is enabled by inspecting the stream:
```shell
nats stream info ORDERS

# Look for this field in the output:
# Allow Atomic Publish: true
```

To enable atomic batch publishing on an existing stream, use the update command:
```shell
# Update existing stream to enable atomic batch publishing
nats stream update ORDERS --allow-batch
```

Of course, you can also configure this when creating a stream programmatically:
```go
cfg := jetstream.StreamConfig{
	Name:               "ORDERS",
	Subjects:           []string{"orders.>"},
	AllowAtomicPublish: true, // Enables atomic batch publishing
}
js.CreateStream(context.Background(), cfg)
```

Let’s look at how to use atomic batch publishing with the orbit.go library. We’ll create a practical example that records multiple related events atomically.
Imagine you’re recording a multi-step transaction where each step must be persisted together. You want to ensure all events succeed as a unit:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
	"github.com/nats-io/nats.go/jetstream"
	"github.com/synadia-io/orbit.go/jetstreamext"
)

func main() {
	// Connect to demo NATS server
	nc, err := nats.Connect("nats://demo.nats.io")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Create JetStream context
	js, err := jetstream.New(nc)
	if err != nil {
		log.Fatal(err)
	}

	// Create stream with atomic batch publishing enabled
	streamName := fmt.Sprintf("TRANSACTIONS_%d", time.Now().Unix())
	_, err = js.CreateStream(context.Background(), jetstream.StreamConfig{
		Name:               streamName,
		Subjects:           []string{"transactions.>"},
		AllowAtomicPublish: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Create batch publisher
	batch, err := jetstreamext.NewBatchPublisher(js)
	if err != nil {
		log.Fatal(err)
	}

	// Add multiple related transaction events to the batch
	txnID := "txn-12345"

	err = batch.AddMsg(&nats.Msg{
		Subject: fmt.Sprintf("transactions.%s.order-created", txnID),
		Data:    []byte(`{"order_id":"ORD-789","amount":99.99}`),
	})
	if err != nil {
		log.Fatal(err)
	}

	err = batch.AddMsg(&nats.Msg{
		Subject: fmt.Sprintf("transactions.%s.inventory-updated", txnID),
		Data:    []byte(`{"sku":"WIDGET-001","quantity":-1}`),
	})
	if err != nil {
		log.Fatal(err)
	}

	err = batch.AddMsg(&nats.Msg{
		Subject: fmt.Sprintf("transactions.%s.payment-processed", txnID),
		Data:    []byte(`{"payment_id":"PAY-456","status":"success"}`),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Commit the batch with a final message
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	ack, err := batch.Commit(ctx, fmt.Sprintf("transactions.%s.completed", txnID), []byte(`{"timestamp":"2025-11-05T10:00:00Z"}`))
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("✓ Transaction batch committed successfully!\n")
	fmt.Printf("  Stream: %s\n", ack.Stream)
	fmt.Printf("  Batch ID: %s\n", ack.BatchID)
	fmt.Printf("  Batch Size: %d messages\n", ack.BatchSize)
	fmt.Printf("  Final Sequence: %d\n", ack.Sequence)
}
```

Here's what happens in this example:

- The stream is created with AllowAtomicPublish enabled
- NewBatchPublisher() creates a batch context with a unique batch ID
- Each AddMsg() call publishes the message immediately to the server with batch headers, but the server stages them without committing
- The Commit() call sends the final message with the commit header, triggering atomic storage of all staged messages

You can add expectations to ensure consistency and prevent race conditions:
```go
// Only commit if the transaction stream's last sequence is 100
batch.AddMsg(&nats.Msg{
	Subject: "transactions.txn-456.order-created",
	Data:    []byte(`{"order_id":"ORD-999"}`),
}, jetstreamext.WithBatchExpectLastSequence(100))

// Or ensure a specific subject hasn't been updated since sequence 42
batch.AddMsg(&nats.Msg{
	Subject: "transactions.txn-456.payment-processed",
	Data:    []byte(`{"payment_id":"PAY-789"}`),
}, jetstreamext.WithBatchExpectLastSequencePerSubject(42))
```

By default, the batch publisher waits for an acknowledgment on the first message (AckFirst: true) to ensure the server accepts the batch, but doesn’t wait for intermediate acks during batch building. For large batches with many messages, you can configure additional flow control to receive acknowledgments during the batch process:
```go
// Create a batch publisher with flow control
batch, err := jetstreamext.NewBatchPublisher(js,
	jetstreamext.WithBatchFlowControl(jetstreamext.BatchFlowControl{
		AckFirst:   true,            // Wait for ack on the first message (default)
		AckEvery:   100,             // Wait for ack every 100 messages
		AckTimeout: 5 * time.Second, // Timeout for waiting for acks
	}),
)
if err != nil {
	log.Fatal(err)
}

// Add many messages to the batch
for i := 0; i < 500; i++ {
	err := batch.AddMsg(&nats.Msg{
		Subject: fmt.Sprintf("events.%d", i),
		Data:    []byte(fmt.Sprintf("Event %d", i)),
	})
	if err != nil {
		log.Fatal(err)
	}
}

// Commit the batch
ack, err := batch.Commit(ctx, "events.final", []byte("Batch complete"))
```

Flow Control Options:
- AckFirst (default: true): Waits for acknowledgment on the first message in the batch, ensuring the server is accepting the batch before continuing.
- AckEvery (default: 0): Waits for an acknowledgment every N messages. By default this is 0 (disabled), meaning no intermediate acks are requested during batch building. Set it to a positive number (e.g., 100) to get feedback every N messages.
- AckTimeout: Timeout for waiting for acknowledgments when flow control is enabled. Defaults to the timeout from your JetStream context.

When to Configure Flow Control: For large batches (e.g., hundreds of messages), configure AckEvery to detect issues early rather than waiting until the commit. It provides feedback during the batch process without sacrificing atomicity — if any message fails, the entire batch is still abandoned.
Batch publishing uses special NATS headers to coordinate between client and server. Understanding these headers helps you troubleshoot and monitor batches.
When you send messages in an atomic batch, the orbit.go library automatically adds these headers:
- `Nats-Batch-Id: <uuid>`: Unique identifier for the batch (max 64 characters)
- `Nats-Batch-Sequence: <n>`: Incrementing sequence number within the batch (starts at 1)
- `Nats-Batch-Commit: 1`: Marks the final message AND includes it in the batch
- `Nats-Batch-Commit: eob`: Marks the final message but does NOT include it (end-of-batch marker only)

You can also create atomic batches using the NATS CLI by manually adding the required headers. This is useful for testing and understanding how batches work:
```shell
# Generate a unique batch ID
BATCH_ID="txn-$(date +%s)"

# Create a stream with atomic batch publishing enabled
nats stream add ORDERS --subjects "orders.>" --allow-batch --defaults

# Publish messages with batch headers
# Message 1: Start the batch
nats pub orders.step1 "Order created" \
  -J \
  -H "Nats-Batch-Id:$BATCH_ID" \
  -H "Nats-Batch-Sequence:1"

# Message 2: Continue the batch
nats pub orders.step2 "Inventory reserved" \
  -J \
  -H "Nats-Batch-Id:$BATCH_ID" \
  -H "Nats-Batch-Sequence:2"

# Check stream - no messages yet! The batch is still staging
nats stream info ORDERS
# Output shows: Messages: 0

# Message 3: Commit the batch (final message)
nats pub orders.step3 "Payment processed" \
  -J \
  -H "Nats-Batch-Id:$BATCH_ID" \
  -H "Nats-Batch-Sequence:3" \
  -H "Nats-Batch-Commit:1"

# Now check again - all 3 messages appear
nats stream info ORDERS
# Output shows: Messages: 3

# View the messages with their batch headers
nats stream get ORDERS 1  # First message
nats stream get ORDERS 3  # Final message with commit header
```

After publishing messages 1 and 2, the stream shows zero messages. The batch is being staged on the server but not yet committed. Only when message 3 arrives with the Nats-Batch-Commit:1 header do all three messages appear atomically in the stream. This demonstrates the all-or-nothing semantics in action.
Note: The -J flag tells the NATS CLI to use JetStream publishing. Intermediate messages (sequences 1 and 2) may show JSON parsing errors in the CLI output because they receive zero-byte acknowledgments; these errors can be safely ignored.
The server responds differently to intermediate vs. final messages:

- Intermediate messages (no commit header) receive a zero-byte acknowledgment, confirming only that the message was staged.
- The final commit message receives a full JetStream acknowledgment that includes the stream name, batch ID, batch size, and the sequence of the last message.
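If you work with raw replies rather than a client library, the two cases can be told apart by the reply payload alone: staged messages come back empty, while the commit ack is a JSON document. A minimal sketch (the `isStagedAck` helper and the sample JSON are ours, not part of nats.go):

```go
package main

import "fmt"

// isStagedAck reports whether a raw reply payload looks like the
// zero-byte acknowledgment the server sends for staged (intermediate)
// batch messages, as opposed to the JSON ack sent on commit.
func isStagedAck(replyData []byte) bool {
	return len(replyData) == 0
}

func main() {
	fmt.Println(isStagedAck(nil))                                   // intermediate message: true
	fmt.Println(isStagedAck([]byte(`{"stream":"ORDERS","seq":3}`))) // commit ack: false
}
```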
The NATS 2.12 release focuses heavily on correctness and atomicity rather than raw performance. As stated in the release notes:
This feature currently focuses heavily on correctness and atomicity, ensuring there can be no partially committed batches. Later iterations of this feature will put more focus on the performance aspect that batching can enable, so stay tuned!
The foundation is solid — let’s see what the future brings in terms of performance optimization.
Understanding the limitations helps you design robust applications:
If you exceed the per-stream or per-server concurrency limits, new batch operations will be rejected until some in-flight batches complete.
The Orbit libraries from Synadia provide extensions and utilities for NATS across multiple programming languages. Besides Go, atomic batch publishing is currently supported in orbit.java and, for Rust, in the orbit.rs jetstream-extra crate.
If you’re working with other languages like JavaScript/TypeScript or .NET, you can still use atomic batch publishing by manually constructing the batch headers (Nats-Batch-Id, Nats-Batch-Sequence, Nats-Batch-Commit) as demonstrated in the CLI examples above.
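As a sketch of what that manual construction looks like outside the CLI, here is the header layout in Go. The `header` type and `batchHeaders` helper are ours (with nats.go you would set the same keys on `nats.Msg.Header`, which has the same map[string][]string shape):

```go
package main

import (
	"fmt"
	"strconv"
)

// header mirrors the shape of nats.Header (a map[string][]string).
type header map[string][]string

// batchHeaders builds the headers for the seq-th message of a batch.
// Sequence numbers start at 1; the batch ID may be at most 64 characters.
func batchHeaders(batchID string, seq int, commit bool) header {
	h := header{
		"Nats-Batch-Id":       {batchID},
		"Nats-Batch-Sequence": {strconv.Itoa(seq)},
	}
	if commit {
		// "1" commits the batch and includes this final message in it.
		h["Nats-Batch-Commit"] = []string{"1"}
	}
	return h
}

func main() {
	for seq := 1; seq <= 3; seq++ {
		h := batchHeaders("txn-12345", seq, seq == 3)
		fmt.Println(seq, h["Nats-Batch-Sequence"][0], h["Nats-Batch-Commit"])
	}
}
```

The same three keys, set on each published message with the commit marker on the last one, reproduce what orbit.go does under the hood.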
NATS 2.12’s atomic batch publishing brings powerful new capabilities to JetStream. While the current release focuses on correctness and atomicity, future NATS releases will optimize batch publishing for performance. The 2.12 release establishes a solid foundation with reliable, predictable behavior.
Here are some interesting resources related to this feature: