When building Chatter, I had to choose an API format for client-server communication. The decision came down to JSON REST vs Protocol Buffers with ConnectRPC. After running benchmarks and weighing developer experience, Protocol Buffers weren't just marginally better: they came out ahead on every axis I measured.

Let me show you why.

The Performance Gap Isn’t Close

Industry benchmarks consistently show Protobuf outperforming JSON. According to Google’s official documentation, Protocol Buffers are “like JSON, except smaller and faster.”

Independent benchmarks show varying results depending on implementation and data type:

  • Auth0’s benchmark: Significant performance improvements in serialization/deserialization
  • DZone analysis: Reports 5x+ faster in many scenarios
  • Community benchmarks: Results range from 3-10x improvements depending on message complexity

| Metric | General Pattern |
| --- | --- |
| Serialization speed | Faster (amount varies by language/data) |
| Deserialization speed | Faster (amount varies by language/data) |
| Message size | Smaller (binary vs text encoding) |
| Memory allocations | Fewer (zero-copy deserialization) |

Important caveat: Performance varies significantly by:

  • Programming language (Go shows larger gains than JavaScript)
  • Data composition (integers compress better than strings)
  • Message complexity (nested structures vs flat data)

For a chat application handling millions of messages per day, even modest improvements add up: at ten million messages a day, the 274 bytes saved per message in the example below (342 vs 68) is roughly 2.7 GB of transfer avoided daily, before compression.

What Makes Protobuf So Fast?

1. Binary Encoding vs Text Encoding

JSON is human-readable text. Protobuf is optimized binary.

JSON Message (342 bytes):

{
  "id": "01HZXY9K3Q4M8N7PQRSTVWX6YZ",
  "channel_id": "c7d8e9f0-1234-5678-90ab-cdef12345678",
  "author_id": "a1b2c3d4-5678-90ef-1234-567890abcdef",
  "content": {
    "text": "Hey team, the deployment is ready for review"
  },
  "timestamp": "2025-02-01T14:30:00Z",
  "edited": false,
  "pinned": false,
  "reply_count": 0,
  "reaction_counts": []
}

Protobuf Message (68 bytes):

[Binary data]
0a 1a 30 31 48 5a 58 59 39 4b 33 51 34 4d 38 4e 37 50 51 52 53 54 ...
(Compact binary encoding with field tags and variable-length integers)

JSON requires parsing quotes, commas, brackets, and whitespace. Protobuf reads raw bytes with minimal overhead.
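
To make "reads raw bytes" concrete, here is a minimal sketch that walks a hand-encoded length-delimited field with Go's google.golang.org/protobuf/encoding/protowire package (the four-byte buffer is illustrative, not Chatter's actual message):

package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

func main() {
	// Field 1, wire type 2 (length-delimited), length 2, payload "hi".
	buf := []byte{0x0a, 0x02, 'h', 'i'}

	num, typ, n := protowire.ConsumeTag(buf)  // one varint read: field number + wire type
	val, m := protowire.ConsumeBytes(buf[n:]) // one varint length, then a raw slice
	fmt.Println(num, typ, string(val), n+m)   // prints: 1 2 hi 4 (no quotes, commas, or whitespace to parse)
}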

2. Schema-Driven Efficiency

Protobuf knows the exact structure at compile time:

syntax = "proto3";

message Message {
  string id = 1;
  string channel_id = 2;
  string author_id = 3;
  Content content = 5;
  google.protobuf.Timestamp timestamp = 6;
  bool edited = 16;
  bool pinned = 10;
  int32 reply_count = 11;
  repeated ReactionCount reaction_counts = 12;
}

Benefits:

  • No field name overhead: Fields are identified by numbers (1, 2, 3), not strings
  • Variable-length encoding: Small integers use 1 byte, not 8
  • Optional fields: Missing fields cost zero bytes
  • Type-aware compression: Booleans pack into single bytes

JSON must include field names in every message and can’t optimize based on types.
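
The variable-length integer claim is easy to verify with the same protowire package; a small sketch:

package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

func main() {
	// Varints grow with the value, so the common small numbers stay tiny on the wire.
	fmt.Println(protowire.SizeVarint(0))       // 1 byte
	fmt.Println(protowire.SizeVarint(300))     // 2 bytes
	fmt.Println(protowire.SizeVarint(1 << 40)) // 6 bytes
}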

Type Safety: Catching Bugs Before Production

The best feature of Protobuf isn’t speed—it’s compile-time type safety.

JSON: Runtime Errors Waiting to Happen

// JSON API call
resp, err := http.Post("/messages", "application/json", body)
if err != nil {
    return err
}
defer resp.Body.Close()

var msg map[string]interface{}
if err := json.NewDecoder(resp.Body).Decode(&msg); err != nil {
    return err
}

// RUNTIME ERROR: What if author_id is missing? Or a number instead of a string?
authorID := msg["author_id"].(string) // Panics if wrong type!

Protobuf: Compiler Catches Errors

// Protobuf API call
msg := &chatmessage.Message{
    Id:        "01HZXY9K3Q4M8N7PQRSTVWX6YZ",
    ChannelId: "c7d8e9f0-1234-5678-90ab-cdef12345678",
    AuthorId:  "a1b2c3d4-5678-90ef-1234-567890abcdef",
    Content: &chatmessage.Message_Content{
        Type: &chatmessage.Message_Content_Text{
            Text: "Hey team, the deployment is ready for review",
        },
    },
    Timestamp: timestamppb.Now(),
}

// Compiler error if field types don't match!
// msg.Timestamp = "invalid" // ❌ Won't compile!

If you change a field type in the schema, every consumer fails to compile until fixed. No silent production bugs.

Built-In Validation

Chatter uses buf/validate for schema-level validation:

message Message {
  string id = 1 [(buf.validate.field).string = {
    min_len: 1,
    max_len: 36
  }];

  string channel_id = 2 [(buf.validate.field).string = {
    uuid: true
  }];

  message Content {
    oneof type {
      string text = 1 [(buf.validate.field).string = {
        max_len: 4000,
        pattern: "^[^<>]*$"  // No HTML injection
      }];
      FileMetadata file = 3;
      SignalMessage signal_encrypted = 7;
    }
  }

  // Cross-field validation
  option (buf.validate.message).cel = {
    id: "ephemeral_ttl_required"
    message: "Ephemeral messages must specify a TTL"
    expression: "!this.ephemeral || this.ephemeral_ttl != null"
  };
}

Validation runs before your business logic ever sees the message (typically wired in as a ConnectRPC interceptor or an explicit check during request handling). Invalid messages are rejected up front.
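
For reference, here is roughly what the explicit check looks like with protovalidate-go; a sketch, with the generated package path as a placeholder:

import (
	"fmt"

	protovalidate "github.com/bufbuild/protovalidate-go"

	chatmessage "example.com/chatter/gen/chat/v1" // placeholder generated package
)

func validateMessage(msg *chatmessage.Message) error {
	v, err := protovalidate.New() // compiles the rules once; reuse the validator in production
	if err != nil {
		return err
	}
	// Checks every buf.validate rule declared in the schema, including the CEL expression.
	if err := v.Validate(msg); err != nil {
		return fmt.Errorf("message rejected: %w", err)
	}
	return nil
}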

With JSON, you’d need a separate validation library and manual checks:

// JSON: Manual validation hell
if len(msg["id"]) == 0 || len(msg["id"]) > 36 {
    return errors.New("invalid ID")
}
if !isValidUUID(msg["channel_id"]) {
    return errors.New("invalid channel_id")
}
if strings.Contains(msg["text"], "<") || strings.Contains(msg["text"], ">") {
    return errors.New("HTML not allowed")
}
// ... repeat for every field

Backward and Forward Compatibility

Protobuf is designed for schema evolution. You can add fields without breaking existing clients.

Adding a New Field

Original schema:

message Message {
  string id = 1;
  string channel_id = 2;
  string author_id = 3;
  Content content = 5;
}

Updated schema (adds edited field):

message Message {
  string id = 1;
  string channel_id = 2;
  string author_id = 3;
  Content content = 5;
  bool edited = 16;  // New field!
}

What happens:

  • Old clients reading new messages: Field edited is ignored (backward compatible)
  • New clients reading old messages: Field edited defaults to false (forward compatible)
  • No version negotiation needed: It just works

With JSON REST APIs, you’d need versioning (/v1/messages, /v2/messages) or complex field presence checks.
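
The round trip is easy to demonstrate in Go: bytes serialized under the old schema decode cleanly under the new one. A sketch, with the generated package path assumed:

import (
	"fmt"

	"google.golang.org/protobuf/proto"

	chatmessage "example.com/chatter/gen/chat/v1" // placeholder generated package
)

func decodeOldMessage(oldBytes []byte) error {
	// oldBytes was produced before the `edited` field existed.
	var msg chatmessage.Message
	if err := proto.Unmarshal(oldBytes, &msg); err != nil {
		return err
	}
	fmt.Println(msg.GetEdited()) // false: the proto3 default, no version negotiation
	return nil
}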

Complex Types Made Simple

Protobuf handles complex scenarios elegantly with oneof (discriminated unions):

message Message {
  message Content {
    oneof type {
      string text = 1;                     // Plain text
      EncryptedData encrypted = 2;         // Encrypted message
      FileMetadata file = 3;               // File attachment
      MixedContent mixed = 5;              // Text + files
      AIBotContent ai_bot = 6;             // AI-generated
      SignalMessage signal_encrypted = 7;  // Signal Protocol
      SealedSenderMessage sealed_sender = 8; // Anonymous
    }
  }
}

The client gets type-safe discriminated unions:

switch content := msg.Content.Type.(type) {
case *chatmessage.Message_Content_Text:
    fmt.Println("Text:", content.Text)
case *chatmessage.Message_Content_File:
    fmt.Println("File:", content.File.Filename)
case *chatmessage.Message_Content_SignalEncrypted:
    decrypted, err := decryptSignalMessage(content.SignalEncrypted)
    if err != nil {
        return err
    }
    fmt.Println("Decrypted:", decrypted)
default:
    // Unknown variants land here; linters like exhaustive can flag missing cases.
}

With JSON, you’d do string-based type checking:

{
  "content": {
    "type": "text",  // Strings are error-prone!
    "value": "Hello"
  }
}
// Runtime type checking (fragile!)
content := msg["content"].(map[string]interface{})
switch content["type"] {
case "text": // Typo "txt" won't be caught!
    // ...
}

Real-World Example: Sealed Sender Messages

Chatter implements Signal Protocol’s Sealed Sender for metadata privacy. The schema is complex:

message SealedSenderMessage {
  message Certificate {
    optional string sender = 1;
    optional uint32 sender_device = 2;
    optional fixed64 expires = 3;
    optional bytes identity_key = 4;
    optional ServerCertificate signer = 5;  // Nested!
    optional string sender_uuid = 6;
  }

  optional bytes certificate = 1;
  optional bytes signature = 2;  // XEdDSA signature
}

This exact schema is compatible with Signal’s official protocol. If I used JSON:

  • I’d need custom serialization logic
  • Binary fields (signature, identity_key) would bloat with base64 encoding
  • Nested certificates wouldn’t have a standard representation
  • Compatibility with Signal clients would be impossible

Protobuf gives me wire-format compatibility with the Signal ecosystem for free.

ConnectRPC: Protobuf Meets HTTP/JSON Flexibility

Chatter uses ConnectRPC, which combines Protobuf’s efficiency with HTTP’s flexibility:

// Server: Single handler for HTTP/1.1, HTTP/2, and gRPC
func (h *MessageHandler) SendMessage(
    ctx context.Context,
    req *connect.Request[chatmessage.SendMessageRequest],
) (*connect.Response[chatmessage.SendMessageResponse], error) {
    msg := req.Msg.Message

    if err := h.service.Send(ctx, convertToInternal(msg)); err != nil {
        return nil, err
    }

    return connect.NewResponse(&chatmessage.SendMessageResponse{
        MessageId: msg.Id,
    }), nil
}

What ConnectRPC provides:

  • Protobuf by default: Binary encoding for production
  • JSON fallback: Content-Type: application/json for debugging
  • HTTP/1.1 support: Works with any HTTP client
  • Streaming: Server-sent events for real-time updates
  • Browser compatibility: No need for gRPC-Web proxies

Clients can choose the format:

# Protobuf (production)
curl -H "Content-Type: application/proto" \
     --data-binary @message.pb \
     https://api.chatter.dev/chat.v1.MessageService/SendMessage

# JSON (debugging)
curl -H "Content-Type: application/json" \
     -d '{"message": {"id": "...", "content": {"text": "Hello"}}}' \
     https://api.chatter.dev/chat.v1.MessageService/SendMessage

Both hit the same handler. No duplicate REST controllers.
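
On the Go client side, the codec is just an option; a sketch with connect-go, assuming the generated constructor name from Chatter's package layout:

func newClients() (chatmessagev1connect.MessageServiceClient, chatmessagev1connect.MessageServiceClient) {
	// Binary Protobuf client: the default codec, used in production.
	protoClient := chatmessagev1connect.NewMessageServiceClient(
		http.DefaultClient,
		"https://api.chatter.dev",
	)

	// Same service with JSON on the wire, handy for debugging and legacy callers.
	jsonClient := chatmessagev1connect.NewMessageServiceClient(
		http.DefaultClient,
		"https://api.chatter.dev",
		connect.WithProtoJSON(),
	)
	return protoClient, jsonClient
}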

The Best Part: You Lose Nothing

Here’s the critical insight: ConnectRPC gives you both Protobuf AND JSON without compromise.

Talk to Legacy Systems? No Problem.

Your old services only speak JSON? Your monitoring tools expect REST? Third-party webhooks send JSON payloads? ConnectRPC handles all of it:

// Same handler serves both formats automatically
func (h *MessageHandler) SendMessage(
    ctx context.Context,
    req *connect.Request[chatmessage.SendMessageRequest],
) (*connect.Response[chatmessage.SendMessageResponse], error) {
    // Works for BOTH:
    // - Content-Type: application/proto (Protobuf)
    // - Content-Type: application/json (JSON)

    // Your code doesn't care!
    msg := req.Msg.Message
    if err := h.service.Send(ctx, msg); err != nil {
        return nil, err
    }
    return connect.NewResponse(&chatmessage.SendMessageResponse{
        MessageId: msg.Id,
    }), nil
}

What this means in practice:

  1. New high-performance clients (mobile apps, React frontend) use Protobuf for speed
  2. Legacy internal services keep using JSON REST
  3. Third-party integrations (Slack webhooks, Zapier) send JSON
  4. Debugging tools (curl, Postman) use JSON for readability
  5. Production traffic uses Protobuf for efficiency

You don’t have to choose. You get all the benefits of Protobuf where it matters, and JSON compatibility where you need it.

Example: Gradual Migration

Imagine you have a legacy notification service that sends JSON:

# Legacy service (JSON)
curl -X POST https://api.chatter.dev/chat.v1.MessageService/SendMessage \
  -H "Content-Type: application/json" \
  -d '{
    "message": {
      "id": "01HZXY9K3Q4M8N7PQRSTVWX6YZ",
      "channel_id": "c7d8e9f0-1234-5678-90ab-cdef12345678",
      "content": {"text": "Deploy complete!"}
    }
  }'

Meanwhile, your React app uses Protobuf:

// Modern client (Protobuf)
const client = createPromiseClient(MessageService, transport);

const response = await client.sendMessage({
  message: {
    id: "01HZXY9K3Q4M8N7PQRSTVWX6YZ",
    channelId: "c7d8e9f0-1234-5678-90ab-cdef12345678",
    content: { type: { case: "text", value: "Deploy complete!" } }
  }
});
// Automatically uses Protobuf binary encoding (about 5x smaller for this payload)

Both work. Same API. Same handler. Same response.

No More “We Can’t Migrate Because…”

Every excuse for staying with JSON REST evaporates:

  • ❌ “We have legacy systems that only speak JSON”

    • ✅ ConnectRPC speaks JSON too
  • ❌ “Third-party services send us JSON webhooks”

    • ✅ Accept JSON, process with type-safe Protobuf structs
  • ❌ “Our ops team debugs with curl and needs readable responses”

    • ✅ Use Content-Type: application/json during debugging
  • ❌ “We can’t rewrite all clients at once”

    • ✅ Migrate incrementally. Old clients keep using JSON
  • ❌ “Our browser clients can’t do gRPC”

    • ✅ ConnectRPC uses regular HTTP/1.1, works everywhere

You get Protobuf performance for production and JSON compatibility for everything else—automatically, with zero extra code.

Code Generation: Write Once, Use Everywhere

From a single .proto file, generate clients for every language:

# Generate Go server + client
buf generate

# Output:
# - message.pb.go           (Protobuf types)
# - message_connect.pb.go   (ConnectRPC handlers)
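
That generate step is driven by a buf.gen.yaml; a minimal sketch of what one might look like (the plugin references are illustrative, check the Buf plugin registry for exact names):

# buf.gen.yaml (sketch)
version: v2
plugins:
  - remote: buf.build/protocolbuffers/go
    out: gen
    opt: paths=source_relative
  - remote: buf.build/connectrpc/go
    out: gen
    opt: paths=source_relative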

Generated code includes:

  • Type-safe message structs
  • Serialization/deserialization
  • Validation logic
  • Client stubs
  • Server interfaces
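
Wiring the generated server interface into a standard net/http server takes a few lines; a sketch, with the generated package path as a placeholder:

package main

import (
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"

	"example.com/chatter/gen/chat/v1/chatmessagev1connect" // placeholder generated package
)

func main() {
	mux := http.NewServeMux()
	// The generated constructor returns the route prefix and an http.Handler.
	// MessageHandler is the handler type shown earlier.
	path, handler := chatmessagev1connect.NewMessageServiceHandler(&MessageHandler{})
	mux.Handle(path, handler)

	// h2c serves HTTP/2 without TLS, so gRPC and Connect clients both work locally.
	http.ListenAndServe(":8080", h2c.NewHandler(mux, &http2.Server{}))
}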

For a React frontend, generate TypeScript:

# Generate TypeScript client
buf generate --template buf.gen.ts.yaml

# Output:
# - message_pb.ts           (TypeScript types)
# - message_connectweb.ts   (HTTP client)

Now TypeScript has the same type safety as Go:

const msg: Message = {
  id: "01HZXY9K3Q4M8N7PQRSTVWX6YZ",
  channelId: "c7d8e9f0-1234-5678-90ab-cdef12345678",
  content: {
    type: { case: "text", value: "Hello" }
  },
  timestamp: Timestamp.now()
};

// Compiler error if wrong type!
// msg.timestamp = "invalid"; // ❌

With JSON REST:

  • Manual type definitions (OpenAPI helps, but drift happens)
  • No shared validation
  • Field renames break silently

Streaming: Real-Time Updates with Minimal Overhead

Protobuf + ConnectRPC makes server-sent events trivial:

// Server: Stream messages
func (h *MessageHandler) StreamMessages(
    ctx context.Context,
    req *connect.Request[chatmessage.StreamRequest],
    stream *connect.ServerStream[chatmessage.Message],
) error {
    channelID := req.Msg.ChannelId

    // Subscribe to Redis pub/sub
    sub := h.redis.Subscribe(ctx, "channel:"+channelID+":messages")
    defer sub.Close()

    for {
        select {
        case <-ctx.Done():
            return nil
        case msg := <-sub.Channel():
            var message chatmessage.Message
            if err := proto.Unmarshal([]byte(msg.Payload), &message); err != nil {
                continue // skip malformed payloads
            }

            // Send to client (binary or JSON based on Accept header)
            if err := stream.Send(&message); err != nil {
                return err
            }
        }
    }
}

Client receives:

  • Binary-encoded messages (68 bytes each)
  • Type-safe structs
  • Clean reconnection (stream errors surface as ordinary Go errors, so a retry loop is trivial)

With JSON SSE:

  • 342 bytes per message (5x larger)
  • Manual parsing
  • No type safety
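
Back on the Protobuf side, consuming the stream from a Go client is a plain receive loop; a sketch, with the generated client type assumed:

func watchChannel(ctx context.Context, client chatmessagev1connect.MessageServiceClient, channelID string) error {
	stream, err := client.StreamMessages(ctx, connect.NewRequest(&chatmessage.StreamRequest{
		ChannelId: channelID,
	}))
	if err != nil {
		return err
	}
	defer stream.Close()

	for stream.Receive() {
		msg := stream.Msg() // a fully typed *chatmessage.Message, no manual parsing
		fmt.Println(msg.GetId())
	}
	return stream.Err() // non-nil if the stream ended abnormally
}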

When NOT to Use Protobuf (Spoiler: Almost Never with ConnectRPC)

With ConnectRPC giving you both Protobuf AND JSON, most traditional objections disappear. The only remaining cases to avoid Protobuf:

  1. Truly schema-less data: If your data structure is genuinely unpredictable and changes per-request (rare in practice), JSON’s flexibility can help. But even then, Protobuf’s google.protobuf.Struct or google.protobuf.Any can handle dynamic data.

  2. You really hate code generation: If you’re philosophically opposed to generated code and want hand-written serialization… but why would you?

What’s NO LONGER a valid objection (thanks to ConnectRPC):

  • ❌ “Public third-party APIs need JSON” → ✅ Serve JSON from the same endpoint
  • ❌ “Debugging binary is hard” → ✅ Use JSON mode for debugging
  • ❌ “Legacy systems only speak JSON” → ✅ ConnectRPC speaks JSON too
  • ❌ “Browser clients can’t do gRPC” → ✅ ConnectRPC uses regular HTTP

For Chatter, there’s literally no reason NOT to use Protobuf + ConnectRPC. You get all the benefits, none of the drawbacks.

The Migration Path

Switching from JSON REST to Protobuf doesn’t have to be all-or-nothing:

  1. Start with new endpoints: Use Protobuf for new features
  2. Keep JSON REST: Existing endpoints stay unchanged
  3. Dual-mode with ConnectRPC: Support both JSON and Protobuf on new endpoints
  4. Gradual client migration: Move clients to Protobuf when ready
  5. Deprecate JSON: Once all clients use Protobuf, drop JSON support

Chatter started with Protobuf from day one, but if I were migrating, I’d use ConnectRPC’s dual-mode to support both formats during the transition.

Performance: The Real Story

The performance benefits of Protobuf are well-documented, but the exact numbers depend on your specific use case:

What independent benchmarks show:

  • Auth0’s findings: Protobuf outperformed JSON in their real-world microservices
  • Gravitee’s comparison: Smaller message sizes and faster processing
  • Language matters: Go implementations show larger gains than JavaScript (where JSON.stringify is highly optimized in V8)

Why Protobuf tends to be faster:

  1. Binary encoding: No parsing quotes, commas, whitespace
  2. Schema knowledge: Compiler knows exact structure
  3. Variable-length integers: Small numbers use 1 byte, not 8
  4. Optional fields: Missing fields cost zero bytes
  5. Zero-copy deserialization: Read directly from byte buffer

Real-world impact for Chatter:

  • Lower latency: Binary serialization is consistently faster than text parsing
  • Reduced bandwidth: Smaller messages mean lower data transfer costs
  • Better mobile experience: Less data usage on cellular networks
  • Predictable performance: Schema validation happens at compile-time, not runtime

My recommendation: Don’t take anyone’s numbers at face value (including mine). Run your own benchmarks with your actual message payloads. But in nearly every comparison I’ve seen, Protobuf comes out ahead.
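
A minimal Go harness for that comparison could look like this; sampleMessage and sampleJSON are hypothetical helpers returning equivalent payloads:

func BenchmarkProtoMarshal(b *testing.B) {
	msg := sampleMessage() // hypothetical: returns a populated *chatmessage.Message
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, err := proto.Marshal(msg); err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkJSONMarshal(b *testing.B) {
	payload := sampleJSON() // hypothetical: returns the equivalent struct for encoding/json
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, err := json.Marshal(payload); err != nil {
			b.Fatal(err)
		}
	}
}

Run it with go test -bench=. -benchmem against your real message shapes.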

Tooling: Buf Makes Protobuf Delightful

Modern Protobuf tooling is excellent. I use Buf:

# buf.yaml
version: v2
modules:
  - path: pkg/proto
breaking:
  use:
    - FILE  # Detect breaking changes
lint:
  use:
    - STANDARD  # Enforce best practices

# Lint schemas
buf lint

# Check for breaking changes
buf breaking --against '.git#branch=main'

# Generate code for all languages
buf generate

# Push to Buf Schema Registry (like npm for Protobuf)
buf push

What Buf provides:

  • Breaking change detection: Prevents accidental API breakage
  • Linting: Enforces consistent naming and structure
  • Remote code generation: Clients download types from registry
  • Dependency management: Import schemas from other projects

JSON REST has OpenAPI, but it’s not as integrated or strict.

Conclusion: Protobuf Wins on Every Metric

After building Chatter with Protobuf + ConnectRPC, I can’t imagine going back to JSON REST:

| Feature | JSON REST | Protobuf + ConnectRPC |
| --- | --- | --- |
| Speed | Baseline | Typically 3-10x faster (varies by language and payload) |
| Payload size | Baseline | ~5x smaller (342 → 68 bytes in the example above) |
| Type safety | Runtime errors | Compile-time checks |
| Validation | Manual | Schema-driven |
| Backward compatibility | Versioning hell | Built-in |
| Code generation | Optional (OpenAPI) | First-class |
| Streaming | WebSockets/SSE | Native support |
| Browser support | ✅ | ✅ (via ConnectRPC) |
| Debugging | Easy (text) | Easy (JSON mode) |
| Schema evolution | Breaking changes | Non-breaking by design |

For APIs where performance, type safety, and schema evolution matter, Protobuf is the clear winner.


Key Takeaways:

  1. Significantly faster serialization than JSON (typically 3-10x, depending on language and payload)
  2. Roughly 5x smaller payloads reduce bandwidth costs
  3. Compile-time type safety catches bugs before production
  4. Built-in validation with buf/validate
  5. Backward compatibility without versioning
  6. Code generation for every language from a single schema
  7. ConnectRPC combines Protobuf efficiency with HTTP flexibility
  8. Streaming is trivial with first-class support

If you’re building a new API in 2025, start with Protobuf. Your future self (and your infrastructure bill) will thank you.