## What Is Serialization?

When two programs communicate — a Go backend and a TypeScript frontend, or a Go server and a C++ desktop app — they need to send data over the network. But data in memory (structs, objects, arrays) can’t be sent directly over a socket. It needs to be converted to bytes first.

That conversion is serialization. The reverse — converting bytes back into usable data structures — is deserialization.

Different serialization formats make different tradeoffs. JSON is human-readable but verbose. Protobuf is compact and fast but requires a schema. FlatBuffers skips the parse step entirely, which can beat Protobuf when you only read a few fields from each message. Understanding why they’re different helps you pick the right one for each job.

## Protobuf: The Industry Standard

Protobuf (Protocol Buffers) is Google’s serialization format. You write a .proto schema:

```proto
// proto/collector/v1/collector.proto
syntax = "proto3";

message GetCandlesRequest {
    string symbol       = 1;
    int64  interval_ms  = 2;
    int64  from         = 3;
    int64  to           = 4;
}

message Candle {
    int64  open_time = 1;
    double open      = 2;
    double high      = 3;
    double low       = 4;
    double close     = 5;
    double buy_vol   = 6;
    double sell_vol  = 7;
}
```

From this schema, code generators produce Go structs and TypeScript types. The wire format is compact binary — much smaller than JSON. And with gRPC, you get bidirectional streaming, request/response typing, and generated client libraries in most languages.

The catch for browsers: gRPC depends on HTTP/2 trailers and framing details that browser fetch APIs don’t expose, so you can’t call a plain gRPC endpoint from frontend code. That’s where ConnectRPC comes in: it implements the same .proto service but serves over plain HTTP/1.1, which browsers handle fine. Same schema, same generated TypeScript types, works in a browser.

```typescript
// Auto-generated from the .proto file via buf generate
import { useQuery } from '@connectrpc/connect-query';
import { getCandles } from './gen/collector/v1/collector-CollectorService_connectquery';

function CandleChart() {
    const { data } = useQuery(getCandles, {
        symbol: 'BTCUSDT',
        intervalMs: 60000n,  // 1-minute candles
        from: BigInt(startMs),
        to:   BigInt(endMs),
    });
    // data.candles is fully typed — TypeScript knows every field
}
```

This is Protobuf’s strength: a rich ecosystem. buf for schema linting and code generation. connectrpc for browser-compatible gRPC. React Query hooks generated automatically from service definitions. One .proto file drives the entire API contract for both Go and TypeScript.

## FlatBuffers: Zero-Copy for Performance-Critical Paths

FlatBuffers is Google’s other serialization format, designed for a different constraint: no deserialization step.

With Protobuf, to read a field you must first parse the binary buffer into a struct. Every field gets copied out of the buffer into the parsed object, and string, bytes, and nested-message fields each trigger heap allocations. All of that work happens up front, before you can read even a single field.

FlatBuffers eliminates this. The binary buffer is the data structure. Generated accessor functions compute byte offsets and read directly from the buffer:

```cpp
// FlatBuffers in C++ — reading without deserialization
const uint8_t* buffer = zmq_frame.data();
auto trade = GetTrade(buffer);

double price    = trade->price();        // reads bytes at computed offset
int64_t id      = trade->aggregate_id(); // same — no copying
bool is_maker   = trade->is_buyer_maker();
```

Compare this to Protobuf where you’d need trade.ParseFromArray(buffer, size) first. FlatBuffers skips that step entirely.

This matters for the native C++ chart application that renders at 60 frames per second. Live trades arrive from a ZMQ socket — potentially hundreds per second. Allocating and copying a struct for each one adds up. FlatBuffers lets the application read trade data directly from the socket buffer.

## ZMQ: A Different Transport for Native Clients

The web frontend communicates over HTTP (the standard web transport). The native C++ chart communicates over ZMQ (ZeroMQ) — a lightweight messaging library that bypasses HTTP entirely.

ZMQ gives you socket patterns like pub/sub and request/reply without the overhead of HTTP. For a desktop application that’s already on the same machine or local network as the server, this is faster and simpler than HTTP.

| Path | Transport | Format | Consumer |
|---|---|---|---|
| Port 50051 | gRPC / HTTP/2 | Protobuf | Any gRPC client |
| Port 50052 | Connect / HTTP/1.1 | Protobuf | Web browser (React) |
| ZMQ ROUTER | Custom TCP | FlatBuffers | Native C++ chart (requests) |
| ZMQ PUB | Custom TCP | FlatBuffers | Native C++ chart (live stream) |

Two transports, two serialization formats, one backend. The server’s business logic is shared — only the outer serialization/transport layer differs per consumer.

## The Schema Stability Rule for FlatBuffers

FlatBuffers has a constraint you must never violate: field IDs are fixed and can never be reassigned.

```fbs
// fbs/trade.fbs
table Trade {
    aggregate_id:  int64;   // field 0 — THIS NUMBER IS PERMANENT
    price:         double;  // field 1 — PERMANENT
    quantity:      double;  // field 2 — PERMANENT
    timestamp:     int64;   // field 3 — PERMANENT
    is_buyer_maker: bool;   // field 4 — PERMANENT
}
```

Why? Because FlatBuffers doesn’t store field names in the binary format; it stores offsets indexed by field position, through a small per-table lookup called a vtable. A client reading field 2 looks up slot 2 and reads bytes at whatever offset it finds there. If you remove field 2 and add a new field, the new field inherits slot 2. The old client now reads the new field thinking it’s quantity. Silent data corruption.

What you can do: add new fields at the end (field 5, 6, 7…). Old clients ignore unknown fields — this is forward compatibility.

What you cannot do: remove, reorder, or change the type of existing fields.

Protobuf has the same rule for its field numbers (= 1, = 2, etc.), and it even provides a reserved keyword for retiring numbers safely. The difference is cultural: the rule is prominently documented across the Protobuf ecosystem, while with FlatBuffers it’s easy to forget. Document it explicitly in your schema files.

## ConnectRPC: The Bufbuild v2 Detail

One specific gotcha when setting up ConnectRPC with the latest @bufbuild/protobuf v2 library.

The v2 library bakes the service descriptor directly into the generated _pb.ts file. The legacy buf.build/connectrpc/es plugin produces a separate _connect.ts file with types incompatible with v2.

If you mix them, you get TypeScript errors like:

```
Type 'ServiceType' is not assignable to type 'DescService'
```

Fix: use only @connectrpc/connect-query (v2-compatible) and remove any _connect.ts files from legacy codegen. Your buf.gen.yaml should use a single consistent plugin version.
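A consistent buf.gen.yaml might look like the sketch below. Treat the plugin names and options as assumptions to verify against the current Connect and protobuf-es docs, since the ecosystem moves quickly:

```yaml
# buf.gen.yaml — single v2-era plugin, no legacy _connect.ts output
version: v2
plugins:
  # protoc-gen-es v2 emits messages AND service descriptors into *_pb.ts
  - remote: buf.build/bufbuild/es
    out: src/gen
    opt: target=ts
  # Note what is absent: no buf.build/connectrpc/es entry — that legacy
  # plugin is what produces the incompatible *_connect.ts files.
```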

## Which Should You Use?

| Scenario | Format |
|---|---|
| REST/gRPC API served to web clients | Protobuf + ConnectRPC |
| Mobile or web TypeScript client | Protobuf (best ecosystem) |
| Native C++ desktop application over ZMQ | FlatBuffers |
| Embedded systems with very tight memory | FlatBuffers |
| Need browser support | Protobuf (FlatBuffers has no browser story) |
| Read-heavy, latency-critical | FlatBuffers |
| Cross-language, cross-team API contract | Protobuf |

Using both in one system is fine when you have genuinely different consumers with genuinely different needs. The key: don’t introduce the complexity of two formats unless you actually need it.