# FlatBuffers and Protobuf in One System: Picking the Right Serialization Format
## What Is Serialization?
When two programs communicate — a Go backend and a TypeScript frontend, or a Go server and a C++ desktop app — they need to send data over the network. But data in memory (structs, objects, arrays) can’t be sent directly over a socket. It needs to be converted to bytes first.
That conversion is serialization. The reverse — converting bytes back into usable data structures — is deserialization.
Different serialization formats make different tradeoffs. JSON is human-readable but verbose. Protobuf is compact and fast but requires a schema. FlatBuffers skips deserialization entirely, which makes it even faster than Protobuf in read-heavy paths. Understanding why they’re different helps you pick the right one for each job.
## Protobuf: The Industry Standard
Protobuf (Protocol Buffers) is Google’s serialization format. You write a .proto schema:
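A minimal sketch of what such a schema might look like, using a Trade message that mirrors the fields this article uses later (the package name and field numbering here are illustrative, not from the original system):

```proto
// proto/trade.proto (hypothetical example)
syntax = "proto3";

package market.v1;

message Trade {
  int64 aggregate_id = 1;
  double price = 2;
  double quantity = 3;
  int64 timestamp = 4;
  bool is_buyer_maker = 5;
}
```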
From this schema, code generators produce Go structs and TypeScript types. The wire format is compact binary — much smaller than JSON. And with gRPC, you get bidirectional streaming, request/response typing, and generated client libraries in most languages.
The catch for browsers: gRPC depends on HTTP/2 trailers, which the browser fetch API does not expose, so browsers can’t speak native gRPC. That’s where ConnectRPC comes in: it implements the same .proto service but serves it over plain HTTP/1.1, which browsers do support. Same schema, same generated TypeScript types, works in a browser.
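As a sketch, a service definition that both gRPC and ConnectRPC can serve from the same schema might look like this (the service name, RPC names, and request/response shapes are illustrative, not from the original system):

```proto
// proto/market_service.proto (hypothetical example)
syntax = "proto3";

package market.v1;

message GetTradesRequest {
  string symbol = 1;
  int64 limit = 2;
}

message GetTradesResponse {
  repeated Trade trades = 1;  // Trade as defined alongside this file
}

service MarketDataService {
  // Unary request/response; ConnectRPC serves this over HTTP/1.1.
  rpc GetTrades(GetTradesRequest) returns (GetTradesResponse);
  // Server streaming for live updates over gRPC.
  rpc StreamTrades(GetTradesRequest) returns (stream Trade);
}
```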
This is Protobuf’s strength: a rich ecosystem. buf for schema linting and code generation. connectrpc for browser-compatible gRPC. React Query hooks generated automatically from service definitions. One .proto file drives the entire API contract for both Go and TypeScript.
## FlatBuffers: Zero-Copy for Performance-Critical Paths
FlatBuffers is Google’s other serialization format, designed for a different constraint: no deserialization step.
With Protobuf, to read a field you must first parse the binary buffer into a message object. Scalar fields get copied into that object, and every string, bytes, and nested message field costs its own heap allocation. Parse hundreds of messages per second and those copies and allocations add up.
FlatBuffers eliminates this. The binary buffer is the data structure. Generated accessor functions compute byte offsets and read directly from the buffer:
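The idea is easiest to see with a simplified sketch. This is not the real FlatBuffers layout (real FlatBuffers resolves field positions through a vtable so optional fields work); it only illustrates the zero-copy principle of reading fields in place at known byte offsets. The offsets and the `TradeView` type are hypothetical:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified zero-copy accessor: the buffer IS the data structure.
// Hypothetical fixed offsets stand in for FlatBuffers' vtable lookup.
struct TradeView {
    const uint8_t* buf;  // borrowed pointer into the wire buffer

    int64_t aggregate_id() const {  // field at offset 0 (assumed layout)
        int64_t v;
        std::memcpy(&v, buf + 0, sizeof v);  // one read, zero heap allocations
        return v;
    }
    double price() const {          // field at offset 8 (assumed layout)
        double v;
        std::memcpy(&v, buf + 8, sizeof v);
        return v;
    }
};

// Build a sample wire buffer with the same layout, for demonstration only.
std::vector<uint8_t> make_sample_trade(int64_t id, double price) {
    std::vector<uint8_t> b(16);
    std::memcpy(b.data() + 0, &id, sizeof id);
    std::memcpy(b.data() + 8, &price, sizeof price);
    return b;
}
```

In real FlatBuffers code you would call the generated `GetTrade(buffer)` and read `trade->price()`; the accessors do the same in-place reads, just with vtable-resolved offsets.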
Compare this to Protobuf where you’d need trade.ParseFromArray(buffer, size) first. FlatBuffers skips that step entirely.
This matters for the native C++ chart application that renders at 60 frames per second. Live trades arrive from a ZMQ socket — potentially hundreds per second. Allocating and copying a struct for each one adds up. FlatBuffers lets the application read trade data directly from the socket buffer.
## ZMQ: A Different Transport for Native Clients
The web frontend communicates over HTTP (the standard web transport). The native C++ chart communicates over ZMQ (ZeroMQ) — a lightweight messaging library that bypasses HTTP entirely.
ZMQ gives you socket patterns like pub/sub and request/reply without the overhead of HTTP. For a desktop application that’s already on the same machine or local network as the server, this is faster and simpler than HTTP.
| Path | Transport | Format | Consumer |
|---|---|---|---|
| Port 50051 | gRPC / HTTP/2 | Protobuf | Any gRPC client |
| Port 50052 | Connect / HTTP/1.1 | Protobuf | Web browser (React) |
| ZMQ ROUTER | Custom TCP | FlatBuffers | Native C++ chart (requests) |
| ZMQ PUB | Custom TCP | FlatBuffers | Native C++ chart (live stream) |
Two transports, two serialization formats, one backend. The server’s business logic is shared — only the outer serialization/transport layer differs per consumer.
## The Schema Stability Rule for FlatBuffers
FlatBuffers has a constraint you must never violate: field IDs are fixed and can never be reassigned.

    // fbs/trade.fbs
    table Trade {
      aggregate_id: int64;    // field 0 — THIS NUMBER IS PERMANENT
      price: double;          // field 1 — PERMANENT
      quantity: double;       // field 2 — PERMANENT
      timestamp: int64;       // field 3 — PERMANENT
      is_buyer_maker: bool;   // field 4 — PERMANENT
    }
Why? Because FlatBuffers doesn’t store field names in the binary format — it stores offsets. A client reading field 2 reads bytes at a specific offset. If you remove field 2 and add a new field, the new field gets offset 2. The old client now reads the new field thinking it’s quantity. Silent data corruption.
What you can do: add new fields at the end (field 5, 6, 7…). Old clients ignore unknown fields — this is forward compatibility.
What you cannot do: remove, reorder, or change the type of existing fields.
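As a sketch of what safe evolution looks like on the Trade table above (the new field name is illustrative):

```fbs
table Trade {
  aggregate_id: int64;      // field 0: unchanged, forever
  price: double;            // field 1
  quantity: double;         // field 2
  timestamp: int64;         // field 3
  is_buyer_maker: bool;     // field 4
  exchange_id: int32;       // field 5: NEW, appended at the end (safe)
  // Never delete quantity or insert a field before it: that would shift
  // every later field's slot and silently corrupt old readers.
}
```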
Protobuf enforces the same rule on field numbers (= 1, = 2, etc.), and even provides a reserved keyword to retire removed numbers safely. The difference is visibility: the rule is prominently documented across the Protobuf ecosystem, while with FlatBuffers it’s easy to forget. Document it explicitly in your schema files.
## ConnectRPC: The Bufbuild v2 Detail
Here’s one specific gotcha when setting up ConnectRPC with the latest @bufbuild/protobuf v2 library.
The v2 library bakes the service descriptor directly into the generated _pb.ts file. The legacy buf.build/connectrpc/es plugin produces a separate _connect.ts file with types incompatible with v2.
If you mix them, you get TypeScript errors like:

    Type 'ServiceType' is not assignable to type 'DescService'
Fix: use only @connectrpc/connect-query (v2-compatible) and remove any _connect.ts files from legacy codegen. Your buf.gen.yaml should use a single consistent plugin version.
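A minimal buf.gen.yaml consistent with the v2 setup might look like the following (the output path is illustrative, and you should pin whatever plugin version your project actually uses):

```yaml
# buf.gen.yaml: one plugin, one protobuf runtime generation
version: v2
plugins:
  # protoc-gen-es v2 emits messages AND service descriptors into *_pb.ts,
  # so no separate _connect.ts plugin is needed (or wanted).
  - remote: buf.build/bufbuild/es
    out: src/gen
    opt: target=ts
```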
## Which Should You Use?
| Scenario | Format |
|---|---|
| REST/gRPC API served to web clients | Protobuf + ConnectRPC |
| Mobile or web TypeScript client | Protobuf (best ecosystem) |
| Native C++ desktop application over ZMQ | FlatBuffers |
| Embedded systems with very tight memory | FlatBuffers |
| Need browser support | Protobuf (FlatBuffers’ browser tooling is far thinner) |
| Read-heavy, latency-critical | FlatBuffers |
| Cross-language, cross-team API contract | Protobuf |
Using both in one system is fine when you have genuinely different consumers with genuinely different needs. The key: don’t introduce the complexity of two formats unless you actually need it.