Middleware Landscape

Apr 18, 2026 | Tech Software


Quick Picker

If one of these matches what you're building, jump straight to that section. The rest of the post is the vocabulary and the trade-offs behind these picks.

  • Same-host, zero-copy, fight for every microsecond. Sensor fusion, AV perception inside one box. → iceoryx
  • Glue together processes / nodes with custom topologies. No broker, no IDL, no opinions, just sockets that work. → ZeroMQ
  • Robotics / vehicle / avionics LAN, real-time, rich QoS. Strict IDL, automatic discovery, P2P over multicast. → Fast DDS / Cyclone DDS
  • One protocol from MCU to cloud, lossy WAN included. Pub/sub + query, no IDL required, peer-to-peer or routed. → Zenoh
  • IoT devices, intermittent network, a broker is fine. Many small clients sending telemetry over flaky links. → MQTT (Mosquitto / EMQX)
  • Persistent, replayable event log for an event-driven backend. New consumers must read history; producers and analytics decoupled. → Apache Kafka
  • Strongly-typed cross-language service calls, server-to-server. Microservices that need contracts and good tooling. → gRPC
  • External / partner / browser-callable API on the public internet. Where "curl works" matters more than raw performance. → REST + OpenAPI
  • Browser-server real-time two-way messaging. Chat, dashboards, signalling, multiplayer. → WebSocket

Comparison Dimensions

Vocabulary used in the landscape below. How much each dimension matters is case-by-case — it depends entirely on what you're trying to build.

Dimension | Concepts | Purpose
Communication Paradigm (Fit) | pub/sub, request/reply, long-running action (cancelable + feedback), streaming | Which communication shapes does my use case actually need?
Topology Model (Deploy) | brokerless, broker-centric, hybrid, client-server | Am I willing to run a central process? What dies if it goes down?
Network Reach (Topology) | intra-host (SHM/IPC), LAN (multicast/mDNS), WAN (NAT/TLS/edge) | Where do producers and consumers live, and how far apart on the network?
Schema & Contract Style (Contract) | strict IDL with codegen (.proto, .idl) vs. free-form payload by documentation; protobuf, IDL/CDR, JSON, MessagePack, raw bytes | Is the wire contract enforced by the compiler, or only by docs and discipline?
Cross-Language Support (Reach) | number of officially supported bindings; API and feature parity across them | Can a polyglot team all speak it natively?
Real-Time Behavior (Timing) | latency floor, jitter, kernel bypass, lock-free queues, predictability under load | Can it close a control loop or carry uncompressed video?
Deployment Complexity (Ops) | brokers, IDL/codegen, TLS/certs, service discovery, network setup needed | How many steps from zero to a working hello-world across two machines?
Delivery & QoS (Reliability) | at-most/at-least/exactly-once, ordering, deadline, durability, history | What happens when a message is lost, late, or duplicated?
Cross-Platform Support (Reach) | Linux/Windows/macOS, RTOS, browser/wasm, embedded MCU, mobile | Will every device form-factor in my fleet run it?
Footprint / Lightweight (Cost) | binary size, runtime memory, dependency depth, optional or mandatory daemon | Does it fit on the smallest target in the system?
Discovery (Bootstrap) | multicast (SPDP), mDNS, registry / Discovery Server, DNS, static config | How do nodes find each other when they come online?
Persistence & Replay (Recovery) | on-disk log, message TTL, history depth, late-joiner catch-up | Can a new subscriber receive past data, or only what arrives after?
Security (Compliance) | TLS/mTLS, ACL/RBAC, payload encryption, signed identity | Is it safe to run across organizations or the public internet?
Ecosystem & Tooling (Debug) | recorders, visualizers, sniffers, CLI, monitoring/observability integration | When something is wrong, can I see it without writing code?

Landscape

For each category we name the most popular framework today, plus notable alternatives that differ on something you might care about. Reach at each network tier is rated on three levels:

  • native
  • via wrappers / conventions
  • effectively unsupported
Zero-Copy Shared Memory IPC
  • Same machine only — pass data between processes at the speed of memory
  • iceoryx: Eclipse zero-copy publish/subscribe over shared memory (most popular)
    Reach: intra-host · LAN · WAN
    Languages: C, C++ (official); Rust (via iceoryx2); Python via community bindings
    Strong at: Real-Time Behavior, Delivery & QoS, Footprint / Lightweight
    Best for: AV perception pipelines · DDS shared-memory transport · ROS 2 zero-copy · intra-host sensor fusion
    Producers write into shared memory pools and subscribers receive a pointer; payload is never copied or serialized. A central RouDi daemon coordinates pools and discovery. Often paired with Cyclone DDS or Fast DDS as the shared-memory transport so the higher-level API stays DDS while the on-host hop is zero-copy.
Other notable options in this category:
  • iceoryx2 — Rust rewrite of iceoryx, decentralized. No central RouDi daemon (peer-to-peer service discovery in shared memory); type-safe Rust API with C/C++ bindings; cleaner deps, MIT/Apache licensed.
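The zero-copy idea itself is easy to demonstrate with Python's standard library: two handles onto the same named shared-memory segment map the same physical pages, so the payload is never serialized or copied. This is a concept sketch only — iceoryx's real API (loaned samples, RouDi-managed pools) looks nothing like this.

```python
from multiprocessing import shared_memory

# "Producer": create a named shared-memory segment and write the payload
# directly into it -- no serialization step.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# "Consumer": attach to the same segment by name. Both handles map the same
# physical memory, so no copy of the published data is made in transit.
view = shared_memory.SharedMemory(name=shm.name)
payload = bytes(view.buf[:5])   # copy out only to print it

view.close()
shm.close()
shm.unlink()
print(payload)                  # b'hello'
```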
Lightweight Messaging Library
  • Sockets on steroids — you get patterns, you build the system
  • ZeroMQ: brokerless socket-style messaging patterns for any language (most popular)
    Reach: intra-host · LAN · WAN
    Languages: 40+ bindings — C, C++, Python (pyzmq), Java (JeroMQ), Go, Rust, .NET, Node.js, Ruby, Erlang, Lua, Haskell, Perl, ...
    Strong at: Footprint / Lightweight, Cross-Language Support, Cross-Platform Support, Deployment Complexity
    Best for: intra-process glue · custom brokerless topologies · high-perf prototyping · embedded message bus
    Not a complete middleware — a library of 8+ socket patterns (REQ/REP, PUB/SUB, PUSH/PULL, ROUTER/DEALER, ...). Anything higher-level (durability, schema, discovery, security) is the application's job. CurveZMQ provides authenticated encryption when needed.
Other notable options in this category:
  • nng — BSD-licensed nanomsg successor. Same scalability protocols as ZeroMQ, but thread-safe sockets, simpler async API, MIT-licensed; smaller community and fewer language bindings.
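To get a feel for the "sockets on steroids" style, here is a REQ/REP pair over ZeroMQ's in-process transport — no broker, no daemon, no config. A minimal sketch assuming the pyzmq binding is installed:

```python
import threading
import zmq  # pyzmq binding

ctx = zmq.Context.instance()

# inproc endpoints must be bound before anyone connects, so bind first.
rep = ctx.socket(zmq.REP)
rep.bind("inproc://echo")

def serve_one():
    msg = rep.recv()            # blocks until the REQ side sends
    rep.send(b"echo:" + msg)

t = threading.Thread(target=serve_one)
t.start()

req = ctx.socket(zmq.REQ)
req.connect("inproc://echo")
req.send(b"hello")
reply = req.recv()
t.join()
req.close()
rep.close()
print(reply)                    # b'echo:hello'
```

Swapping `inproc://` for `tcp://host:port` makes the same pattern work across machines — that is the core appeal.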
Data-Centric Pub/Sub (DDS)
  • Brokerless real-time pub/sub with rich QoS — born for robotics, avionics, vehicles
  • Fast DDS: eProsima's OMG DDS implementation and the default RMW for ROS 2 (most popular)
    Reach: intra-host · LAN · WAN
    Languages: C++ (primary), Python (fastdds-python); Java and others via community / ROS 2 client libraries
    Strong at: Schema & Contract Style, Real-Time Behavior, Delivery & QoS, Discovery, Cross-Language Support
    Best for: robotics control loop · ROS 2 default RMW · vehicle E/E architecture · industrial automation
    Implements OMG DDS-RTPS. Strict IDL with CDR-encoded payloads, automatic SPDP/SEDP discovery over multicast, very fine-grained QoS (reliability, durability, deadline, history, liveliness, ...). Same-host traffic auto-uses SHM transport. WAN bridging needs Discovery Server or Routing Service since multicast doesn't survive routers.
Other notable options in this category:
  • Cyclone DDS — Eclipse DDS implementation. Lighter footprint than Fast DDS, default RMW in some ROS 2 distros (Galactic and onward); first-class iceoryx integration for shared memory.
  • RTI Connext DDS — Commercial DDS flagship. Best-in-class tooling (Admin Console, Recording Service); Connext Micro/Cert for safety-critical / DO-178C workloads; widely used in defense and avionics.
  • OpenDDS — Object Computing's open-source DDS. Pluggable transports including TCP and multicast RTPS; common in defense and distributed simulation.
Unified Pub/Sub + Query Fabric
  • One protocol that scales from MCU to cloud, with both publish and query semantics
  • Zenoh: Eclipse's pub/sub + query + storage in one protocol (most popular)
    Reach: intra-host · LAN · WAN
    Languages: Rust (canonical), C, C++, Python, Java, Kotlin, JS/TS, .NET; zenoh-pico (C) for MCU and wasm
    Strong at: Network Reach, Topology Model, Footprint / Lightweight, Schema & Contract Style, Cross-Platform Support
    Best for: cloud–edge–device data fabric · ROS 2 RMW for low-bandwidth links · constrained-network telemetry · cross-site IoT aggregation
    Single protocol designed to span device, edge, and cloud — runs in P2P, client-router, or routed-mesh modes. Payload is opaque bytes, no IDL required (the contract is by documentation). Queryables let consumers ask routers/peers for current values. The zenoh-pico build runs on MCUs and wasm.
IoT Broker Pub/Sub
  • MQTT — the de-facto protocol for telemetry over flaky networks
  • Eclipse Mosquitto: lightweight, single-binary MQTT 3.1 / 3.1.1 / 5 broker (most popular)
    Reach: intra-host · LAN · WAN
    Languages: Broker in C; client libs in essentially every language — C, C++, Python (paho-mqtt), Java (Paho), JS, Go, Rust, .NET, Swift, Objective-C, Erlang, ...
    Strong at: Footprint / Lightweight, Cross-Platform Support, Delivery & QoS, Network Reach, Security
    Best for: IoT telemetry · smart home · industrial sensor uplink · intermittent-network mobile
    Minimal C broker; three QoS levels (0/1/2), retained messages, and last-will messages. MQTT 5 adds response-topic and correlation-data, so request/reply is workable. Mosquitto itself is single-broker by design; horizontal scale means moving to a clustering broker (e.g. EMQX). Long-lived TCP connections keep mobile and NAT'd devices reachable.
Other notable options in this category:
  • EMQX — Erlang-based scalable MQTT broker. Built-in clustering for millions of connections, rule engine for in-broker stream processing, MQTT-over-QUIC, cloud/K8s-native deployment.
  • HiveMQ — Enterprise MQTT broker. Commercial offering with HA, extensive monitoring, strong client SDKs; heavily used in connected vehicles.
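MQTT's topic filters are worth internalizing: `+` matches exactly one level, `#` matches the remainder. A toy matcher sketch in plain Python (ignoring `$`-prefixed topics and other spec edge cases):

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Toy MQTT topic-filter match: '+' = one level, '#' = all remaining
    levels. Assumes a well-formed filter ('#' only as the last level)."""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                        # multi-level wildcard
            return True
        if i >= len(t_parts):               # filter is longer than the topic
            return False
        if f != "+" and f != t_parts[i]:    # '+' matches any single level
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("home/+/temp", "home/kitchen/temp"))  # True
print(topic_matches("home/#", "home/kitchen/temp"))       # True
print(topic_matches("home/+/temp", "home/kitchen/hum"))   # False
```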
Distributed Log / Event Streaming
  • Append-only log of events, persistent and replayable — the spine of event-driven systems
  • Apache Kafka: distributed commit log for streams of records (most popular)
    Reach: intra-host · LAN · WAN
    Languages: Java / Scala (official); via librdkafka — C, C++, Python (confluent-kafka-python), Go, .NET, Node.js, Rust, PHP, Ruby
    Strong at: Persistence & Replay, Delivery & QoS, Schema & Contract Style, Ecosystem & Tooling, Discovery
    Best for: event sourcing · log aggregation pipelines · stream analytics · change-data-capture · microservices event backbone
    Records are appended to partitioned, replicated topics on disk and kept for a configurable retention window. Consumers track their own offsets, so the same data can be re-read by new consumers later. Schema Registry adds schema enforcement at the serialization layer on top of the byte payload. KRaft mode replaces ZooKeeper as of the 3.x releases.
Other notable options in this category:
  • Apache Pulsar — Streaming + queueing with separated compute and storage. Brokers (compute) and BookKeeper (storage) scale independently; multi-tenant by design; supports both queue and stream semantics natively.
  • NATS JetStream — Persistence layer on top of NATS core. Much lighter to operate than Kafka (single Go binary, no JVM/ZK); native K8s integration; smaller throughput ceiling than Kafka or Pulsar.
  • Redpanda — C++ Kafka-API drop-in. Kafka-protocol compatible but no JVM and no ZooKeeper; thread-per-core architecture; lower tail latency, simpler ops.
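"Consumers track their own offsets" is the key difference from a queue, and the model fits in a few lines. A toy in-memory log — nothing Kafka-specific here; real topics are partitioned, replicated, and on disk:

```python
class TinyLog:
    """Toy append-only log: records are kept, and each consumer remembers
    its own read position (offset) instead of the broker deleting on ack."""

    def __init__(self):
        self.records = []

    def append(self, record) -> int:
        self.records.append(record)
        return len(self.records) - 1        # offset of the appended record

    def read(self, offset, max_records=10):
        return self.records[offset:offset + max_records]

log = TinyLog()
for event in ("created", "paid", "shipped"):
    log.append(event)

# A consumer that has processed everything just remembers offset 3, while a
# consumer attached later replays the topic from offset 0.
replayed = log.read(0)
print(replayed)    # ['created', 'paid', 'shipped']
```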
Modern RPC
  • Strongly-typed cross-language service calls over HTTP/2
  • gRPC: polyglot RPC framework over HTTP/2 + protobuf (most popular)
    Reach: intra-host · LAN · WAN
    Languages: C++, Java, Python, Go, Ruby, Node.js, C#, Objective-C, PHP, Dart, Kotlin, Swift (12 officially supported)
    Strong at: Cross-Language Support, Schema & Contract Style, Ecosystem & Tooling, Security, Deployment Complexity
    Best for: internal microservices RPC · polyglot service backends · low-overhead service-to-service calls · long-lived streaming feeds
    Service contracts in .proto are compiled into client + server stubs for ~12 languages. Four call modes (unary, server-stream, client-stream, bidi-stream); streaming + Context cancellation can simulate long-running actions. Built-in mTLS, deadlines, retries. Browsers need gRPC-Web or Connect.
Other notable options in this category:
  • Connect-RPC — Buf's gRPC-compatible protocol over HTTP/1.1 + JSON. Same .proto contracts, but works over plain HTTP/1.1 with JSON or protobuf; first-class browser and curl support; no separate gRPC-Web proxy needed.
  • tRPC — End-to-end type-safe RPC for TypeScript. Single-language (TS) but no IDL — types flow directly from server functions to client; popular in full-stack TypeScript apps.
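For a feel of what the .proto contract looks like, here is a hypothetical service sketch (names invented for illustration) with a unary call next to a server-streaming feed; protoc compiles this one file into typed stubs for every supported language:

```protobuf
syntax = "proto3";

package telemetry.v1;  // hypothetical package

service Telemetry {
  // Unary request/reply.
  rpc GetStatus(StatusRequest) returns (StatusReply);
  // Server-streaming: one request, a long-lived feed of replies.
  rpc WatchReadings(WatchRequest) returns (stream Reading);
}

message StatusRequest { string node_id = 1; }
message StatusReply   { bool healthy = 1; }
message WatchRequest  { string sensor = 1; }
message Reading       { double value = 1; int64 unix_ms = 2; }
```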
Web RPC Baseline
  • The universal default — anything that speaks HTTP can call it
  • REST + OpenAPI: stateless HTTP request/reply with an optional schema spec (most popular)
    Reach: intra-host · LAN · WAN
    Languages: Anything that speaks HTTP (i.e. every language). OpenAPI codegen supports Java, JS/TS, Python, Go, Rust, C#, Swift, Kotlin, PHP, Ruby, ...
    Strong at: Cross-Language Support, Cross-Platform Support, Ecosystem & Tooling, Discovery, Security
    Best for: external/public APIs · web and mobile backends · cross-organization integrations · anywhere curl works
    Not really a framework but a style — resources are identified by URLs, operations map to HTTP methods, and JSON is the usual payload. OpenAPI gives a schema after the fact, with codegen for many languages, but it's documentation-first, not enforced at the wire. Long-running tasks are approximated by polling or Server-Sent Events.
Other notable options in this category:
  • GraphQL — Query-shaped data fetching over HTTP. Client declares the exact response shape; one endpoint, schema-strict; subscriptions for push; great when clients have varied data needs.
  • JSON-RPC 2.0 — Minimal JSON-based RPC protocol. Tiny spec, transport-agnostic (HTTP, WebSocket, stdio); the basis of LSP, WebRTC signalling, Ethereum APIs.
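"Anything that speaks HTTP" includes the standard library. A self-contained sketch using only Python's stdlib — a one-endpoint JSON server plus a client call against it (the /health path and the payload are invented for illustration):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):       # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urlopen(f"http://127.0.0.1:{server.server_port}/health") as resp:
    data = json.load(resp)
server.shutdown()
print(data)    # {'status': 'ok'}
```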
Browser-Friendly Bidirectional
  • Long-lived two-way messaging over a single TCP connection
  • WebSocket: RFC 6455 full-duplex frames over an HTTP upgrade (most popular)
    Reach: intra-host · LAN · WAN
    Languages: Native in all browsers (JS/TS); server libs in C, C++, Python (websockets, aiohttp), Java, Go, Rust, .NET, Node.js, Ruby, ...
    Strong at: Cross-Platform Support, Network Reach, Footprint / Lightweight, Real-Time Behavior
    Best for: browser dashboards / chat / games · live notifications · multiplayer signalling · server-to-browser push
    Just a transport — a framed pipe with no message types, schema, routing, or reconnection. Anything higher-level (channels, subscribe semantics, retries) is the application's responsibility. Browser-native, no plugins.
Other notable options in this category:
  • Socket.IO — WebSocket + namespaces + rooms + auto-reconnect. Adds rooms/namespaces, automatic reconnection, transport fallback (long-polling), broadcast helpers; Node-centric but has clients in many languages.
  • SignalR — Microsoft's real-time messaging for .NET. Same idea as Socket.IO but in the .NET ecosystem; multi-transport with WebSocket primary; tight ASP.NET integration.
  • Server-Sent Events (SSE) — One-way server→browser push over HTTP. Half-duplex (server only), but works over plain HTTP/2 with auto-reconnect built in; great when you only need push.
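The "HTTP upgrade" in the WebSocket row is concrete and checkable: the server proves it understood the handshake by hashing the client's Sec-WebSocket-Key together with a GUID fixed by RFC 6455. The key/accept pair below is the spec's own example:

```python
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"   # constant from RFC 6455

def sec_websocket_accept(client_key: str) -> str:
    """Compute the Sec-WebSocket-Accept header for a given client key."""
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Example handshake values taken from RFC 6455, section 1.3:
accept = sec_websocket_accept("dGhlIHNhbXBsZSBub25jZQ==")
print(accept)    # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```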

Concepts Discussion

Two of the concepts above deserve deeper discussion: pub/sub vs. streaming, and brokered vs. brokerless topologies.

These two paradigms look similar — "data flowing from producer to consumer" — but they answer different questions.

  • Pub/sub is a routing pattern: a producer publishes one discrete message and the system fans it out to whoever is currently subscribed to the topic.
  • Streaming is a transport pattern: producer and consumer hold a long-lived ordered channel where flow control and position tracking are first-class.
Pub/sub — fan-out by topic
flowchart LR
  P[Publisher] -->|"topic /tick"| Bus(("Bus / Topic"))
  Bus --> S1[Subscriber A]
  Bus --> S2[Subscriber B]
  Bus --> S3[Subscriber C]
    
Streaming — long-lived channel with backpressure
flowchart LR
  P[Producer] -->|"frame n, n+1, n+2 ..."| C[Consumer]
  C -.->|"ack / credit"| P
    
Pub/sub strengths
  • Producers and consumers are completely decoupled — neither knows the other exists.
  • Natural one-to-many fan-out; new subscribers just join.
  • Each message is logically independent — easy to reason about, easy to drop.
Pub/sub weaknesses
  • No built-in flow control between a specific producer/consumer pair — slow consumers either drop messages or crash the system.
  • Recovering missed messages requires explicit durability (history depth, replay), which not every middleware offers.
  • Doesn't model continuous data well — a 60 Hz camera feed in pub/sub is just 60 independent messages per second, not "a video stream".
Streaming strengths
  • Built-in ordering, backpressure, and ack/credit — slow consumers naturally throttle the producer instead of being silently dropped.
  • Resumable: the channel remembers the consumer's position (offset / cursor).
  • Models continuous flows naturally — video, audio, log tailing, gRPC streaming RPC.
Streaming weaknesses
  • Producer is more coupled to the consumer (or to the broker that holds the cursor).
  • Fan-out costs more — a stream is typically point-to-point; serving N consumers means N streams or a broker that materializes them.
  • Long-lived per-stream state means more memory and more failure modes (dead streams, half-open connections).
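The drop-vs-throttle distinction above can be made concrete with two bounded queues in plain Python: fire-and-forget fan-out silently loses messages at a slow subscriber, while a blocking channel paces the producer (backpressure):

```python
import queue
import threading

# Pub/sub flavour: fan-out, fire-and-forget. A slow subscriber (tiny queue)
# silently drops messages; the publisher never notices.
subscribers = {"fast": queue.Queue(), "slow": queue.Queue(maxsize=1)}

def publish(msg):
    for q in subscribers.values():
        try:
            q.put_nowait(msg)       # no flow control between this pair
        except queue.Full:
            pass                    # message lost for that subscriber

for i in range(3):
    publish(i)
dropped = 3 - subscribers["slow"].qsize()
print(dropped)                      # 2

# Streaming flavour: a bounded channel where put() blocks when full, so the
# producer is throttled to the consumer's pace instead of dropping.
channel = queue.Queue(maxsize=1)
received = []

def consume():
    for _ in range(3):
        received.append(channel.get())

t = threading.Thread(target=consume)
t.start()
for frame in range(3):
    channel.put(frame)              # blocks until the consumer catches up
t.join()
print(received)                     # [0, 1, 2]
```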

The question is whether a separate process — the broker — sits in the middle of every conversation.

  • Brokered systems (MQTT, Kafka, RabbitMQ) route every message through that central node.
  • Brokerless systems (DDS, ZeroMQ, Zenoh in P2P mode) let peers talk directly after a discovery handshake.
Brokered — every message hops through the broker
flowchart LR
  P1[Publisher 1] --> B((Broker))
  P2[Publisher 2] --> B
  B --> S1[Subscriber 1]
  B --> S2[Subscriber 2]
    
Brokerless — peers talk directly after discovery
flowchart LR
  P1((Peer 1)) --> P2((Peer 2))
  P2 --> P1
  P1 --> P3((Peer 3))
  P3 --> P1
  P2 --> P3
  P3 --> P2
    
Brokered strengths
  • One place to enforce auth, ACL, rate limits, monitoring, and audit logs.
  • Persistence and replay are natural — the broker stores the log/queue.
  • Discovery is trivial — every client just connects to the broker's address.
  • Crosses NAT and firewalls easily — only the broker needs an open port.
Brokered weaknesses
  • Extra network hop adds latency (typically a few ms; matters for control loops).
  • Single point of failure unless you run an HA cluster, and HA is non-trivial to operate.
  • Operational burden: another process to deploy, scale, version, and secure.
  • Throughput ceiling: the broker is a bottleneck at very high message rates.
Brokerless strengths
  • Lowest possible latency — direct path, no intermediate hop.
  • No single point of failure; nothing extra to deploy or pay for.
  • Cleanest deployment story for closed networks (one binary per node, done).
Brokerless weaknesses
  • Discovery is your problem — multicast/mDNS works great on a flat LAN but breaks across subnets, on Wi-Fi, and in containers.
  • Hostile to NAT and firewalls — every peer needs reachability to every other peer.
  • Persistence and replay are the application's responsibility.
  • Security and auth are distributed across N peers — harder to audit and rotate.
  • Doesn't gracefully scale past dozens to a few hundred peers (full-mesh discovery cost).

Most real systems are hybrid. Zenoh and DDS run brokerless on a LAN and add routers / discovery servers to bridge across WAN. Kafka has cluster brokers but clients talk directly to the leader broker per partition. The two designs aren't a religious choice — they're a knob you tune per network segment.

Granularity

One support matrix per Comparison Dimension above; each matrix covers the same set of frameworks.

  • native — first-class, designed for it.
  • via wrappers / conventions / careful tuning.
  • effectively unsupported (tri-state matrices only).
  • Blank cell — not a fit for this dimension; binary matrices simply omit the mark.

Which communication shapes — pub/sub, request/reply, long-running action, streaming — the framework treats as first-class versus something you build on top with conventions.

Framework pub/sub request/reply long-running action streaming
iceoryx
iceoryx2
ZeroMQ
Fast DDS
Cyclone DDS
Zenoh
MQTT (Mosquitto)
Apache Kafka
gRPC
REST + OpenAPI
WebSocket

Whether the framework is happiest with peers talking directly (brokerless), a central process in the middle (broker-centric), a mix of the two (hybrid), or a classic client-server split.

Framework brokerless broker-centric hybrid client-server
iceoryx
iceoryx2
ZeroMQ
Fast DDS
Cyclone DDS
Zenoh
MQTT (Mosquitto)
Apache Kafka
gRPC
REST + OpenAPI
WebSocket

How far across the network the framework realistically reaches without bolting on extra gateways or routers — same host, same LAN, or across the public WAN.

Framework intra-host LAN WAN
iceoryx
iceoryx2
ZeroMQ
Fast DDS
Cyclone DDS
Zenoh
MQTT (Mosquitto)
Apache Kafka
gRPC
REST + OpenAPI
WebSocket

Whether the framework forces you to define a separate wire-format file (and run a code generator) before any two parties can talk, or just moves opaque bytes and leaves the contract up to you.

  • strict typing required — separate IDL file with codegen, or strict in-language struct shared by both sides.
  • protocol is bytes, but a schema layer (Avro, OpenAPI, …) is the de-facto convention.
  • 🆓 no schema concept — payload is opaque bytes; you're free to use whatever format you want.
Framework IDL Approach
iceoryx Producer and consumer share the same C/C++ POD struct definition. No separate IDL file or codegen — the language type is the contract; the compiler enforces it.
iceoryx2 Producer and consumer share the same Rust / C++ / C type. No separate IDL file — the language type is the contract.
ZeroMQ 🆓 Opaque byte payload. You pick the serialization (JSON, protobuf, MsgPack, …) and document the wire format yourself.
Fast DDS .idl file + fastddsgen generates typed accessors and CDR serialization. Strict, compile-time enforced across languages.
Cyclone DDS .idl file + idlc generates typed accessors and CDR serialization. Strict, compile-time enforced across languages.
Zenoh 🆓 Opaque byte payload. Conventions only; pair with protobuf / CBOR / your own format if you want strictness.
MQTT (Mosquitto) 🆓 Opaque byte payload per topic. Documentation contracts only — many MQTT deployments use ad-hoc JSON.
Apache Kafka Bytes by default, but the Confluent Schema Registry (Avro / Protobuf / JSON Schema) is the de-facto contract layer in production deployments.
gRPC .proto file + protoc generates typed stubs and protobuf serialization. Strict, compile-time enforced across languages.
REST + OpenAPI JSON over HTTP by default; OpenAPI is widely used as an external schema and codegen source, but the runtime does not enforce it — you can always send arbitrary JSON.
WebSocket 🆓 Opaque text or binary frames. No schema layer in the protocol; bring your own (Socket.IO, JSON-RPC, custom).

Whether each framework has an officially supported or widely-used mature library in the language. Niche or community-only bindings are not counted. For MQTT the row reflects the protocol's client ecosystem (paho-mqtt etc.), not the Mosquitto broker binary specifically.

Framework C++ Python JS/TS Rust Java Go C# C Swift
iceoryx
iceoryx2
ZeroMQ
Fast DDS
Cyclone DDS
Zenoh
MQTT (Mosquitto)
Apache Kafka
gRPC
REST + OpenAPI
WebSocket

Which class of timing the framework is engineered for. "Hard real-time" means bounded worst-case latency suitable for closing a control loop; "soft" means typical latency is low but the tail is not bounded; "best-effort" means latency is whatever the network and the GC give you.

Framework hard real-time soft real-time best-effort
iceoryx
iceoryx2
ZeroMQ
Fast DDS
Cyclone DDS
Zenoh
MQTT (Mosquitto)
Apache Kafka
gRPC
REST + OpenAPI
WebSocket

Pieces typically required to run the framework in production. Read this matrix inverted from the others:

  • this piece is required — more checks ≈ more deployment burden.
  • Blank — you don't have to deal with that piece.
Framework extra daemon central broker / cluster IDL codegen step TLS / cert setup
iceoryx
iceoryx2
ZeroMQ
Fast DDS
Cyclone DDS
Zenoh
MQTT (Mosquitto)
Apache Kafka
gRPC
REST + OpenAPI
WebSocket

Delivery guarantees the framework offers natively. Opting in usually still requires a config flag (e.g. RELIABLE in DDS, QoS 1/2 in MQTT, transactional producer in Kafka).

Framework at-most-once at-least-once exactly-once ordering history / durability deadline / TTL
iceoryx
iceoryx2
ZeroMQ
Fast DDS
Cyclone DDS
Zenoh
MQTT (Mosquitto)
Apache Kafka
gRPC
REST + OpenAPI
WebSocket

Whether the framework can be deployed on the platform without significant porting work. "Embedded RTOS" covers QNX, FreeRTOS, Zephyr, and similar.

Framework Linux Windows macOS Android iOS Embedded RTOS Browser / wasm
iceoryx
iceoryx2
ZeroMQ
Fast DDS
Cyclone DDS
Zenoh
MQTT (Mosquitto)
Apache Kafka
gRPC
REST + OpenAPI
WebSocket

Form-factors the framework realistically fits into — RAM, dependencies, build system all considered. "MCU / RTOS class" implies bare-metal or RTOS without a full POSIX environment; "embedded Linux" is e.g. Yocto / BuildRoot images.

Framework MCU / RTOS class embedded Linux edge / mobile server / cloud
iceoryx
iceoryx2
ZeroMQ
Fast DDS
Cyclone DDS
Zenoh
MQTT (Mosquitto)
Apache Kafka
gRPC
REST + OpenAPI
WebSocket

How nodes find each other on bring-up. Multicast / mDNS works on a flat LAN but typically breaks across subnets and on Wi-Fi; static config and DNS scale further but require operator setup.

Framework static config DNS multicast / mDNS discovery server / registry
iceoryx
iceoryx2
ZeroMQ
Fast DDS
Cyclone DDS
Zenoh
MQTT (Mosquitto)
Apache Kafka
gRPC
REST + OpenAPI
WebSocket

Whether the framework lets a late-joining subscriber receive past data, and how far back it can replay — in-memory only, on-disk durable log, or none at all.

Framework in-memory history on-disk durable log replay to late joiner TTL / message expiry
iceoryx
iceoryx2
ZeroMQ
Fast DDS
Cyclone DDS
Zenoh
MQTT (Mosquitto)
Apache Kafka
gRPC
REST + OpenAPI
WebSocket

Security primitives the framework provides out of the box. Many request/response stacks pick up "ACL / RBAC" and "signed identity" from a canonical companion layer (mTLS + OAuth/JWT) rather than from the wire protocol itself.

Framework TLS / mTLS ACL / RBAC payload-level encryption signed identity
iceoryx
iceoryx2
ZeroMQ
Fast DDS
Cyclone DDS
Zenoh
MQTT (Mosquitto)
Apache Kafka
gRPC
REST + OpenAPI
WebSocket

Officially supported (or de-facto community standard) tools you can install today — what you reach for when something breaks and you need eyes on the wire.

Framework CLI GUI viewer / monitor recorder / replay metrics / observability official SDK docs
iceoryx
iceoryx2
ZeroMQ
Fast DDS
Cyclone DDS
Zenoh
MQTT (Mosquitto)
Apache Kafka
gRPC
REST + OpenAPI
WebSocket