imakris/sintra

Sintra



Sintra is a C++20 library for type-safe interprocess communication on a single host. It lets independent processes exchange typed messages, broadcast events, and invoke RPC-style calls with a compile-time-checked API, avoiding string-based protocols and external brokers. It also provides typed publish/subscribe, synchronous and asynchronous RPC, coordination primitives such as named barriers, crash detection, and opt-in worker respawning.

Sintra targets low-latency, crash-resilient local IPC where shared-memory transport and coordination need to be integrated rather than assembled from multiple layers. Common alternatives such as ZeroMQ or nanomsg provide local transports, but those are socket-based, cross the kernel boundary, and still copy data. Sintra uses memory-mapped shared rings so data stays in user space and readers access published messages directly, which is suitable for latency-sensitive workloads.

Key features

  • Type-safe APIs across processes - interfaces are expressed as C++ types, so mismatched payloads are detected at compile time instead of surfacing as runtime protocol errors.
  • Signal bus and RPC in one package - publish/subscribe dispatch, targeted fire-and-forget messages, and blocking or async remote procedure calls share the same primitives, allowing programs to mix patterns as the architecture requires.
  • Header-only distribution - integrate the library by adding the headers to a project; no separate build step or binaries are necessary.
  • No RTTI required - type ids are derived from compile-time signatures (or explicit ids when pinned).
  • Cross-platform design - shared-memory transport on Linux, macOS, Windows, and FreeBSD.
  • Opt-in crash recovery - mark critical workers with sintra::enable_recovery() so the coordinator automatically respawns them after an unexpected exit.
  • Lifeline ownership for spawned processes - child processes monitor a lifeline pipe/handle and hard-exit if the owner disappears (timeout and exit code are configurable).

Typical use cases include plugin hosts coordinating work with out-of-process plugins, GUI front-ends that need to communicate with background services, and distributed test harnesses that must keep multiple workers in sync while exchanging strongly typed data.

Quick example

// Publisher process: announce a shared struct Ping to everyone listening.
sintra::world() << Ping{};

// Receiver process: register a slot so cross-process Pings show up locally.
sintra::activate_slot([](const Ping&) {
    sintra::console() << "Received Ping from another process" << '\n';
});

Getting started

  1. Add the include/ directory to the project's include path.
  2. Use a C++20-compliant compiler (GCC, Clang, and MSVC are supported).
  3. Start with example/sintra/ for focused samples covering publish/subscribe, ping-pong, RPC, recovery, barriers, and targeted messaging.

Because everything ships as headers, Sintra works well in monorepos or projects that prefer vendoring dependencies as git submodules or fetching them during configuration.
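For projects that fetch dependencies during configuration, a CMake FetchContent setup might look like the following sketch (the repository URL is inferred from the repo name, and a moving branch is shown only for brevity; pin a specific tag or commit in practice):

```cmake
include(FetchContent)
FetchContent_Declare(sintra
    GIT_REPOSITORY https://github.com/imakris/sintra.git
    GIT_TAG        master)   # pin a release tag or commit hash in real projects
FetchContent_MakeAvailable(sintra)

target_link_libraries(my_app PRIVATE sintra::sintra)
```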

CMake integration

If Sintra is already available to your build, link the interface target:

target_link_libraries(my_app PRIVATE sintra::sintra)

For an in-tree dependency, make the target available first:

add_subdirectory(external/sintra)
target_link_libraries(my_app PRIVATE sintra::sintra)

For installed consumers, top-level builds export CMake package metadata. After cmake --install, use:

find_package(sintra CONFIG REQUIRED)
target_link_libraries(my_app PRIVATE sintra::sintra)

Reference and guide

For a browser view, build the static reference site and serve it locally:

python scripts/build_reference_site.py
python -m http.server 8000 --directory docs/reference_site

Then open http://localhost:8000/.

The Markdown sources remain available for symbol lookup in docs/reference/index.md, and the narrative guide is docs/guide.md.

Supported platforms and architectures

  • Linux, macOS, Windows, FreeBSD - shared-memory transport is supported on all four.

  • CPU architectures - Sintra targets x86/x64 and ARM/AArch64 CPUs. Builds on other architectures still succeed, but they emit a warning and fall back to a no-op spin pause in the interprocess primitives, so performance on those targets is not guaranteed.

  • macOS requirement - Sintra requires macOS 15.0 or newer with the Command Line Tools for Xcode 15 (or newer) installed (the full Xcode IDE is not required). The build fails if <os/os_sync_wait_on_address.h> or <os/clock.h> is missing.

    Older macOS versions are not supported because Sintra's adaptive reader policy relies on os_sync_wait_on_address to wait on a specific atomic with timeouts and ordered/unordered wakeup semantics. The pre-15 alternatives all force compromises that hurt the whole library: dispatch_semaphore_t loses fine-grained timeouts and the wakeup-coalescing hooks; pthread_cond_t requires a kernel transition per wake even on the fast path, defeating the spin/precision-sleep/block phases in config.h; Mach semaphore_t is per-task rather than per-address and would force rebuilding the wakeup layer on top. After evaluating these options, requiring os_sync_wait_on_address is the cleanest trade-off.

Interprocess communication patterns

Broadcast a Ping and listen from another process

// Sender process: announce a shared struct Ping to everyone listening.
sintra::world() << Ping{};

// Receiver process: register a slot so cross-process Pings show up locally.
sintra::activate_slot([](const Ping&) {
    sintra::console() << "Received Ping from another process" << '\n';
});

Send a targeted fire-and-forget message

struct Unicast_receiver : sintra::Derived_transceiver<Unicast_receiver>
{
    void handle_unicast(const Ping& msg) {
        sintra::console() << "Got targeted ping\n";
    }

    SINTRA_UNICAST(handle_unicast)
};

// Send to a specific instance id (e.g., exchanged out-of-band or via a broadcast).
Unicast_receiver::rpc_handle_unicast(target_instance_id, Ping{});

For full flows, see example/sintra/sintra_example_0_basic_pubsub.cpp, example/sintra/sintra_example_6_unicast_send_to.cpp, and example/sintra/sintra_example_2_rpc_append.cpp.

Block until a specific message arrives

// Wait for a Stop signal (synchronous receive).
sintra::receive<Stop>();

// Wait for a message and capture its payload.
auto msg = sintra::receive<DataMessage>();
sintra::console() << "value=" << msg.value << '\n';

Note: call receive<T>() from main/control threads only; do not call it from a message handler. Debug builds abort if this is violated.

Export a transceiver method for RPC

struct Remotely_accessible : sintra::Derived_transceiver<Remotely_accessible>
{
    std::string append(const std::string& s, int v) {
        return std::to_string(v) + ": " + s;
    }

    SINTRA_RPC(append); // generates Remotely_accessible::rpc_append(...)
};

Usage example:

// Callee process: create and name the instance.
Remotely_accessible ra;
ra.assign_name("instance name");

// Caller process: invoke the RPC.
auto value = Remotely_accessible::rpc_append("instance name", "Hi", 43);
sintra::console() << value << '\n';

Async RPC variants (rpc_async_<method>(...)) are also available when a caller needs to start a request and collect the result later. For the async-handle API and same-process export details, see the RPC overview comments in include/sintra/sintra.h and tests/rpc_async_lifecycle_test.cpp.

Handle a remote exception

// Remote exceptions thrown inside append() propagate back across the process boundary.
try {
    sintra::console() << Remotely_accessible::rpc_append("instance", "Hi", 43) << '\n';
}
catch (const std::exception& e) {
    sintra::console() << "Remote RPC failed in callee: " << e.what() << '\n';
}

Observe abnormal exits from managed peers

auto crash_monitor = sintra::activate_slot(
    [](const sintra::Managed_process::terminated_abnormally& crash) {
        sintra::console()
            << "Process "
            << sintra::process_of(crash.sender_instance_id)
            << " crashed with status " << crash.status << '\n';
    },
    sintra::Typed_instance_id<sintra::Managed_process>(sintra::any_remote));

Lifeline process ownership

Sintra spawns managed processes with a lifeline pipe/handle. The child watches the read end; if the parent process exits or unpublishes, the pipe breaks and the child shuts down, then hard-exits after a timeout.

You can configure the policy per spawn:

sintra::Spawn_options options;
options.binary_path = binary_path;
options.lifetime.hard_exit_code = 99;
options.lifetime.hard_exit_timeout_ms = 100;
sintra::spawn_swarm_process(options);

Note: spawned processes require a lifeline by default. To launch a process manually into an existing swarm, create an external process invitation in the coordinator and pass External_process_invitation::sintra_args() to that process. See docs/reference/external_process_invitation.md and docs/process_lifecycle_notes.md.

Qt cursor sync example

For a Qt widget example that forwards Qt signals through sintra, see example/qt_basic/README.md.

Advanced topics

Threading model and barriers

Asynchronous message dispatch

Sintra uses dedicated reader threads to process incoming messages from shared memory rings. When a message arrives:

  1. A reader thread pulls the message from the ring buffer.
  2. The reader thread invokes the matching slot or RPC handler asynchronously.
  3. Handlers (and their post-handler continuations) execute on the reader thread, not the thread that published the message or called the barrier.

Concurrency reminder: Slot handlers that touch shared state must still synchronize with other threads in the process (via mutexes, atomics, etc.). The barriers described below coordinate when handlers run; they do not eliminate the need for thread-safe data structures.

Barrier semantics

sintra::barrier() coordinates progress across processes and comes in three flavors that trade off strength for cost. The template defaults to delivery_fence_t, so a plain barrier("name") is already stronger than a bare rendezvous. Prefer the lightest-weight barrier whose guarantees match the code's requirements:

  • Rendezvous barriers (barrier<sintra::rendezvous_t>(name)) simply ensure that every participant has reached the synchronization point. Messages published before the barrier might still be in flight or waiting to be handled, so this mode is appropriate when only aligned phase progression is needed - for example, coordinating the simultaneous start of a workload whose logic does not depend on the effects of earlier messages.

    Warning: Two peers can both reach the rendezvous while still missing each other's prior messages (A sends x, B sends y, both call rendezvous, neither is guaranteed to have received the other). Prefer delivery or processing fences when correctness depends on pre-barrier messages being observed.

  • Delivery-fence barriers (barrier(name) or barrier<sintra::delivery_fence_t>(name)) guarantee that all pre-barrier messages have been pulled off the shared-memory rings by each process's reader thread and are queued locally for handling, though the handlers may still be running. This is a local guarantee for the caller; it does not add a second rendezvous proving that peers have also drained their readers. The default delivery fence is suitable when the next step requires the complete set of incoming work to be staged locally, such as inspecting an inbox before taking action.

  • Processing-fence barriers (barrier<sintra::processing_fence_t>(name)) wait until every handler (and any continuations) for messages published before the barrier has finished executing. This mode is appropriate when subsequent logic must observe the completed side effects - for instance, reading shared state that earlier handlers updated or applying a configuration change only after all peers processed preparatory updates.

Delivery fences cost the same as rendezvous plus a short wait for readers to catch up. Processing fences add a single control message per process and an extra rendezvous to allow deterministic observation of handler side effects.

Barrier names beginning with _sintra_ are reserved for internal runtime coordination; using one fails fast.

// Wait until everyone reaches the same point and any prior messages are queued locally.
sintra::barrier("phase-1"); // delivery fence

// Later: ensure the side effects from earlier messages are visible before reading shared data.
sintra::barrier<sintra::processing_fence_t>("apply-updates");

Barrier rounds track processes, not calling threads. For a single (barrier_name, group_name, BarrierMode) round, each process should have one in-flight caller; when several threads in the same process must wait for the same phase, coordinate them with normal threading primitives and have one representative enter sintra::barrier.

Processing fences are best issued from control threads when the fence must include all pre-barrier handler work. A processing fence called from a request-reader handler or post-handler is reentrancy-aware: it skips the currently executing reader and may run queued post-handlers while waiting, so it does not wait for the current handler/post-handler or for messages queued behind it on that same request-reader stream.

Coordinated shutdown

Most multi-process Sintra programs follow this top-level shape:

sintra::init(argc, argv, process_a, process_b, process_c);

// branch-specific work
// ...

sintra::shutdown();

shutdown() is the recommended teardown call when every live participant is expected to finish together. It first performs the library's standard all-process shutdown handoff, making sure earlier interprocess work has been fully processed, and then tears down the local runtime.

If one process is intentionally departing while peers continue running, use sintra::leave(). If the coordinator must run a bounded final local action before raw teardown, use sintra::shutdown(options).

In other words:

  • Use shutdown() for the ordinary "all processes are done, now exit cleanly" case.
  • Use leave() for intentional unilateral departure.

Low-level lifecycle escape hatches and shutdown internals are documented in docs/barriers_and_shutdown.md and docs/process_lifecycle_notes.md.

Optional explicit type ids

Most users do not need explicit type ids as long as every process is built with the same toolchain and flags. When toolchains are mixed or there is a need to remove any doubt about type id stability, ids can be pinned explicitly for both transceivers and messages. The ids must remain unique and consistent across every process in the swarm.

struct Explicit_bus : sintra::Derived_transceiver<Explicit_bus>
{
    SINTRA_TYPE_ID(0x120)
    SINTRA_MESSAGE_EXPLICIT(ping, 0x121, int value)
};

Explicit_bus bus;
sintra::activate_slot([](const Explicit_bus::ping& msg) {
    sintra::console() << "ping value=" << msg.value << '\n';
});
bus.emit_global<Explicit_bus::ping>(73);

See example/sintra/sintra_example_7_explicit_type_ids.cpp for a full example.

Tests and continuous integration

The library includes a comprehensive test suite covering publish/subscribe, RPC, barriers, and crash recovery. Tests are controlled by tests/active_tests.txt.

cmake -B build -DSINTRA_BUILD_TESTS=ON
cmake --build build
cd tests && python3 run_tests.py --build-dir ../build --config Release

See TESTING.md for detailed documentation.

CI runs on Linux, macOS, Windows (GitHub Actions), and FreeBSD (Cirrus CI).

License

The source code is licensed under the Simplified BSD License.
