
coroTracer: Cross-Language, Zero-Copy Coroutine Observability


UDSWakeupMechanics.gif

Why I built this: while debugging one of my own M:N schedulers, I ran into an especially nasty failure mode. Under heavy load, throughput would suddenly collapse to zero, but ASAN and TSAN stayed silent because nothing was corrupt in the usual memory-safety sense. It turned out to be a classic lost wakeup: the coroutine had become logically unreachable, but traditional tooling was terrible at surfacing that kind of state-machine break. coroTracer was built for exactly this class of problem.

coroTracer is an out-of-process coroutine trace collector.
It is designed for M:N coroutine schedulers, with a very specific goal:

  • capture coroutine state transitions
  • minimize interference with the target process
  • emit reusable raw traces
  • provide a reliable low-level foundation for later offline analysis and database export

It is not positioned as an APM product or an online analysis platform.
At the moment, this repository is focused on two things:

  1. safely collecting coroutine state into JSONL
  2. exporting an existing JSONL trace into SQLite / MySQL / PostgreSQL / CSV

The core safety properties of the collection protocol have also been modeled and proved in Lean 4; see the Lean 4 Proof section below.

Project status: the project is usable end to end; the collection, persistence, and export pipeline works as a closed loop. The one obvious remaining limitation is that collection capacity is still a fixed finite coroutine count rather than a dynamically growing capacity. Updates will continue, but at a significantly slower pace than before. This release focused on data format conversion and export and did not touch the core collection path. Codex genuinely improved iteration speed a lot here, which helped this release land much faster.


Architecture

+-----------------------+                               +-----------------------+
|   Target Application  |                               |    Go Tracer Engine   |
|  (C++, Rust, Zig...)  |                               |                       |
|                       |       [ Lock-Free SHM ]       |                       |
|  +-----------------+  |      +-----------------+      |  +-----------------+  |
|  |  cTP SDK Probe  |==Write=>| StationData [N] |<=Read===| Harvester Loop  |  |
|  +-----------------+  |      +-----------------+      |  +-----------------+  |
|                       |               ^               |                       |
|       [ Socket ]      |---(Wakeup)---UDS---(Listen)---|      [ File I/O ]     |
+-----------------------+                               +-----------------------+
                                                                        |
                                                                        v
                                                               +------------------+
                                                               |  trace_output    |
                                                               |     .jsonl       |
                                                               +------------------+
                                                                        |
                                                                        v
                                                          +-----------------------------------+
                                                          | SQLite / MySQL / PostgreSQL / CSV |
                                                          +-----------------------------------+

Current Capabilities

1. Trace Collection Mode

The Go engine is responsible for:

  • creating shared memory
  • creating the Unix Domain Socket
  • launching the target process
  • continuously harvesting coroutine events from shared memory
  • writing the result as JSONL

Each JSONL line looks roughly like this:

{"probe_id":123,"tid":456,"addr":"0x0000000000000000","seq":2,"is_active":true,"ts":123456789}

Those fields correspond to the source-level TraceRecord:

  • probe_id: unique coroutine probe identifier
  • tid: real OS thread ID
  • addr: suspension address or related coroutine address
  • seq: slot sequence number
  • is_active: whether the coroutine is currently active
  • ts: timestamp
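For downstream tooling, the fields above can be decoded directly from each JSONL line. The sketch below is an illustrative Go struct for trace consumers, assuming only the field names shown in the sample line; it is not the engine's internal TraceRecord type.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TraceRecord mirrors the JSONL fields shown above. Illustrative
// sketch for trace consumers, not the engine's internal type.
type TraceRecord struct {
	ProbeID  uint64 `json:"probe_id"` // unique coroutine probe identifier
	TID      uint64 `json:"tid"`      // real OS thread ID
	Addr     string `json:"addr"`     // suspension or related coroutine address
	Seq      uint64 `json:"seq"`      // slot sequence number
	IsActive bool   `json:"is_active"`
	TS       uint64 `json:"ts"`       // timestamp
}

// parseLine decodes one JSONL line into a TraceRecord.
func parseLine(line string) (TraceRecord, error) {
	var r TraceRecord
	err := json.Unmarshal([]byte(line), &r)
	return r, err
}

func main() {
	line := `{"probe_id":123,"tid":456,"addr":"0x0000000000000000","seq":2,"is_active":true,"ts":123456789}`
	r, err := parseLine(line)
	if err != nil {
		panic(err)
	}
	fmt.Printf("probe=%d tid=%d seq=%d active=%v\n", r.ProbeID, r.TID, r.Seq, r.IsActive)
}
```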

2. Export Mode

The repository now includes an export/ directory that supports converting an existing JSONL trace into:

  • a SQLite database
  • a MySQL database
  • a PostgreSQL database
  • a DataFrame-friendly CSV file

This is explicitly a second-stage export from an existing JSONL trace.
It is not "trace and write to a database at the same time."

3. SDKs

The repository currently ships two SDKs: a C++20 header-only SDK and a framework-free Rust poll-model SDK.

Their responsibilities are:

  • attaching to shared memory
  • attaching to the UDS wakeup channel
  • writing coroutine state on suspend / resume
  • obeying the cTP memory contract

Core Mechanism

The central design idea is simple:

physically separate the execution plane from the observation plane.

The target process only writes state into shared memory.
The Go collector harvests those states asynchronously from outside the process, instead of pushing complicated tracing logic back into the target.

1. Shared Memory Protocol (cTP)

The protocol-level document ships in the repository.

There are three essential ideas:

  1. GlobalHeader and StationData are forced into fixed layouts
  2. Epoch is aligned to a 64-byte cache line
  3. the writer and reader coordinate through a lock-free seq discipline

2. The C++ Write Protocol

The writer does not simply blast fields into memory without structure.
It follows a strict order:

  1. first make seq odd to mark "write in progress"
  2. then write the payload
  3. finally make seq even to mark "write complete"

This corresponds to PromiseMixin::write_trace in SDK/c++/coroTracer.h.

3. The Go Read Protocol

The Go reader also does not trust a slot just because data is present.
It follows four steps:

  1. read seq once
  2. only if seq is even and newer than local lastSeen does it copy the payload
  3. read seq again after the copy
  4. only if the two seq values match does it write JSONL

This is implemented in the Go engine's harvester loop.
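The four steps above can be sketched as a single harvest function. This is a simplified model of the discipline, not the engine's code; a write helper is included so the round-trip can be exercised:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Slot is a minimal SeqLock slot mirroring the harvest discipline
// described above; the real implementation lives in the Go engine.
type Slot struct {
	seq     atomic.Uint64
	payload uint64
}

// write is the writer side: odd, payload, even.
func (s *Slot) write(v uint64) {
	s.seq.Add(1) // odd: write in progress
	atomic.StoreUint64(&s.payload, v)
	s.seq.Add(1) // even: write complete
}

// harvest returns (payload, newSeq, ok). ok is false when the slot is
// mid-write, not newer than lastSeen, or a torn read is detected.
func (s *Slot) harvest(lastSeen uint64) (uint64, uint64, bool) {
	s1 := s.seq.Load()               // 1. read seq once
	if s1%2 != 0 || s1 <= lastSeen { // 2. only even and newer than lastSeen
		return 0, lastSeen, false
	}
	v := atomic.LoadUint64(&s.payload) //    copy the payload
	s2 := s.seq.Load()                 // 3. read seq again after the copy
	if s1 != s2 {                      // 4. mismatch => torn read, discard
		return 0, lastSeen, false
	}
	return v, s1, true
}

func main() {
	var s Slot
	s.write(7)
	if v, seq, ok := s.harvest(0); ok {
		fmt.Println("harvested", v, "at seq", seq)
	}
}
```

Note that a failed harvest never advances lastSeen, so a torn or in-progress slot is simply retried on the next scan rather than committed to the log.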

4. Smart UDS Wakeup

To avoid wasting CPU cycles when traffic is low:

  • the Go side sets TracerSleeping = 1 while idle
  • once the C++ side finishes a write and notices the tracer is sleeping, it sends a 1-byte UDS wakeup signal

This avoids syscall storms under heavy throughput while also avoiding a pure busy-spin under light throughput.
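A minimal sketch of that handshake, with a hypothetical socket path and an in-process flag standing in for the shared-memory TracerSleeping field:

```go
package main

import (
	"fmt"
	"net"
	"os"
	"path/filepath"
	"sync/atomic"
)

// tracerSleeping stands in for the shared-memory TracerSleeping flag.
var tracerSleeping atomic.Int32

// wakeTracer is the writer side: pay the syscall only when the tracer
// is actually asleep, so hot write paths stay syscall-free.
func wakeTracer(sock string) error {
	if tracerSleeping.Load() == 0 {
		return nil // tracer is busy polling; no wakeup needed
	}
	c, err := net.Dial("unix", sock)
	if err != nil {
		return err
	}
	defer c.Close()
	_, err = c.Write([]byte{1}) // 1-byte wakeup signal
	return err
}

func main() {
	sock := filepath.Join(os.TempDir(), "corotracer_demo.sock") // hypothetical path
	os.Remove(sock)
	ln, err := net.Listen("unix", sock)
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	tracerSleeping.Store(1) // collector marks itself idle
	go func() {
		if err := wakeTracer(sock); err != nil {
			panic(err)
		}
	}()

	conn, err := ln.Accept() // blocks until the 1-byte signal arrives
	if err != nil {
		panic(err)
	}
	buf := make([]byte, 1)
	conn.Read(buf)
	conn.Close()
	fmt.Println("woken by signal byte:", buf[0])
}
```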


Quick Start

1. Build

go build -o coroTracer main.go

2. Trace a Target Program

./coroTracer -n 256 -cmd "./your_target_app" -out trace.jsonl

This does the following:

  • preallocates 256 stations
  • launches ./your_target_app
  • writes the trace into trace.jsonl

One important constraint:

  • -cmd mode is collection-only
  • it does not export into a database in the same run

So collection and export are two separate stages.

3. Integrate the C++ SDK

The target program inherits IPC configuration through environment variables.

The smallest possible integration looks like this:

#include "coroTracer.h"

int main() {
    corotracer::InitTracer();
    // ... start your scheduler
}

For coroutine promises, you can inherit from PromiseMixin:

struct promise_type : public corotracer::PromiseMixin {
    // your business logic
};

The SDK records the state transitions associated with await_suspend and await_resume.


Exporting JSONL

Export mode only works on an already existing JSONL file.
It cannot be used together with -cmd.

So this is allowed:

./coroTracer -export sqlite -in trace.jsonl

But this is not:

./coroTracer -cmd "./your_target_app" -export sqlite

1. Export to SQLite

./coroTracer -export sqlite -in trace.jsonl -sqlite-out trace.sqlite

Notes:

  • by default the output filename is derived as <input>.sqlite
  • runtime requires a local sqlite3 binary
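The default-name rule can be illustrated in a few lines. The unit tests mention a deriveOutputPath helper in main; this version is only a sketch of the documented rule (<input>.jsonl becomes <input>.sqlite), not the actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// deriveOutputPath sketches the default-name rule: trace.jsonl with
// ext "sqlite" yields trace.sqlite. Hypothetical implementation; the
// real helper lives in the main package.
func deriveOutputPath(input, ext string) string {
	base := strings.TrimSuffix(input, ".jsonl")
	return base + "." + ext
}

func main() {
	fmt.Println(deriveOutputPath("trace.jsonl", "sqlite"))
	fmt.Println(deriveOutputPath("run1.jsonl", "csv"))
}
```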

2. Export to CSV (DataFrame-Friendly)

./coroTracer -export csv -in trace.jsonl -csv-out trace.csv

That CSV can be consumed directly by:

  • pandas
  • polars
  • DuckDB
  • R

3. Export to MySQL

./coroTracer \
  -export mysql \
  -in trace.jsonl \
  -db-host 127.0.0.1 \
  -db-port 3306 \
  -db-user root \
  -db-password your_password \
  -db-name coro_tracer \
  -db-table coro_trace_events

If you use a Unix socket, you can also do:

./coroTracer \
  -export mysql \
  -in trace.jsonl \
  -db-user root \
  -db-password your_password \
  -mysql-socket /tmp/mysql.sock

Notes:

  • runtime requires a local mysql CLI
  • the exporter creates the database and table automatically, then inserts the data

4. Export to PostgreSQL

./coroTracer \
  -export postgresql \
  -in trace.jsonl \
  -db-host 127.0.0.1 \
  -db-port 5432 \
  -db-user postgres \
  -db-password your_password \
  -db-name coro_tracer \
  -db-table coro_trace_events \
  -pg-sslmode disable

Notes:

  • runtime requires a local psql CLI
  • the exporter checks whether the target database exists and creates it when needed
  • by default it uses postgres as the maintenance database; you can override that with -pg-maintenance-db

5. Common Export Flags

The current export-related flags are:

  • -export
  • -in
  • -sqlite-out
  • -csv-out
  • -db-cli
  • -db-host
  • -db-port
  • -db-user
  • -db-password
  • -db-name
  • -db-table
  • -mysql-socket
  • -pg-maintenance-db
  • -pg-sslmode

In particular:

  • -db-password is intended for the user's own database password
  • -db-cli overrides the default CLI command name
    • MySQL defaults to mysql
    • PostgreSQL defaults to psql

For the full parameter reference, see the flag definitions in main.go.


Lean 4 Proof

One of the more important aspects of this project is that the collection protocol is not justified by intuition alone.
It has been formally modeled.

A good reading order is:

  1. proof/proof.lean
  2. proof.md
  3. proof_en.md

The proof covers the following core properties:

  • Go does not commit half-written dirty data into the log
  • if the writer leaves a short non-interfering window, Go is guaranteed to complete one successful harvest

The main source-level correspondence is the writer in SDK/c++/coroTracer.h and the SeqLock harvest path in the Go engine.


Current Boundaries

To avoid confusion, here are the current project boundaries.

1. This Repository Is Not an Analysis Platform

What it provides today is:

  • low-level collection
  • JSONL persistence
  • export into databases / CSV

It no longer follows the old built-in "report generator / HTML analyzer" direction.

2. The Current Focus Is the C++20 / Rust SDKs

Although the protocol itself is language-agnostic, the repository currently ships official SDKs for:

  • C++20 coroutine integration
  • Rust Future::poll integration

Zig and C are still possible in principle because the foundation only depends on:

  • mmap
  • fixed ABI layout
  • atomic read/write discipline

3. Runtime External Dependencies

If you use export mode, the current implementation depends on local CLI tools:

  • SQLite: sqlite3
  • MySQL: mysql
  • PostgreSQL: psql

This is intentional. It keeps the Go dependency set light and avoids pulling in extra database drivers.


Repository Layout

The most important files and directories right now are the Go engine (main.go, engine/, structure/, export/), the SDKs (SDK/c++/, SDK/rust/), the Lean 4 proof (proof/), and the test suite (tests/).


Testing

The test suite is fully automated. A single shell script covers all layers:

bash tests/run_tests.sh

What it runs

Phase Content
1 Go unit tests — go test -race ./... across all packages
2 Rust SDK unit tests — cargo test in SDK/rust/
3 Build the Go tracer binary
4 Build the Rust integration tracee
5 Rust tracee unit tests — cargo test in tests/rust_tracee/
6 Integration run — Go engine + Rust tracee under coroTracer, 12 async scenarios
7 JSONL output invariant checks (SeqLock even-seq, addr format, both event types, nanosecond clock)
8 CSV export round-trip
9 SQLite export round-trip (skipped automatically if sqlite3 is not in PATH)
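Phase 7's invariant checks can be sketched in a few lines. This Go version models two of the listed invariants (even SeqLock seq, fixed addr format); it is illustrative only, since the real checks live in tests/run_tests.sh:

```go
package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

// record holds just the fields inspected by this sketch.
type record struct {
	Addr string `json:"addr"`
	Seq  uint64 `json:"seq"`
}

// addrRe matches the 0x-prefixed 16-hex-digit address format used in
// the sample JSONL line earlier in this README.
var addrRe = regexp.MustCompile(`^0x[0-9a-f]{16}$`)

// checkLine verifies that a committed record carries an even SeqLock
// sequence and a well-formed address.
func checkLine(line string) error {
	var r record
	if err := json.Unmarshal([]byte(line), &r); err != nil {
		return err
	}
	if r.Seq%2 != 0 {
		return fmt.Errorf("odd seq %d leaked into the log", r.Seq)
	}
	if !addrRe.MatchString(r.Addr) {
		return fmt.Errorf("bad addr format %q", r.Addr)
	}
	return nil
}

func main() {
	line := `{"probe_id":1,"tid":2,"addr":"0x00007f0000000000","seq":4,"is_active":false,"ts":1700000000000000000}`
	if err := checkLine(line); err != nil {
		panic(err)
	}
	fmt.Println("invariants hold")
}
```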

Go unit test coverage

  • structure/ — GlobalHeader / Epoch / StationData sizes and offsets, all SeqLock Harvest paths (empty, single write, no-repeat, odd-seq skip, torn-read discard, ring wrap)
  • engine/ — TracerEngine init, shm file size, doScan with/without data, clamped allocation count
  • export/ — StreamJSONL (all edge cases), CSV export, SQLite export, schema SQL, all escape / quote helpers
  • main — deriveOutputPath, resolveExportInput

Rust unit test coverage

  • SDK/rust/ (3 tests) — protocol layout compile-time assertions, TracedFuture poll semantics, Send bounds
  • tests/rust_tracee/ (14 tests) — PollTrace lifecycle (new, pending/resume cycle, idempotent mark-dead, drop), TracedFuture output preservation and pending semantics, Send bounds, multi-thread concurrent futures

Integration scenarios (Rust tracee)

12 async scenarios are run end-to-end under the Go engine:

  1. Single sleep
  2. 20 concurrent tasks
  3. Multiple suspensions in one future
  4. Oneshot channel producer/consumer
  5. mpsc channel (N producers, 1 consumer)
  6. Barrier rendezvous
  7. yield_now suspensions
  8. Mixed active/suspend events
  9. Stress — 100 concurrent tasks
  10. Nested future chain
  11. PollTrace low-level API
  12. TracedFuture dropped before completion

Dependencies

Requirement Needed for
Go toolchain Go build + unit tests
Rust / cargo Rust SDK tests + tracee build
sqlite3 binary Phase 9 (SQLite export) — optional

Output artefacts

All logs and generated files land in tests/output/:

tests/output/
  trace.jsonl          # raw captured events
  trace.csv            # CSV export
  trace.sqlite         # SQLite export (if sqlite3 available)
  go_unit_tests.log
  rust_sdk_tests.log
  rust_tracee_tests.log
  integration_run.log

Contact

lixia.chat@outlook.com

About

A cross-language, zero-copy coroutine observability framework based on the cTP shared-memory protocol, utilizing lock-free ring buffers for ultra-low overhead state tracing.
