# ARCHITECTURE.md — SatSim System Overview

This document gives the **system-level architecture** for SatSim. It is intended to provide a complete “sight picture” for anyone implementing a subproject (e.g., the Geometry/RF Engine) so they understand how their component fits into the larger simulator.

---

## 1) Purpose and guiding idea

SatSim is a **hybrid satellite networking simulator** that combines:

1) A swappable **Geometry/RF/Link-Budget Engine** (physics + propagation + link feasibility)
2) A **packet-level discrete-event simulation lane (OMNeT++/INET)** (scale + protocol behavior)
3) A **real SDN emulation lane (Mininet/OVS)** (controller-in-the-loop + real Linux networking)
4) An **Orchestrator** that provides a single scenario/timebase and keeps all parts consistent

The fundamental design choice is that SatSim is **layered and composable**: we reuse mature simulators/emulators and treat satellite physics as an external service with a stable interface.

---

## 2) Key design decisions (why this looks the way it does)

### 2.1 Why two lanes (simulation vs. emulation)

We intentionally run two different lanes because they answer different questions:

- **OMNeT++/INET lane (discrete-event simulation)**
  - Best for: scaling up to many nodes, protocol studies, routing and congestion behavior, reproducibility.
  - Not best for: running real SDN controllers and real Linux TCP stacks.
- **Mininet/OVS lane (network emulation)**
  - Best for: real SDN controllers (ONOS/Ryu), real forwarding behavior (OpenFlow/OVS), real apps/traffic tools.
  - Not best for: scaling to thousands of nodes with full protocol stacks.

“Piping packets” between the lanes is possible but usually not worth it early, because it introduces hard time-synchronization problems (DES time vs. wall-clock time) and packet-bridging complexity. Instead, we connect both lanes to the same **state oracle** (the Geo/RF engine) through the Orchestrator.
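The “same state oracle” idea above can be illustrated with a minimal sketch. All class and field names here (`LinkDelta`, `OmnetAdapter`, `MininetAdapter`, `distribute`) are hypothetical, not the real SatSim API: the point is only that both lanes implement one adapter interface and consume one authoritative stream.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class LinkDelta:
    """One time-indexed link update (field names are illustrative)."""
    tick_index: int
    link_id: str
    up: bool
    delay_ms: float
    capacity_mbps: float
    loss: float

class LaneAdapter(Protocol):
    def apply(self, delta: LinkDelta) -> None: ...

class OmnetAdapter:
    """Would translate deltas into INET channel-parameter updates."""
    def __init__(self) -> None:
        self.applied: list[LinkDelta] = []

    def apply(self, delta: LinkDelta) -> None:
        self.applied.append(delta)

class MininetAdapter:
    """Would translate deltas into tc/netem commands and interface toggles."""
    def __init__(self) -> None:
        self.applied: list[LinkDelta] = []

    def apply(self, delta: LinkDelta) -> None:
        self.applied.append(delta)

def distribute(stream, lanes) -> None:
    # Orchestrator role: fan the single authoritative stream out to every lane,
    # so both lanes always see the same physics "truth".
    for delta in stream:
        for lane in lanes:
            lane.apply(delta)
```

The design point is that neither lane talks to the other or to the engine directly; each only consumes deltas handed to it by the Orchestrator.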
### 2.2 Where the lanes *do* meet today

They meet at:

- **Scenario definition** (same nodes, same constraints, same time window)
- **LinkState/Event timeline** (same “truth” about which links exist and their properties)
- **Metrics and artifacts** (comparable outputs; shared logging/PCAP strategy)

Optionally, they also meet via:

- **Shared SDN decision logic** (same ONOS/Ryu app used to compute routes, then applied in both lanes through adapters)

### 2.3 Future “stacking” (OMNeT++ feeding Mininet)

In the future, OMNeT++ may “feed” Mininet in two practical ways:

1) **Trace-driven replay (recommended future path)**
   - OMNeT++ generates a curated set of traces (topology/failure schedules, traffic demands, baseline routing decisions).
   - Mininet replays those traces in real time to validate controller behavior under identical conditions.
2) **Hard co-simulation / packet bridging (advanced, optional)**
   - Some nodes simulated in OMNeT++, others emulated in Mininet at the same time.
   - Requires strict time coupling and a gateway that transforms/time-shifts packets.
   - Not a v1 target.

### 2.4 Locked decisions (2026-02-18)

- **Tick authority:** `StreamLinkDeltas` is the control-plane source of truth for lane updates.
- **Events contract:** event streaming is retained, but aligned to the same requested `dt` and `selector`, and each event carries `tick_index`.
- **Orchestrator behavior:** streaming-driven execution is canonical; any scheduler is pacing-only.
- **Scenario translation:** the orchestrator must fail fast when it cannot produce a valid Geo/RF `ScenarioSpec`.
- **Python tooling:** Python workflows use `uv`; OMNeT++/INET workflows use `opp_env`.

---

## 3) Top-level components

### 3.1 Geometry/RF/Link-Budget Engine (black box, replaceable)

**Role:** The authoritative “physics layer” that translates orbital/propagation reality into network-usable link state.
**Key properties**

- Replaceable implementation (Skyfield + ITU-R today; could be STK import or other later)
- Stable interface (the rest of SatSim depends only on its API)
- Produces time-indexed:
  - Link feasibility (up/down)
  - Link properties (delay/capacity/loss proxies)
  - Discrete events (link up/down, handover, failures if modeled)

**Location:** `geomrf-engine/`

---

### 3.2 Orchestrator (system conductor)

**Role:** Owns the simulation lifecycle and timebase. It is the “brain” that coordinates all lanes.

**Responsibilities**

- Load scenario config → create/initialize the Geo/RF engine scenario
- Choose execution mode:
  - OMNeT-only, Mininet-only, or both in parallel
- Drive execution pacing:
  - offline apply-fast or real-time apply-paced, while consuming authoritative engine stream ticks
- Consume the Geo/RF LinkState stream and distribute it to:
  - OMNeT adapter
  - Mininet adapter
  - logging/metrics
- Collect artifacts (PCAPs, time-series metrics, configs, run manifests)
- Provide reproducible run IDs and version stamping

**Location:** `orchestrator/`

---

### 3.3 OMNeT++/INET Lane (packet-level discrete-event)

**Role:** Packet-level simulation of protocols, queuing, routing, and traffic at scale.

**Responsibilities**

- Build the network node models (routers, hosts, queues) using INET components
- Apply dynamic link updates (delay/capacity/loss/up-down) based on Geo/RF output
- Run deterministic experiments rapidly (sweeps)
- Export artifacts:
  - logs + metrics
  - optional PCAP outputs (where supported)

**Custom SatSim additions**

- A lightweight **LinkState Adapter Module** that subscribes to orchestrator/Geo output
- A mechanism to apply link changes at simulation timestamps

**Environment and install management**

- Use `opp_env` as the standard way to install/manage OMNeT++ and INET.
- Avoid ad hoc/manual OMNeT++/INET installs in project workflows.
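To make the LinkState Adapter Module’s job concrete: the real adapter would be a C++ module inside OMNeT++ (see §7.2), but the mapping it performs can be prototyped in Python. The input field names (`up`, `delay_ms`, `capacity_mbps`, `loss`) are assumptions about the delta schema; the output keys follow INET’s `DatarateChannel` parameters (`disabled`, `delay`, `datarate`, `per`).

```python
def delta_to_channel_params(delta: dict) -> dict:
    """Map one link delta (field names assumed) onto INET-style channel settings."""
    if not delta["up"]:
        # A down link is modeled by disabling the channel rather than by extreme
        # parameter values, so a later reconnection restores clean settings.
        return {"disabled": True}
    return {
        "disabled": False,
        "delay": f'{delta["delay_ms"]}ms',            # propagation delay
        "datarate": f'{delta["capacity_mbps"]}Mbps',  # channel datarate
        "per": delta["loss"],                         # packet error rate proxy
    }
```

The corresponding C++ module would apply these values to the channel objects at the simulation timestamp carried by the delta, which is the “mechanism to apply link changes at simulation timestamps” listed above.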
**Location:** `lanes/omnet/`

---

### 3.4 Mininet/OVS Lane (SDN emulation)

**Role:** Real SDN controller + real forwarding plane under dynamic link conditions.

**Responsibilities**

- Build an emulated topology with Mininet (or Containernet)
- Use OVS as the dataplane switch/router substrate
- Run a real SDN controller (ONOS or Ryu)
- Apply dynamic link shaping based on Geo/RF output:
  - `tc`/`netem` for delay/loss/jitter
  - `tbf`/`htb` for rate control
  - interface up/down to emulate link drops
- Generate traffic using real tools:
  - iperf3, D-ITG, SIPp, tcpreplay, custom apps

**Location:** `lanes/mininet/`

---

### 3.5 Observability, artifacts, and visualization

**Role:** Make runs inspectable, comparable, and reproducible.

**Artifacts**

- Scenario config snapshot + run manifest (versions, seeds, git SHAs)
- LinkState/Event traces (optional export)
- Metrics time series (throughput/delay/loss/path changes)
- PCAP captures (Mininet tcpdump; OMNeT if enabled)

**Tools**

- Prometheus + Grafana for dashboards
- Wireshark for PCAP analysis

**Location:** `observability/` and `artifacts/`

---

## 4) System boundaries and data ownership

### 4.1 The Geo/RF engine owns *physics truth*

- It is the source of truth for which links can exist and their physical/network properties.
- Other components must not invent geometry/RF state; they only consume the engine’s output.

### 4.2 The Orchestrator owns *time and execution*

- It defines run-window requests, pacing mode, and synchronization rules.
- For v1/v1.1, tick production comes from the Geo/RF stream output rather than orchestrator-generated ticks.
- It routes updates to the lanes and standardizes artifacts.

### 4.3 Each lane owns *packet/control behavior*

- OMNeT owns packet-level behavior inside the DES.
- Mininet owns real SDN and Linux networking behavior.

---

## 5) Core data flows (end-to-end)

### 5.1 Initialization flow

1. User provides `ScenarioConfig` (YAML/JSON).
2. Orchestrator validates the config and creates a new run ID.
3. Orchestrator calls the Geo/RF engine:
   - `CreateScenario` (returns a scenario ref)
4. Orchestrator initializes the selected lane(s):
   - OMNeT: compile/load model, start run
   - Mininet: build topology, start controller
5. Orchestrator subscribes to Geo/RF streaming output for LinkState and Events.

### 5.2 Runtime (parallel lane mode)

At each time tick:

1. Geo/RF produces a `LinkDeltaBatch` + optional events.
2. Orchestrator receives it and distributes:
   - OMNeT adapter: update channel/link state in simulator time
   - Mininet adapter: apply tc/netem shaping and link toggles
   - (optional) Event recorder: store the aligned `EngineEvent` stream for analysis/observability
   - Observability: record metrics and store link traces
3. Lanes generate traffic and produce metrics/PCAPs.

### 5.3 Completion flow

1. Orchestrator stops lane processes.
2. Orchestrator closes the Geo/RF scenario.
3. All artifacts are written under the run ID.

---

## 6) Timebase and execution modes

SatSim supports multiple execution modes, controlled by the Orchestrator:

### Mode A — OMNeT-only (offline DES)

- Orchestrator consumes Geo/RF ticks and applies them to OMNeT without wall-clock pacing.
- Highest scalability and repeatability.

### Mode B — Mininet-only (real-time emulation)

- Orchestrator consumes Geo/RF ticks and applies wall-clock pacing while updating Mininet shaping.
- Best for SDN/controller realism and app-level testing.

### Mode C — Parallel (OMNeT + Mininet simultaneously)

- Both lanes consume the same LinkState stream.
- Used to compare “simulated protocol outcomes” vs. “real controller outcomes” under the same link dynamics.

### Mode D — Trace-driven replay (future/optional)

- Geo/RF and/or OMNeT exports a trace.
- Mininet replays the trace deterministically.
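The per-tick distribution of §5.2 and the pacing distinction between Modes A and B can be sketched in one loop. This is a sketch under assumed names (`tick_stream`, `apply_batch`), not the real orchestrator API; the key point is that pacing is the only difference — the engine stream remains the tick authority in both modes, per §2.4.

```python
import time

def run_ticks(tick_stream, lanes, dt_seconds: float, paced: bool) -> None:
    """Consume authoritative engine ticks and fan them out to lane adapters.

    paced=False -> Mode A style: apply as fast as ticks arrive (offline DES).
    paced=True  -> Mode B/C style: sleep so wall-clock time tracks tick time.
    """
    start = time.monotonic()
    for i, batch in enumerate(tick_stream):
        if paced:
            # Wait until the wall clock reaches this tick's nominal time.
            remaining = (start + i * dt_seconds) - time.monotonic()
            if remaining > 0:
                time.sleep(remaining)
        for lane in lanes:
            lane.apply_batch(batch)  # adapter translates to lane-specific updates
```

Note that the scheduler here is pacing-only, as locked in §2.4: it never generates ticks of its own, it only decides when to apply the ones the engine stream produced.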
---

## 7) Interfaces between components (high-level)

### 7.1 Geo/RF Engine interface (v1)

- gRPC service, Protobuf messages
- Scenario lifecycle + streaming link deltas/events
- Output is **NetworkView** link properties (up/down, delay, capacity, loss proxy)
- Optional debug scalars for validation (SNR margin, elevation, range)
- Event-stream alignment target: events use the same requested window/selector/`dt` semantics as deltas and expose `tick_index`.

### 7.2 Orchestrator ↔ OMNeT interface

- OMNeT subscribes to orchestrator updates via:
  - a gRPC client inside a C++ adapter module, OR
  - file/trace ingestion for offline runs
- Applies updates to INET channel/link parameters and toggles connectivity

### 7.3 Orchestrator ↔ Mininet interface

- Orchestrator controls Mininet via:
  - Python Mininet API calls
  - Linux `tc` and interface management commands
- The SDN controller is external (ONOS/Ryu), connected in the standard Mininet way

---

## 8) Reproducibility rules

Each run must record:

- ScenarioConfig snapshot
- Seeds
- Engine versions:
  - Geo/RF engine version + schema version
  - orchestrator version
  - OMNeT/INET versions and model git SHA
  - `opp_env` environment definition/metadata used for OMNeT/INET
  - controller version and app git SHA
- LinkState trace hash (if stored)
- Toolchain/container image tags (if containerized)

---

## 9) Suggested monorepo layout

```
satsim/
  ARCHITECTURE.md
  orchestrator/
    ...
  subprojects/
    geomrf-engine/
      ARCHITECTURE.md      # subproject-specific (the streaming API spec lives here)
      proto/
      src/
      tests/
    lanes/
      omnet/
        models/
        adapter/
        scripts/
      mininet/
        topo/
        driver/
        controllers/
        scripts/
  observability/
    grafana/
    prometheus/
    dashboards/
  artifacts/
    runs/
      <run_id>/
        scenario.yaml
        manifest.json
        linkstate.parquet  (optional)
        metrics/
        pcaps/
        logs/
```

---

## 10) What an implementer of the Geo/RF engine must know

- The Geo/RF engine must be treated as **the physics oracle**.
- Its output must be:
  - time-indexed
  - sparse (selector-driven)
  - stable and deterministic
  - expressed in consistent units
- The Orchestrator will use it in both:
  - offline sampling (for OMNeT)
  - real-time streaming (for Mininet)
- The lanes do not need to know how link budgets are computed—only how to consume the streaming LinkDelta/Event outputs.

---

## 11) Roadmap hooks (explicit future extensions)

- Add a richer PHY view (optional fields) without breaking NetworkView consumers.
- Add trace import/replay for deterministic Mininet runs.
- Add a “shared SDN decision interface” so ONOS/Ryu path computation can be applied inside OMNeT.
- Add advanced co-simulation only if required (packet bridging).
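As a concrete reading of the §10 output contract, here is a consumer-side validation sketch. The field names and dict-based schema are assumptions for illustration, not the real Protobuf messages; the checks mirror the bullets above: time-indexed (non-decreasing `tick_index`), sane ranges for consistently-united values, and run-to-run determinism.

```python
REQUIRED_FIELDS = {"tick_index", "link_id", "up", "delay_ms", "capacity_mbps", "loss"}

def validate_delta_stream(deltas) -> bool:
    """Check a consumed delta stream against the §10 contract (schema assumed)."""
    last_tick = -1
    for d in deltas:
        missing = REQUIRED_FIELDS - d.keys()
        if missing:
            raise ValueError(f"delta missing fields: {missing}")
        if d["tick_index"] < last_tick:
            raise ValueError("tick_index must be non-decreasing (time-indexed output)")
        last_tick = d["tick_index"]
        if d["delay_ms"] < 0 or not (0.0 <= d["loss"] <= 1.0):
            raise ValueError("link property out of range (inconsistent units?)")
    return True

def check_determinism(run_engine) -> bool:
    # "Stable and deterministic": two runs of the same scenario must match exactly.
    return list(run_engine()) == list(run_engine())
```

A check like this could live in the orchestrator or in engine conformance tests, so a replacement engine implementation (§3.1) can be validated against the same contract before it is swapped in.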