Semantiva framework

An HPC-ready, domain-driven, type-oriented framework that delivers semantic transparency to advanced scientific computing.

Semantiva Runtime - Transport & Execution Model

1. Big Picture

┌──────────────────────────────────────────────────────────────────────────┐
│                        Semantiva Execution Plane                         │
├──────────────────────────────────────────────────────────────────────────┤
│  Orchestrator Layer (control‑plane)                                      │
│  ─────────────────────────────────────────────────────────────────────── │
│  • LocalSemantivaOrchestrator        – single‑process DAG runner         │
│  • QueueSemantivaOrchestrator        – FIFO job dispatcher (master)      │
│                                                                          │
│  Executor Layer (data‑plane, in‑node parallelism)                        │
│  ─────────────────────────────────────────────────────────────────────── │
│  • SequentialSemantivaExecutor       – sync / default                    │
│  • (pluggable ThreadPool / Ray / GPU in roadmap)                         │
│                                                                          │
│  Transport Layer (message fabric)                                        │
│  ─────────────────────────────────────────────────────────────────────── │
│  • InMemorySemantivaTransport        – dev / unit tests                  │
│  • (pluggable NATS / Kafka / gRPC in roadmap)                            │
└──────────────────────────────────────────────────────────────────────────┘

Why three layers?

Clear separation of concerns lets us swap any one layer (e.g., replace the in-memory transport with NATS) without touching the others.


2. Core Interfaces

Layer          Key Interface           Essence
─────────────  ──────────────────────  ───────────────────────────────────────────────
Transport      SemantivaTransport      publish(channel, BaseDataType, ContextType)
                                       subscribe(pattern) -> Subscription
Executor       SemantivaExecutor       submit(callable, *args) -> Future
Orchestrator   SemantivaOrchestrator   execute(nodes, data, context, transport, logger)

They live in semantiva/execution_tools/transport/, executor/, orchestrator/.
Concrete defaults (InMemorySemantivaTransport, SequentialSemantivaExecutor, LocalSemantivaOrchestrator) give out‑of‑the‑box behaviour with zero infra.
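A minimal sketch of the three interfaces as Python ABCs; the method names follow the table above, while the docstrings and the simplified SequentialSemantivaExecutor body are illustrative assumptions, not the framework's actual source:

```python
from abc import ABC, abstractmethod
from concurrent.futures import Future


class SemantivaTransport(ABC):
    """Message fabric: typed publish/subscribe on named channels."""

    @abstractmethod
    def publish(self, channel, data, context):
        """Publish a (BaseDataType, ContextType) pair on a channel."""

    @abstractmethod
    def subscribe(self, pattern):
        """Return a Subscription yielding (data, context) messages."""


class SemantivaExecutor(ABC):
    """Data-plane: run callables, possibly in parallel, returning Futures."""

    @abstractmethod
    def submit(self, fn, *args) -> Future:
        """Schedule fn(*args) and return a Future for its result."""


class SemantivaOrchestrator(ABC):
    """Control-plane: walk the pipeline DAG and drive node execution."""

    @abstractmethod
    def execute(self, nodes, data, context, transport, logger):
        """Run the node graph, publishing intermediate results on transport."""


class SequentialSemantivaExecutor(SemantivaExecutor):
    """Simplified stand-in for the sync default: run inline, wrap in a Future."""

    def submit(self, fn, *args) -> Future:
        future = Future()
        future.set_result(fn(*args))  # execute immediately, no parallelism
        return future
```

Because every concrete implementation honours the same contract, swapping a layer is a constructor change, not an API change.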


3. New Capabilities Unlocked

• Message-driven pipelines (SemantivaTransport in Pipeline._process()) – every node publish/subscribe is audit-ready; easy to add NATS for cross-host comms.
• In-node parallelism (SemantivaExecutor inside PipelineNode) – slice a tensor across CPU cores today; switch to GPU tomorrow without code changes.
• Job queue & workers (QueueSemantivaOrchestrator + worker_loop()) – fire thousands of independent pipelines across a farm; Futures return full results.
• Per-process logging & timers (_setup_log() helper + existing stop_watch) – master/worker logs (master_<pid>.log, worker_<pid>.log) plus node/pipeline timings.
• Pluggable infrastructure (3-layer split) – drop-in Kafka, Ray, or Kubernetes Jobs without API churn.
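The in-node parallelism capability can be illustrated with a thread-pool executor built on the standard library. ThreadPoolSemantivaExecutor is a hypothetical name for the roadmap item, and the slicing scheme is an assumption for illustration:

```python
from concurrent.futures import ThreadPoolExecutor, Future


class ThreadPoolSemantivaExecutor:
    """Hypothetical pluggable executor: same submit() contract as the
    sequential default, but work fans out across a thread pool."""

    def __init__(self, max_workers=4):
        self._pool = ThreadPoolExecutor(max_workers=max_workers)

    def submit(self, fn, *args) -> Future:
        return self._pool.submit(fn, *args)


def process_slice(chunk):
    """Pretend node computation on one slice of the input."""
    return [x * 2 for x in chunk]


# Slice a "tensor" (here a plain list) and scatter the slices:
executor = ThreadPoolSemantivaExecutor()
data = list(range(8))
chunks = [data[i:i + 2] for i in range(0, len(data), 2)]
futures = [executor.submit(process_slice, c) for c in chunks]

# Gather preserves slice order because futures are kept in a list:
result = [x for f in futures for x in f.result()]  # → [0, 2, 4, ..., 14]
```

A node that calls only submit() stays oblivious to whether its slices run sequentially, on threads, or (eventually) on a GPU backend.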

4. How It Runs — Example Flow

  1. Enqueue a job:

     from semantiva.execution_tools.job_queue.queue_orchestrator import QueueSemantivaOrchestrator
     # transport import path assumed from the package layout in section 2
     from semantiva.execution_tools.transport import InMemorySemantivaTransport

     master = QueueSemantivaOrchestrator(InMemorySemantivaTransport())
     future = master.enqueue(pipeline_cfg, return_future=True)

  2. Master publishes jobs.<id>.cfg on the transport.
  3. Workers subscribe, build a Pipeline, and call Pipeline.process().
  4. Each node executes via its SemantivaExecutor; outputs are published on the transport.
  5. The worker publishes jobs.<id>.status with the full (data, context) once done.
  6. The Future in the master resolves → the result is immediately available to the caller.
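The master/worker handshake above can be sketched with a toy queue-backed transport. ToyTransport is a stand-in for InMemorySemantivaTransport (not the real API), and the channel names follow the jobs.<id>.* convention described above:

```python
import queue
from collections import defaultdict


class ToyTransport:
    """Minimal stand-in for an in-memory transport: one queue per channel."""

    def __init__(self):
        self._channels = defaultdict(queue.Queue)

    def publish(self, channel, data, context):
        self._channels[channel].put((data, context))

    def subscribe(self, channel):
        return self._channels[channel]  # consumers call .get() on it


transport = ToyTransport()
job_id = "42"

# Master publishes the pipeline config:
transport.publish(f"jobs.{job_id}.cfg", {"nodes": ["double"]}, {})

# A worker consumes it, "runs" the pipeline, and reports back:
cfg, ctx = transport.subscribe(f"jobs.{job_id}.cfg").get()
result = [x * 2 for x in [1, 2, 3]]  # pretend node execution
transport.publish(f"jobs.{job_id}.status", result, ctx)

# The master's side resolves with the final (data, context) pair:
data, context = transport.subscribe(f"jobs.{job_id}.status").get()
```

Swapping ToyTransport for a NATS-backed implementation would move the same handshake across hosts without changing the master or worker logic.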

All messages are typed (BaseDataType, ContextType) for semantic introspection.


5. Deployment Options

Scale           Transport        Orchestrator   Executors              Notes
──────────────  ───────────────  ─────────────  ─────────────────────  ─────────────────────────────────────
Local dev / CI  In-Memory        Local Orch     Sequential             No external services.
Single VM       In-Memory        Queue Orch     ThreadPool             Parallel batch jobs.
Multi-node      NATS (roadmap)   Queue Orch     Sequential / Ray       True distribution; add NATS JetStream.
GPU farm        NATS             Queue Orch     GPUSemantivaExecutor   Heavy ML inference at scale.

6. Long‑Term Maintainability