An HPC-ready, domain-driven, type-oriented framework that delivers semantic transparency to advanced scientific computing.
```
┌──────────────────────────────────────────────────────────────────────┐
│                      Semantiva Execution Plane                       │
├──────────────────────────────────────────────────────────────────────┤
│ Orchestrator Layer (control‑plane)                                   │
│ ──────────────────────────────────────────────────────────────────── │
│ • LocalSemantivaOrchestrator – single‑process DAG runner             │
│ • QueueSemantivaOrchestrator – FIFO job dispatcher (master)          │
│                                                                      │
│ Executor Layer (data‑plane, in‑node parallelism)                     │
│ ──────────────────────────────────────────────────────────────────── │
│ • SequentialSemantivaExecutor – sync / default                       │
│ • (pluggable ThreadPool / Ray / GPU in roadmap)                      │
│                                                                      │
│ Transport Layer (message fabric)                                     │
│ ──────────────────────────────────────────────────────────────────── │
│ • InMemorySemantivaTransport – dev / unit tests                      │
│ • (pluggable NATS / Kafka / gRPC in roadmap)                         │
└──────────────────────────────────────────────────────────────────────┘
```
Clear separation lets us swap any layer without touching the others.
Layer | Key Interface | Essence |
---|---|---|
Transport | `SemantivaTransport` | `publish(channel, BaseDataType, ContextType)`; `subscribe(pattern) -> Subscription` |
Executor | `SemantivaExecutor` | `submit(callable, *args) -> Future` |
Orchestrator | `SemantivaOrchestrator` | `execute(nodes, data, context, transport, logger)` |
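As a rough sketch (not the actual library source), the three interfaces above can be modelled as Python protocols; the class and method names mirror the table, while the signatures and the `Subscription` shape are illustrative assumptions:

```python
from concurrent.futures import Future
from typing import Any, Callable, Iterator, Protocol, Tuple, runtime_checkable


@runtime_checkable
class Subscription(Protocol):
    """Handle returned by subscribe(); iterates over (data, context) messages."""

    def __iter__(self) -> Iterator[Tuple[Any, Any]]: ...


@runtime_checkable
class SemantivaTransport(Protocol):
    """Message fabric: typed publish/subscribe on named channels."""

    def publish(self, channel: str, data: Any, context: Any) -> None: ...
    def subscribe(self, pattern: str) -> Subscription: ...


@runtime_checkable
class SemantivaExecutor(Protocol):
    """Data plane: submit work items, receive Futures."""

    def submit(self, fn: Callable[..., Any], *args: Any) -> Future: ...


@runtime_checkable
class SemantivaOrchestrator(Protocol):
    """Control plane: walk the node DAG, pushing results over the transport."""

    def execute(self, nodes, data, context, transport, logger) -> None: ...
```

Any class with the matching methods satisfies the protocol, which is exactly what makes each layer swappable.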
They live in `semantiva/execution_tools/transport/`, `executor/`, and `orchestrator/`. Concrete defaults (`InMemorySemantivaTransport`, `SequentialSemantivaExecutor`, `LocalSemantivaOrchestrator`) give out‑of‑the‑box behaviour with zero infra.
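To make the "zero infra" composition concrete, here is a minimal, self-contained sketch of how the three defaults fit together. These are toy stand-ins written for this example, not the real Semantiva classes:

```python
import collections
from concurrent.futures import Future


class InMemoryTransport:
    """Toy stand-in for InMemorySemantivaTransport: channel -> message list."""

    def __init__(self):
        self.channels = collections.defaultdict(list)

    def publish(self, channel, data, context):
        self.channels[channel].append((data, context))

    def subscribe(self, pattern):
        return self.channels[pattern]


class SequentialExecutor:
    """Toy stand-in for SequentialSemantivaExecutor: run inline, wrap in a Future."""

    def submit(self, fn, *args):
        fut = Future()
        fut.set_result(fn(*args))
        return fut


class LocalOrchestrator:
    """Toy stand-in for LocalSemantivaOrchestrator: run nodes in order."""

    def __init__(self, executor):
        self.executor = executor

    def execute(self, nodes, data, context, transport, logger=None):
        for i, node in enumerate(nodes):
            data = self.executor.submit(node, data, context).result()
            transport.publish(f"node.{i}.out", data, context)  # audit trail
        return data, context


# Wire the defaults together and run a two-node pipeline, no external services.
transport = InMemoryTransport()
orchestrator = LocalOrchestrator(SequentialExecutor())
nodes = [lambda d, c: d + 1, lambda d, c: d * 2]
result, _ = orchestrator.execute(nodes, data=1, context={}, transport=transport)
print(result)  # (1 + 1) * 2 = 4
```

Because the orchestrator only talks to the executor and transport through their interfaces, any of the three stand-ins could be replaced without touching the others.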
Capability | Enabled by | Impact |
---|---|---|
Message‑driven pipelines | `SemantivaTransport` in `Pipeline._process()` | Every node publish/subscribe is audit‑ready; easy to add NATS for cross‑host comms. |
In‑node parallelism | `SemantivaExecutor` inside `PipelineNode` | Slice a tensor across CPU cores today; switch to GPU tomorrow without code change. |
Job queue & workers | `QueueSemantivaOrchestrator` + `worker_loop()` | Fire thousands of independent pipelines across a farm; Futures return full results. |
Per‑process logging & timers | `_setup_log()` helper + existing `stop_watch` | Master/worker logs (`master_<pid>.log`, `worker_<pid>.log`) + node / pipeline timings. |
Pluggable infrastructure | 3‑layer split | Drop‑in Kafka, Ray, or Kubernetes Jobs without API churn. |
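The in‑node parallelism row can be sketched with the standard library alone; here `ThreadPoolExecutor` stands in for a pluggable `SemantivaExecutor`, and the chunking scheme is just an illustration:

```python
from concurrent.futures import ThreadPoolExecutor


def process_chunk(chunk):
    """Per-slice work a node would hand to its executor."""
    return [x * x for x in chunk]


data = list(range(8))
# Slice the input into 4 chunks of 2 elements each.
chunks = [data[i:i + 2] for i in range(0, len(data), 2)]

# The node submits each slice through its executor; swapping in a GPU-backed
# executor would change only this object, not the node code.
with ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(process_chunk, c) for c in chunks]
    result = [x for f in futures for x in f.result()]

print(result)  # squares of 0..7, in order
```

Collecting results from the futures in submission order keeps the output deterministic even though the chunks run concurrently.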
```python
from semantiva.execution_tools.job_queue.queue_orchestrator import QueueSemantivaOrchestrator
# InMemorySemantivaTransport lives under semantiva/execution_tools/transport/
# (exact import path may differ).
from semantiva.execution_tools.transport import InMemorySemantivaTransport

# The master process dispatches queued jobs over the chosen transport.
master = QueueSemantivaOrchestrator(InMemorySemantivaTransport())

# enqueue() returns a Future that resolves with the full (data, context) result.
future = master.enqueue(pipeline_cfg, return_future=True)
```
Job flow: the master publishes each job's pipeline config on `jobs.<id>.cfg` on the transport. A worker picks it up, builds the `Pipeline`, and calls `Pipeline.process()` with its `SemantivaExecutor`; outputs are published on `jobs.<id>.status` with the full `(data, context)` once done. All messages are typed (`BaseDataType`, `ContextType`) for semantic introspection.
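This enqueue/worker lifecycle can be sketched with standard-library stand-ins: a `queue.Queue` plays the transport, a thread plays `worker_loop()`, and the pipeline run is faked by doubling the input. None of this is the real Semantiva code:

```python
import queue
import threading
from concurrent.futures import Future

bus = {"jobs": queue.Queue()}  # stands in for the transport's jobs.<id>.cfg channel
futures = {}                   # job id -> Future resolved when status is published


def worker_loop(stop):
    """Toy worker: receive a job config, 'run' the pipeline, publish status."""
    while not stop.is_set():
        try:
            job_id, cfg = bus["jobs"].get(timeout=0.1)
        except queue.Empty:
            continue
        # Fake Pipeline.process(): real workers would build and run the pipeline.
        data, context = cfg["input"] * 2, {"job": job_id}
        futures[job_id].set_result((data, context))  # plays jobs.<id>.status
        bus["jobs"].task_done()


def enqueue(job_id, cfg):
    """Toy master: publish the config and hand back a Future for the result."""
    futures[job_id] = Future()
    bus["jobs"].put((job_id, cfg))
    return futures[job_id]


stop = threading.Event()
threading.Thread(target=worker_loop, args=(stop,), daemon=True).start()

fut = enqueue("42", {"input": 21})
data, context = fut.result(timeout=5)  # blocks until the worker publishes status
stop.set()
print(data, context)
```

The key property mirrored here is that the master never blocks on job execution: it hands out Futures immediately and lets workers resolve them asynchronously.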
Scale | Transport | Orchestrator | Executors | Notes |
---|---|---|---|---|
Local dev / CI | In‑Memory | Local Orch | Sequential | No external services. |
Single VM | In‑Memory | Queue Orch | ThreadPool | Parallel batch jobs. |
Multi‑node | NATS (roadmap) | Queue Orch | Sequential / Ray | True distribution; add NATS JetStream. |
GPU farm | NATS | Queue Orch | GPUSemantivaExecutor | Heavy ML inference at scale. |
Existing code keeps calling `Pipeline.process()` unchanged; nothing breaks. Typed (`BaseDataType`, `ContextType`) messages are used everywhere, preserving semantic introspection at every scale.