Architectural note

Federated operational graph architecture

This page describes the structural model behind Axonex: explicit operational graphs, distributed execution, domain-scoped resource surfaces, causal traceability, and network-native composition across independently operated runtime nodes.

The aim is to articulate a system class through concrete capabilities.

1. The problem it addresses

In most organisations, operational logic exists but is fragmented across applications, pipelines, dashboards, spreadsheets, human reasoning, and machine systems. The result is that the system’s actual structure is never represented in one place.

Even basic questions such as where a value came from often require manual tracing across disconnected tools and social memory.

2. The core shift

Traditional model

organisation
  ↓
systems and tools
  ↓
data outputs
  ↓
humans interpret and decide
  ↓
new system changes

Axonex model

explicit operational graph
  ↓
produces data and signals
  ↓
humans / machines / agents interact
  ↓
graph is modified and redeployed

The shift is from managing disconnected systems to modifying an explicit reasoning structure.

3. What the graph actually is

The graph is not a diagram. It is a running distributed system composed of data inputs, transformations, derived values, signal propagation, published outputs, and interaction points.

trade data → aggregation → exposure calculation → risk model join → published metric

A chain like this executes continuously, may span multiple runtime nodes, may cross domain boundaries, and is inspectable at each step.

Axonex graphs are not deployed onto infrastructure — they are constructed across it.
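The chain above can be sketched as composable stages. This is a minimal local sketch, assuming each stage is an in-process function; in Axonex these would be distributed graph nodes executing continuously, not one-shot local calls. All names and figures here are illustrative.

```python
# Illustrative trade data (hypothetical symbols and quantities).
trades = [
    {"symbol": "XYZ", "qty": 100, "price": 10.0},
    {"symbol": "XYZ", "qty": -40, "price": 11.0},
    {"symbol": "ABC", "qty": 50, "price": 20.0},
]

def aggregate(trades):
    # Aggregation stage: net quantity per symbol.
    net = {}
    for t in trades:
        net[t["symbol"]] = net.get(t["symbol"], 0) + t["qty"]
    return net

def exposure(net, prices):
    # Exposure calculation: notional value at current prices.
    return {s: qty * prices[s] for s, qty in net.items()}

def risk_join(exposures, risk_weights):
    # Risk model join: weight each exposure by a model parameter.
    return {s: e * risk_weights.get(s, 1.0) for s, e in exposures.items()}

prices = {"XYZ": 11.0, "ABC": 20.0}
weights = {"XYZ": 0.5, "ABC": 1.2}

# Published metric: the end of the chain.
published_metric = risk_join(exposure(aggregate(trades), prices), weights)
```

Each intermediate value (the net positions, the exposures) remains inspectable, which is the property the running graph provides at every step.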

4. Cognition is part of the graph

The graph is not limited to computation. It can include human and machine cognition as part of the operational loop.

Human interfaces

  • reading rendered outputs, metrics, or reports
  • interpreting results
  • entering decisions, adjustments, or overrides

graph → rendered output → human interprets → human inputs → graph continues

Machine interfaces

  • LLM reasoning steps
  • ML inference
  • automated classification or decision systems

graph → model / LLM → derived output → graph continues

This makes reasoning pathways more explicit and decisions more traceable. The graph becomes a substrate for interacting cognitive and computational processes.
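A cognitive step can be sketched as an ordinary node whose transform happens to be a judgement rather than a computation. This assumes the judgement is any callable (a human input handler, an LLM call, an ML classifier); the `judge` function and threshold below are purely illustrative stand-ins.

```python
def make_node(fn):
    # A node wraps a transform; cognitive and computational
    # nodes have the same shape in the graph.
    def node(value):
        return fn(value)
    return node

def judge(metric):
    # Stand-in for human or model judgement over a rendered metric.
    return "escalate" if metric > 100 else "ok"

render = make_node(lambda m: f"metric={m}")               # graph → rendered output
interpret = make_node(lambda s: float(s.split("=")[1]))   # reader extracts the value
decide = make_node(judge)                                 # decision re-enters the graph

decision = decide(interpret(render(250.0)))
```

Because the decision point is a node, it is traceable like any other step in the producing structure.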

5. Domains and the resource model

Axonex is composed of domains rather than a single global system. A domain acts as a trust boundary, ownership boundary, discovery scope, and performance boundary.

Domain
 ├── Domain Nodes   (discovery / registry layer)
 └── Runtime Nodes  (execution layer)

Domain nodes resolve resource locations and act as a control plane. Runtime nodes execute FlowFrames, hold live state, and publish or consume resources.

Resource identity

domain::resource_id
domain::resource_id::v1

Resources are domain-scoped, versioning is explicit, and there is no universal global identity. Equivalence is determined by consumers, not imposed by the system.
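The identifier forms above can be parsed mechanically. This sketch assumes a plain-string form `domain::resource_id` with an optional `::vN` version segment; the actual Axonex identifier grammar may be richer.

```python
def parse_resource(ref):
    # Split a domain-scoped reference into its parts.
    parts = ref.split("::")
    if len(parts) == 2:
        domain, resource_id = parts
        version = None          # version is explicit only when given
    elif len(parts) == 3:
        domain, resource_id, version = parts
    else:
        raise ValueError(f"malformed resource reference: {ref}")
    return {"domain": domain, "id": resource_id, "version": version}
```

Note that nothing here implies global identity: two domains may both publish `exposure`, and only consumers decide whether those are equivalent.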

6. Federation and recursive composition

Domains interact through explicit publication and controlled import. FlowFrames compose into larger structures, producing a graph of graphs across runtime nodes and domains.

FlowFrame A → produces resource → FlowFrame B consumes → produces new resource
domain A → domain B → domain C

The system is network-native. Different parts of a graph may run on different nodes, and execution composes across those boundaries without requiring a single global scheduler.

The graph is made out of the network.
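The publish/consume composition above can be sketched with an in-memory registry standing in for the domain layer. This assumes a FlowFrame reduces to a function from consumed resources to a published resource; real FlowFrames execute continuously across runtime nodes rather than running once.

```python
registry = {}  # stand-in for domain-level resource resolution

def publish(name, value):
    # Explicit publication of a domain-scoped resource.
    registry[name] = value

def consume(name):
    # Controlled import of a resource from another domain.
    return registry[name]

def flowframe_a():
    publish("domainA::raw", [1, 2, 3])

def flowframe_b():
    # FlowFrame B consumes A's resource and produces a new one.
    raw = consume("domainA::raw")
    publish("domainB::total", sum(raw))

flowframe_a()
flowframe_b()
```

The composed result is a graph of graphs: neither FlowFrame knows about the other's internals, only the published resource surface.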

7. Execution model

Computation operates on live in-memory state. State changes emit signals, which drive downstream execution and update derived state.

value change → signal → downstream execution → new value state
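The propagation step above can be sketched with a small reactive cell. This assumes synchronous, in-process notification of dependents on change; a real runtime would schedule propagation across nodes and networks.

```python
class Cell:
    """Live in-memory state that signals dependents on change."""

    def __init__(self, value=None):
        self.value = value
        self.subscribers = []  # downstream (cell, transform) pairs

    def derive(self, transform):
        # Create a derived cell driven by this cell's signals.
        out = Cell(transform(self.value) if self.value is not None else None)
        self.subscribers.append((out, transform))
        return out

    def set(self, value):
        self.value = value
        # Emit a signal: recompute each downstream cell in turn.
        for out, transform in self.subscribers:
            out.set(transform(value))

price = Cell(10.0)
doubled = price.derive(lambda v: v * 2)
price.set(21.0)  # value change → signal → downstream execution
```

After the change, the derived state reflects the new value without any polling step: the write itself drives downstream execution.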

In addition to value propagation, execution also carries a notion of source state. Data surfaces may move between conditions such as loading, live availability, and degraded or stale upstream status, with those conditions propagating through dependent graph structure.

source state change → propagation → downstream execution context → new source state

This allows downstream computation and decision logic to respond differently to current, transitional, and no-longer-live inputs as part of the execution model itself.
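Source-state propagation can be sketched as a combination rule over input conditions. This assumes three illustrative conditions (loading, live, stale) and a simple "worst input wins" rule for derived surfaces; the actual set of conditions and combination semantics in Axonex may differ.

```python
LOADING, LIVE, STALE = "loading", "live", "stale"

def combine(states):
    # A derived surface is only as healthy as its worst input:
    # any stale input degrades it, any loading input delays it.
    if STALE in states:
        return STALE
    if LOADING in states:
        return LOADING
    return LIVE

healthy = combine([LIVE, LIVE])
degraded = combine([LIVE, STALE])
```

Downstream logic can branch on this condition alongside the value itself, e.g. suppressing a decision while an input is no longer live.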

The use of live in-memory state does not imply that the system is purely ephemeral. State may be sourced from, or reflected into, external systems through adapters, including databases, logs, and other persistent stores.

This allows execution to operate over state that can be seeded, persisted, and reconstituted, while treating all sources—whether continuously updating or relatively static—as part of a unified, real-time execution model.

A runtime may initialise from a prior state, continue independently, and transition between data providers, including substitution of upstream sources as they become available, without altering the execution model itself.

The execution model remains consistent across live, restored, and externally sourced state, preserving operational behaviour under changing conditions.

8. Interaction model

Axonex graph interfaces operate through a pipeline that carries two orthogonal channels:

upstream channel   (control / modulation)
downstream channel (data / transformation)

In the simplest case, only the downstream channel is used, and the interaction behaves as a standard continuously updating stream with no upstream modulation. When present, upstream modulation integrates into the same execution model rather than introducing a separate interaction pattern.

Conceptually:

  upstream channel
         │
         ↓
  [ adapter node ]   (e.g. database, service, or external graph)
         │
         ↓
 downstream channel → computation → downstream state

The downstream channel carries data through the pipeline and is transformed by nodes.

The upstream channel carries control or modulation toward the root of the pipeline and is not transformed by nodes.

The adapter node defines the boundary between the graph and an external system. It does not perform computation itself; it represents a coupling between the graph and an external domain such as a database, service, or another graph.
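The two channels can be sketched around a single adapter. This assumes the upstream channel is a mutable control value read at the adapter and the downstream channel is the data flowing out of it; the external system here is a plain dict, and all names are illustrative.

```python
class Adapter:
    """Boundary between the graph and an external system."""

    def __init__(self, store):
        self.store = store    # external system (here: a dict)
        self.control = None   # upstream channel: current control value

    def modulate(self, key):
        # Upstream channel: control flows toward the root,
        # untransformed by intermediate nodes.
        self.control = key

    def downstream(self):
        # Downstream channel: data flows out of the adapter
        # and may be transformed by subsequent nodes.
        return self.store.get(self.control)

adapter = Adapter({"a": 1, "b": 2})
adapter.modulate("b")                # upstream modulation
result = adapter.downstream() * 10   # downstream transformation
```

Note the asymmetry the text describes: the control value passes through to the adapter unchanged, while the data it selects is transformed on the way out.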

8.1 Upstream modulation

Upstream inputs are conveyed through the upstream channel and modulate downstream computation as part of normal execution.

These inputs are live and may change over time.

For example:

  upstream channel (vehicle plates)
         │
         ↓
 [ vehicle adapter ]   (e.g. database interface)
         │
         ↓
 downstream channel (vehicle records) → computation

As upstream inputs change, the adapter reflects those changes into the external system, and downstream output updates as new data is produced.

Downstream outputs are therefore continuously updated projections of current upstream state and internal computation.

input changes → output updates

Upstream inputs may remain static, or be omitted entirely, in which case the interaction behaves as a continuously updating stream without upstream influence.
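The vehicle example can be sketched as a projection over the current upstream input. This assumes the upstream channel carries a set of plates and the downstream output is recomputed whenever that set changes; the records and plate values are hypothetical.

```python
# Stand-in for the external vehicle database behind the adapter.
records = {
    "AB12": {"plate": "AB12", "owner": "x"},
    "CD34": {"plate": "CD34", "owner": "y"},
}

def project(plates):
    # Downstream output: records matching the current upstream plates.
    return [records[p] for p in plates if p in records]

out1 = project(["AB12"])          # initial upstream input
out2 = project(["AB12", "CD34"])  # upstream input changes → output updates
```

The downstream output is never a fixed result set; it is whatever the current upstream state projects to, which is the "input changes → output updates" behaviour above.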

8.2 Compositional interaction

Because both channels exist within the same pipeline structure, pipelines can be composed, allowing interaction patterns such as:

graph A → graph B → graph C
           ↑       ↓
           └───────┘

where pipelines combine to form larger systems while preserving both downstream transformation and upstream control.

These composition patterns are expressed directly in AxL (the graph definition language), where pipelines define both downstream transformation and upstream modulation within a single structure.

8.3 Implication

Interaction in Axonex is live, bidirectional, and compositional: downstream data transformation and upstream modulation are carried in a single pipeline structure. This enables systems where outputs are continuously updated projections of current inputs, and where interactive pipelines combine into larger structures without a separate interaction layer.

9. Runtime ephemerality and recovery

Runtime nodes are ephemeral. They come online, register, serve resources, go offline, and may be replaced. Resources are therefore resolved dynamically against active providers.

consumer → query domain → active provider

If a provider fails, consumers re-resolve against the domain. Redundancy is possible where multiple providers expose the same resource surface, but it is explicit rather than assumed.
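The re-resolution behaviour can be sketched with a minimal domain registry. This assumes the domain node tracks active providers per resource and consumers query it again when a provider disappears; class and node names are illustrative.

```python
class Domain:
    """Discovery/registry layer: maps resources to active providers."""

    def __init__(self):
        self.providers = {}  # resource_id -> list of active node names

    def register(self, resource_id, node):
        # Runtime node comes online and serves a resource.
        self.providers.setdefault(resource_id, []).append(node)

    def deregister(self, resource_id, node):
        # Runtime node goes offline.
        self.providers[resource_id].remove(node)

    def resolve(self, resource_id):
        # Consumers resolve dynamically against active providers.
        active = self.providers.get(resource_id, [])
        if not active:
            raise LookupError(f"no active provider for {resource_id}")
        return active[0]

domain = Domain()
domain.register("metrics", "node-1")
domain.register("metrics", "node-2")    # explicit redundancy
first = domain.resolve("metrics")
domain.deregister("metrics", "node-1")  # provider fails
second = domain.resolve("metrics")      # consumer re-resolves
```

Redundancy here is explicit: a second provider exists only because it registered the same resource surface, exactly as the text describes.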

10. Causal traceability and system philosophy

A defining capability is the ability to ask what a value is and how it came to be, and then inspect upstream resources, transformations, intermediate values, and decision points as part of the actual producing structure.

Axonex assumes that systems drift, nodes fail, dependencies change, and operational structures evolve. It is designed to provide visibility, traceability, and adaptability rather than pretending to eliminate change.

observe → modify → deploy → observe

Over time, this enables a recursive loop in which humans or machine systems can inspect the graph, alter it, redeploy it, and observe the consequences in the same structural medium.

11. Performance envelope

Axonex is designed as a performance-oriented execution system, intended to operate under real-time, stateful conditions where latency, throughput, and continuous change are fundamental constraints.

At the local runtime level, this includes workloads in the class of real-time financial data processing and analytics, where computation must respond continuously to high-frequency updates while maintaining deterministic behaviour and structural traceability.

At the system level, the architecture is intended to extend across increasing operational scope without changing its underlying execution model. The same graph structure may span from individual components to larger compositions across runtime nodes, domains, and federated systems.

The objective is not scale in isolation, but the ability to maintain coherent execution as system scope expands, while preserving visibility into causality, state, and behaviour.

Adaptability, inspectability, and graph-level interaction are therefore treated as properties of the execution model itself, not as layers added on top. These capabilities are developed alongside a runtime designed to sustain high-throughput, low-latency operation under real conditions, rather than as trade-offs against it.

Contact

For aligned discussion:

mitch@trefen.net