A field note from inside several FRTB SA implementations. Published on orbaos.com.
I have spent more time inside FRTB programmes than most people who are not required to file the resulting reports. Across them, the same pattern keeps appearing — and it is not what the consulting decks describe.
The decks describe a quantitative problem. They sketch sensitivity calculations, bucket aggregations, default risk charges, residual risk add-ons. They suggest the programme cost reflects the difficulty of the underlying mathematics.
That is not where the cost lives.
The cost lives in coordination replay.
The pattern
Here is what I mean by that.
At the moment a capital figure is challenged — by a regulator, by an internal model-risk function, by a head of trading who does not believe the number — the bank does not lack a calc engine. It has several. What it lacks is a defensible chain of which inputs were authoritative, which version of which calibration was used, and which transformation produced the figure that ended up on the report.
So the figure gets reconstructed by hand.
That reconstruction routinely takes three teams over four working days. Trader-side risk pulls a snapshot from a P&L cube. Middle office reconciles it against a regulatory-feed cut taken at a different cutoff. Quant produces a parameter file from a model library that has been patched twice since last quarter without a release note. The figure is then explained back to the questioner via email, with screenshots.
The next quarter, the same question comes back about the same desk. And it happens again.
The compute layer is doing the same calculation seven times because seven different people are not coordinating on which inputs are authoritative. The price is paid in headcount, in Excel reconciliation hours, and, most damagingly, in the credibility lost every time a reported figure cannot be defended without manual reconstruction.
This is not a fault of the calc engines. The vendors have done their job. The programme spend has not bought what the programmes were sold to buy.
The maths is the easy part
The core SA mathematics itself is surprisingly compact. The difficulty is not expressing the formulas; it is operationalising them safely inside a regulated institution where the inputs arrive from twelve places and no one is the canonical source.
Sensitivity scaling, within-bucket and across-bucket aggregation with negative clamps, the default risk charge, the residual risk add-on. None of it is hard once you have settled what delta actually is on this row, this morning, against this curve.
That settling is the bottleneck. And settling is a coordination problem, not a maths problem.
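To make "compact" concrete: here is roughly what the delta aggregation reduces to. This is a minimal sketch, not the tool's engine; it assumes the weighted sensitivities, per-bucket correlation matrices (rho) and cross-bucket correlations (gamma) arrive already settled, and the data shapes and names are mine for illustration.

```python
import math

def bucket_charge(ws, rho):
    """K_b: aggregate the weighted sensitivities ws within one bucket
    under the correlation matrix rho, clamping the quadratic form at
    zero before taking the root."""
    n = len(ws)
    q = sum(ws[k] * ws[l] * (1.0 if k == l else rho[k][l])
            for k in range(n) for l in range(n))
    return math.sqrt(max(q, 0.0))

def delta_charge(ws_by_bucket, rho_by_bucket, gamma):
    """Across-bucket aggregation. If the quadratic form goes negative,
    each net sensitivity S_b is clamped into [-K_b, K_b] and the form
    is recomputed (the negative-clamp fallback)."""
    ids = sorted(ws_by_bucket)
    K = {b: bucket_charge(ws_by_bucket[b], rho_by_bucket[b]) for b in ids}
    S = {b: sum(ws_by_bucket[b]) for b in ids}

    def form(s):
        return (sum(K[b] ** 2 for b in ids)
                + sum(gamma[b][c] * s[b] * s[c]
                      for b in ids for c in ids if b != c))

    q = form(S)
    if q < 0:
        S = {b: max(-K[b], min(S[b], K[b])) for b in ids}
        q = form(S)
    return math.sqrt(max(q, 0.0))
```

Everything difficult is upstream of those arguments.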
The three components of the coordination problem are predictable:
- Re-run economics. Each calc is performed several times by different teams across the bank because none can prove they are using the same inputs as the others. The marginal cost of one more recomputation is taken to be free; the cumulative cost is most of the programme.
- Explanation latency. When someone asks "why is capital up this week?" the answer is reconstructed manually on each occurrence, even when the question is the same one asked last quarter against the same desk.
- Authority fragmentation. No single store holds the authoritative versions of inputs, calibration, and engine. So every team caches its own. Every reconciliation between caches is a labour cost.
The architecture requirement that follows
Once you frame FRTB this way, the architecture requirements change.
The critical requirement is no longer merely "produce the capital figure." It becomes:
Prove which inputs were admissible. Prove which transformation produced the figure. Prove that the same chain can be replayed later without ambiguity.
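Stated as data, the requirement is small. A sketch of the shape, with hypothetical field names; the point is only that a figure is never stored alone, it is stored bound to the hashes of everything that produced it.

```python
import hashlib
import json

def sha(obj) -> str:
    """SHA-256 over a canonical JSON encoding: sorted keys, no whitespace."""
    blob = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode()).hexdigest()

def lineage_node(input_hashes, engine_version, calibration_hash, figure):
    """Bind a capital figure to what produced it. Replaying later means
    recomputing node_hash from the claimed parts; if any part has
    drifted, the hash comes out different."""
    body = {
        "inputs": sorted(input_hashes),
        "engine": engine_version,
        "calibration": calibration_hash,
        "figure": figure,
    }
    return {**body, "node_hash": sha(body)}
```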
What satisfies that requirement is closer to a governance substrate than to a pricing library. It is the part of the stack that the vendors have not delivered, because their incentives point elsewhere: they sell breadth of asset-class coverage and a global support footprint. None of that helps when a regulator asks why the capital figure on row 47 of last Tuesday's report is not the same as the one in this Tuesday's report.
The tool
I have been building a tool that approaches FRTB from this angle. It is a self-hosted FRTB SA calculator with a few specific properties:
- Risk-factor inputs are committed to the system only after the caller has been shown the canonical form and has echoed back the SHA-256 of that canonical form. Mismatches are first-class refusals, persisted as queryable data. (The commit check is sketched after this list.)
- Every capital figure produced is a node in a Merkle DAG anchored to its inputs and to the function identity (engine version, regime calibration, parameter-set hash) that produced it. If any of those drift, the root hash changes.
- The lineage is exportable as a single JSON file. A regulator or auditor runs an offline verifier — one stdlib-only Python file, no project imports, no network — against that JSON and replays the chain. Any single-byte mutation produces a refusal at the precise broken node.
- Excel exports embed the root hash and the per-row leaf hashes. In block mode, the renderer refuses to ship a row that lacks provenance.
- Refusals are persisted as queryable entities. "Show me every refused computational state on this desk last quarter" is one endpoint call, not a ticket.
- It runs in your own infrastructure. Trades, sensitivities, and capital figures never leave your network.
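The echo-back commit is the easiest of these to show in code. A minimal sketch under assumptions: canonicalisation here is sorted-key JSON, and `store` is a hypothetical persistence interface, not the tool's published API.

```python
import hashlib
import json

def canonical(payload: dict) -> bytes:
    """The canonical form the caller is shown before committing."""
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()

def commit_input(store, payload: dict, echoed_sha256: str) -> str:
    """Commit only if the caller echoes the SHA-256 of the canonical
    form. A mismatch is not logged and forgotten; it is persisted as a
    first-class refusal record, queryable later."""
    actual = hashlib.sha256(canonical(payload)).hexdigest()
    if echoed_sha256 != actual:
        store.persist_refusal(expected=actual, echoed=echoed_sha256)
        raise ValueError("hash echo mismatch: input refused")
    store.persist_input(actual, payload)
    return actual
```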
The maths is published. The verifier is published. The chain holds under tampering or it does not, and you can prove which.
What that does to the four-day reconstruction
A capital movement that previously required three teams and four working days to reconstruct becomes a deterministic replay against a saved lineage root.
The replay does not need access to the running system. The verifier and the saved JSON are sufficient. An auditor at their own desk, in their own environment, can confirm the figure is the figure that came out of the engine on the day, against the inputs claimed, under the calibration claimed.
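To make "one stdlib-only Python file" concrete, here is the shape of such a verifier. This is a sketch, not the published one, and it assumes the export layout from the lineage-node sketch above: a list of nodes in dependency order, each hashed over every field except its own node_hash, with committed inputs appearing as nodes of their own.

```python
#!/usr/bin/env python3
"""Offline lineage check: stdlib only, no project imports, no network."""
import hashlib
import json
import sys

def sha(obj) -> str:
    blob = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode()).hexdigest()

def verify(path):
    with open(path) as f:
        nodes = json.load(f)["nodes"]
    seen = set()
    for n in nodes:
        body = {k: v for k, v in n.items() if k != "node_hash"}
        if sha(body) != n["node_hash"]:
            sys.exit(f"REFUSED at node {n['node_hash'][:12]}: hash mismatch")
        for parent in n.get("inputs", []):
            if parent not in seen:
                sys.exit(f"REFUSED at node {n['node_hash'][:12]}: "
                         f"parent {parent[:12]} not yet verified")
        seen.add(n["node_hash"])
    print(f"chain holds: {len(nodes)} nodes verified")

if __name__ == "__main__":
    verify(sys.argv[1])
```

Any single-byte change in the export either fails to parse or stops the walk at the precise node whose hash no longer recomputes.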
The same answer next quarter is the same hash. If anything has changed — a parameter file, a row, an engine version — the hash is different and the diff identifies the change. There is no manual reconstruction.
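Quarter-over-quarter, that comparison is a set operation over node hashes rather than a meeting. A short sketch against the same assumed export shape:

```python
import json

def diff_exports(old_path, new_path):
    """Locate what changed between two lineage exports by comparing
    node hashes; any drifted parameter file, row, or engine version
    shows up as a removed/added pair."""
    def load(path):
        with open(path) as f:
            return {n["node_hash"]: n for n in json.load(f)["nodes"]}
    old, new = load(old_path), load(new_path)
    return {
        "removed": sorted(old.keys() - new.keys()),
        "added": sorted(new.keys() - old.keys()),
    }
```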
That is the operational unlock.
The wedge
This is a deliberately narrow positioning.
It is not a Murex replacement and is not pitched as one. The breadth-of-coverage problem is well solved. The replayability problem is not. The two compete on different axes.
Banks do not need another opaque engine. They need a way to defend capital figures without reconstructing them manually every quarter.
If that framing is recognisable from inside your programme, I would be interested in talking. Quietly, off any vendor list. The tool is private; access is granted on request after a short conversation.
Luigi Pascal · contact@orbaos.com