Validated Performance Data

Real hardware. Real workloads. All results independently reproducible.

AriaOS — Governance Layer

132.6/100 Composite benchmark score (Jetson AGX Orin)
47ms P95 response latency
99.97% Uptime — 800/800 endpoints validated
2,847 RPS Requests per second capacity

Memory Bus Performance

Metric Target Measured Status
Message Latency (P50) <1ms 0.4ms PASS
Message Latency (P95) <5ms 2.1ms PASS
Throughput >10K msg/s 47K msg/s PASS
Memory Overhead <64MB 42MB PASS
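The P50/P95 figures above are percentiles over a set of latency samples. A minimal sketch of how such percentiles can be computed (nearest-rank method; the function name and synthetic sample data are illustrative, not part of the measurement harness):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a sample set; p in (0, 100]."""
    s = sorted(samples)
    k = math.ceil(p / 100 * len(s)) - 1
    return s[max(k, 0)]

# Example: P50 and P95 over synthetic 1..100 ms latencies
latencies = list(range(1, 101))
p50 = percentile(latencies, 50)  # 50
p95 = percentile(latencies, 95)  # 95
```

The nearest-rank method reports an actual observed sample rather than an interpolated value, which is the conservative choice for latency SLO reporting.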

Autonomous Recovery

Metric Value
Detection Time <100ms
Isolation Time <500ms
Full Recovery <2 seconds
Data Loss Events 0
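The recovery pipeline above follows a detect → isolate → recover sequence. A hypothetical sketch of that control loop (the function and its callbacks are illustrative placeholders, not the actual AriaOS implementation; the 50 ms poll interval is an assumption chosen to stay within the 100 ms detection budget):

```python
import time

def recovery_cycle(check_alive, isolate, recover, poll_interval=0.05):
    """Poll a health check; on failure, isolate the fault then recover.
    Returns the time spent in isolation + recovery (detection is bounded
    separately by the poll interval)."""
    while check_alive():
        time.sleep(poll_interval)  # detection budget: < 100 ms
    t0 = time.monotonic()
    isolate()   # target: < 500 ms
    recover()   # target: < 2 s
    return time.monotonic() - t0
```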

Validation Environments

Jetson AGX Orin 64GB

132.6/100 composite score. Hardware validation and inference benchmarks on NVIDIA's edge AI platform.

Apple Silicon (14+ days)

Sustained fault injection, concurrency, governance enforcement, and audit integrity on M-series SoC.

HP ProLiant G8 Chaos

Ubuntu 24.04 — network degradation, resource exhaustion, recovery behavior, governance continuity.

HammerIO — Compression Engine

8,537 MB/s GPU decompress (in-memory)
4,258 MB/s GPU decompress (10GB roundtrip)
5.8x GPU decompress speedup vs CPU zstd-1 (10GB roundtrip test)

In-Memory Performance (Raw Throughput)

Method Processor Compress Decompress Integrity
nvCOMP LZ4 GPU 705 MB/s 8,537 MB/s PASS
nvCOMP Snappy GPU 1,615 MB/s 5,756 MB/s PASS
zstd-1 CPU 1,747 MB/s 2,001 MB/s PASS

Roundtrip Results (10GB with Disk I/O)

Method Processor Compress Decompress Ratio Integrity
nvCOMP LZ4 GPU 517 MB/s 4,258 MB/s 1.98x PASS
zstd-1 CPU 1,094 MB/s 733 MB/s 2.00x PASS
zstd-3 CPU 1,014 MB/s 741 MB/s 2.00x PASS
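The 5.8x headline figure can be reproduced directly from the roundtrip table's decompress columns:

```python
# Decompress throughputs from the 10GB roundtrip table (MB/s)
gpu_lz4_decompress = 4258   # nvCOMP LZ4, GPU
cpu_zstd1_decompress = 733  # zstd-1, CPU

speedup = gpu_lz4_decompress / cpu_zstd1_decompress
print(f"{speedup:.1f}x")  # 5.8x
```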

ModelSafe — Model Checkpoint Manager

Platform: Jetson AGX Orin 64GB | Engine: nvCOMP GPU LZ4 via HammerIO | 2026-04-06

3.6s 7B model restore time
7.9s 13B model restore time
391 MB/s Peak decompress throughput

Model Size Original Compressed Ratio Compress Decompress Restore Time Integrity
1GB (7B equiv) 1.0 GB 875.1 MB 1.17x 208 MB/s 283 MB/s 3.615s PASS
3GB (13B equiv) 3.0 GB 2.4 GB 1.24x 237 MB/s 391 MB/s 7.861s PASS
7GB (70B equiv) 7.0 GB 5.7 GB 1.23x 289 MB/s 378 MB/s 18.979s PASS

Note: Synthetic model weight data — float32 arrays with realistic distribution. Ratios reflect actual model weight compressibility. Restore time includes SHA-256 verification. Powered by HammerIO nvCOMP GPU LZ4.
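The restore times above include SHA-256 verification of the decompressed checkpoint. A minimal sketch of that verification step, assuming a streamed read of the restored file (the function name and chunk size are illustrative, not ModelSafe's actual API):

```python
import hashlib

def verify_checkpoint(path, expected_sha256, chunk=1 << 20):
    """Stream the restored checkpoint in 1 MiB chunks and compare
    its SHA-256 digest against the recorded value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest() == expected_sha256
```

Streaming in fixed-size chunks keeps memory use constant regardless of checkpoint size, which matters for the 7 GB case above.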

AriaOS: Forge — Domain Model Training

Platform: Jetson AGX Orin 64GB | Base: Qwen2.5-Coder-3B | Method: LoRA fp16 | Dataset: OpenStack Q&A | 2026-04-06

~10h Full pipeline — end to end, no cloud
19.7 tok/s Fine-tuned model inference speed
33% Quality improvement over base 7B model

A/B Comparison — Fine-tuned vs Base Model

Model Parameters Quality Score Tokens/sec TTFT Result
ariaos-forge:latest 3B (LoRA fine-tuned) 80/100 19.7 4,467ms WINNER
qwen2.5-coder:7b 7B (base, untuned) 60/100 10.3 29,853ms

Note: Quality scoring via PraetorianMind Model A/B Compare — heuristic evaluation only. Speed advantage reflects 3B vs 7B parameter count. Training dataset: 623 samples (OpenStack Q&A + documentation). All training performed locally — zero cloud dependency.
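The headline percentages follow directly from the A/B table:

```python
# Scores and speeds from the A/B comparison table
base_score, tuned_score = 60, 80
base_tps, tuned_tps = 10.3, 19.7

quality_gain = (tuned_score - base_score) / base_score * 100  # ~33.3%
speed_ratio = tuned_tps / base_tps                            # ~1.9x
```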

Validation Environments

Platform OS CUDA Mode Modules Tested
Jetson AGX Orin 64 GB JetPack 6.2.2 12.6 MAXN All
Apple Silicon M-series macOS N/A Max AriaOS
HP ProLiant G8 Ubuntu 24.04 N/A AriaOS chaos