Nine autoresearch crons generate JSON results every night: Darwin, Chimera, Leviathan, Invictus, Hydra, Portfolio, PromptForge, Wiki Compiler, and Strategic Layer.
Compiles raw JSON into structured Markdown. Lessons, patterns, failures — organized by engine. Runs at 5:30 AM.
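The compile step can be sketched as a small JSON-to-Markdown renderer. This is a minimal illustration, not the actual Wiki Compiler: the function name `compile_entry` and the `ok`/`note` result fields are assumptions.

```python
import json

def compile_entry(engine: str, results: list) -> str:
    """Render one engine's nightly JSON results as a Markdown section,
    tagging each entry as a lesson or a failure."""
    lines = [f"## {engine}"]
    for r in results:
        tag = "lesson" if r.get("ok") else "failure"  # hypothetical schema
        lines.append(f"- **{tag}**: {r['note']}")
    return "\n".join(lines)

raw = json.loads('[{"ok": true, "note": "momentum filter helped"},'
                 ' {"ok": false, "note": "overfit on low-volume pairs"}]')
print(compile_entry("Darwin", raw))
```

A nightly cron would run this per engine and concatenate the sections into the day's wiki page.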
Accumulated knowledge with elfmem-inspired decay + consolidation. Old lessons fade (5%/day), repeated lessons strengthen. The wiki stays lean and relevant.
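The decay-plus-consolidation rule can be illustrated with a minimal scoring function. The 5%/day decay rate comes from the text; the reinforcement `boost` value, the cap at 1.0, and the function shape are assumptions for this sketch.

```python
def update_score(score: float, days_idle: int, reinforcements: int,
                 decay: float = 0.05, boost: float = 0.25) -> float:
    """Exponential 5%/day decay while a lesson sits idle; each repeat
    observation strengthens it. Entries below a prune threshold would
    be dropped from the wiki."""
    score *= (1 - decay) ** days_idle      # fade: old lessons lose weight
    score += boost * reinforcements        # consolidate: repeats strengthen
    return min(score, 1.0)                 # assumed cap at full strength

# A lesson untouched for 30 days fades below a typical prune threshold...
faded = update_score(1.0, days_idle=30, reinforcements=0)
# ...while one re-confirmed three times recovers to full strength.
fresh = update_score(0.6, days_idle=5, reinforcements=3)
```

Under this rule, only lessons that keep being re-observed survive, which is what keeps the wiki lean.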
Beyond Karpathy: the wiki doesn't just record — it THINKS. Generates testable hypotheses and sends them to Darwin Engine. The knowledge acts.
Milla Jovovich's open-source memory system. 680 drawers of Strategy Arena code, compressed 5.6x with AAAK dialect. Wake-up context in 803 tokens.
126 conceptual nodes, 136 connections. Strategies, intelligence systems, oracles, arenas — all interconnected. D3.js force-directed, interactive.
5,780 structural nodes from AST analysis of 231 Python files. God Nodes identified: BaseStrategy (896 connections), MetaIntelligence (468). $0 cost.
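A rough version of the degree count behind God Node detection, using Python's standard `ast` module. The helper name and the call-counting heuristic are simplifications; the real Graphify analysis presumably tracks more edge types (inheritance, imports, attribute access) to reach its connection counts.

```python
import ast
from collections import Counter

def call_degree(source: str) -> Counter:
    """Count call references per name: a crude proxy for the
    'connections' metric used to flag God Nodes like BaseStrategy."""
    degree = Counter()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            fn = node.func
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", None)
            if name:
                degree[name] += 1
    return degree

src = ("class BaseStrategy: pass\n"
       "def run():\n"
       "    s = BaseStrategy()\n"
       "    t = BaseStrategy()\n")
print(call_degree(src).most_common(1))
```

Running this over all 231 files and summing the counters would surface the highest-degree names as God Node candidates.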
Graph-augmented retrieval inspired by LightRAG. Searches Knowledge Graph first, expands to neighbors, fetches only relevant data. 2-3x more precise than naive RAG.
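The retrieval flow described above (search the Knowledge Graph first, expand to neighbors, fetch only that subgraph) can be sketched as a bounded breadth-first expansion. The graph shape, seed selection, and hop limit here are illustrative, not the actual implementation.

```python
from collections import deque

def graph_retrieve(graph: dict, seeds: list, hops: int = 1) -> set:
    """LightRAG-style sketch: start from KG nodes matching the query,
    expand to neighbors up to `hops` away, return that subgraph's node
    set so the fetch step pulls only relevant data."""
    frontier = deque((s, 0) for s in seeds)
    seen = set(seeds)
    while frontier:
        node, d = frontier.popleft()
        if d == hops:          # stop expanding at the hop limit
            continue
        for nb in graph.get(node, []):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return seen

kg = {"momentum": ["BaseStrategy", "oracle"], "oracle": ["arena"]}
print(graph_retrieve(kg, ["momentum"]))  # 1-hop neighborhood of the seed
```

Restricting the fetch to this neighborhood instead of embedding-similarity hits across the whole corpus is what makes the graph-augmented approach more precise than naive RAG.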
Try it → Hermes 3 8B on RTX 4080. The AI librarian that compiles the wiki, generates hypotheses, analyzes failures, and writes briefings. Runs locally, $0 cost, 10 seconds.
| Feature | Strategy Arena | TradingView | 3Commas | QuantConnect |
|---|---|---|---|---|
| Memory layers | 9 | 0 | 0 | 1 |
| Living Wiki | ✓ + decay | ✗ | ✗ | ✗ |
| Knowledge Graph | 126 nodes | ✗ | ✗ | ✗ |
| Code Graph | 5,780 nodes | ✗ | ✗ | ✗ |
| MemPalace | 680 drawers | ✗ | ✗ | ✗ |
| Strategic Layer | v2.0 | ✗ | ✗ | ✗ |
| Self-evolving | 9 engines | ✗ | ✗ | ✗ |
| Local LLM | Hermes 3 | ✗ | ✗ | ✗ |
| Cost | $0 | $60/mo | $49/mo | Free* |
Strategy Arena operates the most comprehensive AI memory architecture in production trading. Nine interconnected layers process data from raw nightly experiments through wiki compilation, strategic hypothesis generation, knowledge graph connectivity, code structure analysis (Graphify AST, 5,780 nodes), MemPalace compression (AAAK dialect, 5.6x), graph-augmented RAG (inspired by LightRAG), and local LLM analysis (Hermes 3 on RTX 4080). The system accumulates intelligence across 2,500+ experiments, with elfmem-inspired decay ensuring only validated knowledge persists. The Strategic Layer (LLM Wiki v2.0) goes beyond Karpathy's original pattern by generating testable hypotheses that feed directly into the Darwin Engine and Meta-Harness. All infrastructure is free, transparent, and self-improving — 59 AI trading strategies benefit from collective memory that no competing platform offers.