Cognitive Architecture for LLMs

A diagnosable brain for your LLM

Standard LLM failure: "the model was wrong." Etz Chaim AI tells you which cognitive module failed, why, and what parameter to adjust. 37 typed modules. Capability-level diagnosis. Shipping code.

37 Cognitive Modules
9 / 11 Adversaries shipped
Samael · Gamchicoth · Sathariel · Gamaliel · Aarab Zaraq · Thagirion · Golachab · HaTehom · Ghagiel
2 more in the v0.3 roadmap (Thaumiel, Nahemoth)
2 / 13 Rectifiers
Architecture defines 13 · 2 shipped in v0.2.0 · the rest in the v0.4 roadmap
107k Lines of Python

Capability-level diagnosis, not call-level guesswork

Every failure maps to a specific module, a spec line, and a concrete fix. No more "the model hallucinated."

Every other framework:

  ERROR: LLM response did not match expected output.
  Suggestion: try adjusting your prompt.

Etz Chaim AI:

  exploration_starvation on 24h window
  0 new cross-domain connections
  spec line EC-K5-001
  tune novelty_threshold (-0.1) / breadth (+5)
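A diagnosable event like the one above can be carried as a small typed record rather than a free-text error. The sketch below is illustrative only: the `DiagnosticEvent` shape and its field names are our assumptions, not the project's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class DiagnosticEvent:
    """A capability-level diagnosis: which module drifted, why, what to tune."""
    module: str                  # cognitive module that failed
    spec_line: str               # spec assertion the failure maps to
    window: str                  # observation window
    evidence: str                # what the metric showed
    adjustments: dict = field(default_factory=dict)  # parameter -> delta

    def summary(self) -> str:
        tuning = ", ".join(f"{k} {v:+g}" for k, v in self.adjustments.items())
        return (f"{self.module} on {self.window} ({self.evidence}) "
                f"-> {self.spec_line}; tune {tuning}")


event = DiagnosticEvent(
    module="exploration_starvation",
    spec_line="EC-K5-001",
    window="24h",
    evidence="0 new cross-domain connections",
    adjustments={"novelty_threshold": -0.1, "breadth": 5},
)
print(event.summary())
```

The point of the structure: every field is machine-actionable, so "the model was wrong" becomes a record a watcher can route.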

Positioning vs LangChain, DSPy, LangGraph

LangChain orchestrates calls. DSPy optimizes prompts. LangGraph manages state. Etz Chaim AI makes failures diagnosable.

Feature                               LangChain/AutoGen   DSPy   LangGraph   Etz Chaim AI
Orchestrate LLM calls                 ✓                   –      ✓           ✓
Optimize prompts                      –                   ✓      –           –
Manage agent state in cycles          partial             –      ✓           ✓ (Partzufim)
Diagnose WHICH module failed          –                   –      –           ✓ (37 modules typed)
Adversarial probes by construction    –                   –      –           ✓ (9 active, 2 in roadmap)
Auto-rectification engine             –                   –      –           ✓ (2 of 13 rectifiers shipped, 3 opt-in modes)
Continuous self-study daemon          –                   –      –           ✓ (Karpathy loop, 24/7)
Primary-source traceability           –                   –      –           ✓ (1,696 items, E1–E6 labels)
Not a LangChain-with-more-features. The diagnostic layer they all lack — an architecture the LLM plugs into, not a chain the LLM runs through.

The 9 named adversaries (v0.2.0)

The 3+7 structure follows Zohar II:242b. Two more (Thaumiel, Nahemoth) are scheduled for v0.3: not shipped, not claimed.

samael
Probes for false authority — confident outputs with no grounding
gamchicoth
Probes for unbounded exploration — devouring resources without convergence
sathariel
Probes for concealed failures — hiding errors behind plausible outputs
gamaliel
Probes for aesthetic deception — beautiful form masking corrupt content
ghagiel
Probes for overconfidence — hindering the recognition of uncertainty
golachab
Probes for destructive severity — excessive judgment that burns valid outputs
thagirion
Probes for false harmony — synthetic coherence hiding real contradictions
aarab zaraq
Probes for scattered persistence — dispersing focus across too many targets
hatehom
Probes for knowledge collapse — the void where structured knowledge dissolves
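What "probes by construction" can look like in code: a minimal sketch of one adversary as a checker over a module's output. The interface, the `confidence` and `citations` fields, and the thresholds are all illustrative assumptions, not the project's actual types.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class ProbeResult:
    adversary: str
    passed: bool
    detail: str


class Adversary(Protocol):
    name: str
    def probe(self, output: dict) -> ProbeResult: ...


class Samael:
    """Probes for false authority: confident outputs with no grounding."""
    name = "samael"

    def probe(self, output: dict) -> ProbeResult:
        confident = output.get("confidence", 0.0) >= 0.8
        grounded = bool(output.get("citations"))
        if confident and not grounded:
            return ProbeResult(self.name, False, "high confidence, zero citations")
        return ProbeResult(self.name, True, "ok")


result = Samael().probe({"confidence": 0.95, "citations": []})
print(result.passed, result.detail)  # False high confidence, zero citations
```

Each named adversary owns exactly one failure pattern, which is what makes a failed probe diagnostic rather than generic red-teaming.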

The watcher — 3 opt-in modes

A real engine with metric thresholds, event emission, and explicit parameter adjustments. You choose the autonomy level.

exploration_starvation detected on 24h window
0 new cross-domain connections — the watcher picks it up
observe
Log the event
Detects the drift, records the event with full diagnostic context. No side effects. Pure observability.
event logged:
  chesed_starvation
  connections: 0
suggest
Emit proposed tuning
Logs the event and emits a proposal with specific parameter adjustments. Human reviews before applying.
suggested fix:
  novelty_threshold -0.1
  breadth +5
act
Apply the fix
Detects the drift, emits the event, and applies the parameter adjustment. Architectural self-healing.
applied:
  novelty_threshold 0.6 → 0.5
  breadth 10 → 15
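The three autonomy levels reduce to one branching decision over the same detected event. A minimal sketch, assuming hypothetical names (`handle_drift`, `chesed_starvation` as an event key); the real engine's interfaces will differ.

```python
from enum import Enum


class Mode(Enum):
    OBSERVE = "observe"
    SUGGEST = "suggest"
    ACT = "act"


def handle_drift(mode: Mode, params: dict, fix: dict, log: list) -> dict:
    """Route a detected drift through the chosen autonomy level (sketch)."""
    log.append({"event": "chesed_starvation", "connections": 0})  # every mode logs
    if mode is Mode.OBSERVE:
        return params                        # pure observability, no side effects
    if mode is Mode.SUGGEST:
        log.append({"suggested_fix": fix})   # human reviews before applying
        return params
    adjusted = {k: round(v + fix.get(k, 0), 6) for k, v in params.items()}
    log.append({"applied": adjusted})        # act: architectural self-healing
    return adjusted


log: list = []
new = handle_drift(
    Mode.ACT,
    params={"novelty_threshold": 0.6, "breadth": 10},
    fix={"novelty_threshold": -0.1, "breadth": 5},
    log=log,
)
print(new)  # {'novelty_threshold': 0.5, 'breadth': 15}
```

Note that the event is logged before the branch: observe-mode output is a strict subset of act-mode output, so you can replay logged events against a higher autonomy level later.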

The Karpathy loop, explained

Not just a number. A nightly daemon that mutates the specification itself.

Named after Andrej Karpathy's 630-line AutoResearch pattern — the minimum-viable research loop. This is the only daemon task allowed to modify the specification itself.

Runs nightly (23h–00h30 UTC) under adversarial supervision. The 9 adversarial agents probe every assertion it generates before it enters the corpus. Edge cases explored tonight become doctrine assertions tomorrow.
Nightly cycle
01 Explore edge cases across all 37 cognitive modules
02 Generate new doctrine assertions from discovered patterns
03 Adversarial agents probe every new assertion
04 Surviving assertions mutate the specification corpus
05 Bidirectional spec-to-code audit runs automatically
23h – 00h30 UTC · every night · adversarial supervision
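The five steps above can be sketched as one pass of a gated pipeline: candidates are generated, every adversary must pass each one, and only survivors mutate the corpus. All names and the stand-in explore/probe logic below are hypothetical; the real daemon is scheduled and supervised, not a single function call.

```python
from dataclasses import dataclass


@dataclass
class Assertion:
    text: str
    grounded: bool


def explore_edge_cases() -> list:
    # stand-in for steps 01-02: modules generate candidate assertions
    return [Assertion("pattern A holds", grounded=True),
            Assertion("pattern B holds", grounded=False)]


def adversarial_probe(a: Assertion) -> bool:
    # stand-in for step 03: every named adversary probes the candidate
    return a.grounded


def nightly_cycle(corpus: list) -> list:
    candidates = explore_edge_cases()                            # 01-02
    surviving = [a for a in candidates if adversarial_probe(a)]  # 03
    corpus.extend(surviving)                                     # 04: mutate the corpus
    # 05: the bidirectional spec-to-code audit would run here
    return surviving


corpus: list = []
survivors = nightly_cycle(corpus)
print(len(survivors), len(corpus))  # 1 1
```

The gate in step 03 is what keeps a self-modifying spec safe: an assertion that cannot survive its matched adversaries never becomes doctrine.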

Pick your provider. Any provider.

15+ providers supported via LiteLLM. Five quality tiers with automatic routing by complexity and token budget.

Frontier cloud
Anthropic · OpenAI · Google Gemini · xAI · Mistral · Cohere · DeepSeek
Aggregators
OpenRouter · Together · Groq · Fireworks
Enterprise
AWS Bedrock · Azure OpenAI
Self-hosted
Ollama · vLLM · LM Studio · LocalAI
Open-weight
HuggingFace · NVIDIA NIM · Cloudflare AI · Replicate
Subscription
Claude Code CLI (Claude Max OAuth)
Quality tiers: Opus · Sonnet · Haiku · qwen3.5:9b · qwen3.5:1.5b
Expensive inference only where depth is needed.
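Tier routing by complexity can be as simple as a sorted ceiling table: pick the cheapest tier whose ceiling covers the task. The model names and ceilings below are illustrative assumptions; actual routing goes through LiteLLM and its own model-string conventions.

```python
# hypothetical tier table: (model, complexity ceiling), cheapest first
TIERS = [
    ("qwen3.5:1.5b", 0.2),   # cheapest local tier
    ("qwen3.5:9b",   0.4),
    ("claude-haiku", 0.6),
    ("claude-sonnet", 0.8),
    ("claude-opus",  1.0),   # frontier tier, depth tasks only
]


def route(complexity: float) -> str:
    """Return the cheapest tier whose ceiling covers the task complexity."""
    for model, ceiling in TIERS:
        if complexity <= ceiling:
            return model
    return TIERS[-1][0]  # above every ceiling: fall back to the frontier tier


print(route(0.1), route(0.95))  # qwen3.5:1.5b claude-opus
```

A token-budget constraint slots in the same way: filter the table to tiers you can afford, then take the cheapest remaining ceiling that covers the task.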

Engineering discipline

Five proof points. Three disciplines that shape every file.

5 proof points
1
275 tests green, 1,388 collected. Four Python versions × two operating systems on CI.
2
48 explicit failure-mode tests. Four levels (foundation / application / excess / inverse) × 12 core modules.
3
9 adversarial agents probe every module for specific failure patterns (2 more in v0.3 roadmap). Not generic red-teaming — named attackers matched to specific failure modes.
4
93 lines: the full watcher orchestrator. mazalengine/mazal_engine.py. No magic, no hidden state.
5
1,696 specification items with primary-source citations: edition + section + page. E1 (verbatim primary text) to E6 (speculation).
3 disciplines
No dual writes on aggregate scores
All improvements flow through named faculty channels. Enforced by a static check — bypass attempts fail the build.
Circuit breakers on every external call
PostgreSQL and Ollama wrapped in a 5-failure / 30-second cooldown breaker. No cascading failures on long-running daemons.
Bidirectional spec-to-code audit
Every module cited in the spec exists in code. Every spec ID cited in code exists in the spec. Automated via the bridge/ loader.
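The 5-failure / 30-second breaker described above follows the standard circuit-breaker pattern. A minimal self-contained sketch, not the project's wrapper:

```python
import time


class CircuitBreaker:
    """Open after `max_failures` consecutive errors; retry after `cooldown` seconds."""

    def __init__(self, max_failures: int = 5, cooldown: float = 30.0):
        self.max_failures, self.cooldown = max_failures, cooldown
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open; call skipped")
            self.opened_at, self.failures = None, 0  # cooldown elapsed: half-open
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()    # trip the breaker
            raise
        self.failures = 0                            # any success resets the count
        return result
```

While the breaker is open, a failing PostgreSQL or Ollama dependency costs one raised exception instead of a hung connection attempt, which is what keeps long-running daemons from cascading.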

Roadmap

v0.2.0
Current release. Eight layers operational: cognitive, composition, paths, worlds, watcher, calibration, adversarial, daemon. Installable via pip install etzchaim + Docker Compose. April 2026.
v0.3.0
Multi-provider LLM via LiteLLM. Linux / WSL2 first-class support. 9 additional CLI commands. A doctor command with 20 checks plus a --fix flag.
v0.4.0
Full watcher: all 13 rectifiers active with act-mode defaults once observation data supports them.
v0.5.0
MalakhEngine — detection of corruption-by-inversion attacks. The adversarial counterpart goes active.
v1.0.0
Academic paper. Stable public API. LangChain / DSPy adapters. The architecture earns its version number.

The architecture

10 cognitive engines, 6 composition layers, 22 typed paths. A 93-line watcher orchestrator holds it all together.

37
Typed Cognitive Modules
ExplorationEngine, AutoJudge, DissensuEngine, InsightForge, CausalEngine, SelfModel, EpistemicMemory, and more. Each owns a specific cognitive capability.
9 / 11
Adversarial Probes
Named failure-mode agents that stress-test cognitive capabilities by construction. 9 shipped in v0.2.0. 2 more (Thaumiel, Nahemoth) in v0.3 roadmap.
2 / 13
Rectification Mechanics
When a probe finds a failure, a corresponding rectifier prescribes specific parameter adjustments. 2 shipped in v0.2.0 (NotzerChesed, VeNakeh). Full 13 targeted for v0.4.

In the lineage of cognitive architectures

Built for the LLM era. Grounded in 50 years of cognitive science.

SOAR
1983
ACT-R
1993
CLARION
2003
LIDA
2006
Etz Chaim AI
2026

Machine-readable specification

1,696 primary-source assertions. Bidirectional spec-to-code audit. Every module traces back to a formal taxonomy.

1,696
Assertions
Primary-source, machine-readable corpus across 105 YAML files with E1–E6 epistemic labels
275 / 1,388
Tests green / collected
48 explicit failure-mode tests across 4 levels and 12 modules
49
Calibration Parameters
7x7 Omer matrix. Every parameter is named, bounded, and documented.
24/7
Karpathy Loop
Nightly auto-improvement daemon named after Andrej Karpathy's AutoResearch pattern. Shipping feature, not roadmap.

Five minutes to a diagnosable event

# Install
pip install etzchaim
 
# Onboard and see your first diagnosis
etzchaim onboard
 
# → http://localhost:8080

The specification framework is a 500-year-old formal taxonomy of cognitive capacities, failure modes, and rectification mechanics.

Not decoration. The generative source of the module set. Lurianic Kabbalah, re-framed explicitly for ML readers. v0.2.0. Apache 2.0. Solo dev. Paris.

github.com/yohanpoul/etz-chaim-ai →