The standard LLM failure report: "the model was wrong." Etz Chaim AI tells you which cognitive module failed, why, and what parameter to adjust. 37 typed modules. Capability-level diagnosis. Shipping code.
Every failure maps to a specific module, a spec line, and a concrete fix. No more "the model hallucinated."
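A minimal sketch of what a capability-level diagnosis could look like as data. The class, field names, and the example module `Binah.constraint_checker` are illustrative assumptions, not the shipped API:

```python
from dataclasses import dataclass

@dataclass
class Diagnosis:
    """Hypothetical shape of a module-level failure report."""
    module: str     # which of the 37 typed modules failed
    spec_line: str  # pointer into the specification
    cause: str      # why the module failed
    fix: str        # concrete parameter adjustment to apply

# Illustrative output, not a real run:
report = Diagnosis(
    module="Binah.constraint_checker",   # hypothetical module name
    spec_line="spec.md:412",             # hypothetical spec pointer
    cause="retrieval context exceeded the module's token budget",
    fix="lower retrieval_k from 12 to 6",
)
print(f"{report.module} failed: {report.cause} -> {report.fix}")
```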
LangChain orchestrates calls. DSPy optimizes prompts. LangGraph manages state. Etz Chaim AI makes failures diagnosable.
| Feature | LangChain / AutoGen | DSPy | LangGraph | Etz Chaim AI |
|---|---|---|---|---|
| Orchestrate LLM calls | ✓ | ✓ | ✓ | ✓ |
| Optimize prompts | — | ✓ | — | — |
| Manage agent state in cycles | partial | — | ✓ | ✓ (Partzufim) |
| Diagnose WHICH module failed | ✗ | ✗ | ✗ | ✓ (37 modules typed) |
| Adversarial probes by construction | ✗ | ✗ | ✗ | ✓ (9 active, 2 on the roadmap) |
| Auto-rectification engine | ✗ | ✗ | ✗ | ✓ (2 of 13 rectifiers shipped, 3 opt-in modes) |
| Continuous self-study daemon | ✗ | ✗ | ✗ | ✓ (Karpathy loop, 24/7) |
| Primary-source traceability | ✗ | ✗ | ✗ | ✓ (1,696 items, E1–E6 labels) |
The 3+7 structure of Zohar II:242b. Two more probes (Thaumiel, Nahemoth) are scheduled for v0.3; not shipped, not claimed.
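A hedged sketch of what "probes by construction" could mean: each probe is typed against the module it attacks. The `Probe` protocol, method names, and the example probe are assumptions for illustration:

```python
from typing import Protocol

class Probe(Protocol):
    """Hypothetical interface: a probe declares which module it targets."""
    name: str
    target_module: str
    def perturb(self, prompt: str) -> str: ...
    def detect_failure(self, response: str) -> bool: ...

class ContradictionProbe:
    """Illustrative probe: inject a contradicted premise, check it gets flagged."""
    name = "contradiction"
    target_module = "consistency_checker"  # assumed module name

    def perturb(self, prompt: str) -> str:
        return prompt + "\n(Note: assume the opposite of the first premise.)"

    def detect_failure(self, response: str) -> bool:
        # Failure = the model never surfaces the contradiction.
        return "contradiction" not in response.lower()
```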
The rectification engine is real code: metric thresholds, event emission, and explicit parameter adjustments. You choose the autonomy level.
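The three opt-in modes from the table above might wire together like this. The mode names (observe/suggest/auto), the threshold table, and the `retrieval_k` adjustment are assumptions, not the shipped flags:

```python
from enum import Enum

class AutonomyMode(Enum):
    OBSERVE = "observe"  # emit events only, never touch parameters
    SUGGEST = "suggest"  # propose adjustments for human review
    AUTO = "auto"        # apply adjustments when a threshold is crossed

THRESHOLDS = {"hallucination_rate": 0.05}  # illustrative metric threshold

def adjust(param: str, delta: int) -> None:
    print(f"adjust {param} by {delta}")  # explicit parameter adjustment

def rectify(metric: str, value: float, mode: AutonomyMode) -> None:
    if value <= THRESHOLDS[metric]:
        return
    # Event emission happens in every mode; action depends on autonomy level.
    print("event:", {"metric": metric, "value": value, "threshold": THRESHOLDS[metric]})
    if mode is AutonomyMode.AUTO:
        adjust("retrieval_k", delta=-2)

rectify("hallucination_rate", 0.09, AutonomyMode.AUTO)
```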
Self-study is not just a score. A nightly daemon mutates the specification itself.
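One way a nightly mutate-evaluate-keep loop could be structured; the function names, the spec shape, and the random mutation are a sketch under assumptions, not the daemon's actual code:

```python
import copy
import random
import time
from typing import Callable

def self_study_step(spec: dict, evaluate: Callable[[dict], float]) -> dict:
    """One pass: perturb a spec parameter, keep it only if the score improves."""
    candidate = copy.deepcopy(spec)
    key = random.choice(list(candidate["params"]))
    candidate["params"][key] *= random.uniform(0.8, 1.2)  # small mutation
    return candidate if evaluate(candidate) > evaluate(spec) else spec

def daemon(spec: dict, evaluate: Callable[[dict], float],
           period_s: float = 24 * 3600) -> None:
    """Runs forever: one self-study pass per period (nightly by default)."""
    while True:
        spec = self_study_step(spec, evaluate)
        time.sleep(period_s)
```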
15+ providers supported via LiteLLM. Five quality tiers with automatic routing by complexity and token budget.
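A minimal sketch of tier routing on top of LiteLLM. `litellm.completion` is the library's real entry point; the five-entry tier table, the model choices, and the length-based complexity heuristic are illustrative assumptions, not the shipped router:

```python
import litellm

# Hypothetical tier table: cheapest model assumed adequate for each tier.
TIERS = {
    1: "ollama/llama3",
    2: "gpt-4o-mini",
    3: "claude-3-5-haiku-20241022",
    4: "gpt-4o",
    5: "claude-3-5-sonnet-20241022",
}

def route(prompt: str, budget_tokens: int) -> str:
    # Toy complexity heuristic: longer prompts escalate the tier.
    tier = min(5, 1 + len(prompt) // 500)
    response = litellm.completion(
        model=TIERS[tier],
        messages=[{"role": "user", "content": prompt}],
        max_tokens=budget_tokens,  # enforce the token budget
    )
    return response.choices[0].message.content
```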
Five proof points. Three disciplines that shape every file.
10 cognitive engines, 6 composition layers, 22 typed paths. A 93-line watcher orchestrator holds it all together.
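A hedged sketch of the watcher pattern: a small router that dispatches typed events between engines. This does not reproduce the 93-line original; the class and path names are invented for illustration:

```python
from collections import defaultdict
from typing import Callable

class Watcher:
    """Minimal event router: engines subscribe to typed paths, the watcher dispatches."""
    def __init__(self) -> None:
        self.routes: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def on(self, path: str, handler: Callable[[dict], None]) -> None:
        self.routes[path].append(handler)

    def emit(self, path: str, payload: dict) -> None:
        for handler in self.routes[path]:
            handler(payload)

watcher = Watcher()
watcher.on("diagnosis.failed_module", lambda p: print("rectify:", p))
watcher.emit("diagnosis.failed_module", {"module": "retrieval", "metric": 0.12})
```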
Built for the LLM era. Grounded in 50 years of cognitive science.
1,696 primary-source assertions. Bidirectional spec-to-code audit. Every module traces back to a formal taxonomy.
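What one traceable assertion with an E1–E6 evidence label could look like as data; the schema and field names are an assumption for illustration, and the citation is a placeholder:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    """Hypothetical record: one of the 1,696 primary-source items."""
    id: int
    claim: str
    source: str            # primary-source citation
    evidence: str          # one of the E1..E6 evidence-strength labels
    code_refs: tuple[str, ...]  # files implementing the claim (spec-to-code audit)

a = Assertion(
    id=42,
    claim="Each module exposes one adjustable failure parameter.",
    source="(primary-source citation)",  # placeholder, not a real mapping
    evidence="E2",
    code_refs=("modules/binah.py",),     # hypothetical file
)
```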
Not decoration: the generative source of the module set. Lurianic Kabbalah, re-framed explicitly for ML readers.

v0.2.0 · Apache 2.0 · Solo dev · Paris.
github.com/yohanpoul/etz-chaim-ai →