AURI is an autonomous reasoning system with grounded knowledge, metacognition, and a commitment to truth over confidence.
A research AGI platform exploring metacognition, grounded reasoning, and ethical AI principles.
12 neuroscience-grounded modules, including amygdala-style intuition, theory of mind, somatic markers, and vmPFC integration. Dual-process moral reasoning modeled on the human brain.
4-phase cognitive pipeline: prediction-error learning, workspace competition (Global Workspace Theory), recurrent processing (Integrated Information Theory), and a self-model with introspective notices.
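The four phases above could be orchestrated as a simple sequential pipeline. The sketch below is illustrative only: the function names, state shape, and phase internals are assumptions for exposition, not AURI's actual implementation.

```python
# A minimal four-phase cognitive pipeline, run in sequence over a shared state.
# All data shapes here are assumed for illustration.

def prediction_error_learning(state):
    # Compare prediction with observation; store the surprise signal.
    state["error"] = abs(state["observed"] - state["predicted"])
    return state

def workspace_competition(state):
    # GWT-style: the most salient candidate wins access to the workspace.
    state["workspace"] = max(state["candidates"], key=lambda c: c["salience"])
    return state

def recurrent_processing(state):
    # IIT-inspired: iterate over workspace content to integrate it further.
    for _ in range(3):
        state["workspace"]["integration"] = state["workspace"].get("integration", 0) + 1
    return state

def self_model_update(state):
    # Record an introspective notice about what just happened.
    state["notices"] = [f"attended to {state['workspace']['label']}"]
    return state

PIPELINE = [prediction_error_learning, workspace_competition,
            recurrent_processing, self_model_update]

def run_pipeline(state):
    for phase in PIPELINE:
        state = phase(state)
    return state
```

Running the pipeline on a toy state shows each phase leaving its trace: an error signal, a winning workspace item, an integration count, and an introspective notice.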
124,024 nodes and 1.38M edges with provenance tracking. Hebbian learning strengthens connections through use. Every claim is traceable to its source.
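Hebbian strengthening with per-edge provenance can be sketched in a few lines. This is a toy dict-based graph under assumed names; AURI's real storage format and learning rate are not shown here.

```python
# Sketch: edges that fire together grow stronger, and each edge keeps
# the sources it came from. Structure and learning rate are assumptions.

class KnowledgeGraph:
    def __init__(self):
        # (src, dst) -> {"weight": float, "sources": [provenance strings]}
        self.edges = {}

    def add_edge(self, src, dst, source, weight=0.1):
        self.edges[(src, dst)] = {"weight": weight, "sources": [source]}

    def co_activate(self, src, dst, lr=0.05):
        # Hebbian rule: each joint use adds lr to the weight, capped at 1.0.
        edge = self.edges[(src, dst)]
        edge["weight"] = min(1.0, edge["weight"] + lr)
        return edge["weight"]
```

Use-strengthened weights and untouched provenance live side by side, so a heavily reinforced edge can still answer "where did this claim come from?".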
0.0% hallucination rate over 8 months. Every factual claim requires citation. Unknown-First policy: honest uncertainty over confident fabrication.
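The Unknown-First policy above amounts to a guard: a factual claim with no citation is returned as an honest unknown rather than asserted. The function and field names below are illustrative assumptions.

```python
def answer(claim, citations):
    # Unknown-First (sketch): refuse to assert an uncited factual claim.
    # An empty citation list yields an explicit "unknown" instead of a
    # confidently fabricated answer.
    if not citations:
        return {"status": "unknown", "claim": claim}
    return {"status": "asserted", "claim": claim, "citations": citations}
```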
Remembers experiences, not just facts. Cue-based retrieval, emotional weighting, overnight consolidation. Learns from reading, conversation, and reasoning.
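Cue-based retrieval with emotional weighting can be sketched as a scoring pass over stored episodes: cue overlap boosted by emotional intensity. The scoring formula is an assumption for illustration, not AURI's actual retrieval math.

```python
# Sketch: score each memory by how many cue words it shares with the
# query, amplified by its stored emotional weight; return the top-k.

def retrieve(cue_words, memories, top_k=2):
    def score(memory):
        overlap = len(set(cue_words) & set(memory["cues"]))
        # Emotionally charged memories are easier to recall.
        return overlap * (1.0 + memory["emotion"])
    ranked = sorted(memories, key=score, reverse=True)
    return ranked[:top_k]
```

With this rule, a weakly emotional memory that matches the cue still outranks a strongly emotional one that does not, since a zero overlap zeroes the score.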
5 specialized instances: Core (reasoning), AURIA (trading), AURIV (healthcare), Family, AURIX (physical). Peer coordination via shared knowledge.
Reality Engine verified: no inflated claims, honest about limitations.
20+ years in cybersecurity, AI, and enterprise technology. Track record spanning 12 startups, 4 buyouts, and 2 IPOs.
Currently exploring how AI systems can be built with transparency, ethical grounding, and genuine utility - not just impressive demos.
AURI represents my belief that the path to beneficial AGI requires systems that know their limitations, cite their sources, and prioritize truth over confidence.