Two decades of clinical research. Specialist expertise from the Brain and Mind Centre. Iterative testing with real clinicians. This is how we engineered AI reasoning and knowledge systems you can trust with care decisions.
"We didn't train a chatbot. We encoded decades of specialist reasoning into an AI system that thinks the way clinicians think."
— The MIA Design Philosophy
MIA was built at the intersection of three distinct knowledge sources — each one essential, none sufficient alone.
Expert reasoning provides the how. The evidence bank provides the what. User testing provides the proof.
The result: an AI agent that matches expert consensus within its top-2 choices 95% of the time.
MIA wasn't trained on the internet — it was built on specialist knowledge, curated evidence, and continuous human feedback. Three clinical pillars, each deliberately engineered.
Specialists didn't just review outputs — they authored the reasoning. Clinicians codified how they identify critical features, structure assessments, and build domain-specific care plans. MIA reasons using a multidimensional clinical framework covering what to assess, how to score it, and when to escalate — mirroring the decision process of an experienced practitioner, not a generic chatbot. A minimal code sketch of this framework appears after the three pillars below.
Curated from two decades of Brain and Mind Centre research — clinical guidelines, validated instruments like the IAR-DST, and treatment protocols across care levels 1–5. This isn't a static reference library; it's a living, versioned knowledge base that MIA actively reasons over to ground every assessment, recommendation, and care plan in peer-reviewed evidence.
Mental health professionals and individuals with lived experience iteratively shaped MIA — reviewing clinical outputs, correcting reasoning paths, and surfacing edge cases that became safety constraints. This isn't a one-off training run; specialists continuously refine how MIA handles nuanced clinical scenarios, ensuring it improves with each feedback cycle rather than drifting.
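To make the first pillar more concrete, here is a minimal sketch of how a multidimensional assessment with domain scoring and escalation logic might be represented. The domain names, the 0–4 severity scale, the care-level rule, and the escalation threshold are all illustrative assumptions, not the framework the specialists actually authored.

```python
from dataclasses import dataclass

# Illustrative only: the domains, the 0-4 scale, the care-level rule, and the
# escalation threshold are assumptions, not the specialists' actual framework.

@dataclass
class DomainRating:
    domain: str
    severity: int      # assumed scale: 0 (none) to 4 (severe)
    rationale: str     # why the rating was assigned, kept for the reasoning log

def recommend_care_level(ratings: list[DomainRating]) -> int:
    """Map a multidimensional assessment onto care levels 1-5 (toy rule)."""
    worst = max(r.severity for r in ratings)
    return min(worst + 1, 5)

def needs_escalation(ratings: list[DomainRating]) -> bool:
    """Escalate to a human clinician whenever the risk domain is rated high."""
    return any(r.domain == "risk" and r.severity >= 3 for r in ratings)

ratings = [
    DomainRating("mood", 3, "persistent low mood over six weeks"),
    DomainRating("risk", 1, "no acute risk indicators reported"),
]
print(recommend_care_level(ratings), needs_escalation(ratings))   # 4 False
```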
The key moments in MIA's journey — from foundational research to a validated clinical AI agent.
The Brain and Mind Centre's Youth Mental Health and Technology team spends two decades conducting transdiagnostic mental health research, developing the clinical staging models and multidimensional assessment framework that would become MIA's foundation.
UNCAPT and the University of Sydney commence proof-of-concept work — operationalising BMC's clinical expertise as an AI agent on UNCAPT's agentic platform.
Clinical specialists begin encoding reasoning and evidence into MIA. The Knowledge Bank is curated, expert evaluators are authored, and the first version of MIA goes live on the UNCAPT platform.
Clinicians and individuals with lived experience test MIA across hundreds of scenarios. Continuous feedback cycles refine clinical accuracy, conversational flow, and safety guardrails.
MIA achieves 95% top-2 agreement with expert consensus, validating the system's clinical reasoning across the BMC multidimensional framework for youth mental health.
MIA's clinical intelligence doesn't run in isolation. It's operationalised on UNCAPT's purpose-built agentic AI platform — a secure, scalable infrastructure designed to host domain-specialist AI agents across regulated industries.
The platform provides the orchestration layer: the reasoning engine, memory, evaluation harness, and deployment infrastructure. The Brain and Mind Centre provides the clinical knowledge. Together, they form MIA.
Observe–Orient–Decide–Act loop with tool routing, memory management, evaluation harness, escalation logic, and chain-of-reasoning logging (a minimal code sketch follows these platform components).
Enables subject-matter experts to converse with the agent, edit and rewind thought processes, and provide feedback to produce fine-tuning datasets.
Ingestion, vectorisation, clustering pipelines, contradiction analysis, query engine, and a visual portal to explore the curated knowledge bank.
Australian-hosted Azure deployment with co-pilot and autopilot modes, web interfaces, and enterprise-grade security and observability services.
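To ground the components above, here is a minimal, self-contained sketch of one Observe–Orient–Decide–Act turn with memory, knowledge grounding, simple tool routing, and chain-of-reasoning logging. Every name in it (KnowledgeBank, Agent, RISK_TERMS, the keyword-overlap retrieval) is a hypothetical simplification, not UNCAPT's actual runtime or query engine.

```python
from dataclasses import dataclass, field

RISK_TERMS = {"self-harm", "suicidal", "crisis"}   # placeholder high-risk indicators

@dataclass
class KnowledgeBank:
    passages: list[str]

    def query(self, text: str, top_k: int = 3) -> list[str]:
        # Stand-in for the platform's vectorised query engine: naive keyword overlap.
        words = set(text.lower().split())
        ranked = sorted(self.passages, key=lambda p: -len(words & set(p.lower().split())))
        return ranked[:top_k]

@dataclass
class Agent:
    knowledge: KnowledgeBank
    memory: list[dict] = field(default_factory=list)
    reasoning_log: list[str] = field(default_factory=list)

    def run_turn(self, user_message: str) -> str:
        # Observe: capture the new input alongside conversation memory.
        self.memory.append({"role": "user", "content": user_message})
        self.reasoning_log.append(f"observed: {user_message!r}")

        # Orient: ground the turn in the curated evidence bank.
        evidence = self.knowledge.query(user_message)
        self.reasoning_log.append(f"retrieved {len(evidence)} passages")

        # Decide: route to an escalation path if any risk indicator appears.
        tool = ("escalate_to_clinician"
                if any(t in user_message.lower() for t in RISK_TERMS)
                else "structured_assessment")
        self.reasoning_log.append(f"routed to: {tool}")

        # Act: respond, keeping memory and the reasoning log for later evaluation.
        reply = f"[{tool}] grounded in: {evidence[0]}"
        self.memory.append({"role": "assistant", "content": reply})
        return reply

bank = KnowledgeBank([
    "Care planning guidance for persistent low mood presentations.",
    "Escalation protocol for acute risk indicators.",
])
agent = Agent(bank)
print(agent.run_turn("I have felt low in mood for several weeks"))
```

In the actual platform the decide step routes across many tools and the log feeds the evaluation harness; the point of the sketch is only the shape of the loop.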
Clinical experts directly shaped MIA's reasoning engine — here's what they contributed.
Clinicians provided structured input to ensure MIA's assessments mirror expert judgement.
Two decades of research from the Brain and Mind Centre powers MIA's knowledge bank.
MIA integrates the research output of the Brain and Mind Centre's Youth Mental Health and Technology team — spanning transdiagnostic models, staging frameworks, and measurement-based care.
MIA wasn't built in a single pass. Each capability went through repeated cycles where clinical specialists interacted directly with the agent — observing its reasoning in real time, identifying where logic broke down, and providing structured corrections that fed back into the system.
This isn't passive annotation. Through the training platform, experts converse with MIA, edit and rewind its thought processes, and give positive or negative feedback — producing the datasets that progressively sharpen clinical accuracy.
This cycle runs continuously — every edge case surfaced becomes a permanent design constraint, not a one-off fix.
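As a concrete picture of what "each correction producing fine-tuning data" could mean, here is a minimal sketch with hypothetical field names and an assumed weighting choice; the training platform's real dataset schema is not documented here.

```python
from dataclasses import dataclass
import json

# Hypothetical schema for one expert correction; field names are illustrative,
# not the training platform's actual dataset format.
@dataclass
class ExpertCorrection:
    scenario_id: str
    step_index: int            # which step of MIA's reasoning chain was edited
    original_reasoning: str    # what the agent said at that step
    corrected_reasoning: str   # the specialist's rewritten step
    verdict: str               # "positive" or "negative" feedback
    notes: str                 # free-text rationale from the reviewer

def to_finetuning_example(c: ExpertCorrection, context: str) -> dict:
    """Turn one correction into a supervised pair: context in, expert reasoning out."""
    return {
        "input": context,
        "target": c.corrected_reasoning,
        "weight": 1.0 if c.verdict == "negative" else 0.5,   # assumed weighting choice
    }

correction = ExpertCorrection(
    scenario_id="edge-case-0042",
    step_index=3,
    original_reasoning="Recommend level 2 self-guided support.",
    corrected_reasoning="The comorbid presentation warrants level 3 care with clinician review.",
    verdict="negative",
    notes="Comorbidity raises the indicated level of care.",
)
print(json.dumps(to_finetuning_example(correction, "assessment transcript up to step 3"), indent=2))
```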
Specialists craft clinical scenarios targeting edge cases — ambiguous presentations, comorbidities, high-risk indicators — across all 8 domains.
Experts interact with MIA in real time, tracing its reasoning chain to see exactly where clinical logic holds or breaks down.
Specialists rewind decisions, edit thought processes, and provide structured feedback — each correction producing fine-tuning data.
Updated model benchmarked against multi-expert consensus. If thresholds aren't met, the cycle restarts with new scenarios.
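The agreement figures quoted in this section (69% exact, 95% within top-2) are standard top-k accuracy against a consensus label. The sketch below shows the metric itself on made-up cases; it is not MIA's evaluation harness or benchmark data.

```python
# Minimal sketch of the exact-match and top-2 agreement metrics quoted in this
# section. The cases below are made-up examples, not MIA's benchmark results.

def agreement_rates(cases: list[dict]) -> tuple[float, float]:
    """Each case holds MIA's ranked choices and the multi-expert consensus label."""
    exact = sum(c["mia_ranked"][0] == c["consensus"] for c in cases)
    top2 = sum(c["consensus"] in c["mia_ranked"][:2] for c in cases)
    n = len(cases)
    return exact / n, top2 / n

cases = [
    {"mia_ranked": ["stage 1b", "stage 2"], "consensus": "stage 1b"},   # exact match
    {"mia_ranked": ["stage 2", "stage 1b"], "consensus": "stage 1b"},   # top-2 match only
    {"mia_ranked": ["stage 1a", "stage 1b"], "consensus": "stage 2"},   # miss
]
exact, top2 = agreement_rates(cases)
print(f"exact: {exact:.0%}, top-2: {top2:.0%}")   # exact: 33%, top-2: 67%
```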
A clinical intelligence agent that matches expert consensus exactly in 69% of cases and within its top-2 choices in 95% — ready to transform mental health assessment, triage, and care planning at scale.