Under the hood
How Our AI Works
Corexi is built on a proprietary hybrid LLM pipeline, combining multiple foundation models fine-tuned with our UX-specific training layers. No single model owns the output. Every analysis is the result of cross-validated, multi-signal reasoning.
3-layer architecture
Three specialized layers work in sequence. Each layer adds signal depth, and the output of one feeds into the next.
Visual AI Layer
Corexi Vision Engine
Automated capture renders your product at multiple viewports (desktop and mobile) using headless browsers with bot-protection bypass. Each screenshot is analyzed by our proprietary multi-model vision pipeline, trained on UX-specific patterns.
- Multi-viewport capture (desktop 1440px + mobile 390px)
- Bot-protection aware rendering with fallback chains
- Element-level bounding box detection for annotated findings
- 9-category visual scoring with evidence extraction
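The capture step above can be sketched as a small planning function: one job per viewport, each carrying a renderer fallback chain. This is an illustrative sketch only; the names (`CaptureJob`, `FALLBACKS`, the renderer labels) are assumptions, not Corexi's actual API.

```python
from dataclasses import dataclass

# Viewport widths match the ones listed above (desktop 1440px, mobile 390px);
# heights are illustrative placeholders.
VIEWPORTS = {"desktop": (1440, 900), "mobile": (390, 844)}

# Hypothetical renderer fallback chain for bot-protection-aware rendering.
FALLBACKS = ["headless-chromium", "stealth-chromium", "remote-render"]

@dataclass
class CaptureJob:
    url: str
    viewport: str
    width: int
    height: int
    renderer_chain: list

def plan_capture(url: str) -> list:
    """Build one capture job per viewport, each with the full fallback chain."""
    return [
        CaptureJob(url, name, w, h, list(FALLBACKS))
        for name, (w, h) in VIEWPORTS.items()
    ]
```

If the first renderer in the chain is blocked, a real implementation would retry the same job with the next entry before giving up.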
Behavioral Analytics Layer
Signal Fusion
Connect GA4, Clarity, Hotjar, Mixpanel, or Amplitude. Corexi ingests sessions, bounce rates, rage clicks, dead clicks, and engagement metrics. These signals are cross-validated with visual findings to surface issues that only show up in real user behavior.
- 6 analytics providers supported (GA4, Clarity, Hotjar, Mixpanel, Amplitude, Firebase)
- Behavioral signal correlation with visual findings
- Rage click and dead click hotspot mapping
- Session-level engagement pattern analysis
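Cross-validating behavioral signals with visual findings can be sketched as a simple spatial join: count rage-click coordinates that land inside a finding's bounding box. The function and field names here are assumptions for illustration, not the product's real schema.

```python
def clicks_in_box(clicks, box):
    """Count click coordinates falling inside a bounding box (x, y, w, h)."""
    x, y, w, h = box
    return sum(1 for cx, cy in clicks if x <= cx <= x + w and y <= cy <= y + h)

def correlate(findings, rage_clicks, threshold=3):
    """Flag visual findings whose region also attracts rage clicks.

    A finding is corroborated when at least `threshold` rage clicks fall
    inside its bounding box; the hit count is attached as evidence.
    """
    corroborated = []
    for f in findings:
        hits = clicks_in_box(rage_clicks, f["box"])
        if hits >= threshold:
            corroborated.append({**f, "rage_clicks": hits})
    return corroborated
```

A finding backed by both a visual defect and a behavioral hotspot is a stronger candidate for prioritization than either signal alone.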
PX Engine
Hybrid Reasoning
The PX Engine combines visual analysis with behavioral signals using hybrid reasoning. It weighs evidence from multiple sources, applies category-specific scoring models, and generates the final PX Score with prioritized, fix-ready findings.
- Weighted multi-source scoring across 5 UX dimensions
- Confidence scoring based on data coverage
- Neurodiversity lens (ADHD, dyslexia, autism, sensory, color vision)
- Stack-aware fix code generation for every finding
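The weighted composite with coverage-based confidence could look like the following minimal sketch. The dimension names and weights are placeholders; this section does not publish Corexi's actual weighting.

```python
# Placeholder weights over five illustrative UX dimensions (must sum to 1.0).
WEIGHTS = {"visual": 0.30, "accessibility": 0.25, "behavioral": 0.25,
           "performance": 0.10, "content": 0.10}

def px_score(scores: dict) -> tuple:
    """Weighted mean over available dimensions, plus a confidence value.

    Confidence is the share of total weight actually backed by data, so a
    scan without connected analytics scores fewer dimensions and reports
    lower confidence rather than a misleadingly precise number.
    """
    covered = {k: w for k, w in WEIGHTS.items() if k in scores}
    coverage = sum(covered.values())
    if coverage == 0:
        return 0.0, 0.0
    composite = sum(scores[k] * w for k, w in covered.items()) / coverage
    return round(composite, 1), round(coverage, 2)
```

Normalizing by the covered weight keeps partial scans comparable to full ones: missing dimensions lower confidence instead of dragging the score toward zero.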
Custom training layers
Our foundation models are enhanced with UX-specific training data.
50K+
Annotated UX patterns
Categorized by severity, component type, and industry
WCAG 2.2
Full standard corpus
Every success criterion mapped to evaluation rules
600+
E-commerce UX patterns
Checkout flows, product pages, and other conversion-critical UI
9
Specialized scoring models
One per UX category, each with distinct evaluation criteria
Analysis pipeline
From URL to report in under 60 seconds.
Capture
~3-8s
Headless browser renders at desktop + mobile viewports
Analyze
~8-15s
Multi-model vision pipeline scores 9 UX categories
Enrich
~1-3s
Behavioral analytics data merged (if connected)
Score
<1s
PX Engine computes weighted composite + neurodiversity lens
Generate
~2-5s
Fix code written for your stack + findings prioritized
Deliver
<1s
Report with annotated screenshots and actionable output
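Summing the stage budget above confirms the under-60-second claim: even at every upper bound the pipeline finishes well within the target. The durations below mirror the figures in this section (the sub-second stages are treated as 0-1s).

```python
# Stage timing budget from the pipeline above: (name, min_s, max_s).
STAGES = [("Capture", 3, 8), ("Analyze", 8, 15), ("Enrich", 1, 3),
          ("Score", 0, 1), ("Generate", 2, 5), ("Deliver", 0, 1)]

def budget():
    """Total best-case and worst-case wall time across all stages."""
    lo = sum(a for _, a, _ in STAGES)
    hi = sum(b for _, _, b in STAGES)
    return lo, hi
```

This assumes the stages run strictly in sequence, as the layer descriptions imply; any overlap between stages would only shorten the worst case.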
Transparency and limitations
We believe in being honest about what Corexi can and cannot do.
What about dynamic content?
Corexi captures the initial render state. Lazy-loaded content, modals triggered by interaction, and infinite scroll areas may not be fully analyzed in a single scan.
How accurate is the AI?
Our multi-model pipeline achieves strong precision on structural issues (contrast, spacing, hierarchy). Subjective design quality is harder. We always show evidence so you can verify.
What about authenticated pages?
Currently, Corexi scans publicly accessible pages. Authenticated page scanning is on our roadmap.
How is my data handled?
Screenshots are processed in memory, scored, then stored encrypted. Analytics connections use read-only OAuth tokens. We never modify your analytics or codebase.
Want to see the numbers behind the scores?