Multi-Agent AI
Odin Scan uses multiple independent AI models to analyze smart contract code. Each model operates on the same codebase, and their findings are aggregated to boost confidence through cross-model agreement.
How Multiple Models Work Together
Odin Scan maintains a lineup of the most advanced large language models available. Each model brings different reasoning strengths to the analysis. Running them in parallel and comparing results produces higher-quality findings than any single model alone.
The specific models are continuously updated as new, more capable models become available – ensuring Odin Scan always uses state-of-the-art AI for vulnerability detection.
Basic vs Pro Analysis
The depth of AI analysis depends on your subscription plan:
- Basic plan – Odin Scan uses a single, capable AI model to analyze your code. This provides solid coverage for catching common vulnerability patterns and is well-suited for quick checks during development.
- Pro plan – Odin Scan runs multiple bleeding-edge AI models in parallel, each analyzing independently. Cross-model consensus boosts finding confidence and catches subtle vulnerabilities that a single model might miss. This is the full multi-agent pipeline described below.
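As a rough illustration, the plan tier can be thought of as selecting which models participate in a run. The plan keys and model names in the sketch below are placeholders, not Odin Scan's actual configuration:

```python
# Hypothetical sketch: the subscription plan determines how many models run.
# Plan tiers and model identifiers are illustrative placeholders.
PLAN_MODELS = {
    "basic": ["frontier-model-a"],                                       # single capable model
    "pro":   ["frontier-model-a", "frontier-model-b", "frontier-model-c"],  # full multi-agent lineup
}

def models_for_plan(plan: str) -> list[str]:
    """Return the AI models that will analyze the contract for this plan."""
    return PLAN_MODELS.get(plan, PLAN_MODELS["basic"])
```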
How It Works
Independent Analysis
Each AI model receives the smart contract source code along with relevant project context (compiler version, audit history, trust model). The models analyze the code independently – they do not see each other’s results. This independence is intentional: correlated findings from independent sources provide stronger signal than findings from a single source.
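The sketch below shows the shape of this step, assuming an asynchronous pipeline that fans the same inputs out to every model. The function and parameter names are illustrative, not Odin Scan's real API:

```python
# A minimal sketch of the independent-analysis step, assuming each model is
# reached through some provider client; names here are placeholders.
import asyncio

async def analyze_with_model(model: str, source: str, context: dict) -> list[dict]:
    """Ask a single model for findings on the contract source.

    In a real pipeline this would call the model provider's API; here it
    returns an empty list so the sketch stays self-contained.
    """
    return []

async def run_independent_analysis(models: list[str], source: str, context: dict) -> list[list[dict]]:
    # Every model receives the same source code and project context and runs
    # concurrently. Crucially, no model's prompt includes another model's findings.
    tasks = [analyze_with_model(m, source, context) for m in models]
    return await asyncio.gather(*tasks)
```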
Confidence Boosting
When two or more models identify the same vulnerability (matched by category and code location), the finding’s confidence level is automatically increased:
- A finding from a single model retains its original confidence
- A finding confirmed by two models is promoted to at least Medium confidence
- A finding confirmed by all models is promoted to High confidence
This mechanism filters noise. If only one model flags an issue and the others do not, it may still be valid, but it receives lower confidence and is subject to stricter verification.
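The promotion rules above can be expressed roughly as follows. The finding fields, matching keys, and three-level confidence scale are assumptions made for the sake of the example:

```python
# A sketch of the confidence-boosting rule described above. The finding
# shape, confidence scale, and matching keys are illustrative assumptions.
from collections import defaultdict

LEVELS = ["Low", "Medium", "High"]  # ordered from weakest to strongest

def boost_confidence(per_model_findings: list[list[dict]]) -> list[dict]:
    """Merge findings from independent models and raise confidence on agreement."""
    total_models = len(per_model_findings)
    groups: dict[tuple, list[dict]] = defaultdict(list)

    # Two findings "agree" when they share a vulnerability category and a code location.
    for model_findings in per_model_findings:
        seen = set()
        for f in model_findings:
            key = (f["category"], f["location"])
            if key not in seen:            # count each model at most once per finding
                groups[key].append(f)
                seen.add(key)

    merged = []
    for (category, location), findings in groups.items():
        agreeing = len(findings)
        confidence = max((f["confidence"] for f in findings), key=LEVELS.index)
        if total_models > 1 and agreeing == total_models:
            confidence = "High"                                       # all models agree
        elif agreeing >= 2:
            confidence = max(confidence, "Medium", key=LEVELS.index)  # at least Medium
        merged.append({"category": category, "location": location,
                       "confidence": confidence, "agreeing_models": agreeing})
    return merged
```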
Context-Aware Analysis
Odin Scan provides the AI models with repository context so they can reason about the broader environment of the contract. This includes:
- Project description from README files
- Compiler version and configuration
- Audit history with references to previous security reviews
- Interface details about external integrations
- Trust model information about administrative privileges
This context helps the AI models reduce false positives that would arise from analyzing code in isolation. For example, an access control pattern flagged as missing may be intentional in a contract where the admin is a governance module.
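As a rough picture of what this context looks like when handed to a model, the field names below simply mirror the items listed above; they are not Odin Scan's actual schema:

```python
# A sketch of the repository context provided to each model; field names are
# illustrative and only mirror the items listed above.
from dataclasses import dataclass, field

@dataclass
class RepoContext:
    readme_summary: str                 # project description taken from README files
    compiler_version: str               # pinned compiler / toolchain version and configuration
    audit_history: list[str] = field(default_factory=list)        # prior security reviews
    external_interfaces: list[str] = field(default_factory=list)  # integrations the contract interacts with
    trust_model: str = ""               # who holds admin privileges (EOA, multisig, governance module)
```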
Platform-Specific Analysis
Each platform receives tailored analysis that focuses on the vulnerability patterns most relevant to that ecosystem:
- CosmWasm – entry point access control, state management, cross-contract call safety, IBC-related risks, and cosmwasm-std API misuse
- EVM – reentrancy, integer overflow, storage layout, delegatecall risks, flash loan attacks, and ERC standard compliance
- Solana – account validation, signer checks, PDA derivation, CPI safety, and rent-exemption issues
The analysis rules are continuously updated as new vulnerability patterns emerge across each ecosystem.
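A simplified view of how these platform-specific focus areas might be looked up is sketched below; the real rule sets are far more detailed than this mapping:

```python
# An illustrative mapping from platform to the vulnerability themes listed
# above; the actual rule sets are richer and continuously updated.
PLATFORM_FOCUS = {
    "cosmwasm": ["entry point access control", "state management",
                 "cross-contract call safety", "IBC-related risks", "cosmwasm-std API misuse"],
    "evm":      ["reentrancy", "integer overflow", "storage layout",
                 "delegatecall risks", "flash loan attacks", "ERC standard compliance"],
    "solana":   ["account validation", "signer checks", "PDA derivation",
                 "CPI safety", "rent-exemption issues"],
}

def focus_areas(platform: str) -> list[str]:
    """Return the vulnerability themes emphasized for a given platform."""
    return PLATFORM_FOCUS.get(platform.lower(), [])
```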