How Adoptic's evaluation framework is built on research-validated variables and auditable evidence.
Adoptic's evaluation framework is built on 73 research-validated variables, each identified in the research literature as either a cause of failure or a prerequisite for the successful adoption of innovations.
Each variable is evaluated across six discrete score levels (0–5). Counting the five evidence-bearing levels (1–5) for each of the 73 variables yields 73 × 5 = 365 distinct evaluation points across the full framework.
Each variable is scored on a scale from 0 to 5:
| Score | Meaning |
|---|---|
| 0 | No evidence found in the submitted documents |
| 1 | Minimal evidence — early acknowledgement only |
| 2 | Partial evidence — some aspects addressed |
| 3 | Moderate evidence — substantive but incomplete |
| 4 | Strong evidence — well addressed with minor gaps |
| 5 | Fully addressed — comprehensive evidence present |
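The rubric above can be modeled as a simple enumeration, which also makes the 365-point arithmetic explicit. This is an illustrative sketch only; the names `EvidenceScore` and `NUM_VARIABLES` are assumptions, not Adoptic's actual API.

```python
from enum import IntEnum

class EvidenceScore(IntEnum):
    """Hypothetical encoding of Adoptic's 0-5 evidence scale."""
    NONE = 0      # no evidence found in the submitted documents
    MINIMAL = 1   # early acknowledgement only
    PARTIAL = 2   # some aspects addressed
    MODERATE = 3  # substantive but incomplete
    STRONG = 4    # well addressed with minor gaps
    FULL = 5      # comprehensive evidence present

NUM_VARIABLES = 73

# Only the five evidence-bearing levels (1-5) count as evaluation points;
# a score of 0 records the absence of evidence.
evidence_levels = [s for s in EvidenceScore if s > EvidenceScore.NONE]

total_evaluation_points = NUM_VARIABLES * len(evidence_levels)
print(total_evaluation_points)  # 365
```

Treating the scale as an `IntEnum` keeps scores ordinal and comparable (e.g. `score >= EvidenceScore.MODERATE`) while preserving the human-readable label for each level.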
Adoptic uses a large language model (LLM) to identify linguistic patterns within submitted documents. This approach is fundamentally different from generative AI use cases:
Adoptic takes a strict approach to data handling in its use of LLM technology: