Most career tools tell you what you want to hear. Runway is designed to tell you what you need to know — grounded in published research, calibrated against real assessment data, and updated quarterly as AI capabilities change. This page explains what we measure, why, and what makes the approach different.
Every assessment produces three scores (0–100) plus a runway estimate. Together they capture your structural position — not your skills, not your potential, but where your role sits relative to current and near-future AI capability.
Automation Risk
How much of your current task mix falls within existing or near-term AI automation capability. Higher = more structurally exposed.
Augmentation Opportunity
How much of your work could be meaningfully accelerated by AI tools — without replacing the human judgment required. Higher = more leverage available.
Defensive Strength
How much of your role relies on capabilities that are structurally hard to automate: regulated judgment, trust, proprietary context, physical presence. Higher = stronger position.
In addition to your current scores, Runway projects where your risk will be in 6–36 months based on how quickly AI capability is advancing in each area of your work. The gap between current and projected risk is often the most important signal.
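The three scores plus the projection can be pictured as a small record type. This is an illustrative sketch only; the field names and the 24-month horizon are assumptions, not Runway's actual schema:

```python
from dataclasses import dataclass

@dataclass
class AssessmentScores:
    """Hypothetical shape of one assessment result (all names illustrative)."""
    automation_risk: int           # 0-100, higher = more structurally exposed
    augmentation_opportunity: int  # 0-100, higher = more AI leverage available
    defensive_strength: int        # 0-100, higher = harder to automate
    projected_risk_24m: int        # 0-100, risk projected 24 months out
    runway_months: int             # estimated months before material exposure

    @property
    def risk_gap(self) -> int:
        """Gap between projected and current risk -- often the key signal."""
        return self.projected_risk_24m - self.automation_risk

scores = AssessmentScores(62, 48, 35, 81, 18)
print(scores.risk_gap)  # 19
```

The point of the `risk_gap` property is the claim above: a modest current score paired with a steep projection is a different situation from the same score holding flat.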
Anti-reassurance
Your results include a "Comfortable Version" — the safe, hedged interpretation of your scores. Then we show you the real one. Every assessment is designed to surface uncomfortable truths, not validate existing beliefs.
Adaptive results
Your results page is structurally different from anyone else's. Content ordering, section emphasis, tone, pathway count, and time horizons all adapt based on your scores, urgency, and psychology. Nobody gets a template.
Live market research
Every assessment triggers real-time web research for your specific role, industry, and region. Your analysis is grounded in what's happening now, not a static dataset.
Pathway stress-testing
Career pathways aren't just suggestions — each one is validated against projected risk. If a "safe" destination is also compressing, we tell you. Pathways that don't meaningfully improve your position are rejected.
Scoring follows a two-phase approach. Phase 1 is deterministic — your task allocation is scored against research-calibrated risk and defensibility ratings, adjusted for your work environment and seniority level. Phase 2 uses an LLM to review your diagnostic answers and real-time market research, making bounded adjustments that capture context the formulas cannot.
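The two phases can be sketched as follows. The weighting formula, modifier values, and the ±10-point adjustment band are assumptions for illustration, not Runway's published calibration:

```python
def phase1_deterministic(task_mix, risk_ratings,
                         env_modifier=1.0, seniority_modifier=1.0):
    """Phase 1 sketch: weighted average of per-task risk ratings, scaled by
    environment and seniority modifiers, clamped to 0-100.
    task_mix maps task -> share of time (shares sum to 1.0);
    risk_ratings maps task -> calibrated 0-100 risk. Names are illustrative."""
    base = sum(share * risk_ratings[task] for task, share in task_mix.items())
    return max(0.0, min(100.0, base * env_modifier * seniority_modifier))

def phase2_bounded_adjustment(base_score, llm_delta, max_delta=10.0):
    """Phase 2 sketch: the LLM's proposed delta is clamped to a fixed band,
    so it can refine the deterministic score but never overturn it."""
    clamped = max(-max_delta, min(max_delta, llm_delta))
    return max(0.0, min(100.0, base_score + clamped))

mix = {"reporting": 0.5, "client_negotiation": 0.3, "data_entry": 0.2}
ratings = {"reporting": 70, "client_negotiation": 25, "data_entry": 90}
base = phase1_deterministic(mix, ratings, env_modifier=1.05)   # 63.525
final = phase2_bounded_adjustment(base, llm_delta=15)          # delta clamped to +10
```

The clamp is what "bounded" means in practice: however persuasive the LLM's reasoning, its influence on the final number has a hard ceiling.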
What drives your scores
All scoring weights are calibrated from published research and updated quarterly. LLM adjustments are bounded, must cite evidence, and are shown in your results with full reasoning.
Your results aren't calculated in isolation. Runway maintains market benchmarks across roles, industries, and seniority levels — continuously refined as more assessments are completed.
What cohort data adds
Cohort benchmarks improve with every assessment. The more data we collect for a role/industry combination, the more precise the benchmarks become for everyone in that cohort.
Every assessment includes a confidence score reflecting input quality. Score intervals widen when input quality is lower — more specific answers produce tighter ranges. We also rate the data quality of your market research: strong, moderate, or limited.
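The relationship between input quality and interval width can be sketched like this. The half-width formula is an illustrative assumption, not Runway's actual model; the only claim carried over from the text is that lower confidence produces a wider band:

```python
def score_interval(score, confidence):
    """Return a (low, high) band around a 0-100 score.
    confidence is input quality in [0, 1]: 1.0 gives a tight +/-3 band,
    0.0 gives a wide +/-20 band (both widths are assumed for illustration)."""
    half_width = 3 + (1 - confidence) * 17
    low = max(0, round(score - half_width))
    high = min(100, round(score + half_width))
    return low, high

print(score_interval(62, 0.9))  # specific answers, tight band: (57, 67)
print(score_interval(62, 0.4))  # vague answers, wide band: (49, 75)
```

Same central score, very different bands — which is why the page above reports a confidence score alongside the numbers rather than the numbers alone.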
What reduces confidence
When data quality is limited, narratives explicitly state this and note which insights are inferred rather than sourced. Source verification checks citations against known credible publications.
Scoring calibration and market benchmarks draw from published research and industry data. Real-time market research is performed for every individual assessment.
Last updated: 2026-Q1
Scores reflect task structure and environment — not individual capability, work quality, or adaptability.
Market benchmarks represent averages for a role type. Your specific employer, sector, and team may deviate significantly.
AI capability is moving faster than any benchmark update cycle. Scores become stale. We update quarterly.
Runway estimates assume no active adaptation. People who adapt reduce their exposure.
This tool does not constitute career advice. It provides structured data to inform your own decision-making.
We update the scoring model as AI capabilities evolve and new research becomes available. Each version is tagged in assessment records so your scores remain reproducible.
Added 24-month exponential acceleration curves, 30+ industry-specific acceleration multipliers, V5 task dimension scoring, expanded scoring manifest with near-future vulnerability weights.
Introduced environment modifiers (company size, AI policy, industry pace, remote work, region), seniority adjustments, confidence intervals.
Initial public release. Task-weighted vulnerability scoring across 160+ occupational categories with defensibility weights.
This is a model, not a prophecy. The value is in using it to ask better questions — not in treating the numbers as facts.
Take the assessment →