Evidence-backed analysis of how AI automation affects Lawyers / Legal Counsels. Scores are derived from published research, including McKinsey, BLS, Stack Overflow, and industry data.
Automation Risk
Defensive Strength
Estimated Runway
6+ Years
Market Intelligence
Harvey AI, Thomson Reuters CoCounsel, and LexisNexis Protege now handle autonomous document review, legal research, and due diligence. Baker McKenzie cut 600–1,000 staff in January 2026, explicitly citing AI; the cuts fell primarily on support staff. Corporate legal AI adoption jumped from 23% to 52% in one year. However, practising attorney headcount at AmLaw 100 firms is NOT declining: personal liability, courtroom advocacy, and client trust remain irreducibly human. Goldman Sachs estimates 17.2% of legal work is at direct risk, measured at the level of individual tasks rather than whole roles.
Source: Based on Thomson Reuters Legal AI Adoption Survey 2025, Baker McKenzie restructuring announcement Jan 2026, Goldman Sachs Future of Legal Work 2025, and Legal Technology Association adoption data.
Task Breakdown — Time Allocation vs. Vulnerability
Highest Exposure Areas
Writing / Summarising / Documentation
GPT-5 Deep Research and Claude already produce publication-quality reports, emails, and documentation. By 2027, AI writing assistants will handle first-draft creation for virtually all standard business documents with minimal human input.
Analysis / Reporting
Standard analysis and reporting is already being absorbed by AI at the enterprise level; McKinsey ranks analysis among the task categories with the sharpest automation increases. The defensible remainder is interpretation that requires proprietary context, and that window is closing.
Customer / Stakeholder Communication
AI agents now handle routine customer communication autonomously. The protection in this task comes from novel relationship context and trust, which erodes when your client interactions become standardised or when AI gains sufficient context to replicate the pattern.
Strongest Defenses
Compliance / Risk / Regulated Judgement
Regulatory requirements create a genuine structural moat — human sign-off requirements under EU AI Act, financial regulations, and professional liability standards. The near-future pressure: AI handles the interpretation and analysis; the human role narrows to final sign-off and accountability.
Decision-Making Under Uncertainty
This remains one of the most defensible task categories — AI struggles with genuine novelty and accountability. The erosion condition: as AI decision-support tools become standard, the bar for what counts as 'genuine uncertainty' rises, and roles that mostly execute defined playbooks lose this protection.
Negotiation / Persuasion
Live negotiation remains human-critical because it depends on real-time reading of counterparties and on personal credibility. The near-future pressure comes from AI handling preparation, concession modelling, and post-deal documentation, compressing the human contribution to the negotiation itself.
This is the average. What about you?
The average Lawyer / Legal Counsel scores 28/100 on automation risk. But your specific role, environment, and task allocation could place you higher or lower. Get your personalised score in about 10 minutes.