Overview
The Financial Services Task Inventory is a structured reference dataset cataloguing the work performed across a large financial services organization. It decomposes 15 L1 business functions into 70 L2 processes and 253 L3 activities, ultimately defining 4,075 individual L4 tasks, each annotated with cognitive complexity, regulatory classification, cross-functional dependencies, skills requirements, and AI exposure assessments.
The inventory is designed as a foundation for strategic workforce planning, job redesign, and AI transformation analysis. Rather than relying on a single AI exposure metric, the project implements a transparent 3-tier assessment framework that layers peer-reviewed research, empirical data, and forward-looking indicators — each with clearly documented sources and limitations.
3-Tier AI Assessment Framework
Every AI exposure metric in this inventory is traceable to a specific source. No scores are fabricated or interpolated. The framework is designed so that users can evaluate the evidence behind each tier independently.
Eloundou E0 / E1 / E2 Classification
Each task is classified using the categorical rubric from Eloundou et al.'s "GPTs are GPTs" (2023 working paper, later published in Science). E0 = no meaningful LLM exposure; E1 = direct LLM use can reduce task time by 50%+; E2 = software built on top of an LLM can reduce task time by 50%+. Task-level labels are aggregated via the paper's beta exposure measure, which counts E1 tasks at full weight and E2 tasks at half weight.
Source: Eloundou et al. (2024). "GPTs are GPTs: Labor market impact potential of large language models." Science, 384. Working paper first released 2023.
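As a sketch of how the categorical labels roll up, the beta measure counts E1 tasks at full weight and E2 tasks at half weight (the helper below is illustrative, not the inventory's actual code):

```python
def exposure_scores(labels):
    """Aggregate Eloundou E0/E1/E2 task labels into exposure measures.

    alpha counts only direct exposure (E1); beta adds E2 at half
    weight; zeta counts E1 and E2 fully (per Eloundou et al.).
    """
    n = len(labels)
    e1 = sum(1 for lbl in labels if lbl == "E1") / n
    e2 = sum(1 for lbl in labels if lbl == "E2") / n
    return {"alpha": e1, "beta": e1 + 0.5 * e2, "zeta": e1 + e2}

# Example: ten tasks in a hypothetical L3 activity
scores = exposure_scores(["E1"] * 4 + ["E2"] * 2 + ["E0"] * 4)
# alpha = 0.4, beta = 0.4 + 0.5 * 0.2 = 0.5, zeta = 0.6
```

The same aggregation can be applied at any level of the hierarchy (L3 activity, L2 process, or L1 function) by pooling the task labels beneath it.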
Anthropic Economic Index (AEI) Success Rates
Where available, tasks are matched to empirical success rates from the Anthropic Economic Index — derived from 2 million real Claude.ai conversations (November 2025 data). 1,709 of 4,075 tasks (42%) have an AEI match. Success rates reflect observed task completion, not theoretical capability.
Source: Anthropic Economic Index (January 2026). CC-BY, HuggingFace. Note: Anthropic is a commercial AI lab with potential interest in demonstrating AI capability.
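A minimal sketch of how partial AEI coverage can be joined onto task records, assuming a simple name-keyed lookup; the actual matching procedure and field names in the inventory may differ:

```python
def attach_aei(tasks, aei_rates):
    """Left-join AEI success rates onto task records by task name.

    Unmatched tasks keep aei_success_rate=None, mirroring the
    inventory's partial (42%) coverage. Field names are illustrative.
    """
    out = []
    for record in tasks:
        record = dict(record)  # copy, don't mutate the input
        record["aei_success_rate"] = aei_rates.get(record["task"])
        out.append(record)
    coverage = sum(r["aei_success_rate"] is not None for r in out) / len(out)
    return out, coverage

tasks = [{"task": "Reconcile settlements"},
         {"task": "Draft memo"},
         {"task": "Approve limits"}]
rates = {"Draft memo": 0.82}
enriched, coverage = attach_aei(tasks, rates)  # coverage = 1/3
```

Keeping unmatched tasks as `None` rather than imputing a value is consistent with the framework's rule that no scores are interpolated.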
Agentic Potential Flag
A constructed indicator (High / Medium / Low) identifying tasks where agentic AI — multi-step workflows, tool orchestration, code execution — may expand capabilities beyond what basic LLM chatbot interaction can achieve. This is an analytical construct, not an empirical measurement.
Source: WARLAB-constructed heuristic based on task description analysis. No peer-reviewed validation.
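Because the flag is a heuristic rather than a measurement, it can be illustrated with a toy keyword rule of the following shape; the keywords here are invented for illustration and are not the actual WARLAB rules:

```python
# Hypothetical signal words; the real heuristic is not published.
AGENTIC_SIGNALS = {
    "high": ("reconcile", "monitor", "orchestrate", "execute", "file"),
    "medium": ("draft", "compile", "summarize", "review"),
}

def agentic_potential(description):
    """Classify a task description as high/medium/low agentic potential."""
    text = description.lower()
    for level, keywords in AGENTIC_SIGNALS.items():
        if any(k in text for k in keywords):
            return level
    return "low"

agentic_potential("Reconcile daily settlement breaks")  # -> "high"
agentic_potential("Attend steering committee")          # -> "low"
```

A rule of this kind is cheap and transparent, but, as the source note says, it has no peer-reviewed validation and should be read as an analytical construct.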
Taxonomy Structure
The inventory follows a four-level hierarchy designed to map to real organizational structures in financial services. L1 functions represent major business areas (e.g., Retail Banking, Risk Management, Technology & Operations). L2 processes decompose functions into operational domains. L3 activities represent discrete work areas. L4 tasks are the atomic unit — individual activities that can be assigned to a role and assessed for AI exposure.
Each task carries 18 structured fields including cognitive complexity (Bloom's taxonomy), regulatory classification, defense line mapping (Three Lines of Defense), cross-functional flags, skills requirements, and all three tiers of AI exposure assessment.
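A record with this shape could be sketched as follows, showing the four hierarchy levels plus a subset of the 18 fields (all field names are illustrative):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    """One L4 task record (subset of the 18 fields; names illustrative)."""
    l1_function: str                    # e.g. "Retail Banking"
    l2_process: str                     # operational domain
    l3_activity: str                    # discrete work area
    task: str                           # the atomic L4 unit
    blooms_level: str                   # cognitive complexity (Bloom's taxonomy)
    regulatory_class: str               # regulatory classification
    defense_line: Optional[int]         # Three Lines of Defense (1-3), if any
    eloundou_label: str                 # Tier 1: E0 / E1 / E2
    aei_success_rate: Optional[float]   # Tier 2: empirical, where matched
    agentic_potential: str              # Tier 3: High / Medium / Low
```

Note that the three AI exposure tiers sit side by side as separate fields rather than being collapsed into one score, matching the framework's design.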
Data Sources
The inventory draws on publicly available occupational frameworks and research, including O*NET 30.2 task descriptions, SOC occupation codes, industry certification requirements, regulatory frameworks (Basel III, IFRS 9, AML/KYC), and financial services domain expertise. AI exposure classifications are sourced from Eloundou et al. (2023) and the Anthropic Economic Index (2026). No proprietary employer data is used.
Tools & Technologies
Key Artifacts
- Assessment Framework & Methodology: 3-tier AI exposure framework, scoring methodology, data source documentation
- Interactive Task Explorer: filterable, searchable interface for all 4,075 tasks with drill-down detail
- Executive Dashboard: KPI cards, distribution charts, and strategic summary
- Hay Method Runbook: Korn Ferry Hay Method integration for job evaluation
- GitHub Repository: source code and deployed application
Governance Notes
This inventory is a reference dataset constructed from public sources for educational and analytical purposes. It does not represent any specific employer's organizational structure, job descriptions, or workforce data. AI exposure assessments are based on published research and should not be used as the sole basis for workforce decisions. All metrics include transparent source attribution so users can evaluate the evidence independently.