A skills-focused simulation suggests that AI's technical reach in the U.S. labor market is far larger and more geographically dispersed than visible adoption indicates: current AI capabilities overlap with about 11.7% of U.S. wages (~$1.2tn), versus roughly 2.2% (~$211bn) for present adoption, which is concentrated in tech hubs.
Artificial intelligence is reshaping America's labor market, worth more than $9.4 trillion, with cascading effects that extend far beyond visible technology sectors. When AI automates quality control in automotive plants, the consequences spread through logistics networks, supply chains, and local service economies. Yet traditional workforce metrics cannot capture these ripple effects: they measure employment outcomes after disruption occurs, not where AI capabilities overlap with human skills before adoption crystallizes.

Project Iceberg addresses this gap using Large Population Models to simulate the human–AI labor market, representing 151 million workers as autonomous agents executing over 32,000 skills across 3,000 counties and interacting with thousands of AI tools. It introduces the Iceberg Index, a skills-centered metric that measures the wage value of skills AI systems can perform within each occupation. The Index captures technical exposure, where AI can perform occupational tasks, not displacement outcomes or adoption timelines.

Analysis shows that visible AI adoption, concentrated in computing and technology (2.2% of wage value, approximately $211 billion), represents only the tip of the iceberg. Technical capability extends far below the surface through cognitive automation spanning administrative, financial, and professional services (11.7%, approximately $1.2 trillion). This exposure is fivefold larger and geographically distributed across all states rather than confined to coastal hubs. Traditional indicators such as GDP, income, and unemployment explain less than 5% of this skills-based variation, underscoring why new indices are needed to capture exposure in the AI economy.

By simulating how capabilities may spread under alternative scenarios, Project Iceberg enables policymakers and business leaders to identify exposure hotspots, prioritize training and infrastructure investments, and test interventions before committing billions to implementation.
Project Iceberg is built with the AgentTorch framework.
Summary
Main Finding
Project Iceberg introduces the Iceberg Index, a skills-centered metric that quantifies the wage-value share of occupational tasks AI systems can technically perform. Using Large Population Models to represent 151 million U.S. workers (923 occupations, 3,000+ counties) and a catalog of 13,000+ AI tools mapped to 32,000+ skills, the baseline Index shows that visible AI adoption in computing/tech (the "surface") accounts for ~2.2% of wage value (~$211B), while technical capability beneath the surface spans administrative, financial, and professional services and totals ~11.7% (~$1.2T) of wage value, about five times larger and geographically widespread. The Index measures exposure (capability overlap), not displacement or timing.
Key Points
- What the Iceberg Index measures
  - Skills-centered KPI: the share of an occupation's wage value tied to skills AI can technically perform.
  - Focus: technical exposure (where human and AI capabilities overlap), not forecasts of job loss or adoption timing.
  - Construction: weights each skill by importance, automatability (tools + LLM tool use), and prevalence to produce a 0–100% exposure score per occupation.
- Scale and scope
  - Human population: 151M workers, 923 occupations, 32,000+ skills, 3,000+ counties.
  - AI population: 13,000+ tools (copilots, workflow automation, etc.).
  - Simulation engine: Large Population Models implemented in AgentTorch, run on Oak Ridge's Frontier supercomputer.
- Key quantitative results
  - Surface Index (observable tech-sector adoption): 2.2% of wage value (~$211B).
  - Iceberg Index (baseline technical capability across the economy): 11.7% of wage value (~$1.2T).
  - Traditional metrics (GDP, income, unemployment) explain <5% of cross-regional variation in skills-based exposure.
- Validation
  - Skill embeddings vs. career transitions: 85% recall for predicting commonly observed occupation-to-occupation moves, supporting that skill-based representations capture real labor-market structure.
  - Adoption alignment: 69% geographic agreement between the paper's Surface Index and Anthropic Economic Index (AEI) usage tiers.
- Uses
  - Baseline assessment of maximum technical exposure (no adoption assumptions).
  - Input to scenario simulations that model adoption speed, transferability, and policy interventions.
  - Policymaker applications: identify exposure hotspots, prioritize training/infrastructure, reassess multiplier assumptions, test interventions before large investments.
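The <5% variance figure above comes from regressing skills-based exposure on traditional macro indicators. A minimal sketch of that computation, using synthetic placeholder data rather than the paper's actual county statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for ~3,000 counties: three standardized macro
# indicators (GDP per capita, median income, unemployment) and a
# skills-based exposure score. Real inputs would come from published
# county statistics and the Iceberg Index itself.
n = 3000
macro = rng.normal(size=(n, 3))
exposure = rng.normal(size=n)

# OLS fit of exposure on the macro indicators (with intercept).
X = np.column_stack([np.ones(n), macro])
beta, *_ = np.linalg.lstsq(X, exposure, rcond=None)
resid = exposure - X @ beta
r2 = 1.0 - resid.var() / exposure.var()  # share of variation explained

print(f"R^2 = {r2:.3f}")  # near zero for independent synthetic data
```

With real county data, the same R² would report how much of the Iceberg Index the macro indicators actually explain; the paper's claim is that this stays below 0.05.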
Data & Methods
- Data sources and representations
  - Occupational skill taxonomy (32k+ skills) mapped from standard sources (e.g., O*NET) onto individual workers and AI tools.
  - Worker population modeled as agents with attributes: skills, tasks, location, and occupation.
  - Catalog of 13k+ AI tools mapped to the same skill taxonomy to assess tool availability for each task.
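The shared-taxonomy representation above can be sketched as simple records: workers and tools both reference the same skill identifiers. Field names and the `tool_coverage` helper are illustrative assumptions for this sketch, not the actual AgentTorch schema:

```python
from dataclasses import dataclass, field

@dataclass
class WorkerAgent:
    occupation: str              # e.g. an SOC occupation code
    county_fips: str             # county location
    wage: float                  # annual wage, used for wage-value weighting
    skills: dict[str, float]     # skill id -> importance weight

@dataclass
class AITool:
    name: str
    skills_covered: set[str] = field(default_factory=set)  # same taxonomy

def tool_coverage(worker: WorkerAgent, tools: list[AITool]) -> set[str]:
    """Skills of this worker that at least one cataloged AI tool can perform."""
    covered: set[str] = set()
    for tool in tools:
        covered |= tool.skills_covered
    return covered & worker.skills.keys()

# Usage: one worker, one documentation copilot (hypothetical names/values).
w = WorkerAgent("43-6014", "47001", 52000.0,
                {"draft_correspondence": 0.6, "schedule_meetings": 0.4})
t = AITool("doc-copilot", {"draft_correspondence", "summarize_text"})
print(tool_coverage(w, [t]))  # {'draft_correspondence'}
```

Mapping both populations onto one taxonomy is what lets the Index ask, skill by skill, whether a tool exists that overlaps a worker's capabilities.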
- Model and computation
  - Large Population Models (LPMs) simulate billions of interactions among workers, skills, and tools; implemented in AgentTorch.
  - Simulations run at national scale on Oak Ridge's Frontier supercomputer.
- Index computation
  - For each occupation, exposure = sum over skills of (skill importance × automatability × prevalence), normalized to 0–100%.
  - Automatability is flagged when evidence shows that tools exist and language models can use them to perform the skill in at least one context.
  - The baseline Index reports maximum technical exposure (capability demonstrated in any context) rather than expected adoption.
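The exposure formula above can be written out directly. The triples below (importance, automatability, prevalence) are toy values for a hypothetical occupation, not data from the paper:

```python
def occupation_exposure(skills):
    """Wage-value exposure share for one occupation, in [0, 1].

    `skills` holds (importance, automatability, prevalence) triples:
      importance     - the skill's importance weight within the occupation
      automatability - 1.0 if tools plus LLM tool use can perform the skill
                       in at least one context (baseline = max capability),
                       else 0.0
      prevalence     - how commonly the skill is exercised
    Normalizing by the total importance x prevalence mass yields the share
    of wage value tied to automatable skills.
    """
    num = sum(imp * auto * prev for imp, auto, prev in skills)
    den = sum(imp * prev for imp, _, prev in skills)
    return num / den if den else 0.0

# Toy occupation: two automatable skills, one not.
toy = [(0.5, 1.0, 0.8), (0.3, 0.0, 0.9), (0.2, 1.0, 0.5)]
print(f"{occupation_exposure(toy):.3f}")  # 0.649
```

Because automatability is binary and context-maximal here, the result is an upper bound on exposure, matching the baseline Index's "capability in any context" convention.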
- Validation approach
  - Structural validation: skill-embedding similarity compared against observed career-transition networks (85% recall).
  - Adoption validation: Surface Index rankings compared to AEI real-world usage tiers (69% agreement, highest at the extremes).
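The structural validation can be framed as recall@k: for each observed occupation-to-occupation move, check whether the destination ranks among the origin's k most similar occupations in skill-embedding space. The embeddings and transitions below are toy placeholders, not the paper's data:

```python
import numpy as np

def recall_at_k(embeddings, transitions, k=5):
    """Share of observed occupation-to-occupation moves whose destination
    is among the k most cosine-similar occupations to the origin."""
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = E @ E.T
    np.fill_diagonal(sim, -np.inf)           # exclude self-transitions
    topk = np.argsort(-sim, axis=1)[:, :k]   # k nearest neighbors per row
    hits = sum(dst in topk[src] for src, dst in transitions)
    return hits / len(transitions)

# Toy skill embeddings: occupations 0 and 1 share a profile, 2 is distinct.
emb = np.array([[1.0, 0.1], [0.9, 0.2], [0.0, 1.0]])
moves = [(0, 1), (1, 0)]                     # commonly observed transitions
print(recall_at_k(emb, moves, k=1))  # 1.0
```

High recall on real transition networks is what supports the claim that skill-based representations capture genuine labor-market structure.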
- Limitations noted in methods
  - The current baseline covers digital/cognitive tool exposure; physical/robotic automation is excluded for now.
  - The Index does not model firm adoption decisions, regulatory effects, worker acceptance, or the labor-market dynamics that determine realized outcomes.
Implications for AI Economics
- Measurement and forecasting
  - Existing labor statistics (employment, wages, GDP) miss substantial AI-mediated exposure; skills-centered metrics are needed to anticipate where capability exists before adoption becomes observable.
  - The Iceberg Index provides a forward-looking input for scenario analysis and for cost–benefit calculations on training or infrastructure investments.
- Policy and workforce strategy
  - Many exposure hotspots lie outside coastal tech hubs; states and regions should not equate low current AI employment with low exposure.
  - Policymakers can use the Index to prioritize reskilling where high-wage-value skills are automatable, design targeted transitions (e.g., administrative automation freeing clinical time), and reassess job-multiplier assumptions under increased automation.
  - The platform enables pre-deployment testing of interventions (training programs, incentives, regulations) to evaluate effectiveness before large expenditures.
- Research and market implications
  - Firms and training providers can identify transferable skill pathways and anticipate shifts between entry-level and experienced hiring demand.
  - The large "hidden" exposure suggests a substantial market for AI-augmentation tools in non-tech sectors (finance, administration, professional services).
- Cautions for interpretation
  - Exposure ≠ displacement: technical capability does not imply immediate job losses or reduced employment; outcomes depend on adoption, complementarities, policy, and worker adaptation.
  - The baseline depends on the tool catalog and mapping assumptions; results should be updated as tools, LLM capabilities, and tool-use practices evolve.
- Directions for future work
  - Incorporate physical/robotic automation and capability-benchmark data (e.g., task-level LLM benchmarks) to refine automatability estimates.
  - Use the baseline as input to dynamic adoption simulations that estimate realized labor-market impacts under alternative policy or market scenarios.
  - Extend geographic and firm-level analyses to evaluate sectoral multiplier shifts under different adoption pathways.
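The dynamic adoption simulations proposed above could take the baseline Index as a ceiling. A minimal logistic-diffusion sketch, where the adoption rate, initial seed, and horizon are purely hypothetical scenario knobs, not estimates from the paper:

```python
import numpy as np

def simulate_adoption(exposure, adoption_rate=0.3, steps=20):
    """Logistic diffusion of realized automation toward the exposure ceiling.

    `exposure` is each region's baseline Iceberg-style ceiling (share of
    wage value AI can technically perform, assumed > 0). Realized adoption
    starts from a small seed and grows toward that ceiling; a policy or
    market scenario would modulate the rate, seed, and horizon.
    """
    exposure = np.asarray(exposure, dtype=float)
    realized = 0.02 * exposure                        # small initial seed
    for _ in range(steps):
        realized = realized + adoption_rate * realized * (1.0 - realized / exposure)
    return realized

# Two toy regions whose ceilings match the headline exposure levels.
print(simulate_adoption(np.array([0.117, 0.022])))
```

Under this update realized adoption never exceeds the ceiling, which mirrors the framing of the Index as an upper bound on what adoption could reach rather than a forecast of what it will.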
Summary takeaway: The Iceberg Index reframes AI's labor-market impact as a skills-level exposure problem, revealing a much larger and more geographically dispersed potential for AI to perform valuable work than current tech-sector adoption metrics make visible, and it provides a scalable simulation framework for policymakers and firms to test strategies before committing large investments.
Assessment
Claims (14)
| Claim | Topic | Direction | Confidence | Outcome | Value | Details |
|---|---|---|---|---|---|---|
| AI technical capability in the U.S. labor market is substantially larger and far more geographically diffuse than visible adoption suggests. | Automation Exposure | mixed | medium | Difference between skills-based exposure (Iceberg Index) and visible AI-adoption wage share, and geographic dispersion of exposure across counties/states | — | n=151,000,000; 0.05 |
| Visible AI adoption concentrated in computing/technology represents about 2.2% of U.S. wage value (~$211 billion). | Labor Share | negative | medium | Percent of U.S. wage value attributed to visible AI adoption and the corresponding dollar estimate | 2.2% (~$211 billion) | n=151,000,000; 0.05 |
| Broader cognitive automation potential across administrative, financial, and professional services amounts to 11.7% (~$1.2 trillion). | Labor Share | negative | medium | Percent of U.S. wage value exposed to current AI capabilities (Iceberg Index) and the dollar estimate | 11.7% (~$1.2 trillion) | n=151,000,000; 0.05 |
| The broader cognitive automation potential is roughly five times larger than visible adoption and is geographically widespread (present across all states, not only coastal hubs). | Automation Exposure | negative | medium | Ratio of Iceberg Index wage share to visible-adoption wage share, and geographic distribution of Index values across counties/states | ≈5×; geographically widespread | n=151,000,000; 0.05 |
| Traditional macro indicators (GDP, income, unemployment) explain less than 5% of the state- and county-level variation in skills-based exposure. | Automation Exposure | negative | medium | Percent variance explained (R²) in the Iceberg Index by traditional macro indicators | <5% explained variance (R²) | n=3,000; 0.05 |
| The Iceberg Index is a skills-centered metric that measures the wage value of specific skills AI systems can perform within each occupation; it quantifies technical exposure (capability overlap), not displacement, adoption timelines, or realized outcomes. | Automation Exposure | null_result | high | Iceberg Index value (wage value of automatable skills per occupation/geography) | — | 0.09 |
| The simulation model represents 151 million U.S. workers as autonomous agents, covers 32,000+ distinct skills, links agents to thousands of AI tools, and provides county-level resolution (~3,000 U.S. counties). | Other | null_result | high | Model scope metrics: number of agents, skills, counties, and linked AI tools | 151,000,000 agents; ~32,000 skills; ~3,000 counties; thousands of AI tools | n=151,000,000; 0.09 |
| The framework supports counterfactual scenario simulations that vary capability diffusion, adoption rates, policy interventions, and firm behavior to explore how exposures might translate into outcomes. | Other | positive | high | Simulated labor-market trajectories under alternative counterfactual parameterizations (no single numeric outcome) | — | 0.09 |
| Model and simulations are implemented with the AgentTorch framework. | Other | null_result | high | Implementation platform | Implemented in AgentTorch | 0.09 |
| The Iceberg Index captures capability overlap but does not capture firm adoption choices, regulatory constraints, social acceptance, complementarity effects, or worker reallocation dynamics. | Other | null_result | high | Scope/limitations of the Iceberg Index (what it does not measure) | — | 0.09 |
| The framework can help policymakers and firms locate exposure hotspots, prioritize investments in training and infrastructure, and test interventions prior to large deployments. | Governance And Regulation | positive | medium | Decision-support capabilities: identification of exposure hotspots and evaluation of intervention scenarios (qualitative) | — | 0.05 |
| Because exposure is geographically widespread and concentrated in service and administrative work as well as tech, policy responses should be spatially and sectorally granular (county- or state-level interventions rather than only coastal/hub strategies). | Governance And Regulation | positive | medium | Recommended policy-targeting granularity based on the spatial and sectoral distribution of Iceberg Index values | — | n=3,000; 0.05 |
| Research should prioritize more granular skill-to-AI-capability mappings, longitudinal tracking of adoption vs. exposure, and integration of firm behavior and regulatory dynamics into agent-based models to move from exposure assessment toward outcome prediction. | Research Productivity | positive | low | Proposed research directions (not an empirical measurement) | — | 0.03 |
| The Iceberg Index indicates where capability exists but does not indicate whether or when job losses will occur. | Automation Exposure | null_result | high | Distinction between capability exposure (Iceberg Index) and realized job loss/adoption timing (not measured) | — | 0.09 |