The Commonplace

Demand for AI skills in knowledge work has surged since ChatGPT—appearing in 27.8% of job listings and linked to a 17.7% advertised wage premium—while adoption varies from 43.2% in high-tech to 9.7% in the public sector. Conventional appraisal systems miss hybrid human–AI capabilities, so the paper proposes a three-part performance framework (AI Tool Mastery, Collaborative Work Quality, Human–AI Synergy) for firms redesigning work.

Reconstruction of knowledge worker performance evaluation system in the ChatGPT era: an exploratory study based on human-AI collaborative work model
Zhixin Yu, Zhicheng Yu · Fetched March 17, 2026 · Future Technology
Source: semantic_scholar · Paper type: correlational · Evidence: low · Relevance: 7/10 · DOI
Using 5,000 LinkedIn job ads and 2,000 Indeed salary listings from 2022–2024, the study finds AI skills cited in 27.8% of knowledge-worker roles (a 376% increase since ChatGPT) and an associated 17.7% advertised wage premium, and proposes a three-dimensional performance measurement model—AI Tool Mastery, Collaborative Work Quality, and Human–AI Synergy—for human–AI work.

The release of ChatGPT in November 2022 disrupted knowledge-work practice and exposed the limits of performance-measurement systems designed for tasks performed by humans alone. This study addresses the gap between traditional appraisal models and AI-enabled workplaces by developing an evidence-based framework for measuring performance in human-AI collaborative settings. Drawing on a systematic analysis of 5,000 LinkedIn job adverts and 2,000 Indeed salary listings from 2022-2024, it examines how performance and skill requirements in knowledge sectors shifted after ChatGPT's release. The findings indicate that AI skills are required in 27.8% of knowledge workers' jobs, a 376% increase since ChatGPT's release. AI-trained staff command an overall wage premium of 17.7%, and occupational competence ranges from 43.2% in high-tech to 9.7% in the public sector. These systematic skill differences, the results show, cannot be captured by conventional measurement systems. The study therefore proposes a three-dimensional performance-measurement model (AI Tool Mastery, Collaborative Work Quality, and Human-AI Synergy) to capture the hybrid skills developed through human-machine collaboration, and contributes to performance-management theory by offering operational measurement solutions for companies redesigning work around AI.

Summary

Main Finding

The release of ChatGPT precipitated a rapid reconfiguration of knowledge-work performance needs: AI-related skills now appear in 27.8% of knowledge-worker jobs (a 376% increase since November 2022), AI-trained workers receive an average wage premium of 17.7%, and the study proposes a new three-dimensional performance-measurement model (AI Tool Mastery, Collaborative Work Quality, Human–AI Synergy) to capture hybrid human–AI competencies that conventional systems miss.

Key Points

  • Prevalence and growth
    • 27.8% of knowledge-worker roles now list AI skills as required or desirable.
    • This represents a 376% increase in AI-skill demand since ChatGPT’s release (Nov 2022).
  • Wage effects
    • AI-trained staff earn an estimated overall wage premium of 17.7%.
  • Sectoral heterogeneity
    • Measured occupational competence (ability to meet new hybrid skill requirements) ranges from 43.2% in high‑tech sectors down to 9.7% in the public sector.
  • Measurement gap
    • Conventional performance-measurement systems systematically fail to capture the skill differences emerging from human–AI collaboration.
  • Proposed measurement model (three dimensions)
    • AI Tool Mastery: depth of skill in using and customizing AI tools.
    • Collaborative Work Quality: ability to sustain team processes and outputs in hybrid teams.
    • Human–AI Synergy: novel hybrid competencies that emerge from effective human–AI interaction (e.g., prompting, verification, model oversight).
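Taken together, the two headline figures imply a low pre-ChatGPT baseline: if a 376% increase brought the share of adverts listing AI skills to 27.8%, the starting share was roughly 27.8 / 4.76 ≈ 5.8%. A one-line check (interpreting 376% as growth in the advert share is an assumption; the paper could also mean growth in raw mention counts):

```python
# Implied pre-ChatGPT baseline, assuming 27.8% reflects a 376% increase
# in the share of knowledge-worker adverts listing AI skills.
current_share = 27.8   # % of adverts, 2024
growth = 3.76          # 376% increase since Nov 2022
baseline = current_share / (1 + growth)
print(f"implied baseline ~ {baseline:.1f}%")  # implied baseline ~ 5.8%
```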

Data & Methods

  • Data sources and scope
    • 5,000 LinkedIn job adverts and 2,000 Indeed salary records, covering 2022–2024.
  • Approach
    • Systematic analysis of job-ad text and posted salary information to identify shifts in advertised skill needs and associated pay outcomes.
    • Quantified growth in AI-skill demand and estimated average wage premia for AI-trained workers.
    • Developed an operational, evidence-based measurement model for hybrid human–AI performance grounded in the observed skill signals.
  • Notes on inference and coverage
    • Findings are based on advertised requirements and posted salaries; they reflect demand signals and compensation patterns visible in online recruitment platforms and may be skewed toward sectors and regions well represented on those platforms.
    • The study links observed demand and pay patterns to measurement design rather than causally attributing productivity changes to AI.
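The keyword-coding-and-premium pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the keyword list, ad texts, and salary figures are all hypothetical, and the premium is a simple ratio of advertised-salary means with no controls, which is exactly the inference caveat noted above.

```python
import re
from statistics import mean

# Hypothetical keyword set for flagging "AI skills" in job-ad text.
AI_KEYWORDS = re.compile(
    r"\b(chatgpt|gpt-4|prompt engineering|generative ai|llm)\b", re.I
)

def has_ai_skill(ad_text: str) -> bool:
    """Flag a job advert as mentioning AI skills via keyword matching."""
    return bool(AI_KEYWORDS.search(ad_text))

def wage_premium(listings):
    """Advertised-salary premium for AI-flagged roles vs. the rest.

    `listings` is an iterable of (ad_text, salary) pairs.
    Returns the premium as a fraction (0.177 = 17.7%).
    No controls for occupation, firm, or location; a raw demand signal.
    """
    ai = [s for t, s in listings if has_ai_skill(t)]
    other = [s for t, s in listings if not has_ai_skill(t)]
    return mean(ai) / mean(other) - 1.0

# Toy listings constructed so the premium matches the paper's 17.7%.
sample = [
    ("Data analyst familiar with ChatGPT and prompt engineering", 117_700),
    ("Data analyst, Excel and SQL reporting", 100_000),
    ("Technical writer experienced with generative AI tools", 117_700),
    ("Technical writer, style-guide maintenance", 100_000),
]
print(f"{wage_premium(sample):.1%}")  # 17.7%
```

The ratio-of-means design makes the measurement-error risk concrete: any misclassification by `has_ai_skill` (e.g., marketing language matching a keyword) flows directly into the estimated premium.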

Implications for AI Economics

  • Labor demand and skill-biased change
    • Rapid upskilling and reallocation pressures: AI skills are becoming a significant margin of demand in knowledge work and should be treated as a distinct form of human capital in labor‑demand models.
  • Wage structure and inequality
    • A sizeable AI wage premium (≈17.7%) implies renewed returns to skill and potential widening of within‑occupation and between‑sector wage dispersion, especially where occupational competence is low.
  • Measurement and productivity accounting
    • Standard performance metrics and occupational skill classifications undercount hybrid competencies; macro and micro productivity studies should incorporate measures of AI-tool use and human–AI synergy to avoid misattributing gains or losses.
  • Firm strategy and HR policy
    • Firms need operational measures (the proposed three dimensions) to redesign job architectures, training, hiring, and appraisal processes for hybrid work.
  • Public policy
    • Targeted reskilling and sector-specific support (e.g., for the public sector) can help mitigate unequal adaptation rates; labor-market monitoring should include AI-skill indicators.
  • Research agenda
    • Integrate advertised-skill signals into empirical labor models, track wage premia over longer horizons, and validate the three-dimensional measurement framework in employer-level productivity and personnel studies.

Assessment

  • Paper Type: correlational
  • Evidence Strength: low — Findings are based on observational job-ad and scraped salary data with keyword-based measures and no clear causal identification strategy or robust controls for confounding (e.g., worker experience, firm size, location, occupation-task composition). This raises concerns about selection bias, measurement error, and omitted variables that could explain the observed wage premium and growth rates.
  • Methods Rigor: medium — The study uses relatively large, systematically collected samples of job adverts and salary listings and conducts temporal and sectoral breakdowns, which is stronger than anecdote-based work; however, the methods described appear to rely on keyword coding without documented validation, limited information about the sampling frame or representativeness, and an absence of robustness checks or causal identification techniques.
  • Sample: 5,000 LinkedIn job advertisements and 2,000 Indeed salary records collected between November 2022 and 2024, covering 'knowledge work' occupations and multiple sectors (high-tech, public sector, etc.); AI-skill presence identified by keyword/search terms; occupational competence measured by sector-level prevalence; wage premium estimated from advertised/posted salary data.
  • Themes: human_ai_collab, skills_training
  • Identification: Observational association; systematic text analysis of 5,000 LinkedIn job adverts and matching/aggregation with 2,000 Indeed salary listings (2022–2024) using keyword-based coding for 'AI skills' and occupational categories; trend analysis over time and cross-sectional comparisons by sector to estimate prevalence and a wage premium; no quasi-experimental variation or explicit controls reported to support causal inference.
  • Generalizability
    • Based on job postings and posted salaries, which may not reflect actual employment or realized wages (vacancy bias).
    • Geographic coverage not specified — results may not generalize across countries or local labour markets.
    • Sample may over-represent firms and occupations that post on LinkedIn/Indeed (bias toward private-sector, tech-savvy employers).
    • Keyword-based measurement of 'AI skills' may misclassify job requirements or capture marketing language rather than actual skill use.
    • Time window (post-ChatGPT early adoption phase) may capture transient spikes rather than long-run equilibria.

Claims (8)

Each claim lists: outcome category · direction · confidence · measured outcome · sample size · estimate · score.

  • The emergence of ChatGPT in November 2022 disrupted practice in knowledge work and defied performance-measurement systems in human-exclusive task accomplishment under unprecedented comparability.
    Organizational Efficiency · negative · medium · disruption to knowledge work practices and adequacy of existing performance-measurement systems · n=7000 · 0.09
  • AI skills are especially needed in 27.8% of knowledge workers' jobs.
    Adoption Rate · positive · medium · proportion (%) of knowledge-worker job adverts requiring AI skills · n=5000 · 27.8% · 0.09
  • The need for AI skills has grown at a rate of 376% since the release of ChatGPT.
    Adoption Rate · positive · medium · percentage growth in AI-skill mentions in job adverts (growth rate) · n=5000 · 376% · 0.09
  • AI-trained staff are rewarded with a 17.7% overall premium for their wages.
    Wages · positive · medium · wage premium (%) associated with AI-trained staff · n=2000 · 17.7% · 0.09
  • Occupational competence varies from 43.2% in high-tech to 9.7% in the public sector.
    Skill Acquisition · mixed · medium · measured occupational competence (%) by sector (high-tech and public sector examples) · n=7000 · 43.2% (high-tech); 9.7% (public sector) · 0.09
  • Systematic skill differences cannot be captured by conventional measuring systems.
    Organizational Efficiency · negative · medium · ability of conventional measurement systems to detect systematic skill differences (binary/qualitative assessment) · n=7000 · 0.09
  • The study discovers a three-dimensional model for measuring performance, including AI Tool Mastery, Collaborative Work Quality, and Human-AI Synergy to measure hybrid skills developed through human-machine collaboration.
    Organizational Efficiency · positive · medium · dimensions of a proposed performance-measurement model (AI Tool Mastery, Collaborative Work Quality, Human-AI Synergy) · n=7000 · 0.09
  • The research establishes the theory of performance management by developing operational measurement solutions for companies going through workplace redesign due to AI.
    Organizational Efficiency · positive · low · operational performance-measurement solutions and theoretical framing for performance management in AI-driven workplace redesign · n=7000 · 0.04

Notes