
Demand for AI skills in knowledge work has surged since ChatGPT (appearing in 27.8% of job listings and linked to a 17.7% advertised wage premium), while demand in postings ranges from 43.2% in high-tech to 9.7% in the public sector. Conventional appraisal systems miss hybrid human-AI capabilities, so the paper proposes a three-part performance framework (AI Tool Mastery, Collaborative Work Quality, Human-AI Synergy) for firms redesigning work.

Reconstruction of knowledge worker performance evaluation system in the ChatGPT era: an exploratory study based on human-AI collaborative work model
Zhixin Yu, Zhicheng Yu · Fetched March 17, 2026 · Future Technology
Source: Semantic Scholar · Paper type: correlational · Evidence strength: low · Relevance: 7/10
Using 5,000 LinkedIn job ads and 2,000 Indeed salary listings from 2022-2024, the study finds AI skills cited in 27.8% of knowledge-worker roles (a 376% increase since ChatGPT) with an associated 17.7% advertised wage premium, and proposes a three-dimensional performance-measurement model (AI Tool Mastery, Collaborative Work Quality, and Human-AI Synergy) for human-AI collaborative work.

The release of ChatGPT in November 2022 disrupted knowledge-work practice and exposed the limits of performance-measurement systems designed for tasks performed by humans alone. This study addresses the gap between traditional appraisal models and AI-enabled workplaces by developing an evidence-based model for measuring performance in human-AI collaborative settings. Drawing on a systematic analysis of 5,000 LinkedIn job postings and 2,000 Indeed salary records from 2022-2024, it examines how performance requirements and skill demands in knowledge sectors shifted after ChatGPT's release. The findings indicate that AI skills are explicitly required in 27.8% of knowledge-worker jobs, a 376% increase since ChatGPT's launch; that AI-skilled staff command a 17.7% overall wage premium; and that demand varies from 43.2% of postings in high-tech to 9.7% in the public sector. Conventional measurement systems, the results suggest, cannot capture these systematic skill differences. The study therefore proposes a three-dimensional performance-measurement model comprising AI Tool Mastery, Collaborative Work Quality, and Human-AI Synergy to assess the hybrid skills developed through human-machine collaboration, and it contributes to performance-management theory by offering operational measurement solutions for companies undergoing AI-driven workplace redesign.

Summary

Main Finding

Post-ChatGPT (Nov 2022 → Apr 2024) knowledge-work requirements shifted markedly toward AI-enabled competencies: 27.8% of job postings explicitly required AI skills (a 376% increase from the pre-ChatGPT baseline), and workers with AI competencies earned an average wage premium of 17.7%. Traditional, human-only performance measurement frameworks fail to capture these hybrid skills; the authors propose a three-dimensional performance evaluation model—AI Tool Mastery, Collaborative Work Quality, and Human–AI Synergy—plus simple operational metrics to measure human-AI collaborative effectiveness.

Key Points

  • Scope and headline results
    • Data: 5,000 LinkedIn job postings (2022–2024), 2,000 Indeed salary records, 10 corporate case studies.
    • By April 2024, 27.8% of knowledge-worker positions explicitly required AI skills (up from ~9% pre-ChatGPT).
    • Reported growth in AI-skill mentions = 376% (absolute mentions rose from 42 to 200 in the sample: (200 − 42)/42 ≈ 376%).
    • Average salary premium for AI-competent roles = 17.7%; by level: mid-level 21.5% > senior 16.7% > entry 15.0%.
  • Industry heterogeneity
    • Highest AI demand: technology/software (43.2% of postings).
    • Lowest AI demand: government/public sector (9.7%).
    • Cross-industry average in sample = 27.8%; finance, consulting above average; manufacturing/engineering below average.
  • Skill-category shifts
    • AI-related requirements rose across eight domains (technical proficiency, analytics, communication, problem-solving, project management, creativity, QA, learning & adaptation) with growth rates 216–300%.
  • Measurement contribution
    • Authors argue existing appraisal systems are blind to hybrid (human+AI) competencies.
    • Proposed three-dimensional framework:
      • AI Tool Mastery: tool familiarity, usage proficiency (e.g., ChatGPT experience).
      • Collaborative Work Quality: output validation, quality control, analytical accuracy when working with AI.
      • Human–AI Synergy: effective task allocation between human and AI, AI-supported reasoning, integration capability.
    • Suggested simple metrics (tests, quality-control indicators, problem-solving evaluations) and industry-weighted scoring.
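
The summary stops short of saying how the industry-weighted scoring would be computed. As one plausible reading, here is a minimal Python sketch of a weighted score over the three proposed dimensions; the weights, the 0-100 rating scale, and the industry list are hypothetical illustrations, not the authors' calibration.

```python
# Minimal sketch of industry-weighted scoring over the paper's three
# dimensions. Dimension names come from the paper; all weights and the
# 0-100 scale are hypothetical.
DIMENSIONS = ("ai_tool_mastery", "collaborative_work_quality", "human_ai_synergy")

# Hypothetical per-industry weights; each row sums to 1.
INDUSTRY_WEIGHTS = {
    "technology":    {"ai_tool_mastery": 0.45, "collaborative_work_quality": 0.30, "human_ai_synergy": 0.25},
    "public_sector": {"ai_tool_mastery": 0.25, "collaborative_work_quality": 0.45, "human_ai_synergy": 0.30},
}

def hybrid_performance_score(ratings: dict, industry: str) -> float:
    """Weighted average of 0-100 ratings on the three dimensions."""
    weights = INDUSTRY_WEIGHTS[industry]
    return sum(weights[d] * ratings[d] for d in DIMENSIONS)

# Example: a technology-sector analyst.
score = hybrid_performance_score(
    {"ai_tool_mastery": 80, "collaborative_work_quality": 70, "human_ai_synergy": 60},
    industry="technology",
)
print(score)  # 0.45*80 + 0.30*70 + 0.25*60 = 72.0
```

Real weights would need calibration against job-analysis data; the illustration only shows how a single hybrid score could be made comparable across sectors with different AI exposure.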

Data & Methods

  • Data sources
    • LinkedIn job postings: n = 5,000 positions (U.S.-focused), 2022–2024.
    • Indeed salary records: n = 2,000 entries, 2022–2024.
    • Corporate case studies: 10 organizations across five sectors (tech, finance, consulting, healthcare, manufacturing).
  • Analytical approach
    • Text mining / NLP (NLTK, spaCy) to extract AI-related keywords from postings (a minimal sketch follows this list).
    • Manual coding: two researchers coded 200 postings; inter-rater reliability Cohen’s Kappa = 0.84; coding scheme applied to full dataset.
    • Descriptive time-series and cross-sectional analyses to document demand shifts and industry differences.
    • Regression analysis to estimate wage differentials between AI-skilled and non-AI-skilled roles (controls and model details not fully reported in the summary).
  • Limitations acknowledged by authors
    • Platform bias: LinkedIn and Indeed skew toward tech/market-facing roles.
    • Job postings may overstate "ideal" requirements vs. realized skills.
    • Sample is U.S.-focused and short-run (2022–2024), so findings capture the immediate post-ChatGPT shock rather than long-run equilibria.
    • Causality between ChatGPT release and wage/policy outcomes is inferred from temporal association, not demonstrated with causal identification strategies.
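
Since the summary names the NLP libraries (NLTK, spaCy) but not the keyword lexicon or matching rules, the sketch below shows one way the keyword coding could work, using spaCy's PhraseMatcher; the AI_SKILL_TERMS list is an illustrative stand-in for the authors' unreported lexicon.

```python
# Minimal sketch of keyword-based coding of job postings with spaCy.
# The term list is illustrative; the paper's actual lexicon and matching
# rules are not reported in this summary.
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")  # tokenizer only; no trained model needed here

AI_SKILL_TERMS = [
    "ChatGPT", "generative AI", "large language model",
    "prompt engineering", "machine learning", "AI tools",
]

matcher = PhraseMatcher(nlp.vocab, attr="LOWER")  # case-insensitive phrases
matcher.add("AI_SKILL", [nlp.make_doc(term) for term in AI_SKILL_TERMS])

def requires_ai_skills(posting_text: str) -> bool:
    """Flag a posting if any AI-skill phrase appears in its text."""
    return len(matcher(nlp(posting_text))) > 0

postings = [
    "Analyst role; experience with ChatGPT and prompt engineering preferred.",
    "Accountant needed; strong Excel and reporting skills.",
]
share = sum(requires_ai_skills(p) for p in postings) / len(postings)
print(f"Share of postings requiring AI skills: {share:.1%}")  # 50.0%
```

On the manually coded subsample, the reported inter-rater agreement (Cohen's Kappa = 0.84) could be reproduced by running sklearn.metrics.cohen_kappa_score on the two coders' label vectors.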

Implications for AI Economics

  • Labor demand and returns to skills
    • Rapid rise in AI-skill demand and a measurable wage premium imply growing returns to AI-related human capital. Economists should treat AI competence as a distinct skill class when modeling wage structures, skill-biased technological change, and human capital investment.
  • Occupational and sectoral heterogeneity
    • Large cross-sector differences (e.g., tech vs. public) suggest uneven diffusion of generative-AI augmentation—implications for inequality, regional labor-market transitions, and targeted retraining policies.
  • Measurement and productivity accounting
    • Standard productivity and performance metrics that attribute output to human labor alone risk mismeasurement when AI augments production. National accounts, firm productivity analyses, and micro-level productivity studies should consider hybrid human–AI inputs.
  • Compensation and firm behavior
    • The 17.7% premium indicates firms monetize AI-competent labor; HR and compensation models may increasingly price AI proficiency, altering promotion paths and internal labor markets.
  • Policy and training
    • Public policy and corporate training programs should prioritize upskilling/reskilling in AI tool use, output validation, and integration skills—not just technical ML training but also hybrid competencies emphasized in the three-dimensional model.
  • Research and empirical priorities
    • Recommended next steps for empirical AI-economics work: incorporate AI-skill indicators into labor surveys, use representative employer-employee datasets to causally link AI usage to wages/productivity, and develop standardized measures of human–AI synergy for cross-study comparability.
  • Cautions
    • Short-run premium and demand spikes may change as AI diffuses, tasks reorganize, and supply of AI-skilled workers increases. Heterogeneous adoption and institutional constraints (regulation, procurement cycles) can temper long-run outcomes.

Brief actionable suggestions for economists and policymakers

  • Add explicit AI-skill variables to wage and employment regressions and to national labor-force surveys (a minimal sketch follows this list).
  • Design field or panel studies to identify causal effects of AI-tool adoption on output and wages (difference-in-differences, IV, RCT training interventions).
  • Encourage firms and public agencies to pilot the three-dimensional performance metrics and share anonymized outcomes to support broader measurement standards.
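
To make the first suggestion concrete, here is a minimal sketch of an advertised-wage regression with an explicit AI-skill indicator, estimated with statsmodels on simulated data; every column name and the data-generating process are hypothetical, and a real analysis would add occupation-task, firm-size, and location controls.

```python
# Minimal sketch: estimating an advertised-wage premium for an AI-skill
# requirement. Data are simulated with a built-in premium of ~17.7%
# (log(1.177) ≈ 0.163); all variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "ai_skill": rng.integers(0, 2, n),  # 1 if posting requires AI skills
    "sector": rng.choice(["tech", "finance", "public"], n),
    "seniority": rng.choice(["entry", "mid", "senior"], n),
})
df["log_salary"] = 11.0 + 0.163 * df["ai_skill"] + rng.normal(0, 0.3, n)

# OLS of log advertised salary on the AI-skill dummy with sector and
# seniority fixed effects; exp(coef) - 1 approximates the percent premium.
model = smf.ols("log_salary ~ ai_skill + C(sector) + C(seniority)", data=df).fit()
premium = np.exp(model.params["ai_skill"]) - 1
print(f"Estimated AI-skill premium: {premium:.1%}")
```

A causal design would replace this cross-sectional OLS with, for example, difference-in-differences around the ChatGPT release, an instrument for AI-tool adoption, or a randomized training intervention.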

Assessment

Paper Type: correlational
Evidence Strength: low. Findings are based on observational job-ad and scraped salary data with keyword-based measures and no clear causal identification strategy or robust controls for confounding (e.g., worker experience, firm size, location, occupation-task composition). This raises concerns about selection bias, measurement error, and omitted variables that could explain the observed wage premium and growth rates.
Methods Rigor: medium. The study uses relatively large, systematically collected samples of job adverts and salary listings and conducts temporal and sectoral breakdowns, which is stronger than anecdote-based work; however, the methods described appear to rely on keyword coding without documented validation, give limited information about the sampling frame or representativeness, and include no robustness checks or causal identification techniques.
Sample: 5,000 LinkedIn job advertisements and 2,000 Indeed salary records collected between November 2022 and 2024, covering knowledge-work occupations in multiple sectors (high-tech, public sector, etc.). AI-skill presence is identified by keyword/search terms, occupational competence is measured by sector-level prevalence, and the wage premium is estimated from advertised salary data.
Themes: human_ai_collab, skills_training
Identification: Observational association. Systematic text analysis of the 5,000 LinkedIn adverts, matched and aggregated with the 2,000 Indeed salary listings (2022-2024), using keyword-based coding for AI skills and occupational categories; trend analysis over time and cross-sectional comparisons by sector estimate prevalence and the wage premium. No quasi-experimental variation or explicit controls are reported to support causal inference.
Generalizability:
  • Based on job postings and posted salaries, which may not reflect actual employment or realized wages (vacancy bias).
  • Geographic coverage is not specified; results may not generalize across countries or local labour markets.
  • The sample may over-represent firms and occupations that post on LinkedIn/Indeed (a bias toward private-sector, tech-savvy employers).
  • Keyword-based measurement of AI skills may misclassify job requirements or capture marketing language rather than actual skill use.
  • The time window (the post-ChatGPT early-adoption phase) may capture transient spikes rather than long-run equilibria.

Claims (8)

  • Claim 1: The emergence of ChatGPT in November 2022 disrupted knowledge-work practice and challenged performance-measurement systems designed for human-only tasks.
    • Outcome: Organizational Efficiency (disruption to knowledge-work practices and adequacy of existing performance-measurement systems) · Direction: negative · Confidence: medium · n = 7,000 · 0.09
  • Claim 2: AI skills are explicitly required in 27.8% of knowledge workers' jobs.
    • Outcome: Adoption Rate (proportion of knowledge-worker job adverts requiring AI skills) · Direction: positive · Confidence: medium · n = 5,000 · 27.8% · 0.09
  • Claim 3: Demand for AI skills has grown 376% since the release of ChatGPT.
    • Outcome: Adoption Rate (percentage growth in AI-skill mentions in job adverts) · Direction: positive · Confidence: medium · n = 5,000 · 376% · 0.09
  • Claim 4: AI-trained staff earn a 17.7% overall wage premium.
    • Outcome: Wages (wage premium associated with AI-trained staff) · Direction: positive · Confidence: medium · n = 2,000 · 17.7% · 0.09
  • Claim 5: Occupational competence varies from 43.2% in high-tech to 9.7% in the public sector.
    • Outcome: Skill Acquisition (measured occupational competence by sector) · Direction: mixed · Confidence: medium · n = 7,000 · 43.2% (high-tech), 9.7% (public sector) · 0.09
  • Claim 6: Systematic skill differences cannot be captured by conventional measurement systems.
    • Outcome: Organizational Efficiency (ability of conventional measurement systems to detect systematic skill differences; binary/qualitative assessment) · Direction: negative · Confidence: medium · n = 7,000 · 0.09
  • Claim 7: The study proposes a three-dimensional performance-measurement model (AI Tool Mastery, Collaborative Work Quality, and Human-AI Synergy) for hybrid skills developed through human-machine collaboration.
    • Outcome: Organizational Efficiency (dimensions of a proposed performance-measurement model) · Direction: positive · Confidence: medium · n = 7,000 · 0.09
  • Claim 8: The research contributes to performance-management theory by developing operational measurement solutions for companies undergoing AI-driven workplace redesign.
    • Outcome: Organizational Efficiency (operational measurement solutions and theoretical framing for performance management) · Direction: positive · Confidence: low · n = 7,000 · 0.04
