The Commonplace

Evidence (1902 claims)

Adoption: 5126 claims
Productivity: 4409 claims
Governance: 4049 claims
Human-AI Collaboration: 2954 claims
Labor Markets: 2432 claims
Org Design: 2273 claims
Innovation: 2215 claims
Skills & Training: 1902 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 369 105 58 432 972
Governance & Regulation 365 171 113 54 713
Research Productivity 229 95 33 294 655
Organizational Efficiency 354 82 58 34 531
Technology Adoption Rate 277 115 63 27 486
Firm Productivity 273 33 68 10 389
AI Safety & Ethics 112 177 43 24 358
Output Quality 228 61 23 25 337
Market Structure 105 118 81 14 323
Decision Quality 154 68 33 17 275
Employment Level 68 32 74 8 184
Fiscal & Macroeconomic 74 52 32 21 183
Skill Acquisition 85 31 38 9 163
Firm Revenue 96 30 22 148
Innovation Output 100 11 20 11 143
Consumer Welfare 66 29 35 7 137
Regulatory Compliance 51 61 13 3 128
Inequality Measures 24 66 31 4 125
Task Allocation 64 6 28 6 104
Error Rate 42 47 6 95
Training Effectiveness 55 12 10 16 93
Worker Satisfaction 42 32 11 6 91
Task Completion Time 71 5 3 1 80
Wages & Compensation 38 13 19 4 74
Team Performance 41 8 15 7 72
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 17 15 9 5 46
Job Displacement 5 28 12 45
Social Protection 18 8 6 1 33
Developer Productivity 25 1 2 1 29
Worker Turnover 10 12 3 25
Creative Output 15 5 3 1 24
Skill Obsolescence 3 18 2 23
Labor Share of Income 7 4 9 20
Active filter: Skills & Training
Instrumental-variable (IV) estimation is used to address endogeneity of AI adoption and to identify causal effects on employment and wages.
Paper states IV identification strategy applied to the 38-country panel; robustness checks and alternative specifications reported (paper refers to instrument details in full text).
high · null result · Artificial Intelligence and Labor Market Transformation: Emp... · Causal estimate identification strategy for employment and wage outcomes
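The IV strategy described above is typically implemented as two-stage least squares. The sketch below simulates that mechanics with invented data: the instrument z, the confounder u, and every coefficient are illustrative assumptions, since this summary does not specify the paper's actual instrument.

```python
import numpy as np

# Simulated two-stage least squares (2SLS), the standard mechanics behind
# an IV strategy for endogenous AI adoption. The instrument z, confounder
# u, and all coefficients are invented for illustration only.
rng = np.random.default_rng(0)
n = 5_000
z = rng.normal(size=n)                        # instrument: moves adoption only
u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous AI adoption
y = 1.5 * x + 0.5 * u + rng.normal(size=n)    # wage outcome; true effect = 1.5

def ols(X, y):
    """Least-squares coefficients; X should include a constant column."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

const = np.ones(n)
Z = np.column_stack([const, z])
beta_ols = ols(np.column_stack([const, x]), y)[1]     # biased upward by u
x_hat = Z @ ols(Z, x)                                 # stage 1: fitted adoption
beta_iv = ols(np.column_stack([const, x_hat]), y)[1]  # stage 2: causal slope
print(f"OLS: {beta_ols:.2f}  IV: {beta_iv:.2f}")
```

Because u drives both adoption and wages, naive OLS overstates the effect; the instrumented estimate recovers something close to the true slope.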
The AI Adoption Index is constructed as a composite measure combining enterprise investment in AI, AI-related patent filings, and workforce/firm surveys on AI use across 38 OECD countries (2019–2025).
Paper's methodological description of the index construction; data sources enumerated as investment, patenting, and survey measures over the panel period.
high · null result · Artificial Intelligence and Labor Market Transformation: Emp... · AI adoption intensity (composite index)
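A common convention for composite indices like the one described above is an equal-weight average of standardized components. The sketch below uses that convention with simulated country data; the weighting, normalization, and all values are assumptions, not the paper's formula.

```python
import numpy as np

# Hypothetical construction of a composite AI Adoption Index from the
# three ingredient measures named above (enterprise AI investment, AI
# patent filings, survey-based AI use). Equal-weight z-score averaging
# is an illustrative assumption; the data are simulated.
rng = np.random.default_rng(1)
n_countries = 38
investment = rng.lognormal(mean=2.0, sigma=0.5, size=n_countries)
patents = rng.poisson(lam=40, size=n_countries).astype(float)
survey_use = rng.uniform(0.05, 0.60, size=n_countries)

def zscore(x):
    return (x - x.mean()) / x.std()

# Standardize each component so units are comparable, then average.
index = (zscore(investment) + zscore(patents) + zscore(survey_use)) / 3
print(index.shape)
```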
There is a need for standardized metrics and measurement protocols for public-sector productivity and non-market outcomes (service quality, processing time, cost per transaction, transparency, trust).
Methodological critique within the review pointing to heterogeneity of outcome measures across studies and calling for standardized metrics; based on synthesis of reviewed literature.
high · null result · Digital Transformation and AI Adoption in Government: Evalua... · existence/adoption of standardized measurement protocols and consistency of repo...
Much of the literature on public-sector digital/AI interventions is descriptive or case-based; causal, quantitative evidence on net productivity effects is limited and context-dependent.
Methodological assessment within the review noting heterogeneous study designs, reliance on secondary sources, and a lack of randomized or quasi-experimental studies; the review explicitly states this limitation.
high · null result · Digital Transformation and AI Adoption in Government: Evalua... · availability of causal quantitative estimates of productivity impacts
Research priorities include causal studies on AI’s impacts on SME productivity, employment and inequality in LMICs; cost–benefit analyses of financing and policy interventions; evaluation of data governance models; and development of metrics/monitoring systems for inclusive adoption.
Authors' identification of evidence gaps from the structured literature review highlighting areas with insufficient causal or evaluative research.
high · null result · Artificial Intelligence Adoption for Sustainable Development... · existence and quality of targeted causal and evaluative research on AI in LMIC S...
Empirical causal evidence on long-run welfare, distributional outcomes, and labor effects of AI in LMIC SMEs remains thin.
Gap identified through the structured review: few causal studies (e.g., RCTs, natural experiments) addressing long-run effects in LMIC SME contexts.
high · null result · Artificial Intelligence Adoption for Sustainable Development... · availability of causal evidence on welfare, distributional effects, and labor ou...
Heterogeneity in SME types and sectors limits the generalizability of findings about AI adoption and impacts.
Authors' methodological limitation noted in the review: the evidence base spans diverse firm sizes, sectors, and contexts, constraining broad generalization.
high · null result · Artificial Intelligence Adoption for Sustainable Development... · generalizability of reviewed findings across SMEs and sectors
Theoretical framing integrates Resource-Based View (RBV), Dynamic Capabilities (DC), Technology–Organization–Environment (TOE), and Diffusion of Innovation (DOI) to explain how firm resources, learning capacity, organizational and environmental factors shape AI adoption.
Conceptual synthesis performed as part of the literature review; integration based on existing theoretical literature rather than primary empirical testing.
high · null result · Artificial Intelligence Adoption for Sustainable Development... · explanatory scope for AI adoption drivers (theoretical coherence rather than an ...
The systematic review followed PRISMA protocol and analyzed a corpus of 103 items (peer‑reviewed articles and institutional reports) published 2010–2024.
Explicit methodological statement in the paper describing PRISMA use and corpus size/timeframe.
high · null result · Models, applications, and limitations of the responsible ado... · review methodology and corpus characteristics (sample size, timeframe)
Methodological needs for AI-era labor models include dynamic skill taxonomies, high-frequency labor data (job postings, firm-level automation measures), and uncertainty quantification.
Paper's Research & policy recommendations and Methodological needs section (explicit recommendations).
high · null result · AI-Based Predictive Skill Gap Analysis for Workforce Plannin... · requirements for model inputs and design (dynamic taxonomies, data frequency, un...
The scenario analysis framework varies economic growth, automation rates, policy interventions, and investment to produce probabilistic demand–supply gaps.
Methods description of scenario analysis components and the variables varied in scenario experiments (explicit in Data & Methods).
high · null result · AI-Based Predictive Skill Gap Analysis for Workforce Plannin... · probabilistic demand–supply gap distributions produced under varied scenario par...
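A scenario framework of this kind is usually run as a Monte Carlo exercise: draw the varied parameters from assumed ranges, push each draw through a demand/supply model, and summarize the resulting gap distribution. The toy model below illustrates the shape of that pipeline; every functional form and parameter range is invented, not taken from the paper.

```python
import numpy as np

# Illustrative Monte Carlo scenario analysis: vary economic growth,
# automation rate, policy strength, and investment, then summarize the
# implied skill demand-supply gap. All ranges and formulas are invented.
rng = np.random.default_rng(42)
n_draws = 10_000

growth = rng.uniform(0.00, 0.04, n_draws)       # annual GDP growth
automation = rng.uniform(0.01, 0.08, n_draws)   # share of tasks automated/yr
policy = rng.uniform(0.0, 1.0, n_draws)         # intervention strength
investment = rng.uniform(0.0, 1.0, n_draws)     # training investment

# Toy model: demand rises with growth and automation-driven re-skilling;
# supply rises with policy support and training investment.
demand = 1.0 + 5.0 * growth + 2.0 * automation
supply = 1.0 + 0.15 * policy + 0.20 * investment
gap = demand - supply

print(round(float(np.median(gap)), 3), round(float(np.quantile(gap, 0.9)), 3))
```

The output of interest is the whole distribution of `gap` (median, tails), which is what makes the framework "probabilistic" rather than a single point forecast.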
Intended users of the Hub include organizations, educational institutions, and policymakers to inform reskilling/education strategies, regional economic policy, and labor-market interventions.
Explicit statement of target users and use cases in the Key Points / Implications sections.
high · null result · AI-Based Predictive Skill Gap Analysis for Workforce Plannin... · targeting of outputs to specified stakeholder groups (intended adoption/use-case...
The system produces interpretable outputs for stakeholders: demand–supply trend analysis, geospatial hotspot maps, skill-gap radar charts, and policy simulation dashboards.
Paper's description of outputs and interactive visual analytics (listed output modalities).
high · null result · AI-Based Predictive Skill Gap Analysis for Workforce Plannin... · generation of interpretable visual/analytic artifacts (trend charts, hotspot map...
The core modeling approach uses probabilistic growth modeling combined with intelligent skill synthesis to estimate future workforce requirements under alternative economic and policy scenarios.
Methods section describing the modeling components: probabilistic growth modeling and intelligent skill synthesis (architectural description).
high · null result · AI-Based Predictive Skill Gap Analysis for Workforce Plannin... · probabilistic forecasts of future workforce requirements by sector/region under ...
The platform integrates multiple indicators such as regional economic growth projections, automation velocity, policy intervention strength, investment intensity, and market volatility (macro- and micro-level indicators).
List of input indicators given in the Data & Methods section of the paper (explicit enumeration of macro and micro variables).
high · null result · AI-Based Predictive Skill Gap Analysis for Workforce Plannin... · integration of listed macro- and micro-level indicators into the modelling pipel...
Significant empirical gaps remain on long-term impacts (wage trajectories, employment composition, firm-level returns), verification/remediation cost quantification, and public-good risks of insecure code proliferation.
Cross-study synthesis explicitly identifying missing longitudinal and firm-level empirical research in the reviewed literature.
high · null result · ChatGPT as a Tool for Programming Assistance and Code Develo... · absence or paucity of longitudinal studies and firm-level quantitative measureme...
The paper's conclusions are limited by reliance on secondary sources, heterogeneous cross‑study comparisons, limited causal identification of long‑run macro effects, and measurement challenges for AI‑driven intangible capital.
Authors' stated limitations section summarizing the nature of evidence used (qualitative literature review, secondary macro indicators, sectoral examples); this is an explicit self‑reported methodological limitation rather than an external empirical finding.
high · null result · AI and Robotics Redefine Output and Growth: The New Producti... · strength of causal inference and measurement validity
Researchers and firms should measure generation throughput, verification throughput, defect accumulation rates, mean time to detection/fix, costs per incident, and the marginal value of additional verification capacity to evaluate the framework's claims.
Prescriptive measurement priorities listed in the paper as recommendations for empirical validation.
high · null result · Overton Framework v1.0: Cognitive Interlocks for Integrity i... · set of recommended metrics (generation throughput, verification throughput, defe...
The abstract reports no empirical tests, simulations, or field experiments; empirical validation of the framework is left for future work.
Direct observation of the paper's abstract and methods description indicating lack of empirical validation.
high · null result · Overton Framework v1.0: Cognitive Interlocks for Integrity i... · presence or absence of empirical validation in the paper
The paper's contribution is primarily conceptual/architectural rather than empirical.
Explicit statement in the paper and absence of reported empirical tests, simulations, or field experiments in the abstract and methods section.
high · null result · Overton Framework v1.0: Cognitive Interlocks for Integrity i... · type of contribution (conceptual vs. empirical)
There are limited standardized measures of 'AI capital,' scarce data on firm-level AI investment and implementation quality, and few long-run causal estimates of AI’s effects on managerial productivity and labor outcomes.
Gap analysis based on literature review and methodological discussion within the book; observation about the state of available empirical evidence.
high · null result · Modern Management in the Age of Artificial Intelligence: Str... · availability and standardization of AI investment/asset measures; existence of l...
There is a lack of large‑scale causal evidence on generative AI’s effects; the paper recommends RCTs, difference‑in‑differences, matched employer–employee panels, and longitudinal studies to fill empirical gaps.
Methodological critique and research agenda provided in the review; observation based on the authors' survey of the literature.
high · null result · The Use of ChatGPT in Business Productivity and Workflow Opt... · n/a (research design recommendation; outcome is future evidence generation)
Policy interventions are needed for data protection, bias mitigation, model transparency, accountability, and public investments in workforce retraining to smooth transitions and reduce inequality.
Normative policy recommendations grounded in the review's synthesis of risks and distributional concerns; not an empirical claim but a recommendation.
high · null result · The Use of ChatGPT in Business Productivity and Workflow Opt... · policy adoption (existence of regulations, programs), outcomes: retraining parti...
New productivity metrics are needed to capture AI impacts, including time‑use changes, quality‑adjusted output, and accounting for intangible AI capital.
Methodological recommendation from the conceptual synthesis, motivated by limitations of existing measures discussed in the paper.
high · null result · The Use of ChatGPT in Business Productivity and Workflow Opt... · n/a (recommendation for metrics: time use, quality‑adjusted output, AI capital a...
Static equilibrium and representative-agent models neglect dynamic reallocation, task re-bundling, and firm-level heterogeneity, limiting their realism for forecasting labour outcomes under AI adoption.
Theoretical critique offered in the paper and referenced critiques in the literature; evidence is conceptual and based on model assumptions identified across studies.
high · null result · Recent Methodologies on AI and Labour - a Desk Review · completeness/realism of economic models used to forecast labour-market effects
Common empirical strategies (cross-sectional exposure correlations and panel-difference analyses) often lack strong causal identification due to endogeneity of adoption and unobserved confounders.
Surveyed analytical strategies and explicit critique in the paper noting endogeneity and confounding; evidence is methodological critique grounded in the literature's reliance on observational exposure measures.
high · null result · Recent Methodologies on AI and Labour - a Desk Review · validity of causal estimates of AI adoption effects on labour outcomes
Researchers construct AI exposure indices at the task level to indicate susceptibility to AI automation or augmentation.
Cited examples (Felten et al., 2023; Eloundou et al., 2023) that develop task-level scores; evidence basis is methodological papers that publish indices and mapping procedures (often using O*NET tasks, expert labeling, or model-based scoring).
high · null result · Recent Methodologies on AI and Labour - a Desk Review · task-level AI exposure scores
Commonly used data sources for measuring AI exposure include job postings and descriptions, occupational task databases (O*NET-style), employer/household surveys, administrative payroll data, and firm-level productivity measures.
List of data sources compiled in the paper; evidence is a methodological summary of datasets used across the cited literature rather than novel data collection.
high · null result · Recent Methodologies on AI and Labour - a Desk Review · coverage and types of data used for AI exposure and labour-outcome measurement
Many studies rely on static assumptions (fixed comparative advantage, no adaptation) and theoretical models, which limits causal inference and makes projections model-dependent.
Methodological critique cited in the paper (e.g., critique of Acemoglu & Restrepo, 2022; Webb, 2020) and the paper's survey of common modeling choices (static equilibrium or representative-agent models); evidence basis is theoretical critique and literature review rather than new causal estimates.
high · null result · Recent Methodologies on AI and Labour - a Desk Review · strength of causal identification and robustness of projected employment/wage ou...
Task-level approaches capture within-occupation heterogeneity in automation and augmentation risk that occupation-level analyses miss.
Empirical and methodological work cited (Felten et al., 2023; Eloundou et al., 2023) that construct task-level exposure indices and show variation across tasks within the same occupation; evidence based on task mappings from O*NET-style databases and job descriptions.
high · null result · Recent Methodologies on AI and Labour - a Desk Review · heterogeneity in automation/augmentation risk across tasks within occupations
Recent research in AI–labor economics has shifted from occupation-level analysis to task-level analysis, mapping task-by-task exposure to AI.
Synthesis of recent literature cited in the paper (e.g., Felten et al., 2023; Eloundou et al., 2023) which develop task-level exposure mappings using occupational task databases (O*NET-style) and job-posting text; evidence is bibliographic and methodological rather than a single new empirical dataset.
high · null result · Recent Methodologies on AI and Labour - a Desk Review · granularity of exposure measurement (occupation-level vs. task-level AI exposure...
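The task-level approach described in the entries above rests on a simple aggregation: score each task's exposure, then roll scores up to the occupation using importance weights (as in O*NET-style mappings). The sketch below shows that weighted roll-up; the occupations, tasks, scores, and weights are fabricated for illustration, not taken from Felten et al. or Eloundou et al.

```python
from collections import defaultdict

# Weighted task-to-occupation aggregation for AI exposure indices.
# All tasks, scores, and importance weights below are fabricated.
task_scores = [
    # (occupation, task, exposure score in [0, 1], importance weight)
    ("paralegal", "document review", 0.85, 0.5),
    ("paralegal", "client interviews", 0.25, 0.3),
    ("paralegal", "court filings", 0.55, 0.2),
    ("radiologist", "image reading", 0.80, 0.7),
    ("radiologist", "patient consults", 0.20, 0.3),
]

totals = defaultdict(lambda: [0.0, 0.0])  # occupation -> [weighted sum, weight]
for occ, _task, score, weight in task_scores:
    totals[occ][0] += score * weight
    totals[occ][1] += weight

# Occupation-level exposure is the importance-weighted mean of its tasks.
occupation_exposure = {occ: ws / w for occ, (ws, w) in totals.items()}
print(occupation_exposure)
```

The within-occupation spread of task scores (0.25 to 0.85 for the fabricated paralegal row) is exactly the heterogeneity that a single occupation-level number would hide.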
Further quantitative research is needed to measure task‑level productivity effects, skill‑depreciation trajectories, and market impacts of differential GenAI adoption; structural models could incorporate TGAIF to predict labor demand and wage effects.
Authors' stated research agenda and limitations acknowledged in the paper; this is a call for future empirical work rather than an empirical claim.
high · null result · Where Automation Meets Augmentation: Balancing the Double-Ed... · task-level productivity, skill-depreciation trajectories, market impacts, labor ...
ChatGPT was used as the generative engine for the MLLM in the system implementation described in the paper.
Methods section: integration of AR overlays with an MLLM, with ChatGPT used as the generative engine (explicit in the summary).
high · null result · Augmented Reality-Based Training System Using Multimodal Lan... · Identity of generative model used (ChatGPT)
Further quantitative and comparative research is needed to measure net productivity effects, skill trajectories, and generalizability across firm types and industries.
Authors' methodological assessment and limitations section noting single-firm qualitative design (Netlight) and rapidly evolving toolchains; recommendation for future empirical work.
high · null result · Rethinking How IT Professionals Build IT Products with Artif... · gaps in current empirical evidence (lack of quantitative, longitudinal, cross-fi...
Another important gap is quantifying complementarities between AI and different skill types (evaluative vs. generative tasks).
Review observation that existing empirical work has not systematically quantified how AI productivity gains vary with worker skill composition and complementary roles.
high · null result · ChatGPT as an Innovative Tool for Idea Generation and Proble... · magnitude of complementarities between AI assistance and various human skill typ...
Key research gaps include a lack of long-run causal evidence on the effects of LLMs on firm-level innovation rates, business formation, and industry structure.
Explicit identification of gaps in the literature within the nano-review; the review states that most studies are short-term, task-level, or descriptive.
high · null result · ChatGPT as an Innovative Tool for Idea Generation and Proble... · long-run causal impacts of LLM adoption on firm innovation, business formation, ...
Study limitations include reliance on perceptual measures (rather than solely objective performance), heterogeneity across institutional samples, and likely correlational rather than strictly causal identification.
Authors' own noted limitations in the paper's methods section: mixed-methods design using perceptions from questionnaires and interviews, sample heterogeneity across multinational institutions, and quantitative analyses that are associative rather than strictly causal.
high · null result · Human-AI Synergy in Financial Decision-Making: Exploring Tru... · validity/causal identification of study findings
Measurement and research gaps (data scarcity, informality) complicate robust economic assessment of AI impacts; improved metrics, granular labour and firm‑level data, and mixed‑methods evaluation are required.
Methodological critique based on reviewed literature and identified gaps; no new data collection in the paper.
high · null result · Towards Responsible Artificial Intelligence Adoption: Emergi... · availability and granularity of labour and firm-level datasets, prevalence of mi...
There is a lack of causal evidence on the long-run impacts of AI-driven HRM on employment, wages, and firm survival—this is a key research gap identified by the review.
Explicitly stated research gap in the review based on assessment of methodologies and findings across the 47 included studies.
high · null result · Data-Driven Strategies in Human Resource Management: The Rol... · availability of causal studies on long-run employment, wage, and firm survival i...
A systematic review following PRISMA identified 47 peer-reviewed studies (2012–2024) on data-driven HRM and workforce resilience from Scopus, Web of Science, and Google Scholar.
Explicit review protocol and search/screening results reported by the paper (PRISMA-based), final sample size = 47 studies.
high · null result · Data-Driven Strategies in Human Resource Management: The Rol... · number of studies included in the review
Recommended research designs to estimate impacts include RCTs, quasi-experimental methods (difference-in-differences, regression discontinuity, matching), and longitudinal cohort tracking.
Paper explicitly lists these evaluation designs as appropriate methods for causal inference and long-term outcomes measurement. This is a methodological recommendation rather than an empirical claim.
high · null result · Curriculum engineering: organisation, orientation, and manag... · employment probabilities, earnings, long-term career outcomes (as targeted by th...
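Of the quasi-experimental designs recommended above, difference-in-differences is the simplest to show concretely. The 2x2 calculation below uses fabricated group means; a real study would compute them from panel data, and the result is a causal effect only under the parallel-trends assumption.

```python
# Minimal 2x2 difference-in-differences. The four group means are
# fabricated for illustration (e.g., an outcome score before and after
# a training intervention, for treated and control groups).
means = {
    ("treated", "pre"): 50.0,
    ("treated", "post"): 58.0,
    ("control", "pre"): 49.0,
    ("control", "post"): 52.0,
}

# DiD = (treated change) - (control change); the control change nets out
# the common time trend under the parallel-trends assumption.
did = (means[("treated", "post")] - means[("treated", "pre")]) - (
    means[("control", "post")] - means[("control", "pre")]
)
print(did)
```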
There is a need for causal, longitudinal studies on how AI‑enabled fintech affects women's portfolio outcomes and on algorithmic interventions designed to reduce gender gaps.
Explicit statement in the paper noting limitations of existing literature (heterogeneity, limited longitudinal causal evidence, possible platform sample selection).
high · null result · Women's Investment Behaviour and Technology: Exploring the I... · existence/absence of causal longitudinal evidence on fintech impacts by gender
Analyses were conducted as intent-to-treat comparisons across arms, with hypothesis tests reported (including p-values) and principal stratification used for mechanism decomposition.
Methods statement: intent-to-treat comparisons, reported p-values for score differences, and use of principal stratification for separating total effect into adoption and effectiveness channels in the randomized trial (n = 164).
high · null result · Training for Technology: Adoption and Productive Use of Gene... · Analysis methods (ITT, hypothesis tests, principal stratification)
The primary outcomes analyzed were LLM adoption (use), exam score (grade points), and answer length.
Study’s stated primary outcomes in methods: adoption indicator, exam score on an issue-spotting exam, and answer length (measured). Sample size n = 164.
high · null result · Training for Technology: Adoption and Productive Use of Gene... · Adoption; exam score; answer length
The study used a randomized controlled design with three arms: no LLM access, optional LLM access, and optional LLM access plus brief training.
Study methods description: randomized assignment of 164 law students to three experimental conditions as listed.
high · null result · Training for Technology: Adoption and Productive Use of Gene... · Study design (randomization and arm definitions)
The intervention consisted of roughly a ten-minute training focused on how to use the LLM effectively.
Study description of the intervention in the randomized experiment (three-arm design with one arm receiving ~10-minute targeted training).
high · null result · Training for Technology: Adoption and Productive Use of Gene... · Intervention duration/content (training implementation)
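The intent-to-treat comparison used in the trial above has a simple shape: compare mean outcomes across arms as randomized, regardless of whether participants actually used the tool. The sketch below simulates that comparison for a three-arm design; the arm sizes sum to the study's reported n = 164, but the scores and effect sizes are invented.

```python
import numpy as np

# Intent-to-treat (ITT) comparison for a three-arm randomized design
# (no LLM access / optional access / optional access + brief training).
# Arm sizes sum to 164 as in the study; scores and effects are simulated.
rng = np.random.default_rng(7)
arms = {"no_llm": 55, "llm": 55, "llm_training": 54}
effects = {"no_llm": 0.0, "llm": 0.5, "llm_training": 2.0}  # invented

scores = {a: rng.normal(70 + effects[a], 8, size=n) for a, n in arms.items()}

# ITT: compare arms as randomized, ignoring actual tool usage; this
# preserves the balance created by randomization.
itt = {a: float(s.mean() - scores["no_llm"].mean()) for a, s in scores.items()}
print({a: round(v, 2) for a, v in itt.items()})
```

Principal stratification, mentioned in the entry above, would go further and decompose the ITT effect into an adoption channel and an effectiveness channel; that requires modeling latent compliance strata and is omitted here.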
Empirical validation of the book’s proposals would require complementary case studies, model documentation, and outcome measurements.
Author/reviewer recommendation in the blurb about methodological limitations and next steps; not an empirical finding.
high · null result · Governing The Future · need for empirical case studies, documented models, and outcome metrics to valid...
The book is predominantly conceptual and policy-analytic and uses illustrative case vignettes rather than presenting a single empirical study.
Explicit methodological description in the Data & Methods blurb: synthesis of technical ideas, governance requirements, and illustrative vignettes; no empirical sample or experimental protocol described.
high · null result · Governing The Future · presence or absence of empirical methodology in the book
Limitations of the review include the small sample of studies, uneven geographic coverage, heterogeneity in methods across studies, and limited long‑run evidence (especially on generative AI), which complicate causal aggregation.
Author-reported limitations based on the meta-assessment of the 17 included studies (variation in methods, contexts, and time horizons).
high · null result · The role of generative artificial intelligence on labor mark... · limitations to causal inference and generalizability
Design of this work: a systematic literature review and meta‑synthesis of empirical findings from peer‑reviewed journals (2020–2025), based on 17 publications.
Stated methods and inclusion criteria of the paper: systematic review of peer‑reviewed literature (sample = 17).
high · null result · The role of generative artificial intelligence on labor mark... · study design / review methodology