The Commonplace

Evidence (1902 claims)

Adoption: 5126 claims
Productivity: 4409 claims
Governance: 4049 claims
Human-AI Collaboration: 2954 claims
Labor Markets: 2432 claims
Org Design: 2273 claims
Innovation: 2215 claims
Skills & Training: 1902 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 369 105 58 432 972
Governance & Regulation 365 171 113 54 713
Research Productivity 229 95 33 294 655
Organizational Efficiency 354 82 58 34 531
Technology Adoption Rate 277 115 63 27 486
Firm Productivity 273 33 68 10 389
AI Safety & Ethics 112 177 43 24 358
Output Quality 228 61 23 25 337
Market Structure 105 118 81 14 323
Decision Quality 154 68 33 17 275
Employment Level 68 32 74 8 184
Fiscal & Macroeconomic 74 52 32 21 183
Skill Acquisition 85 31 38 9 163
Firm Revenue 96 30 22 148
Innovation Output 100 11 20 11 143
Consumer Welfare 66 29 35 7 137
Regulatory Compliance 51 61 13 3 128
Inequality Measures 24 66 31 4 125
Task Allocation 64 6 28 6 104
Error Rate 42 47 6 95
Training Effectiveness 55 12 10 16 93
Worker Satisfaction 42 32 11 6 91
Task Completion Time 71 5 3 1 80
Wages & Compensation 38 13 19 4 74
Team Performance 41 8 15 7 72
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 17 15 9 5 46
Job Displacement 5 28 12 45
Social Protection 18 8 6 1 33
Developer Productivity 25 1 2 1 29
Worker Turnover 10 12 3 25
Creative Output 15 5 3 1 24
Skill Obsolescence 3 18 2 23
Labor Share of Income 7 4 9 20
Active filter: Skills & Training
The sample is limited to Chinese A-share-listed design enterprises (2014–2023), which may limit generalizability to small and medium-sized enterprises (SMEs) or firms in other countries/regions.
Study sample description: A-share-listed design-oriented enterprises in China between 2014 and 2023; authors explicitly note this as a limitation.
high negative AI-driven design management: enhancing organizational produc... External validity / generalizability of results
Using TFP as a proxy for project efficiency aggregates effects at the firm level and therefore lacks micro-level insight into specific project workflows or design iteration processes.
Methodological limitation acknowledged in the paper: TFP is used as a firm-level proxy and the dataset does not include micro-level project workflow or iteration logs.
high negative AI-driven design management: enhancing organizational produc... Granularity of project-efficiency measurement (limitation of TFP proxy)
Human judgment is constrained by bounded rationality, cognitive biases, and information-processing limitations.
Cited as established findings from prior research across decision sciences and related fields (extensive literature evidence referenced; no new empirical data in this paper's abstract).
high negative Reframing Organizational Decision-Making in the Age of Artif... human judgment accuracy/quality and cognitive processing capacity
Ireland exhibits the largest gender gap in advanced digital task use: approximately 44% of men versus 18% of women perform advanced digital tasks — a 26 percentage point gap, close to double the European average.
Country-level descriptive statistics from ESJS for Ireland reporting shares of men and women performing advanced digital tasks. (Exact Irish sample size not provided in the excerpt.)
high negative Squandered skills? Bridging the digital gender skills gap fo... Share (%) of men and women in Ireland performing advanced digital tasks; gender ...
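The arithmetic behind the "close to double" comparison can be checked directly (the Irish shares are taken from the claim above; the roughly 15 percentage point European average is the figure reported in the following claim):

```python
# Quick check of the reported Irish gender gap in advanced digital task use.
men_pct, women_pct = 44, 18        # reported shares for Ireland (%)
gap_pp = men_pct - women_pct       # gap in percentage points
print(gap_pp)                      # 26
print(gap_pp / 15)                 # ~1.73, i.e. "close to double" the ~15 pp average
```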
Across Europe, women are around 15 percentage points less likely than men to perform advanced digital tasks in their jobs.
Empirical analysis of the European Skills and Jobs Survey (ESJS) (Cedefop, 2021) using regression-based estimates and descriptive statistics across European countries. (Exact sample size and country count not provided in the excerpt.)
high negative Squandered skills? Bridging the digital gender skills gap fo... Probability / share of workers performing advanced digital tasks (binary indicat...
AI substitutes many routine tasks, including both manual and cognitive/rule-based activities, disproportionately affecting middle-skill occupations.
Task-based substitution reasoning within SBTC framework and cross-sectoral task analysis. The paper provides conceptual synthesis rather than presenting new microdata or quantified task-level estimates.
high negative Artificial Intelligence, Automation, and Employment Dynamics... employment and wages in routine / middle-skill occupations; task displacement
Key implementation challenges include data quality and integration, model interpretability, cybersecurity and privacy, regulatory/compliance uncertainty, skills gaps among accounting professionals, and implementation costs.
Identified by the paper through literature review and practitioner reports; these are presented as recurring barriers rather than quantified with a specific sample.
high negative Role of Artificial Intelligence in the Accounting Sector incidence/severity of implementation barriers (data quality scores, integration ...
Nearby business closures increased perceived impediments to growth, amplifying pessimism via local exposure (social contagion effect).
Empirical comparison of perceived impediments to growth across variation in local exposure to nearby business closures (survey measures of local closures correlated with respondents' perceived impediments), using the cross-country survey sample.
high negative Peer Influence and Individual Motivations in Global Small Bu... perceived impediments to growth
The model yields two regimes; the inequality-decreasing regime arises when AI behaves like a broadly available commodity technology or when labor-market institutions share rents widely (high ξ).
Model regime characterization and calibrated counterfactuals showing falling wage dispersion and ΔGini under commodity-like AI assumptions or higher rent-sharing elasticity.
high negative When AI Levels the Playing Field: Skill Homogenization, Asse... wage dispersion and aggregate inequality (ΔGini)
Generative AI compresses within-task skill differences (reduces dispersion of individual task performance).
Theoretical task-based model and calibrated quantitative simulations (Method of Simulated Moments matching six empirical moments) showing reductions in within-task performance dispersion after introducing AI technology.
high negative When AI Levels the Playing Field: Skill Homogenization, Asse... within-task performance dispersion (skill/ability variance within a task)
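The compression mechanism can be sketched in a few lines of numpy. This is a stylized illustration, not the paper's calibrated model: the AI baseline, reliance weight, and skill distribution below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
skill = rng.normal(loc=50, scale=10, size=10_000)   # pre-AI individual task performance

# Stylized assumption: with AI assistance, each worker's output is pulled
# toward a common AI baseline, so within-task dispersion shrinks.
ai_baseline = 60.0
weight_on_ai = 0.6                                   # hypothetical reliance on the tool
with_ai = (1 - weight_on_ai) * skill + weight_on_ai * ai_baseline

# Dispersion shrinks by exactly the factor (1 - weight_on_ai): mixing in a
# constant rescales the standard deviation without adding new variation.
print(skill.std(), with_ai.std())
```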
No evaluated program reported Kirkpatrick‑Barr level‑4 outcomes (organizational change, patient outcomes, or sustained metacognitive mastery).
Reviewers mapped reported outcomes from all 27 included programs and found none that demonstrated organizational-level impacts or patient‑level outcomes (level 4).
high negative Assessing the effectiveness of artificial intelligence educa... Kirkpatrick‑Barr level‑4 outcomes (organizational impact, patient outcomes, meta...
Because the design is cross-sectional and sampling purposive/geographically constrained, causal inference and generalizability are limited.
Authors' stated limitations in the summary: cross-sectional design and purposive, geographically constrained sample (Karnataka, India).
high negative AI-driven stress management and performance optimization: A ... generalizability / causal inference (methodological limitation)
Workplace stress is associated with lower employee retention.
PLS-SEM analysis on a cross-sectional survey of N = 350 pharmaceutical workers in Karnataka, India (purposive sampling). Reported direct path: Stress → Retention, β = 0.321, p < 0.001. (Note: the paper interprets this as stress reducing retention; sign/coding conventions of the variables are not detailed in the summary.)
high negative AI-driven stress management and performance optimization: A ... employee retention (retention intent/behavior)
Automated compliance and credentialing systems raise governance issues (auditability, appeals mechanisms) and risk incorrect automated deregistration if not properly governed.
Governance and algorithmic-risk discussion in the paper; logical argumentation rather than case-based evidence.
high negative Electrotechnical education, institutional complianc... rate of incorrect automated decisions, existence and effectiveness of appeal pro...
The paper models career progression as a continuous function and treats certification gaps as discontinuities that impede labour-market mobility.
Mathematical/conceptual modeling described in the methods (career-progression-as-continuous-function approach); this is a modeling choice reported in the paper rather than an empirical finding.
high negative Electrotechnical education, institutional complianc... labour-market mobility / continuity of career progression (in the conceptual mod...
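One way to write down such a model (the notation here is assumed for illustration, not taken from the paper): a continuous progression path plus certification-induced jumps,

```latex
P(t) = f(t) + \sum_{j} \delta_j \, \mathbf{1}[t \ge t_j]
```

where $f(t)$ is the continuous component of career progression, $t_j$ is the time certification $j$ is obtained, and $\delta_j > 0$ is the discrete gain it unlocks. A missing certification removes its jump term, leaving a persistent gap relative to credentialed peers, which is the sense in which certification gaps act as discontinuities impeding mobility.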
There is limited long-term impact evidence and few system-level assessments of AI in developing-country agriculture.
Authors' methodological caveat based on the temporal scope and types of studies available in the >60-study review.
high negative A systematic review of the economic impact of artificial int... presence/absence of long-term impact evaluations and system-level assessments
The evidence base is skewed toward pilots and high‑performer contexts; there is a lack of long‑panel, multi‑project longitudinal studies to validate typical returns and scalability.
Authors' assessment of evidence types in the 160 studies: mix of conceptual papers, case studies, pilots, and only limited larger empirical evaluations.
high negative Digital Twins Across the Asset Lifecycle: Technical, Organis... representativeness and longitudinal robustness of evidence
Opacity, bias, and errors in AI systems demand auditing, standards, and governance (algorithmic accountability) to ensure trustworthy assessment.
Synthesis of literature on algorithmic bias and accountability plus policy analysis recommending audits and standards; supported by country cases that discuss governance concerns.
high negative The Future of Assessment: Rethinking Evaluation in an AI-Ass... algorithmic fairness, transparency, and reliability
Student data used by AI vendors raises risks around consent, reuse, commercial exploitation, and other data-privacy concerns.
Policy analysis and literature on data governance, privacy law debates; examples from national policy documents in the comparative cases. No original data on breaches or misuse presented.
high negative The Future of Assessment: Rethinking Evaluation in an AI-Ass... privacy risks and governance of student data
Limitations of the study include reliance on self-reported perceptions (subject to response and survivorship bias), lack of experimental/causal identification, potential non-representative sample, and cross-sectional design limiting inference about long-term productivity effects.
Authors' stated limitations in the paper summary.
high negative Artificial Intelligence as a Catalyst for Innovation in Soft... validity threats (self-report bias, lack of causal design) as reported by author...
Improving explainability can trade off with predictive performance, privacy, and robustness; these trade-offs must be managed rather than ignored.
Review aggregates technical literature and conceptual analyses documenting trade-offs reported by researchers (e.g., simpler interpretable models sometimes having lower predictive accuracy; disclosure risks to privacy; robustness concerns). No single causal estimate provided.
high negative Explainable AI in High-Stakes Domains: Improving Trust, Tran... predictive performance, privacy risk, model robustness
Tasks that are routine, repetitive, or pattern‑based (e.g., boilerplate coding, refactoring, unit test generation, some accessibility fixes) will be increasingly automated by AI.
Task‑level decomposition and examples of current automation capabilities (code generation, test suggestion tools); conceptual projection rather than empirical measurement.
high negative How AI Will Transform the Daily Life of a Techie within 5 Ye... rate of automation for routine software development tasks (proportion of such ta...
A one standard-deviation increase in AI adoption (2019–2025, 38 OECD countries) causally reduces employment in routine cognitive occupations by 2.3%.
Panel of 38 OECD countries, 2019–2025; AI Adoption Index (composite of enterprise AI investment, AI patent filings, workforce/firm AI-use surveys); instrumental-variable (IV) estimation to identify causal effect on occupational employment; country and year fixed effects and macro controls reported.
high negative Artificial Intelligence and Labor Market Transformation: Emp... Employment in routine cognitive occupations (percent change per 1 SD increase in...
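The identification strategy can be sketched as a hand-rolled two-stage least squares on synthetic data. Everything below is invented for illustration: the instrument, the confounder, and the coefficients are assumptions, and the study's actual instruments, fixed effects, and controls are not given here. The sketch only shows why IV recovers a causal slope that naive OLS misses.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 38 * 7                                   # 38 countries x 2019-2025, stylized pooled panel

# Synthetic data: z instruments AI adoption; u is an unobserved confounder
# that biases naive OLS. The true causal effect of adoption is -2.3.
z = rng.normal(size=n)                       # hypothetical instrument
u = rng.normal(size=n)                       # unobserved confounder
adoption = 0.8 * z + 0.5 * u + rng.normal(size=n)
employment = -2.3 * adoption + 1.5 * u + rng.normal(size=n)

def two_stage_ls(y, x, z):
    """2SLS with an intercept: project x on z, then regress y on fitted x."""
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first stage
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]     # second-stage slope

ols_slope = np.polyfit(adoption, employment, 1)[0]
iv_slope = two_stage_ls(employment, adoption, z)
print(ols_slope, iv_slope)  # OLS is biased toward zero by u; IV sits near -2.3
```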
High upfront costs and lack of tailored financing instruments are significant financial constraints on SME AI adoption.
Case studies, finance sector reports, and SME surveys cited in the review showing cost barriers and financing gaps; evidence descriptive rather than causal.
high negative Artificial Intelligence Adoption for Sustainable Development... upfront investment costs; access to tailored finance; adoption rates
Infrastructure deficits (unreliable power, inadequate broadband, limited local compute) materially constrain AI uptake by SMEs.
Policy reports and empirical studies in the literature documenting infrastructural limitations in LMIC contexts (including Botswana) that impede digital and AI deployment.
high negative Artificial Intelligence Adoption for Sustainable Development... infrastructure adequacy metrics (power reliability, broadband access); AI adopti...
Skills shortages (AI literacy, data science, digital management) are a primary constraint on SME AI adoption in developing economies.
Consistent findings across surveys, interviews, and case studies in the reviewed literature highlighting skill gaps as a common barrier; authors note multiple empirical sources pointing to this constraint.
high negative Artificial Intelligence Adoption for Sustainable Development... availability of AI-relevant skills; reported skills constraints limiting adoptio...
Heterogeneity in study designs and contexts within the literature limits direct comparability and generalizability of findings.
Limitation noted in the paper based on the authors' assessment of diversity across the 103 reviewed studies (varying methods, contexts, metrics).
high negative Models, applications, and limitations of the responsible ado... comparability/generalizability of evidence across studies
Institutional inertia, fragmented governance structures, limited technical capacity, and weak data stewardship impede scale‑up of AI systems in the public sector.
Thematic synthesis of barriers reported across empirical studies and institutional reports within the systematic review (103 items).
high negative Models, applications, and limitations of the responsible ado... ability to scale AI systems / scale‑up rate
Low‑ and middle‑income contexts face persistent gaps—infrastructure, data ecosystems, and talent retention—that slow AI adoption in public governance.
Consistent findings across multiple studies in the 103‑item corpus reporting infrastructure deficits, weak data ecosystems, and brain drain/retention issues in LMIC settings.
high negative Models, applications, and limitations of the responsible ado... rate/extent of AI adoption in public governance in low- and middle‑income contex...
Reliance on imperfect data and model assumptions can produce biased or misleading forecasts; careful validation, transparency about assumptions, and governance are necessary.
Risks & governance discussion in the paper raising this limitation and recommending practices (qualitative argumentation).
high negative AI-Based Predictive Skill Gap Analysis for Workforce Plannin... risk of biased or misleading forecasts arising from data/model limitations (qual...
Rural digital divides and uneven infrastructure constrain the reach of AI health solutions and risk exacerbating health inequities unless explicitly addressed.
Synthesis of infrastructure and equity literature, national connectivity data referenced in reviewed documents, and policy analyses included in the review period 2020–2025.
high negative Artificial Intelligence in Healthcare in Indonesia: Are We R... geographic disparities in digital infrastructure (broadband access, device avail...
Regulatory and governance frameworks for health AI in Indonesia are fragmented, with limited requirements for transparency/explainability and weak procurement/governance mechanisms.
Thematic analysis of national policy papers, SATUSEHAT governance reports, and regulatory documents identified in the 42 supplementary documents and literature review (2020–2025).
high negative Artificial Intelligence in Healthcare in Indonesia: Are We R... presence/strength of regulation and governance mechanisms (transparency requirem...
AI-generated code can introduce security vulnerabilities and raise licensing/intellectual-property concerns.
Case studies of security incidents, analyses of generated code provenance, and vulnerability-detection studies synthesized in the review.
high negative ChatGPT as a Tool for Programming Assistance and Code Develo... incidence of security vulnerabilities in generated code; instances of license or...
LLMs sometimes generate incorrect, nonsensical, or insecure code (hallucinations).
Multiple benchmarks, code-generation accuracy tests, and incident case studies documented in the empirical literature showing incorrect or fabricated outputs.
high negative ChatGPT as a Tool for Programming Assistance and Code Develo... code correctness/error rate; incidence of hallucinated outputs (false or fabrica...
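The kind of defect such audits flag can be shown with a classic example: a query assembled by string interpolation (a pattern often seen in generated code) versus a parameterized query. This is an illustrative sketch, not code drawn from the review.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"                  # classic SQL-injection payload

# Vulnerable pattern: the query is built by string interpolation, so the
# payload rewrites the WHERE clause and matches every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()
print(vulnerable)                            # returns every row

# Safer pattern: a parameterized query treats the payload as plain data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)                                  # returns no rows
```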
Data security, privacy risks, unequal gains, and regulatory shortfalls can undermine the benefits of AI/robotics adoption.
Policy and risk analyses from secondary literature, case studies, and institutional reports synthesized in the paper; examples cited but no original incident-level dataset or incidence rates provided.
high negative AI and Robotics Redefine Output and Growth: The New Producti... data/privacy risk incidence, inequality measures, regulatory adequacy (qualitati...
Transition frictions and skills mismatches are important barriers to workers moving into newly created AI‑related roles.
Qualitative review of workforce and skills literature, case studies, and sector reports; evidence comes from secondary sources with varied methodologies; the paper does not report pooled quantitative estimates.
high negative AI and Robotics Redefine Output and Growth: The New Producti... transition costs, skills mismatch incidence, retraining needs (labor market fric...
Integrating AI raises questions of accountability, transparency, fairness, privacy, and bias; managerial responsibility includes governance design, validation, and audit of AI decisions.
Normative and governance-focused synthesis citing ethical frameworks and illustrative cases; identifies governance tasks and validation/audit needs rather than empirical prevalence rates.
high negative Modern Management in the Age of Artificial Intelligence: Str... presence and quality of AI governance mechanisms (accountability frameworks, tra...
Deficits in governance, auditing, and interpretability constrain the safe deployment of generative AI in firms.
Synthesis of industry reports and conceptual literature noting gaps in governance and interpretability; no quantitative governance dataset reported.
high negative The Use of ChatGPT in Business Productivity and Workflow Opt... presence/absence of governance processes, frequency of audit findings, deploymen...
Algorithmic biases in generative AI can amplify and codify discriminatory patterns in organizational decisions.
Extensive literature on algorithmic bias synthesized in the review and applied to generative models; case examples referenced.
high negative The Use of ChatGPT in Business Productivity and Workflow Opt... disparities in decision outcomes (error rates, disparate impact metrics by group...
Generative AI use introduces significant organizational risks including data privacy breaches and leakage when models or third‑party services are used.
Conceptual analysis and references to documented incidents and industry reports within the review; no single aggregated incident dataset provided.
high negative The Use of ChatGPT in Business Productivity and Workflow Opt... incidence of data breaches/leakage, number of privacy violations
Generated code can introduce security vulnerabilities.
Security analyses and code audits documenting examples where LLM-generated code contains known vulnerability patterns; incident-oriented case studies and controlled experiments assessing vulnerability incidence.
high negative ChatGPT as a Tool for Programming Assistance and Code Develo... incidence of security vulnerabilities in AI-generated code
LLMs can produce plausible-looking but incorrect or insecure code (so-called 'hallucinations').
Benchmarks and controlled tests demonstrating incorrect outputs; security analyses and replicated examples showing erroneous or insecure snippets produced by LLMs across multiple models and prompts.
high negative ChatGPT as a Tool for Programming Assistance and Code Develo... code correctness/error rate and frequency of insecure code returned
AI-driven impacts will be heterogeneous across education, race, gender, age, firm size, and geography, implying crucial equity concerns and the need for disaggregated reporting and targeted validation.
Policy analysis and literature synthesis in the paper; this claim reflects widely-documented labor economics findings about heterogeneous technological impacts though no new empirical breakdowns provided here.
high negative Enhancing BLS Methodologies for Projecting AI's Impact on Em... distribution of employment/wage/transition impacts across demographic and firm/r...
The study is limited by being a single-domain (CMM) case study with a likely modest sample size and dependence on specific AR hardware and MLLM capabilities; further validation across other machines and larger samples is needed.
Authors note these limitations in their discussion; the summary explicitly lists single-case domain, likely modest sample size, and dependency on particular hardware/MLLM as limitations.
high negative Augmented Reality-Based Training System Using Multimodal Lan... External validity/generalizability of findings (limitations stated)
Integration cost: AI-generated outputs often require human revision, testing, and manual integration into existing systems.
Reported practitioner experience and observed practices from the field study at Netlight; authors note time and effort spent on revision and integration; no quantitative time-cost estimates provided.
high negative Rethinking How IT Professionals Build IT Products with Artif... human time/effort required to adapt AI outputs for production
AI systems lack full project context, design rationale, and long-term constraints, creating context gaps for development tasks.
Interviews and workflow observations at Netlight where practitioners reported contextual limitations of AI tools; qualitative examples provided; single-firm qualitative evidence.
high negative Rethinking How IT Professionals Build IT Products with Artif... degree of project/contextual awareness in AI-produced recommendations
AI outputs commonly contain errors and hallucinations: generated code can be incorrect, incomplete, or misleading.
Practitioner reports and observed interactions with AI tools documented in the Netlight qualitative study; specific instances and practitioner concerns described in the paper; no quantitative error rates provided.
high negative Rethinking How IT Professionals Build IT Products with Artif... accuracy and correctness of AI-generated outputs
Generative AI is susceptible to social and representational biases and to factual errors or hallucinations; it lacks tacit, contextual domain expertise.
Documented examples in the literature of biased outputs and hallucinations; controlled evaluations and audits of model outputs; qualitative reports highlighting lack of tacit knowledge in domain-specific tasks.
high negative ChatGPT as an Innovative Tool for Idea Generation and Proble... incidence of biased content; factual error/hallucination rate; performance on do...
The quality of AI-generated outputs is highly variable; models frequently produce mediocre but plausible-sounding content that requires human filtering.
Multiple user studies and qualitative reports documenting variability in output quality and the need for human curation; outcome measures include error rates, user-rated quality, and time spent vetting.
high negative ChatGPT as an Innovative Tool for Idea Generation and Proble... output quality distributions; user-perceived quality; time/effort for human filt...
High linguistic diversity in Africa makes building and evaluating multilingual language technologies more difficult and is a barrier to inclusive AI.
Synthesis of technical literature on NLP and multilingual model development and policy/NGO reports highlighting missing language resources; no original model evaluation reported.
high negative Towards Responsible Artificial Intelligence Adoption: Emergi... language technology availability, model performance across African languages, nu...