The Commonplace

Evidence (2432 claims)

Adoption: 5126 claims
Productivity: 4409 claims
Governance: 4049 claims
Human-AI Collaboration: 2954 claims
Labor Markets: 2432 claims
Org Design: 2273 claims
Innovation: 2215 claims
Skills & Training: 1902 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding. Row totals can exceed the sum of the listed directions because some claims carry direction labels not shown as columns (e.g., neutral).

Outcome                     Positive  Negative  Mixed  Null  Total
Other                            369       105     58   432    972
Governance & Regulation          365       171    113    54    713
Research Productivity            229        95     33   294    655
Organizational Efficiency        354        82     58    34    531
Technology Adoption Rate         277       115     63    27    486
Firm Productivity                273        33     68    10    389
AI Safety & Ethics               112       177     43    24    358
Output Quality                   228        61     23    25    337
Market Structure                 105       118     81    14    323
Decision Quality                 154        68     33    17    275
Employment Level                  68        32     74     8    184
Fiscal & Macroeconomic            74        52     32    21    183
Skill Acquisition                 85        31     38     9    163
Firm Revenue*                     96        30     22           148
Innovation Output                100        11     20    11    143
Consumer Welfare                  66        29     35     7    137
Regulatory Compliance             51        61     13     3    128
Inequality Measures               24        66     31     4    125
Task Allocation                   64         6     28     6    104
Error Rate*                       42        47      6            95
Training Effectiveness            55        12     10    16     93
Worker Satisfaction               42        32     11     6     91
Task Completion Time              71         5      3     1     80
Wages & Compensation              38        13     19     4     74
Team Performance                  41         8     15     7     72
Hiring & Recruitment              39         4      6     3     52
Automation Exposure               17        15      9     5     46
Job Displacement*                  5        28     12            45
Social Protection                 18         8      6     1     33
Developer Productivity            25         1      2     1     29
Worker Turnover*                  10        12      3            25
Creative Output                   15         5      3     1     24
Skill Obsolescence*                3        18      2            23
Labor Share of Income*             7         4      9            20

* One zero-count cell was blank in the source and its column cannot be recovered; the three remaining counts are shown in source order with the last cell left empty.
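A matrix like the one above is a straightforward cross-tabulation of claim records by outcome category and direction. A minimal sketch of the tally, assuming a simple list-of-dicts representation (the field names and sample records are illustrative, not the site's actual schema):

```python
from collections import Counter

# Illustrative claim records; field names are assumed, not the site's schema.
claims = [
    {"outcome": "Firm Productivity", "direction": "positive"},
    {"outcome": "Firm Productivity", "direction": "negative"},
    {"outcome": "Error Rate", "direction": "negative"},
    {"outcome": "Firm Productivity", "direction": "positive"},
]

# One counter for the (outcome, direction) cells, one for the row totals.
cells = Counter((c["outcome"], c["direction"]) for c in claims)
totals = Counter(c["outcome"] for c in claims)

print(cells[("Firm Productivity", "positive")])  # 2
print(totals["Firm Productivity"])               # 3
```

Because row totals count every claim regardless of direction label, they can exceed the sum of any fixed set of direction columns, consistent with the totals in the matrix above.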
Active filter: Labor Markets
Generative AI use introduces significant organizational risks including data privacy breaches and leakage when models or third‑party services are used.
Conceptual analysis and references to documented incidents and industry reports within the review; no single aggregated incident dataset provided.
Confidence: high | Direction: negative | Paper: The Use of ChatGPT in Business Productivity and Workflow Opt... | Outcome: incidence of data breaches/leakage, number of privacy violations
Generated code can introduce security vulnerabilities.
Security analyses and code audits documenting examples where LLM-generated code contains known vulnerability patterns; incident-oriented case studies and controlled experiments assessing vulnerability incidence.
Confidence: high | Direction: negative | Paper: ChatGPT as a Tool for Programming Assistance and Code Develo... | Outcome: incidence of security vulnerabilities in AI-generated code
LLMs can produce plausible-looking but incorrect or insecure code (so-called 'hallucinations').
Benchmarks and controlled tests demonstrating incorrect outputs; security analyses and replicated examples showing erroneous or insecure snippets produced by LLMs across multiple models and prompts.
Confidence: high | Direction: negative | Paper: ChatGPT as a Tool for Programming Assistance and Code Develo... | Outcome: code correctness/error rate and frequency of insecure code returned
The technical feasibility of robust token verification and resistance to spoofing needs demonstration; it is not yet proven.
Authors explicitly acknowledge this limitation in the paper; no prototypes or red-team results are presented.
Confidence: high | Direction: negative | Paper: Token Taxes: mitigating AGI's economic risks | Outcome: robustness of token verification to spoofing/evasion
AI-driven impacts will be heterogeneous across education, race, gender, age, firm size, and geography, implying crucial equity concerns and the need for disaggregated reporting and targeted validation.
Policy analysis and literature synthesis in the paper; this claim reflects widely-documented labor economics findings about heterogeneous technological impacts though no new empirical breakdowns provided here.
Confidence: high | Direction: negative | Paper: Enhancing BLS Methodologies for Projecting AI's Impact on Em... | Outcome: distribution of employment/wage/transition impacts across demographic and firm/r...
Governing-logic stability uncertainty (whether decision logic or objectives remain stationary) is a distinct risk posed by agentic AI.
Conceptual argument and proposed taxonomy; no empirical tests reported.
Confidence: high | Direction: negative | Paper: Visioning Human-Agentic AI Teaming: Continuity, Tension, and... | Outcome: stability of AI decision logic/objectives over time
Epistemic grounding uncertainty (uncertainty about how/why an AI produced a particular output) increases with agentic AI.
Literature synthesis on model-level opacity and causal explanation limits; conceptual reasoning in the paper.
Confidence: high | Direction: negative | Paper: Visioning Human-Agentic AI Teaming: Continuity, Tension, and... | Outcome: ability to explain/ground AI outputs
Behavioral trajectory uncertainty (difficulty predicting long-run actions) is a primary form of uncertainty introduced by agentic AI.
Conceptual classification and argument; proposed as one of three principal uncertainties; no empirical estimation.
Confidence: high | Direction: negative | Paper: Visioning Human-Agentic AI Teaming: Continuity, Tension, and... | Outcome: predictability of long-run agentic AI actions
Integration and engineering complexity (legacy systems, privacy/compliance pipelines, multi-channel platforms) is a persistent barrier to deployment.
Industry case studies and practitioner reports synthesized in the review documenting integration challenges; no systematic cost accounting or sample sizes presented.
Confidence: high | Direction: negative | Paper: The Effectiveness of ChatGPT in Customer Service and Communi... | Outcome: integration complexity metrics, implementation time/cost, number of integration ...
Hallucinations and factual errors from generative AI can damage service quality and customer trust.
Documented failure cases and empirical reports from the literature aggregated by the review; no novel incident count or experimental data in this paper.
Confidence: high | Direction: negative | Paper: The Effectiveness of ChatGPT in Customer Service and Communi... | Outcome: incidence of factual errors/hallucinations, measures of service quality and cust...
Generative AI is susceptible to social and representational biases and to factual errors or hallucinations; it lacks tacit, contextual domain expertise.
Documented examples in the literature of biased outputs and hallucinations; controlled evaluations and audits of model outputs; qualitative reports highlighting lack of tacit knowledge in domain-specific tasks.
Confidence: high | Direction: negative | Paper: ChatGPT as an Innovative Tool for Idea Generation and Proble... | Outcome: incidence of biased content; factual error/hallucination rate; performance on do...
The quality of AI-generated outputs is highly variable; models frequently produce mediocre but plausible-sounding content that requires human filtering.
Multiple user studies and qualitative reports documenting variability in output quality and the need for human curation; outcome measures include error rates, user-rated quality, and time spent vetting.
Confidence: high | Direction: negative | Paper: ChatGPT as an Innovative Tool for Idea Generation and Proble... | Outcome: output quality distributions; user-perceived quality; time/effort for human filt...
Factual errors and 'hallucinations' create misinformation risks and can produce costly service failures.
Model evaluation studies, incident case reports from deployments, and academic/industry analyses documenting hallucination rates and concrete failure examples.
Confidence: high | Direction: negative | Paper: The Effectiveness of ChatGPT in Customer Service and Communi... | Outcome: factual accuracy / hallucination rate; incidents of service failure (operational...
High linguistic diversity in Africa makes building and evaluating multilingual language technologies more difficult and is a barrier to inclusive AI.
Synthesis of technical literature on NLP and multilingual model development and policy/NGO reports highlighting missing language resources; no original model evaluation reported.
Confidence: high | Direction: negative | Paper: Towards Responsible Artificial Intelligence Adoption: Emergi... | Outcome: language technology availability, model performance across African languages, nu...
Structural constraints—limited digital infrastructure, scarce and skewed data, and high linguistic diversity—complicate AI development, deployment and evaluation in African contexts.
Desk review of infrastructure and data availability reports and scholarly literature demonstrating gaps and their effects; no new measurement in this paper.
Confidence: high | Direction: negative | Paper: Towards Responsible Artificial Intelligence Adoption: Emergi... | Outcome: internet/digital infrastructure coverage, availability and representativeness of...
Rapid skill obsolescence in AI necessitates frequent curriculum updates and responsive governance.
Identified as a risk: the paper notes AI skill change rates and recommends frequent updates and governance mechanisms. This aligns with general domain knowledge; the paper does not provide empirical measurement of obsolescence rates.
Confidence: high | Direction: negative | Paper: Curriculum engineering: organisation, orientation, and manag... | Outcome: update frequency, lag between skill demand change and curriculum update
Aligning multiple standards is complex, posing a disadvantage and implementation risk.
Stated explicitly in Disadvantages/Risks: complexity of aligning multiple standards is listed. This is a reasoned observation in the paper rather than empirically demonstrated.
Confidence: high | Direction: negative | Paper: Curriculum engineering: organisation, orientation, and manag... | Outcome: complexity measures (number of standards to reconcile, conflicts identified), ti...
Implementing this framework requires significant resources and continuous updating.
Stated explicitly under Main Finding and Disadvantages/Risks; paper lists cost/time metrics to track (cost-per-curriculum, time-to-update) and highlights resource intensity. Support is descriptive/analytic rather than empirical.
Confidence: high | Direction: negative | Paper: Curriculum engineering: organisation, orientation, and manag... | Outcome: resource intensity (cost-per-curriculum), time-to-update, maintenance burden
The digital divide (lack of reliable electricity and connectivity) constrains adoption of MIS and AI, creating geographic and regional inequities in who benefits from the framework.
Infrastructure constraint argument presented in the paper; no quantified coverage maps or population-level access statistics included.
Confidence: high | Direction: negative | Paper: Establishes a technical and academic bridge between the educ... | Outcome: coverage of system access, differential adoption rates by region, inequality in ...
AI-driven equivalency systems carry risks including algorithmic bias, opaque decisions without explainability, and potential reinforcement of inequities when training data under-represents some regions/institutions.
Risk assessment drawing on established AI ethics literature; no empirical bias audit from the proposed system is provided.
Confidence: high | Direction: negative | Paper: Establishes a technical and academic bridge between the educ... | Outcome: measures of algorithmic bias (disparate impact), explainability scores, unequal ...
The major disadvantage of an MIS is dependency on reliable electricity and internet, creating systemic vulnerability due to the digital divide.
Paper notes infrastructure dependency as a constraint; assertion grounded in common infrastructural realities but no measured connectivity or outage statistics from DRC/SA are provided.
Confidence: high | Direction: negative | Paper: Establishes a technical and academic bridge between the educ... | Outcome: geographic/regional access to equivalency services and system uptime availabilit...
Potential limitations include limited methodological detail on case selection and measurement, possible selection and reporting bias from practitioner-sourced examples, and variable generalizability to small firms or highly regulated industries.
Authors' self-reported limitations in the Methods/Limitations section (qualitative assessment).
Confidence: high | Direction: negative | Paper: Governed Hyperautomation for CRM and ERP: A Reference Patter... | Outcome: methodological completeness and generalizability (qualitative limitation)
Prompt fraud exploits the natural-language interface of large language models (LLMs) to produce outputs that appear authoritative (reports, audit trails, explanations) without system intrusion, credential theft, or software exploitation.
Definition and threat-model description using conceptual examples and case vignettes; literature/regulatory review to position the threat relative to traditional fraud vectors.
Confidence: high | Direction: negative | Paper: Prompt Engineering or Prompt Fraud? Governance Challenges fo... | Outcome: production of authoritative-appearing artifacts by LLMs without technical system...
Data privacy and cross-border compliance issues arise from using cloud and SECaaS, complicating legal compliance for firms.
Regulatory analyses and compliance reports; documented examples in case studies and industry guidance on cross-border data flows.
Confidence: high | Direction: negative | Paper: Security-as-a-service: enhancing cloud security through m... | Outcome: compliance incident rates / regulatory risk exposure
The cloud shared responsibility model creates potential ambiguities in liability between providers and customers.
Regulatory guidance, legal analyses, and documented post-incident case studies showing confusion over responsibilities.
Confidence: high | Direction: negative | Paper: Security-as-a-service: enhancing cloud security through m... | Outcome: clarity/ambiguity of security and liability responsibilities
Automation and LLM-driven orchestration add opacity; errors in instrument control or analysis could propagate quickly, raising liability, insurance, and reproducibility concerns.
Analytical discussion of risks and analogies to automated systems in other domains; no incident-level empirical data from microscopy given.
Confidence: high | Direction: negative | Paper: ChatMicroscopy: A Perspective Review of Large Language Model... | Outcome: frequency and impact of errors, liability exposure, reproducibility failures
Ethical and governance issues related to LLM-driven microscopy include accountability, reproducibility, access inequities, data privacy, and concentration of capabilities in large providers.
Policy-oriented synthesis and analogies to governance challenges observed in other AI deployments; no new empirical measurement in microscopy contexts.
Confidence: high | Direction: negative | Paper: ChatMicroscopy: A Perspective Review of Large Language Model... | Outcome: presence of governance risks: accountability gaps, reproducibility problems, une...
Integration of LLMs with microscopes faces challenges including safety and reliability of instrument control, verification of scientific outputs, data provenance, and alignment with experimental constraints.
Analytical discussion based on known reliability and safety issues in automated systems and AI tool use; no empirical incident data from microscopy provided.
Confidence: high | Direction: negative | Paper: ChatMicroscopy: A Perspective Review of Large Language Model... | Outcome: risks to safety, reliability, and scientific validity when deploying LLM-driven ...
There is substantial uncertainty in economic forecasts due to possible scale-up failures, regulatory constraints, feedstock price volatility, and path‑dependent lock‑in effects.
Synthesis of technical failure modes, regulatory uncertainty, and sensitivity analyses reported in TEA/LCA literature and economic modeling sections of the review.
Confidence: high | Direction: negative | Paper: Harnessing Microbial Factories: Biotechnology at the Edge of... | Outcome: forecast variance in cost trajectories, probability of commercial success, and s...
Regulatory and biosafety concerns (including environmental release risks and dual‑use issues) increase fixed costs and create entry barriers that shape industry structure and diffusion.
Policy and governance literature reviewed alongside technical case studies; citations of regulatory requirements, biosafety frameworks, and examples of compliance costs affecting project viability.
Confidence: high | Direction: negative | Paper: Harnessing Microbial Factories: Biotechnology at the Edge of... | Outcome: regulatory compliance costs, time-to-market, number of approved facilities/proce...
Engineering and economic challenges—scale‑up hurdles, process robustness, feedstock cost, and downstream purification—limit industrial deployment of many bio-based processes.
Case study TEA/LCA summaries and process reports in the review highlighting scale-up failures or increased costs at larger scales, purification complexity for low‑concentration products, and sensitivity to feedstock prices.
Confidence: high | Direction: negative | Paper: Harnessing Microbial Factories: Biotechnology at the Edge of... | Outcome: capital and operating costs, purification yield and cost, process robustness met...
Technical biological limitations—metabolic burden, pathway crosstalk, byproduct formation, and genetic instability—remain major constraints on strain performance and scalability.
Multiple experimental reports and method papers cited in the review documenting decreased growth/productivity due to engineered pathway burden, unintended interactions between pathways, accumulation of byproducts, and genetic mutations during production runs.
Confidence: high | Direction: negative | Paper: Harnessing Microbial Factories: Biotechnology at the Edge of... | Outcome: strain growth rate, productivity (g/L/h), byproduct concentrations, genetic muta...
Measurement issues (task-based output measurement, attributing output changes to AI) and selection into early adoption bias estimated productivity gains upward.
Methodological robustness checks reported in the paper: task-based measures, bounding exercises, placebo tests, and analysis of pre-trends; discussions of selection on unobservables and potential upward bias.
Confidence: high | Direction: negative | Paper: S-TCO: A Sustainable Teacher Context Ontology for Educationa... | Outcome: validity/bias of estimated productivity effects
Implementing the governed hyperautomation pattern raises upfront costs (governance tooling, monitoring, validation, compliance processes).
Economic and cost-structure discussion in the paper, based on qualitative reasoning and industry experience; no quantified cost estimates or sample-based cost analysis provided.
Confidence: high | Direction: negative | Paper: Governed Hyperautomation for CRM and ERP: A Reference Patter... | Outcome: upfront implementation costs (governance tooling, validation, compliance overhea...
The cost of formalizing informal labor (CFIL) metric implies that, in 2023, formalizing a worker cost on average 88% more than the informal wage.
New CFIL metric calculated for 19 countries (2023 baseline) by estimating the additional employer cost of hiring and formalizing an informal worker and reporting it relative to the informal wage, using compiled statutory obligations and informal wage benchmarks.
Confidence: high | Direction: negative | Paper: Salaried Labor Costs in Latin America and the Caribbean: A T... | Outcome: CFIL (additional cost of formalizing) as % above informal wage
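Under the description above, CFIL can be read as the additional employer cost of formal hiring expressed as a percentage of the informal wage. A hedged sketch of that arithmetic with made-up numbers (the paper's country-level inputs are not reproduced here):

```python
def cfil_pct(formal_cost: float, informal_wage: float) -> float:
    """Additional cost of formalizing a worker, as a percent above the informal wage."""
    return (formal_cost - informal_wage) / informal_wage * 100

# Illustrative only: if formal hiring costs 1.88x the informal wage, CFIL = 88%.
print(cfil_pct(1880.0, 1000.0))  # 88.0
```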
There is sizable attrition in the pipeline from applicant admission through to direct employment of AI graduates, indicating leakages at multiple stages (application → admission → graduation → employment).
Quantification of human-resource losses across pipeline stages using the monitoring dataset for the 191 institutions; descriptive counts/percentages of entrants, admitted students, graduates, and those directly employed in AI roles (pipeline loss metrics reported in paper).
Confidence: high | Direction: negative | Paper: Employment og Graduates of Educational Programs in the Field... | Outcome: Attrition rates / absolute losses at sequential pipeline stages (applicants → ad...
Graduates of AI-related educational programs at Russian universities, together with alternative training routes (self-education and professional retraining), satisfy 43.9% of estimated national AI personnel demand.
Monitoring dataset of 191 Russian universities implementing AI-related programs; aggregated counts of university graduates plus estimated contributions from self-education and professional retraining compared to an estimated national AI personnel demand (coverage reported as 43.9%).
Confidence: high | Direction: negative | Paper: Employment og Graduates of Educational Programs in the Field... | Outcome: Share (%) of estimated national AI personnel demand satisfied by combined univer...
AI automates routine and some mid-skill tasks, reducing employment in those occupations.
Empirical task-based exposure measures mapping AI capabilities to occupational task content, microdata analyses of employment by occupation using household/employer/administrative datasets, and panel regressions/decompositions that document within-occupation declines and between-occupation shifts.
Confidence: high | Direction: negative | Paper: Intelligence and Labor Market Transformation: A Critical Ana... | Outcome: employment levels in routine and mid-skill occupations
Relying on secondary literature limits the paper's ability to make causal inferences and constrains empirical generalizability to all sectors or countries.
Stated limitations in the paper's Data & Methods section acknowledging scope and inferential constraints.
Confidence: high | Direction: negative | Paper: Who Loses to Automation? AI-Driven Labour Displacement and t... | Outcome: causal inference strength and generalizability of conclusions
Increases in K_T reduce employment levels in affected firms and industries even when aggregate productivity rises.
Panel econometric estimates at firm and industry levels relating K_T intensity to employment outcomes, controlling for demand, input prices, and firm characteristics; difference-in-differences specifications and instrumental-variable robustness checks; corroborated by sectoral case studies.
Confidence: high | Direction: negative | Paper: The Macroeconomic Transition of Technological Capital in the... | Outcome: employment (firm- and industry-level employment counts or employment growth)
Rising technological capital (K_T) — proxied by robot/automation density, software and intangible capital accumulation, AI adoption surveys, and AI-related patenting — leads to a decline in labor’s share of output.
Firm- and industry-level panel regressions linking constructed K_T intensity measures to labor shares, supported by macro growth-accounting decompositions; robustness checks include difference-in-differences and instrumenting adoption with plausibly exogenous shocks (e.g., cross-border technology diffusion, trade shocks); validated with cross-country comparisons and case studies.
Confidence: high | Direction: negative | Paper: The Macroeconomic Transition of Technological Capital in the... | Outcome: labor share of income (share of output paid to labor)
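The estimation strategy described in these entries, panel regressions of labor share on K_T intensity with firm fixed effects, can be illustrated with a minimal within-estimator sketch on synthetic data (illustrative only; the paper's actual specifications add controls, difference-in-differences, and instruments):

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years = 50, 10
firm = np.repeat(np.arange(n_firms), n_years)  # firm id for each observation

# Synthetic panel: labor share falls by 0.3 per unit of K_T intensity,
# plus a firm fixed effect and idiosyncratic noise.
k_t = rng.uniform(0, 1, size=n_firms * n_years)
fixed_effect = rng.normal(0, 0.1, size=n_firms)[firm]
labor_share = 0.6 - 0.3 * k_t + fixed_effect + rng.normal(0, 0.01, size=firm.size)

def demean(x, groups):
    """Subtract group means (the 'within' transformation that absorbs fixed effects)."""
    means = np.bincount(groups, weights=x) / np.bincount(groups)
    return x - means[groups]

y = demean(labor_share, firm)
x = demean(k_t, firm)
beta = (x @ y) / (x @ x)  # within-estimator slope
print(round(beta, 1))  # -0.3
```

The within transformation removes each firm's level, so the slope is identified from within-firm variation in K_T, which is the sense in which such regressions "control for firm characteristics".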
We critically compare LLM-generated rulings against 10,000 real-world court judgments from China Judgments Online (CJOL).
Dataset statement: the paper compares model outputs to a corpus of 10,000 CJOL labor dispute judgments.
Confidence: high | Direction: neutral | Paper: LLM Safety in Judicial AI: A Stress Test of Social Media Inf... | Outcome: agreement / deviation between LLM-generated rulings and CJOL judgments
We introduce a novel stress test that evaluates LLM-generated labor dispute outcomes by injecting social media sentiment as an external pressure.
Methodological description in the paper: a designed stress test where social media sentiment is used to perturb LLM outputs for labor dispute cases.
Confidence: high | Direction: neutral | Paper: LLM Safety in Judicial AI: A Stress Test of Social Media Inf... | Outcome: sensitivity of LLM-generated labor dispute outcomes to injected social media sen...
Economic evaluations of GLAI should account for end-to-end risk externalities (error propagation, institutional trust, rights impacts), not only short-term productivity gains.
Methodological recommendation grounded in conceptual synthesis of technical, behavioral, and legal risks; normative argument rather than empirical result.
Confidence: high | Direction: neutral | Paper: Why Avoid Generative Legal AI Systems? Hallucination, Overre... | Outcome: comprehensiveness of economic evaluations (inclusion of externalities vs. narrow...
Generative Legal AI (GLAI) systems are built on token-prediction (LLM) architectures rather than formal legal-reasoning architectures.
Conceptual and technical analysis in the paper distinguishing GLAI from other legal-tech; literature synthesis on common LLM architectures. No original empirical dataset or sample size—qualitative/technical review.
Confidence: high | Direction: neutral | Paper: Why Avoid Generative Legal AI Systems? Hallucination, Overre... | Outcome: underlying model architecture type (token-prediction vs. formal-reasoning)
AI’s societal integration in India is gradual, and therefore its impact on economic variables (like wages and inequality) is also gradual.
Synthesis in the paper based on empirical adoption figures (e.g., <0.7% adoption for AI ride services) and the observed weak changes in inequality measures in the transportation sector.
Confidence: high | Direction: null result | Paper: Artificial Intelligence, Demand Switching and Sectoral Wage ... | Outcome: pace of AI integration and consequent economic impact
Despite AI’s introduction, wage inequality in the transportation sector (measured by the Gini coefficient) has not significantly worsened.
Empirical investigation reported in the paper analyzing transportation-sector wage disparities over time using the Gini coefficient; the paper reports no significant worsening post-introduction.
Confidence: high | Direction: null result | Paper: Artificial Intelligence, Demand Switching and Sectoral Wage ... | Outcome: Gini coefficient of wages in the transportation sector
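The Gini coefficient used here as the inequality measure can be computed directly from a wage vector. A standard sketch (the wage data are illustrative, not the paper's):

```python
def gini(wages):
    """Gini coefficient via the sorted-rank formula:
    G = 2 * sum(i * w_i) / (n * sum(w)) - (n + 1) / n, for w sorted ascending."""
    w = sorted(wages)
    n = len(w)
    ranked = sum((i + 1) * x for i, x in enumerate(w))  # 1-indexed rank weighting
    return 2 * ranked / (n * sum(w)) - (n + 1) / n

print(gini([1, 1, 1, 1]))            # 0.0  (perfect equality)
print(round(gini([0, 0, 0, 4]), 2))  # 0.75 (maximal inequality for n = 4)
```

A finding of "no significant worsening" corresponds to the sector's Gini staying roughly flat across the study window.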
Four control mechanisms emerged from the review: GPS tracking (panoptic surveillance), rating systems (emotional labour demands), dynamic pricing (income volatility), and automated sanctions (deactivation fear).
Thematic synthesis across the 48 reviewed studies identifying recurring algorithmic control mechanisms.
Confidence: high | Direction: null result | Paper: Algorithmic Control and Psychological Risk in Digitally Mana... | Outcome: presence/identification of algorithmic control mechanisms
Thematic synthesis integrated Job Demand-Control Model, Conservation of Resources Theory, and Algorithmic Management Theory to develop an integrated multilevel theoretical framework.
Authors' stated method: thematic synthesis combining those three theoretical frameworks across the reviewed literature (48 studies).
Confidence: high | Direction: null result | Paper: Algorithmic Control and Psychological Risk in Digitally Mana... | Outcome: theoretical integration
PRISMA-guided systematic integrative review of 48 peer-reviewed studies (2016-2025) sourced from 4,812 initial records (Scopus, Web of Science, PubMed).
Methods statement in the paper: PRISMA-guided systematic integrative review; search across Scopus, Web of Science, PubMed; initial yield 4,812 records; final included studies = 48.
Confidence: high | Direction: null result | Paper: Algorithmic Control and Psychological Risk in Digitally Mana... | Outcome: number of studies and records screened/included