The Commonplace

Evidence (4049 claims)

Adoption: 5126 claims
Productivity: 4409 claims
Governance: 4049 claims
Human-AI Collaboration: 2954 claims
Labor Markets: 2432 claims
Org Design: 2273 claims
Innovation: 2215 claims
Skills & Training: 1902 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 369 105 58 432 972
Governance & Regulation 365 171 113 54 713
Research Productivity 229 95 33 294 655
Organizational Efficiency 354 82 58 34 531
Technology Adoption Rate 277 115 63 27 486
Firm Productivity 273 33 68 10 389
AI Safety & Ethics 112 177 43 24 358
Output Quality 228 61 23 25 337
Market Structure 105 118 81 14 323
Decision Quality 154 68 33 17 275
Employment Level 68 32 74 8 184
Fiscal & Macroeconomic 74 52 32 21 183
Skill Acquisition 85 31 38 9 163
Firm Revenue 96 30 22 148
Innovation Output 100 11 20 11 143
Consumer Welfare 66 29 35 7 137
Regulatory Compliance 51 61 13 3 128
Inequality Measures 24 66 31 4 125
Task Allocation 64 6 28 6 104
Error Rate 42 47 6 95
Training Effectiveness 55 12 10 16 93
Worker Satisfaction 42 32 11 6 91
Task Completion Time 71 5 3 1 80
Wages & Compensation 38 13 19 4 74
Team Performance 41 8 15 7 72
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 17 15 9 5 46
Job Displacement 5 28 12 45
Social Protection 18 8 6 1 33
Developer Productivity 25 1 2 1 29
Worker Turnover 10 12 3 25
Creative Output 15 5 3 1 24
Skill Obsolescence 3 18 2 23
Labor Share of Income 7 4 9 20
Filter: Governance
Claim: Both open-source and proprietary approaches carry risks of algorithmic bias and fairness violations, especially when models are uncontrolled or poorly validated across populations.
Evidence: Multiple peer-reviewed studies and audit reports summarized in the literature synthesis documenting bias/fairness issues across model types and populations.
Confidence: high · Direction: negative · Source: Framework for Government Policy on Agentic and Generative AI... · Outcome: bias / fairness metrics / differential performance across populations
Claim: Rural digital divides and uneven infrastructure constrain the reach of AI health solutions and risk exacerbating health inequities unless explicitly addressed.
Evidence: Synthesis of infrastructure and equity literature, national connectivity data referenced in reviewed documents, and policy analyses included in the review period 2020–2025.
Confidence: high · Direction: negative · Source: Artificial Intelligence in Healthcare in Indonesia: Are We R... · Outcome: geographic disparities in digital infrastructure (broadband access, device avail...
Claim: Regulatory and governance frameworks for health AI in Indonesia are fragmented, with limited requirements for transparency/explainability and weak procurement/governance mechanisms.
Evidence: Thematic analysis of national policy papers, SATUSEHAT governance reports, and regulatory documents identified in the 42 supplementary documents and literature review (2020–2025).
Confidence: high · Direction: negative · Source: Artificial Intelligence in Healthcare in Indonesia: Are We R... · Outcome: presence/strength of regulation and governance mechanisms (transparency requirem...
Claim: AI-generated code can introduce security vulnerabilities and raise licensing/intellectual-property concerns.
Evidence: Case studies of security incidents, analyses of generated code provenance, and vulnerability-detection studies synthesized in the review.
Confidence: high · Direction: negative · Source: ChatGPT as a Tool for Programming Assistance and Code Develo... · Outcome: incidence of security vulnerabilities in generated code; instances of license or...
Claim: LLMs sometimes generate incorrect, nonsensical, or insecure code (hallucinations).
Evidence: Multiple benchmarks, code-generation accuracy tests, and incident case studies documented in the empirical literature showing incorrect or fabricated outputs.
Confidence: high · Direction: negative · Source: ChatGPT as a Tool for Programming Assistance and Code Develo... · Outcome: code correctness/error rate; incidence of hallucinated outputs (false or fabrica...
Claim: Data security, privacy risks, unequal gains, and regulatory shortfalls can undermine the benefits of AI/robotics adoption.
Evidence: Policy and risk analyses from secondary literature, case studies, and institutional reports synthesized in the paper; examples cited but no original incident-level dataset or incidence rates provided.
Confidence: high · Direction: negative · Source: AI and Robotics Redefine Output and Growth: The New Producti... · Outcome: data/privacy risk incidence, inequality measures, regulatory adequacy (qualitati...
Claim: Transition frictions and skills mismatches are important barriers to workers moving into newly created AI-related roles.
Evidence: Qualitative review of workforce and skills literature, case studies, and sector reports; evidence comes from secondary sources with varied methodologies; the paper does not report pooled quantitative estimates.
Confidence: high · Direction: negative · Source: AI and Robotics Redefine Output and Growth: The New Producti... · Outcome: transition costs, skills mismatch incidence, retraining needs (labor market fric...
Claim: International and national legal approaches to these stages are fragmented, creating uncertainty for IP, privacy, liability, and evidence law.
Evidence: Comparative review of international and national legal approaches and judicial responses cited in the paper (secondary legal sources).
Confidence: high · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Outcome: degree of fragmentation and legal uncertainty across jurisdictions
Claim: Output-stage risks include authenticity/deception concerns, attribution and reuse-rights disputes, reputational harms, and broader societal impacts from abundant generated media.
Evidence: Review of empirical studies on media authenticity, legal cases, and policy analyses included in the narrative review.
Confidence: high · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Outcome: authenticity, deception potential, attribution disputes, reputational and societ...
Claim: Process-stage risks include governance of model development, control over deployment, transparency, auditing, and operational safety.
Evidence: Conceptual synthesis of technical governance literature and policy reports cited in the narrative review.
Confidence: high · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Outcome: governance and operational safety concerns in model development/deployment
Claim: Input-stage risks include concerns about consent, copyright, representativeness, bias, provenance, and data ownership for training material.
Evidence: Synthesis of legal and policy literature and documented legal cases/statutes related to training data and IP/privacy issues (secondary sources only).
Confidence: high · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Outcome: legal/ethical compliance and risk factors in training datasets
Claim: Generative audiovisual AI poses material ethical, control, transparency, and legal challenges across three stages: input (training data), process (development and deployment), and output (use of artifacts).
Evidence: Conceptual three-stage framework built from comparative review of literature, legal cases/statutes, and policy reports described in the paper.
Confidence: high · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Outcome: presence and types of ethical, governance, transparency and legal risks across i...
Claim: Stated limitations of the study include potential selection bias in the reviewed sources and the contingency of its conclusions on evolving legal decisions and technology developments.
Evidence: Author-stated limitations section within the paper; a qualitative acknowledgement rather than an empirical bias assessment.
Confidence: high · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Outcome: reliability and generalizability of the review's conclusions
Claim: Output-stage risks include challenges to authenticity and provenance, erosion of trust (deepfakes and misinformation), and potential legal liability for harms caused by generated content.
Evidence: Synthesis of technical papers on deepfakes, legal analyses of liability, and policy reports referenced in the review; no original incident dataset or quantitative prevalence estimate included.
Confidence: high · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Outcome: authenticity/provenance verification success, consumer trust, incidence of misin...
Claim: Input-stage risks include copyright infringement, lack of consent, poor data provenance, and biases/representational harms encoded in training datasets.
Evidence: Review and synthesis of academic and legal literature on training data issues; examples and case law discussed, but no original dataset audit or sample counts provided.
Confidence: high · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Outcome: legal/compliance risk and bias in generated outputs arising from training data
Claim: Use of these models faces significant ethical, control, transparency, and legal challenges across three stages: input (training data), process (development/control), and output (generated artifacts).
Evidence: Framework constructed from interdisciplinary literature (technical, ethical, legal sources) and review of statutes/judicial approaches; qualitative synthesis rather than primary data.
Confidence: high · Direction: negative · Source: Ethical and societal challenges to the adoption of generativ... · Outcome: presence and severity of ethical/legal/control challenges across input/process/o...
Claim: High environmental constraints in many African regions (poor infrastructure, challenging geography, frequent climate shocks) materially affect logistics, resilience, and supply-chain performance.
Evidence: Review of literature on infrastructure, geography, and climate impacts in the conceptual paper.
Confidence: high · Direction: negative · Source: Continental shift: operations and supply chain management re... · Outcome: infrastructure and environmental constraints' impact on logistics/resilience
Claim: Africa is rich in natural resources but captures relatively little development value from them, creating resource-allocation and value-capture problems relevant to OSCM.
Evidence: Development economics and regional studies literature cited in the paper's synthesis; a conceptual claim without new empirical testing.
Confidence: high · Direction: negative · Source: Continental shift: operations and supply chain management re... · Outcome: resource endowment versus development outcomes (value capture in supply chains)
Claim: Africa has a large informal economy and many informal organizations that shape supply-chain behavior and market functioning.
Evidence: Literature synthesis citing development and institutional studies (no primary data collection in the paper).
Confidence: high · Direction: negative · Source: Continental shift: operations and supply chain management re... · Outcome: prevalence of informality and its influence on supply-chain behavior
Claim: High upfront costs, weak digital/physical infrastructure, limited access to credit, low digital literacy, insecure land tenure, and sociocultural factors (including gendered access) limit uptake of digital and precision technologies among smallholders.
Evidence: Consistent findings across program evaluations, qualitative stakeholder interviews, participatory assessments, and case studies cited in the synthesis.
Confidence: high · Direction: negative · Source: MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION · Outcome: technology adoption rates (uptake), barriers to adoption
Claim: Limited access to capital, data, digital infrastructure, and skills, as well as insecure land tenure, reduce adoption rates for advanced innovations among smallholders.
Evidence: Multiple empirical studies and program evaluations synthesized in the review documenting adoption barriers; policy review identifying structural constraints across regions.
Confidence: high · Direction: negative · Source: MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION · Outcome: adoption rates of AI/IoT/precision tools, uptake of new practices
Claim: Integrating AI raises questions of accountability, transparency, fairness, privacy, and bias; managerial responsibility includes governance design, validation, and audit of AI decisions.
Evidence: Normative and governance-focused synthesis citing ethical frameworks and illustrative cases; identifies governance tasks and validation/audit needs rather than empirical prevalence rates.
Confidence: high · Direction: negative · Source: Modern Management in the Age of Artificial Intelligence: Str... · Outcome: presence and quality of AI governance mechanisms (accountability frameworks, tra...
Claim: Deficits in governance, auditing, and interpretability constrain the safe deployment of generative AI in firms.
Evidence: Synthesis of industry reports and conceptual literature noting gaps in governance and interpretability; no quantitative governance dataset reported.
Confidence: high · Direction: negative · Source: The Use of ChatGPT in Business Productivity and Workflow Opt... · Outcome: presence/absence of governance processes, frequency of audit findings, deploymen...
Claim: Algorithmic biases in generative AI can amplify and codify discriminatory patterns in organizational decisions.
Evidence: Extensive literature on algorithmic bias synthesized in the review and applied to generative models; case examples referenced.
Confidence: high · Direction: negative · Source: The Use of ChatGPT in Business Productivity and Workflow Opt... · Outcome: disparities in decision outcomes (error rates, disparate impact metrics by group...
Claim: Generative AI use introduces significant organizational risks, including data privacy breaches and leakage when models or third-party services are used.
Evidence: Conceptual analysis and references to documented incidents and industry reports within the review; no single aggregated incident dataset provided.
Confidence: high · Direction: negative · Source: The Use of ChatGPT in Business Productivity and Workflow Opt... · Outcome: incidence of data breaches/leakage, number of privacy violations
Claim: Generated code can introduce security vulnerabilities.
Evidence: Security analyses and code audits documenting examples where LLM-generated code contains known vulnerability patterns; incident-oriented case studies and controlled experiments assessing vulnerability incidence.
Confidence: high · Direction: negative · Source: ChatGPT as a Tool for Programming Assistance and Code Develo... · Outcome: incidence of security vulnerabilities in AI-generated code
Claim: LLMs can produce plausible-looking but incorrect or insecure code (so-called 'hallucinations').
Evidence: Benchmarks and controlled tests demonstrating incorrect outputs; security analyses and replicated examples showing erroneous or insecure snippets produced by LLMs across multiple models and prompts.
Confidence: high · Direction: negative · Source: ChatGPT as a Tool for Programming Assistance and Code Develo... · Outcome: code correctness/error rate and frequency of insecure code returned
Claim: The technical feasibility of robust token verification and resistance to spoofing has not yet been demonstrated.
Evidence: The authors explicitly acknowledge this limitation in the paper; no prototypes or red-team results are presented.
Confidence: high · Direction: negative · Source: Token Taxes: mitigating AGI's economic risks · Outcome: robustness of token verification to spoofing/evasion
Claim: Because the framework depends on LLM behavior, hallucinations, bias, or misaligned reasoning can propagate into simulated outcomes; Chain-of-Thought reasoning may be hard to fully verify, posing interpretability/auditability challenges.
Evidence: The paper's cautions section lists potential failure modes and ethical/interpretability risks; these are identified risks rather than failures quantified in experiments.
Confidence: high · Direction: negative · Source: An LLM-Driven Multi-Agent Simulation Framework for Coupled E... · Outcome: propagation of LLM-induced errors/bias into simulation outcomes and interpretabi...
Claim: Key failure modes for AI in drug R&D include overfitting, poor generalizability, dataset bias, insufficient external validation, and misalignment with evolving regulatory expectations.
Evidence: Synthesis of literature and case reports in the narrative review describing observed failures and risks across projects (qualitative evidence).
Confidence: high · Direction: negative · Source: Artificial Intelligence in Drug Discovery and Development: R... · Outcome: failure incidence of AI projects (model performance collapse, regulatory rejecti...
Claim: Without rigorous controls (validation, applicability-domain reporting, attention to dataset bias), AI models risk overfitting and can produce inequitable outcomes and regulatory friction that undermine economic benefits.
Evidence: Theoretical arguments plus case reports and literature cited in the review documenting instances and mechanisms of overfitting, dataset bias, and regulatory challenges; a narrative summary rather than systematic quantification.
Confidence: high · Direction: negative · Source: Artificial Intelligence in Drug Discovery and Development: R... · Outcome: model generalizability (out-of-sample performance), subgroup performance dispari...
Claim: Governing-logic stability uncertainty (whether decision logic or objectives remain stationary) is a distinct risk posed by agentic AI.
Evidence: Conceptual argument and proposed taxonomy; no empirical tests reported.
Confidence: high · Direction: negative · Source: Visioning Human-Agentic AI Teaming: Continuity, Tension, and... · Outcome: stability of AI decision logic/objectives over time
Claim: Epistemic grounding uncertainty (uncertainty about how or why an AI produced a particular output) increases with agentic AI.
Evidence: Literature synthesis on model-level opacity and causal explanation limits; conceptual reasoning in the paper.
Confidence: high · Direction: negative · Source: Visioning Human-Agentic AI Teaming: Continuity, Tension, and... · Outcome: ability to explain/ground AI outputs
Claim: Behavioral trajectory uncertainty (difficulty predicting long-run actions) is a primary form of uncertainty introduced by agentic AI.
Evidence: Conceptual classification and argument; proposed as one of three principal uncertainties; no empirical estimation.
Confidence: high · Direction: negative · Source: Visioning Human-Agentic AI Teaming: Continuity, Tension, and... · Outcome: predictability of long-run agentic AI actions
Claim: Integration cost: AI-generated outputs often require human revision, testing, and manual integration into existing systems.
Evidence: Reported practitioner experience and observed practices from the field study at Netlight; authors note time and effort spent on revision and integration; no quantitative time-cost estimates provided.
Confidence: high · Direction: negative · Source: Rethinking How IT Professionals Build IT Products with Artif... · Outcome: human time/effort required to adapt AI outputs for production
Claim: AI systems lack full project context, design rationale, and long-term constraints, creating context gaps for development tasks.
Evidence: Interviews and workflow observations at Netlight where practitioners reported contextual limitations of AI tools; qualitative examples provided; single-firm qualitative evidence.
Confidence: high · Direction: negative · Source: Rethinking How IT Professionals Build IT Products with Artif... · Outcome: degree of project/contextual awareness in AI-produced recommendations
Claim: AI outputs commonly contain errors and hallucinations: generated code can be incorrect, incomplete, or misleading.
Evidence: Practitioner reports and observed interactions with AI tools documented in the Netlight qualitative study; specific instances and practitioner concerns described in the paper; no quantitative error rates provided.
Confidence: high · Direction: negative · Source: Rethinking How IT Professionals Build IT Products with Artif... · Outcome: accuracy and correctness of AI-generated outputs
Claim: Adaptive RL-driven campaigns complicate attribution and causal inference, so rigorous experimental designs (multi-armed trials, off-policy evaluation) are required for valid measurement.
Evidence: Methodological claim in the implications section, supported by discussion of policy adaptivity and the need for specific evaluation techniques; no empirical demonstration provided.
Confidence: high · Direction: negative · Source: Personalized Content Selection in Marketing Using BERT and G... · Outcome: bias in causal estimates, validity of attribution, off-policy evaluation error
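As context for the off-policy evaluation the claim above mentions: the idea is to estimate how a new targeting policy would have performed using only logs collected under the deployed policy. A minimal inverse-propensity-scoring sketch, not taken from the cited paper; the log format, `ips_value`, and the toy target policy are illustrative assumptions.

```python
def ips_value(logs, target_prob):
    """Estimate a target policy's average reward from logs collected
    under a different (logging) policy via inverse propensity scoring.

    logs: list of (context, action, logging_propensity, reward)
    target_prob(context, action): probability the target policy would
    choose `action` in `context` (hypothetical evaluation policy).
    """
    total = 0.0
    for context, action, propensity, reward in logs:
        # Reweight each logged reward by how much more (or less) often
        # the target policy would have taken the same action.
        total += reward * target_prob(context, action) / propensity
    return total / len(logs)

# Toy usage: the logging policy picked actions 0/1 uniformly
# (propensity 0.5); the target policy always picks action 1.
logs = [(None, 1, 0.5, 1.0), (None, 0, 0.5, 0.0), (None, 1, 0.5, 1.0)]
always_one = lambda ctx, a: 1.0 if a == 1 else 0.0
estimate = ips_value(logs, always_one)  # a noisy small-sample estimate
```

The reweighting is what makes evaluation valid despite the adaptive logging policy; with very small propensities the estimate becomes high-variance, which is one reason the claim calls for rigorous experimental designs.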
Claim: The system raises privacy, fairness, and safety risks, including data leakage, demographic bias in generated content, manipulative targeting, and potential regulatory non-compliance.
Evidence: Risk assessment and red-team/audit practices described; the paper cites known classes of ML deployment risks and recommends logs/audits. This is a conceptual identification rather than a quantified empirical finding.
Confidence: high · Direction: negative · Source: Personalized Content Selection in Marketing Using BERT and G... · Outcome: incidence/risk of data leakage, demographic bias metrics, examples of manipulati...
Claim: Integration and engineering complexity (legacy systems, privacy/compliance pipelines, multi-channel platforms) is a persistent barrier to deployment.
Evidence: Industry case studies and practitioner reports synthesized in the review documenting integration challenges; no systematic cost accounting or sample sizes presented.
Confidence: high · Direction: negative · Source: The Effectiveness of ChatGPT in Customer Service and Communi... · Outcome: integration complexity metrics, implementation time/cost, number of integration ...
Claim: Hallucinations and factual errors from generative AI can damage service quality and customer trust.
Evidence: Documented failure cases and empirical reports from the literature aggregated by the review; no novel incident count or experimental data in this paper.
Confidence: high · Direction: negative · Source: The Effectiveness of ChatGPT in Customer Service and Communi... · Outcome: incidence of factual errors/hallucinations, measures of service quality and cust...
Claim: Generative AI is susceptible to social and representational biases and to factual errors or hallucinations; it lacks tacit, contextual domain expertise.
Evidence: Documented examples in the literature of biased outputs and hallucinations; controlled evaluations and audits of model outputs; qualitative reports highlighting lack of tacit knowledge in domain-specific tasks.
Confidence: high · Direction: negative · Source: ChatGPT as an Innovative Tool for Idea Generation and Proble... · Outcome: incidence of biased content; factual error/hallucination rate; performance on do...
Claim: The quality of AI-generated outputs is highly variable; models frequently produce mediocre but plausible-sounding content that requires human filtering.
Evidence: Multiple user studies and qualitative reports documenting variability in output quality and the need for human curation; outcome measures include error rates, user-rated quality, and time spent vetting.
Confidence: high · Direction: negative · Source: ChatGPT as an Innovative Tool for Idea Generation and Proble... · Outcome: output quality distributions; user-perceived quality; time/effort for human filt...
Claim: Factual errors and 'hallucinations' create misinformation risks and can produce costly service failures.
Evidence: Model evaluation studies, incident case reports from deployments, and academic/industry analyses documenting hallucination rates and concrete failure examples.
Confidence: high · Direction: negative · Source: The Effectiveness of ChatGPT in Customer Service and Communi... · Outcome: factual accuracy / hallucination rate; incidents of service failure (operational...
Claim: The study population was restricted to CHI conference papers that publicly shared study data and analysis code (a self-selected subset), introducing a self-selection bias that may inflate the estimated reproducibility rate for CHI papers overall.
Evidence: The authors' stated sampling strategy and limitations noted in the paper (sample restricted to artifact-sharing papers; potential overestimation of reproducibility).
Confidence: high · Direction: negative · Source: On the Computational Reproducibility of Human-Computer Inter... · Outcome: generalizability of the measured reproducibility rate (bias due to sampling)
Claim: Ethical, privacy, and legal restrictions sometimes limit the ability to share data and thereby hamper reproducibility.
Evidence: The authors' observations from reproduction work and survey/interview responses indicating that some datasets could not be shared for legal/ethical reasons.
Confidence: high · Direction: negative · Source: On the Computational Reproducibility of Human-Computer Inter... · Outcome: incidence of data-sharing restrictions affecting reproducibility
Claim: High linguistic diversity in Africa makes building and evaluating multilingual language technologies more difficult and is a barrier to inclusive AI.
Evidence: Synthesis of technical literature on NLP and multilingual model development and policy/NGO reports highlighting missing language resources; no original model evaluation reported.
Confidence: high · Direction: negative · Source: Towards Responsible Artificial Intelligence Adoption: Emergi... · Outcome: language technology availability, model performance across African languages, nu...
Claim: Structural constraints (limited digital infrastructure, scarce and skewed data, and high linguistic diversity) complicate AI development, deployment, and evaluation in African contexts.
Evidence: Desk review of infrastructure and data availability reports and scholarly literature demonstrating gaps and their effects; no new measurement in this paper.
Confidence: high · Direction: negative · Source: Towards Responsible Artificial Intelligence Adoption: Emergi... · Outcome: internet/digital infrastructure coverage, availability and representativeness of...
Claim: Privacy concerns, regulatory/compliance issues, biased or opaque models, and the need for change management and HR analytics capability building are significant risks constraining adoption.
Evidence: Recurring risks and constraints reported by multiple included studies; summarized in the review's 'risks and constraints' theme.
Confidence: high · Direction: negative · Source: Data-Driven Strategies in Human Resource Management: The Rol... · Outcome: adoption constraints, incidence of privacy/regulatory/bias issues
Claim: Implementation of data-driven HRM faces recurring challenges: data quality, privacy and ethics, algorithmic bias, and deficiencies in skills and organizational readiness.
Evidence: Commonly reported implementation issues across the 47 reviewed studies; extracted as a central theme in the review's thematic analysis.
Confidence: high · Direction: negative · Source: Data-Driven Strategies in Human Resource Management: The Rol... · Outcome: implementation success/failure factors, incidence of data/ethical issues