The Commonplace

Evidence (2340 claims)

Adoption: 5267 claims
Productivity: 4560 claims
Governance: 4137 claims
Human-AI Collaboration: 3103 claims
Labor Markets: 2506 claims
Innovation: 2354 claims
Org Design: 2340 claims
Skills & Training: 1945 claims
Inequality: 1322 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---:|---:|---:|---:|---:|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | 3 | | 25 |
| Skill Obsolescence | 3 | 19 | 2 | | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | | 23 |
Active filter: Org Design
Both open-source and proprietary approaches carry risks of algorithmic bias and fairness violations, especially when models are uncontrolled or poorly validated across populations.
Multiple peer-reviewed studies and audit reports summarized in the literature synthesis documenting bias/fairness issues across model types and populations.
high negative Framework for Government Policy on Agentic and Generative AI... bias / fairness metrics / differential performance across populations
Data security, privacy risks, unequal gains, and regulatory shortfalls can undermine the benefits of AI/robotics adoption.
Policy and risk analyses from secondary literature, case studies, and institutional reports synthesized in the paper; examples cited but no original incident-level dataset or incidence rates provided.
high negative AI and Robotics Redefine Output and Growth: The New Producti... data/privacy risk incidence, inequality measures, regulatory adequacy (qualitati...
Transition frictions and skills mismatches are important barriers to workers moving into newly created AI‑related roles.
Qualitative review of workforce and skills literature, case studies, and sector reports; evidence comes from secondary sources with varied methodologies; the paper does not report pooled quantitative estimates.
high negative AI and Robotics Redefine Output and Growth: The New Producti... transition costs, skills mismatch incidence, retraining needs (labor market fric...
International and national legal approaches to these stages are fragmented, creating uncertainty for IP, privacy, liability and evidence law.
Comparative review of international and national legal approaches and judicial responses cited in the paper (secondary legal sources).
high negative Ethical and societal challenges to the adoption of generativ... degree of fragmentation and legal uncertainty across jurisdictions
Output-stage risks include authenticity/deception concerns, attribution and reuse-rights disputes, reputational harms, and broader societal impacts from abundant generated media.
Review of empirical studies on media authenticity, legal cases, and policy analyses included in the narrative review.
high negative Ethical and societal challenges to the adoption of generativ... authenticity, deception potential, attribution disputes, reputational and societ...
Process-stage risks include governance of model development, control over deployment, transparency, auditing, and operational safety.
Conceptual synthesis of technical governance literature and policy reports cited in the narrative review.
high negative Ethical and societal challenges to the adoption of generativ... governance and operational safety concerns in model development/deployment
Input-stage risks include concerns about consent, copyright, representativeness, bias, provenance and data ownership for training material.
Synthesis of legal and policy literature and documented legal cases/statutes related to training data and IP/privacy issues (secondary sources only).
high negative Ethical and societal challenges to the adoption of generativ... legal/ethical compliance and risk factors in training datasets
Generative audiovisual AI poses material ethical, control, transparency and legal challenges across three stages — input (training data), process (development & deployment), and output (use of artifacts).
Conceptual three-stage framework built from comparative review of literature, legal cases/statutes and policy reports described in the paper.
high negative Ethical and societal challenges to the adoption of generativ... presence and types of ethical, governance, transparency and legal risks across i...
Limitations of the study include potential selection bias in reviewed sources and contingency of conclusions on evolving legal decisions and technology developments.
Author-stated limitations section within the paper; qualitative acknowledgement rather than empirical bias assessment.
high negative Ethical and societal challenges to the adoption of generativ... reliability and generalizability of the review's conclusions
Output-stage risks include challenges to authenticity and provenance, erosion of trust (deepfakes and misinformation), and potential legal liability for harms caused by generated content.
Synthesis of technical papers on deepfakes, legal analyses of liability, and policy reports referenced in the review; no original incident dataset or quantitative prevalence estimate included.
high negative Ethical and societal challenges to the adoption of generativ... authenticity/provenance verification success, consumer trust, incidence of misin...
Input-stage risks include copyright infringement, lack of consent, poor data provenance, and biases/representational harms encoded in training datasets.
Review and synthesis of academic and legal literature on training data issues; examples and case law discussed, but no original dataset audit or sample counts provided.
high negative Ethical and societal challenges to the adoption of generativ... legal/compliance risk and bias in generated outputs arising from training data
Use of these models faces significant ethical, control, transparency, and legal challenges across three stages—input (training data), process (development/control), and output (generated artifacts).
Framework constructed from interdisciplinary literature (technical, ethical, legal sources) and review of statutes/judicial approaches; qualitative synthesis rather than primary data.
high negative Ethical and societal challenges to the adoption of generativ... presence and severity of ethical/legal/control challenges across input/process/o...
Results reflect small-scale e-commerce use cases; external validity to larger firms, other sectors, or more complex tasks is not established.
Scope of deployments limited to small-scale e-commerce settings as stated in methods; no cross-sector or large-firm samples reported in summary.
high negative Artificial Intelligence Agents in Knowledge Work: Transformi... generalisability/external validity of observed productivity effects
The study's evidence is observational rather than randomized controlled trials, so causal estimates about productivity impacts are suggestive rather than definitive.
Declared study design: applied experimentation and observational analysis of deployments (no randomized assignment); methods section explicitly notes observational limitation.
high negative Artificial Intelligence Agents in Knowledge Work: Transformi... strength of causal inference (ability to attribute observed productivity changes...
Integrating AI raises questions of accountability, transparency, fairness, privacy, and bias; managerial responsibility includes governance design, validation, and audit of AI decisions.
Normative and governance-focused synthesis citing ethical frameworks and illustrative cases; identifies governance tasks and validation/audit needs rather than empirical prevalence rates.
high negative Modern Management in the Age of Artificial Intelligence: Str... presence and quality of AI governance mechanisms (accountability frameworks, tra...
Deficits in governance, auditing, and interpretability constrain the safe deployment of generative AI in firms.
Synthesis of industry reports and conceptual literature noting gaps in governance and interpretability; no quantitative governance dataset reported.
high negative The Use of ChatGPT in Business Productivity and Workflow Opt... presence/absence of governance processes, frequency of audit findings, deploymen...
Algorithmic biases in generative AI can amplify and codify discriminatory patterns in organizational decisions.
Extensive literature on algorithmic bias synthesized in the review and applied to generative models; case examples referenced.
high negative The Use of ChatGPT in Business Productivity and Workflow Opt... disparities in decision outcomes (error rates, disparate impact metrics by group...
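Since the entry above names disparate impact metrics by group as the relevant outcome, here is a minimal sketch of how one such metric is computed. The group labels, toy data, and the 0.8 ("four-fifths") threshold are illustrative assumptions, not taken from the reviewed paper.

```python
# Minimal sketch: disparate impact ratio (selection rate of a protected
# group divided by that of a reference group). Values below ~0.8 are a
# common red flag; the threshold here is an assumption, not a rule.

def selection_rate(decisions, groups, group):
    """Share of positive decisions among members of `group`."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members) if members else 0.0

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of selection rates between protected and reference groups."""
    ref = selection_rate(decisions, groups, reference)
    return selection_rate(decisions, groups, protected) / ref if ref else float("inf")

# Toy usage: 1 = approved, 0 = rejected
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(disparate_impact(decisions, groups, protected="b", reference="a"))  # 0.25/0.75 ≈ 0.33
```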
Generative AI use introduces significant organizational risks including data privacy breaches and leakage when models or third‑party services are used.
Conceptual analysis and references to documented incidents and industry reports within the review; no single aggregated incident dataset provided.
high negative The Use of ChatGPT in Business Productivity and Workflow Opt... incidence of data breaches/leakage, number of privacy violations
The study is limited by being a single-domain (CMM) case study with a likely modest sample size and dependence on specific AR hardware and MLLM capabilities; further validation across other machines and larger samples is needed.
Authors note these limitations in their discussion; the summary explicitly lists single-case domain, likely modest sample size, and dependency on particular hardware/MLLM as limitations.
high negative Augmented Reality-Based Training System Using Multimodal Lan... External validity/generalizability of findings (limitations stated)
Governing-logic stability uncertainty (whether decision logic or objectives remain stationary) is a distinct risk posed by agentic AI.
Conceptual argument and proposed taxonomy; no empirical tests reported.
high negative Visioning Human-Agentic AI Teaming: Continuity, Tension, and... stability of AI decision logic/objectives over time
Epistemic grounding uncertainty (uncertainty about how/why an AI produced a particular output) increases with agentic AI.
Literature synthesis on model-level opacity and causal explanation limits; conceptual reasoning in the paper.
high negative Visioning Human-Agentic AI Teaming: Continuity, Tension, and... ability to explain/ground AI outputs
Behavioral trajectory uncertainty (difficulty predicting long-run actions) is a primary form of uncertainty introduced by agentic AI.
Conceptual classification and argument; proposed as one of three principal uncertainties; no empirical estimation.
high negative Visioning Human-Agentic AI Teaming: Continuity, Tension, and... predictability of long-run agentic AI actions
Integration cost: AI-generated outputs often require human revision, testing, and manual integration into existing systems.
Reported practitioner experience and observed practices from the field study at Netlight; authors note time and effort spent on revision and integration; no quantitative time-cost estimates provided.
high negative Rethinking How IT Professionals Build IT Products with Artif... human time/effort required to adapt AI outputs for production
AI systems lack full project context, design rationale, and long-term constraints, creating context gaps for development tasks.
Interviews and workflow observations at Netlight where practitioners reported contextual limitations of AI tools; qualitative examples provided; single-firm qualitative evidence.
high negative Rethinking How IT Professionals Build IT Products with Artif... degree of project/contextual awareness in AI-produced recommendations
AI outputs commonly contain errors and hallucinations: generated code can be incorrect, incomplete, or misleading.
Practitioner reports and observed interactions with AI tools documented in the Netlight qualitative study; specific instances and practitioner concerns described in the paper; no quantitative error rates provided.
high negative Rethinking How IT Professionals Build IT Products with Artif... accuracy and correctness of AI-generated outputs
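One guardrail this finding motivates is validating generated code against a small spec before integration. A minimal sketch, where the generated snippet, function name, and test cases are hypothetical examples rather than anything from the Netlight study:

```python
# Minimal sketch: run a candidate AI-generated snippet against a few
# input/output pairs before accepting it. The sandboxing here is naive
# (bare exec); a real pipeline would isolate execution properly.

def passes_spec(candidate_src, func_name, cases):
    """Exec candidate code in a throwaway namespace and check I/O pairs."""
    ns = {}
    try:
        exec(candidate_src, ns)
        fn = ns[func_name]
        return all(fn(*args) == expected for args, expected in cases)
    except Exception:
        return False

generated = "def add(a, b):\n    return a - b\n"  # plausible-looking but wrong
print(passes_spec(generated, "add", [((1, 2), 3), ((0, 0), 0)]))  # False
```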
Integration and engineering complexity (legacy systems, privacy/compliance pipelines, multi-channel platforms) is a persistent barrier to deployment.
Industry case studies and practitioner reports synthesized in the review documenting integration challenges; no systematic cost accounting or sample sizes presented.
high negative The Effectiveness of ChatGPT in Customer Service and Communi... integration complexity metrics, implementation time/cost, number of integration ...
Hallucinations and factual errors from generative AI can damage service quality and customer trust.
Documented failure cases and empirical reports from the literature aggregated by the review; no novel incident count or experimental data in this paper.
high negative The Effectiveness of ChatGPT in Customer Service and Communi... incidence of factual errors/hallucinations, measures of service quality and cust...
Generative AI is susceptible to social and representational biases and to factual errors or hallucinations; it lacks tacit, contextual domain expertise.
Documented examples in the literature of biased outputs and hallucinations; controlled evaluations and audits of model outputs; qualitative reports highlighting lack of tacit knowledge in domain-specific tasks.
high negative ChatGPT as an Innovative Tool for Idea Generation and Proble... incidence of biased content; factual error/hallucination rate; performance on do...
The quality of AI-generated outputs is highly variable; models frequently produce mediocre but plausible-sounding content that requires human filtering.
Multiple user studies and qualitative reports documenting variability in output quality and the need for human curation; outcome measures include error rates, user-rated quality, and time spent vetting.
high negative ChatGPT as an Innovative Tool for Idea Generation and Proble... output quality distributions; user-perceived quality; time/effort for human filt...
Privacy concerns, regulatory/compliance issues, biased or opaque models, and the need for change management and HR analytics capability building are significant risks constraining adoption.
Recurring risks and constraints reported by multiple included studies; summarized in the review's 'risks and constraints' theme.
high negative Data-Driven Strategies in Human Resource Management: The Rol... adoption constraints, incidence of privacy/regulatory/bias issues
Implementation of data-driven HRM faces recurring challenges: data quality, privacy and ethics, algorithmic bias, and deficiencies in skills and organizational readiness.
Commonly reported implementation issues across the 47 reviewed studies; extracted as a central theme in the review's thematic analysis.
high negative Data-Driven Strategies in Human Resource Management: The Rol... implementation success/failure factors, incidence of data/ethical issues
Constraints and risks include model risk (overfitting, drift), algorithmic bias, privacy and data-sharing limits, legacy ERP complexity, interoperability challenges, and limited organizational readiness and skills.
Reviewed literature (empirical studies, technical evaluations, and standards) documenting technical and organizational failures, risk incidents, and common barriers to implementation.
high negative Integrating Artificial Intelligence and Enterprise Resource ... risk-related outcomes (e.g., model degradation rates, incidence of biased decisi...
Key audit/control weaknesses with respect to prompt fraud include lack of provenance for inputs/prompts and model outputs, inadequate access controls, and missing or ineffective monitoring and anomaly detection for AI outputs.
Qualitative control analysis and adaptation of established auditing principles to GenAI workflows; recommendations based on threat modeling rather than field data.
high negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... presence or absence of specific control capabilities (provenance, access control...
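To make the provenance gap concrete, here is a minimal sketch of one form the recommended control could take: a hash-chained, tamper-evident log of prompts and model outputs. The record fields and hashing scheme are illustrative assumptions, not the paper's design.

```python
# Minimal sketch: each log entry carries the hash of its predecessor,
# so editing any past prompt/output record breaks every later link.

import hashlib, json, time

class PromptProvenanceLog:
    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis hash

    def record(self, user, prompt, output):
        entry = {
            "ts": time.time(),
            "user": user,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
            "prev": self._prev,
        }
        self._prev = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Recompute the chain; any edited entry invalidates the rest."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            prev = hashlib.sha256(json.dumps(e, sort_keys=True).encode()).hexdigest()
        return True
```

Anomaly detection would then run over these records (unusual users, prompt patterns, or output volumes), which is the monitoring capability the entry flags as missing.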
GenAI outputs can be tailored to mimic corporate styles, templates, and evidence artifacts (e.g., summaries, memos, audit trails), which increases their credibility to auditors, managers, or customers.
Illustrative examples and scenario mapping demonstrating templated output mimicry; no controlled experiments or corpus analysis provided.
high negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... perceived credibility of machine-generated artifacts when formatted to corporate...
Large language models produce fluent, human-like outputs that can mask falsehoods (hallucinations) as facts, making prompt fraud effective.
Well-established LLM behavior cited conceptually and supported in the paper by illustrative examples; no new empirical measurement in this article.
high negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... propensity of LLM outputs to present fabricated information as authoritative
Prompt fraud does not require system intrusion, credential theft, or software exploits; it operates at the reasoning/language layer of large language models and therefore can be executed without technical breaches.
Logical/technical argumentation built from properties of LLMs and illustrative hypothetical attack chains; threat modeling rather than empirical attack logs.
high negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... necessity of technical breach for successful fraud (binary: required/not require...
Prompt fraud is a new, distinct fraud modality in which adversaries intentionally craft natural-language prompts (or manipulate prompt inputs) to steer generative AI outputs into producing misleading, fabricated, or compliance-evading artifacts that bypass traditional internal controls.
Conceptual definition presented by the paper based on threat taxonomy and scenario mapping; illustrated with case-style examples. No empirical incident dataset or prevalence statistics provided.
high negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... existence/recognition of a distinct fraud modality ('prompt fraud')
Potential limitations include limited methodological detail on case selection and measurement, possible selection and reporting bias from practitioner-sourced examples, and variable generalizability to small firms or highly regulated industries.
Authors' self-reported limitations in the Methods/Limitations section (qualitative assessment).
high negative Governed Hyperautomation for CRM and ERP: A Reference Patter... methodological completeness and generalizability (qualitative limitation)
China manages the openness–security trade-off through a centralized, developmentalist, techno‑sovereignty approach that privileges coordinated state direction and control.
Qualitative content analysis of national‑level policy texts: 18 Chinese policy documents coded across four analytical dimensions (coordination objectives, institutional actors, governance mechanisms, stakeholder legitimacy).
high negative Balancing openness and security in scientific data governanc... governance logic / institutional coordination type (centralized, state‑led)
Automation and LLM-driven orchestration add opacity; errors in instrument control or analysis could propagate quickly, raising liability, insurance, and reproducibility concerns.
Analytical discussion of risks and analogies to automated systems in other domains; no incident-level empirical data from microscopy given.
high negative ChatMicroscopy: A Perspective Review of Large Language Model... frequency and impact of errors, liability exposure, reproducibility failures
Ethical and governance issues related to LLM-driven microscopy include accountability, reproducibility, access inequities, data privacy, and concentration of capabilities in large providers.
Policy-oriented synthesis and analogies to governance challenges observed in other AI deployments; no new empirical measurement in microscopy contexts.
high negative ChatMicroscopy: A Perspective Review of Large Language Model... presence of governance risks: accountability gaps, reproducibility problems, une...
Integration of LLMs with microscopes faces challenges including safety and reliability of instrument control, verification of scientific outputs, data provenance, and alignment with experimental constraints.
Analytical discussion based on known reliability and safety issues in automated systems and AI tool use; no empirical incident data from microscopy provided.
high negative ChatMicroscopy: A Perspective Review of Large Language Model... risks to safety, reliability, and scientific validity when deploying LLM-driven ...
Implementing the governed hyperautomation pattern raises upfront costs (governance tooling, monitoring, validation, compliance processes).
Economic and cost-structure discussion in the paper, based on qualitative reasoning and industry experience; no quantified cost estimates or sample-based cost analysis provided.
high negative Governed Hyperautomation for CRM and ERP: A Reference Patter... upfront implementation costs (governance tooling, validation, compliance overhea...
Federated infrastructures introduce adversarial risks (model/data poisoning, inference attacks on updates) that require robust aggregation, anomaly detection, and other defenses.
Threat modeling and taxonomy of adversarial/privacy threats with mapped mitigations (robust aggregation, anomaly detection, DP). Evidence is conceptual and based on standard threat frameworks; no empirical attack/defense experiments reported at scale.
high negative Privacy-Aware AI Advertising Systems: A Federated Learning F... vulnerability to poisoning/inference (attack success rate), effectiveness of def...
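As one concrete instance of the robust aggregation named above, a minimal coordinate-wise trimmed-mean sketch, assuming a flat parameter vector per client and an illustrative trim fraction:

```python
# Minimal sketch: average each coordinate after dropping the most extreme
# values, bounding the influence any single poisoned client can exert.

import numpy as np

def trimmed_mean_aggregate(client_updates, trim_frac=0.1):
    """client_updates: array of shape (n_clients, n_params)."""
    updates = np.asarray(client_updates)
    k = int(trim_frac * updates.shape[0])
    sorted_updates = np.sort(updates, axis=0)        # sort per coordinate
    kept = sorted_updates[k : updates.shape[0] - k]  # drop k lowest/highest
    return kept.mean(axis=0)

# Toy usage: one poisoned client pushes a huge update; trimming removes it.
honest = np.random.normal(0.0, 0.1, size=(9, 4))
poisoned = np.full((1, 4), 100.0)
print(trimmed_mean_aggregate(np.vstack([honest, poisoned]), trim_frac=0.1))
```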
Delayed and sparse feedback (clicks/conversions) in advertising complicates credit assignment and timely model updates, degrading learning unless specific methods for delayed/sparse signals are used.
Analytical discussion of learning dynamics with delayed/sparse labels; conceptual solutions suggested (credit assignment methods). No large-scale empirical evaluation presented.
high negative Privacy-Aware AI Advertising Systems: A Federated Learning F... learning efficacy under delayed/sparse feedback (convergence, time-to-adapt), at...
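A minimal sketch of one simple method for delayed labels: an attribution-window buffer that emits a positive when the conversion arrives and a negative only once the window expires. The window length and event fields are illustrative assumptions, not the paper's proposal.

```python
# Minimal sketch: impressions wait in a buffer; conversions claim them as
# positives, and only stale impressions are released as negatives. A
# conversion arriving after expiry is silently lost, the label bias this
# family of methods trades off against timeliness.

from collections import OrderedDict

class DelayedLabelBuffer:
    def __init__(self, window=3600.0):
        self.window = window
        self.pending = OrderedDict()  # impression_id -> (ts, features)

    def impression(self, imp_id, ts, features):
        self.pending[imp_id] = (ts, features)

    def conversion(self, imp_id):
        item = self.pending.pop(imp_id, None)
        if item is None:
            return None  # impression already expired as a negative
        ts, features = item
        return (features, 1)

    def expire(self, now):
        out = []
        while self.pending:
            imp_id, (ts, features) = next(iter(self.pending.items()))
            if now - ts < self.window:
                break
            self.pending.pop(imp_id)
            out.append((features, 0))
        return out
```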
Non-IID and heterogeneous data distributions across devices and publishers impair convergence and degrade personalization unless addressed with algorithmic adaptations.
Analytical modeling of convergence under non-IID conditions; threat/robustness discussion; prototype/simulation illustrations. This claim is supported by established literature and the paper's analytic treatment.
high negative Privacy-Aware AI Advertising Systems: A Federated Learning F... convergence behavior (rate, stability), personalization performance (accuracy on...
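One widely used adaptation of the kind referenced is a FedProx-style proximal term that anchors each local update to the global model, limiting client drift under non-IID data. A minimal sketch on a toy least-squares objective; the learning rate, mu, and loss are illustrative assumptions, not the paper's design.

```python
# Minimal sketch: local gradient steps on the client's own loss plus a
# proximal penalty (mu/2)*||w - w_global||^2 pulling toward the global model.

import numpy as np

def local_update(w_global, X, y, mu=0.1, lr=0.05, steps=50):
    w = w_global.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of local squared error
        grad += mu * (w - w_global)        # proximal pull toward global model
        w -= lr * grad
    return w

# Toy usage: one client's update on its own (non-representative) data
rng = np.random.default_rng(0)
X, y = rng.normal(size=(20, 3)), rng.normal(size=20)
print(local_update(np.zeros(3), X, y))
```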
The paper's formalism shows that prompt/system messages shape the distribution over possible execution paths (indirect control) but cannot evaluate the actual partial path at runtime.
Formal mapping in the paper that treats prompts as shaping prior over paths; conceptual argument and illustrative examples.
high neutral Runtime Governance for AI Agents: Policies on Paths degree of control over execution path (distributional shaping vs. path-specific ...
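To illustrate the distinction, a minimal sketch of the runtime alternative: evaluating the realized partial path against explicit path policies before each new action, rather than only shaping behavior through the prompt. The rule and action names are hypothetical, not the paper's formalism.

```python
# Minimal sketch: a monitor checks (partial path, proposed action) against
# a list of path predicates and blocks the action if any rule fires.

def violates(path, action, rules):
    """Return the name of the first rule forbidding `action` given `path`."""
    for rule in rules:
        if rule(path, action):
            return rule.__name__
    return None

# Example path policies (hypothetical)
def no_send_after_read_secrets(path, action):
    return "read_secrets" in path and action == "send_email"

def max_three_tool_calls(path, action):
    return len(path) >= 3

rules = [no_send_after_read_secrets, max_three_tool_calls]

path = ["search_docs", "read_secrets"]
print(violates(path, "send_email", rules))  # 'no_send_after_read_secrets'
```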
Through a thematic review of existing research, the authors identified recurring themes about incentive schemes: their components, how researchers manipulate them, and their impact on research outcomes.
Authors' stated method and findings: thematic review (the scope/number of reviewed papers not specified in excerpt).
high neutral Incentive-Tuning: Understanding and Designing Incentives for... themes in incentive design practices and reported impacts on empirical study out...
A critical aspect of conducting human–AI decision-making studies is the role of participants, often recruited through crowdsourcing platforms.
Claim based on the authors' thematic literature review noting participant sourcing practices (specific studies and counts not given in excerpt).
high neutral Incentive-Tuning: Understanding and Designing Incentives for... participant recruitment source (e.g., crowdsourcing) and its influence on study ...
Researchers conduct empirical studies investigating how humans use AI assistance for decision-making and how this collaboration impacts results.
Statement summarizing the research landscape; supported implicitly by the authors' thematic review of existing empirical studies (number of studies not specified in excerpt).
high neutral Incentive-Tuning: Understanding and Designing Incentives for... human behaviour and decision outcomes when assisted by AI (empirical study outco...