The Commonplace

Evidence (4333 claims)

Adoption (5539 claims)
Productivity (4793 claims)
Governance (4333 claims)
Human-AI Collaboration (3326 claims)
Labor Markets (2657 claims)
Innovation (2510 claims)
Org Design (2469 claims)
Skills & Training (2017 claims)
Inequality (1378 claims)

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
Filter: Governance
To maintain autonomy and ethical standards, universities and research funders may need to invest in local infrastructure (on‑premise compute, vetted open tools) — a public good with implications for funding priorities and inequality across countries.
Policy recommendation derived from the case study’s identification of infrastructural inequalities and limited mitigation options; not empirically tested in the paper.
speculative positive Emerging ethical duties in AI-mediated research: A case of d... infrastructure investment needs; institutional capacity
The implied policy recommendations include: reinforcing worker voice through required worker representation in AI impact assessments and protection of collective bargaining around technology use; mandating disclosure and standardized impact reporting for AI systems used in hiring, monitoring, promotion, and termination; and implementing targeted, sector- or task-specific enforceable regulations.
Normative policy prescriptions derived from the commentary’s analysis of governance gaps and risks; not empirically tested within the paper.
speculative positive AI governance under the second Trump administration: implica... adoption of recommended policy measures (worker representation, disclosure manda...
The paper proposes user rights to opt out of nonessential generative-AI integration and to choose environmentally optimized models.
Policy design section and candidate legislative amendments recommending consumer opt-out and choice rights.
speculative positive The Global Landscape of Environmental AI Regulation: From th... proposed user rights (consumer opt-out rates; availability of 'eco-optimized' mo...
The paper proposes mandatory model-level transparency requirements covering inference energy consumption, standardized benchmarks, and disclosure of compute locations.
Policy design section: normative proposal and drafted candidate legislative amendments (paper authors’ recommendations).
speculative positive The Global Landscape of Environmental AI Regulation: From th... proposed reporting requirements (inference energy per query, benchmark protocols...
To align economic growth with equitable outcomes, Indonesia needs binding regulation (data protection, auditing, enforceable accountability), communication-rights–based safeguards, targeted protections for vulnerable groups, inclusive participatory policymaking, and mechanisms (impact assessments, transparency/reporting, independent oversight) that internalize externalities and redistribute benefits more fairly.
Normative policy recommendation derived from the paper's discourse analysis, theoretical framing, and identified gaps in current governance instruments; not an empirically tested intervention within the paper.
speculative positive Promising Protection, Producing Exposure: AI Ethics and Mobi... equity and accountability of mobile‑AI governance; internalization of externalit...
Adoption of generative neural-network audiovisual tools is effectively inevitable.
Narrative synthesis of technological trends and literature in the review; no original longitudinal adoption model or empirical adoption rates provided (qualitative projection based on cited trends).
speculative positive Ethical and societal challenges to the adoption of generativ... adoption rate of generative neural-network audiovisual tools
Policymakers may need to mandate minimum verification standards or standardize audit trails/provenance metadata in safety-critical domains to reduce information asymmetries and monitoring costs.
Policy recommendation derived from risk- and externality-focused analysis; no policy impact evaluation or legal analysis presented.
speculative positive Overton Framework v1.0: Cognitive Interlocks for Integrity i... policy adoption (existence of mandates/standards), enforcement/compliance rates,...
Cognitive interlocks (e.g., mandatory proof artifacts, enforced testing gates, provenance/audit trails, verification quotas) make the verification burden explicit and non-bypassable, restoring the appropriate burden of proof.
Architectural design proposal with illustrative usage scenarios; no implementation, field trials, or quantitative evaluation in the paper.
speculative positive Overton Framework v1.0: Cognitive Interlocks for Integrity i... compliance with verification gates (% of artifacts passing mandatory checks), pr...
The Overton Framework, an architectural model that embeds 'cognitive interlocks' into development environments, can align throughput with verification by enforcing verification boundaries, thereby restoring system integrity.
Framework proposed and described conceptually; includes design principles and example interlocks but no empirical prototypes, experiments, or effectiveness evaluations reported.
speculative positive Overton Framework v1.0: Cognitive Interlocks for Integrity i... effectiveness metrics if implemented (e.g., verification coverage, reduction in ...
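The interlock idea in the Overton Framework claims above can be illustrated with a minimal sketch: a gate that refuses any artifact lacking its required proof artifacts, so verification cannot be bypassed. The proof names and dict-based interface below are illustrative assumptions, not the framework's actual design.

```python
# Hypothetical sketch of a "cognitive interlock" verification gate.
# The required proof names are assumptions for illustration only.
REQUIRED_PROOFS = {"tests_passed", "provenance_record", "review_signoff"}

def gate_check(artifact: dict) -> bool:
    """Return True only when every required proof artifact is attached,
    making the verification burden explicit and non-bypassable."""
    attached = set(artifact.get("proofs", []))
    return REQUIRED_PROOFS.issubset(attached)
```

An artifact with all three proofs passes; one missing any proof is blocked, which is the sense in which the burden of proof is restored to the producer of the artifact.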
Token taxes could slow displacement by increasing the effective cost of automation, buying time for retraining and redistribution.
Theoretical claim in the implications section; no model simulations or empirical evidence provided.
speculative positive Token Taxes: mitigating AGI's economic risks rate of labor displacement / time available for retraining
Token taxes offer a new tax base tightly linked to AI-driven digital value creation, potentially restoring revenue lost to automation.
Policy argument in the paper; conceptual reasoning about tax base alignment and revenue potential; no empirical revenue estimates or calibration provided.
speculative positive Token Taxes: mitigating AGI's economic risks public revenue (tax base restoration)
Token taxes are a practical, enforceable policy instrument for mitigating the major economic risks of AGI (shrinking tax bases, falling living standards, and citizen disempowerment).
Author's central thesis supported by conceptual argumentation, architecture proposals (audit pipeline), and comparison to alternatives; no empirical validation or calibration.
speculative positive Token Taxes: mitigating AGI's economic risks mitigation of AGI-related economic risks (tax base erosion, living standards, ci...
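The mechanism in the token-tax claims above, a per-token levy that raises the effective cost of automating a task, can be shown with a back-of-envelope sketch. All rates and token counts below are hypothetical, not figures from the paper.

```python
# Hypothetical illustration of a per-token tax on automated work.
# Rates and token counts are invented for the example.
def effective_automation_cost(tokens_per_task: int,
                              base_cost_per_token: float,
                              tax_per_token: float) -> float:
    """Total cost of automating one task under a per-token tax."""
    return tokens_per_task * (base_cost_per_token + tax_per_token)

untaxed = effective_automation_cost(10_000, 0.00002, 0.0)      # 0.20 per task
taxed = effective_automation_cost(10_000, 0.00002, 0.00001)    # 0.30 per task
# The levy narrows the cost gap between automated and human labor,
# which is the channel claimed to slow displacement and raise revenue.
```

Under these invented numbers the tax raises the per-task automation cost by half, while the tax receipts (tokens × tax rate) accrue to the public revenue base the paper argues is eroding.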
Qualified digital endpoints and validated in silico markers create new markets and assets (digital biomarkers, validation services, certified datasets) with potential commercial value.
Market and policy implications discussed in the review; forward-looking argument based on regulatory pathways and observed demand for validation services (speculative, narrative).
speculative positive Artificial Intelligence in Drug Discovery and Development: R... emergence and revenue of markets for digital biomarkers, certification/validatio...
The Reversal Register is an auditable institutional artifact that records for each decision the prevailing authority state, trigger conditions causing transitions, and justificatory explanations, thereby supporting auditability and research.
Design specification and instrumentation proposal in the paper; description of required metadata fields and intended uses. No implemented dataset presented.
medium-high positive Human–AI Handovers: A Dynamic Authority Reversal Framework f... auditability_score; presence_of_register_entries; completeness_of_justificatory_...
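The register described in the claim above can be sketched as a minimal append-only data structure. The field names (authority state, trigger condition, justification) follow the claim's wording, but the class and method names are illustrative assumptions, not the paper's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RegisterEntry:
    """Hypothetical sketch of one Reversal Register record."""
    decision_id: str
    authority_state: str     # prevailing authority, e.g. "human" or "ai"
    trigger_condition: str   # condition that caused the authority transition
    justification: str       # explanatory record supporting later audit
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class ReversalRegister:
    """Append-only log of authority transitions, queryable for audit."""
    def __init__(self) -> None:
        self._entries: list[RegisterEntry] = []

    def record(self, entry: RegisterEntry) -> None:
        self._entries.append(entry)

    def entries_for(self, decision_id: str) -> list[RegisterEntry]:
        return [e for e in self._entries if e.decision_id == decision_id]
```

An auditor could then replay all entries for a given decision to check completeness of the justificatory record, the kind of auditability metric the entry's outcome measures name.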
Firms that build effective orchestration layers and integrate AI across pipelines may capture outsized gains, increasing winner-take-all dynamics and concentration.
Authors' argument extrapolated from observed coordination benefits/frictions at Netlight and theory about returns to scale in platformized toolchains; no empirical market concentration analysis provided.
speculative positive Rethinking How IT Professionals Build IT Products with Artif... firm-level returns and market concentration from AI orchestration capabilities
Policy and firm responses should emphasize human-in-the-loop governance, training in evaluative/domain skills, data stewardship, and regulatory attention to IP, liability, competition, and robustness standards.
Normative recommendations drawn from the review's synthesis of empirical benefits and limitations; based on identified failure modes (bias, hallucination, variable quality) and economic risks (concentration, mismeasurement).
speculative positive ChatGPT as an Innovative Tool for Idea Generation and Proble... effectiveness of governance/training/regulation in mitigating harms and enhancin...
Policy and regulation should emphasize transparency, auditability, and model-validation standards in finance to reduce systemic risks from misplaced trust or opaque algorithms.
Authors' normative recommendation based on empirical identification of risks (misplaced trust, overreliance) from survey/interview/operational data; recommendation is prescriptive and not an empirical test within the study.
speculative positive Human-AI Synergy in Financial Decision-Making: Exploring Tru... policy/regulatory emphasis (transparency/auditability); reduction in systemic ri...
Public goods investments—digital infrastructure, interoperable local data ecosystems, and multilingual language technologies—are prerequisites for inclusive economic benefits from AI.
Conceptual and policy literature review arguing for infrastructure and public data ecosystems; paper does not provide original infrastructure impact analysis.
medium-high positive Towards Responsible Artificial Intelligence Adoption: Emergi... infrastructure coverage (broadband, cloud), interoperability standards/adoption,...
A culturally grounded responsible‑AI governance framework based on Afro‑communitarianism (Ubuntu) and stakeholder theory—emphasizing collective well‑being and participatory governance—can help align AI deployment with inclusive and sustainable economic outcomes.
Theoretical integration and framework development based on normative literature in ethics, Afro‑communitarian thought, and stakeholder governance; framework is conceptual and not empirically validated in this paper.
low-medium positive Towards Responsible Artificial Intelligence Adoption: Emergi... governance inclusivity, alignment of AI outcomes with communal values, perceived...
Public policy interventions (subsidies, accreditation incentives) may be justified when private investment underprovides broadly beneficial AI skills.
Policy recommendation in the paper: argues theoretical justification for subsidies/accreditation incentives; no empirical policy evaluation is included.
speculative positive Curriculum engineering: organisation, orientation, and manag... public funding levels, training adoption rates, social return on investment
Embedded auditability and traceability lower the cost of regulatory compliance and enable third-party verification.
Argued under Regulation and compliance economics: auditable curricula reduce compliance costs and facilitate verification. The paper recommends measuring regulatory compliance costs but provides no empirical cost comparisons.
speculative positive Curriculum engineering: organisation, orientation, and manag... regulatory compliance costs, time/cost to obtain/verify accreditation
The framework can improve career alignment and employability of learners.
Claimed under Advantages and Implications for AI Economics (better match between training and industry AI skill needs; improved placement rates/wage outcomes suggested). Evidence proposed as measurable (placement rate, wage outcomes) but no empirical results are presented.
speculative positive Curriculum engineering: organisation, orientation, and manag... placement rate, employment probability, wage outcomes
Better-governed automations can reduce firms’ systemic operational risk and may lower insurance premiums or capital charges; insurers and lenders will value documented governance when pricing risk.
Hypothesized consequence grounded in risk-transfer logic and suggested interaction with insurance/lending markets; presented as implication rather than demonstrated outcome; no insurer data provided.
speculative positive Governed Hyperautomation for CRM and ERP: A Reference Patter... insurance premiums; lender risk-based pricing; measured operational risk metrics
Explainable EEG tools can shift clinician workflows by enabling faster decision-making and reducing the requirement for specialized interpretation, with implications for training, staffing, and productivity.
Projected operational impacts discussed as implications of improved explainability; no longitudinal workflow study provided in the reviewed literature.
speculative positive Explainable Artificial Intelligence (XAI) for EEG Analysis: ... clinician workflow efficiency, training/staffing needs, productivity
Building integrated One Health data platforms and interoperable metadata standards is a priority to enable child-centered AI applications, surveillance, and economic evaluation.
Policy recommendation grounded in identified data fragmentation; authors argue for investment and international cooperation based on the review's assessment of gaps.
speculative positive Safeguarding future generations: a One Health perspective on... availability and utility of integrated One Health data platforms and resultant i...
Economic evaluations and AI-enabled allocation algorithms need to internalize cross-sector externalities (e.g., agricultural antibiotic use) and long-term child health/human-capital impacts to prioritize effective interventions.
Recommendation based on synthesis of AMR ecology, economics, and developmental-impact literature; conceptual argument rather than empirical demonstration.
speculative positive Safeguarding future generations: a One Health perspective on... policy prioritization and cost-effectiveness outcomes when cross-sector external...
Embedding an explicit, child-centered lens into One Health research, surveillance, governance, and interventions is necessary to protect child health and equity.
Policy and normative argument built from the review synthesis; recommendation rather than empirically tested intervention—draws on identified gaps in surveillance, governance, and evidence.
speculative positive Safeguarding future generations: a One Health perspective on... anticipated improvements in child health outcomes, equity, and resilience follow...
These findings support policy interventions that encourage or mandate identity disclosure and explainable personalization in commercial chatbots, in order to reduce deception risk and perceived manipulation.
Interpretive implication based on experimental results showing transparency and explainable personalization reduce perceived manipulation and increase trust; recommended as a policy implication.
speculative positive AI Chatbots as Informatics-Enabled Marketing Service Systems... policy relevance (consumer protection / perceived manipulation)
Research gaps include the need for causal evaluations (RCTs or quasi-experiments) of bundled interventions (training + placement + income support), cross-country comparisons of informality's moderating role, and better data on platform employment dynamics.
Identified research agenda and priorities summarized from the literature review and gap analysis in the paper; recommendation rather than empirical finding.
speculative positive Who Loses to Automation? AI-Driven Labour Displacement and t... evidence on effectiveness of bundled interventions and cross-country moderation ...
Empirical work on automation should distinguish task vs job displacement, measure platform algorithmic effects on labour demand, and quantify fallback employment options available to displaced informal workers.
Methodological recommendation based on gaps identified in the reviewed literature and limitations of existing studies; no new data collection presented.
speculative positive Who Loses to Automation? AI-Driven Labour Displacement and t... quality of empirical measurement (ability to isolate task vs job displacement an...
Policy responses should go beyond reskilling to include mechanisms addressing informality and job quality (e.g., portable benefits, minimum standards for platforms, guaranteed work or public employment schemes, wage floors, and training linked to placement).
Policy recommendation synthesized from literature on platform labour, social protection, and training program design; normative prescription rather than empirically validated intervention within this paper.
speculative positive Who Loses to Automation? AI-Driven Labour Displacement and t... worker welfare and employment security under combined policy interventions
Unchecked shifts toward K_T-dominated production can amplify political risks (rising inequality, fiscal strain) that may fuel populism, protectionism, and demands for renegotiated social contracts.
Theoretical political‑economy discussion supported by historical analogies and model scenarios linking fiscal stress and distributional change to political-instability risks; qualitative case evidence.
speculative positive The Macroeconomic Transition of Technological Capital in the... political risk indicators (populist support, policy volatility) — discussed qual...
To make AI a driver of structural change, policy interventions must link AI investment to comprehensive energy subsidy reform and accelerated development of the new and renewable energy sector.
Policy recommendation based on integrated analysis showing that subsidy burdens and import dependence limit AI's macro impact; proposed linkage is derived from the study's scenario/logic assessment.
speculative positive (conditional) AI-Based Technological Transformation as a Driver for Develo... potential for AI to drive structural change conditional on subsidy reform and re...