The Commonplace

Evidence (4333 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
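The matrix above lends itself to quick sanity checks on the balance of evidence. A minimal sketch (Python, with a few rows hand-copied from the table; shares are computed against the Total column, since in several rows the four directions shown do not sum to the total):

```python
# A few rows of the Evidence Matrix, copied from the table above.
matrix = {
    # outcome: (positive, negative, mixed, null, total)
    "Innovation Output":    (120, 12, 23, 12, 168),
    "Inequality Measures":  (25, 77, 32, 5, 139),
    "Task Completion Time": (88, 5, 4, 3, 100),
    "AI Safety & Ethics":   (116, 177, 44, 24, 363),
}

# Share of claims by direction, relative to the row total.
for outcome, (pos, neg, mixed, null, total) in matrix.items():
    print(f"{outcome}: {pos / total:.0%} positive, {neg / total:.0%} negative")
    # e.g. Task Completion Time: 88% positive, 5% negative
```

The asymmetry is the point of the matrix: outcomes like Task Completion Time and Innovation Output skew heavily positive, while Inequality Measures and Job Displacement skew negative.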
Active filter: Governance
Recognition of digital sovereignty and data‑localization pressures can fragment data flows, raising the cost of cross‑border model training and eroding the scale economies that underpin high‑quality AI.
Policy and economic analysis in the compendium drawing on comparative examples and theory about data localization and scale economies; no empirical cost accounting provided.
medium · negative · Diego Saucedo Portillo, Sauceport Research · cross‑border data flows, costs of model training, scale economies in AI developm...
Replacing opaque predictive features with interpretable substitutes could reduce predictive accuracy in some models, creating trade‑offs between fairness/transparency and short‑term efficiency.
Synthesis of technical AI governance literature and normative design discussion in the compendium; no new experimental validation reported.
medium · negative · Diego Saucedo Portillo, Sauceport Research · predictive accuracy of credit-scoring models; measures of fairness/transparency
Mandatory white‑box requirements and audits will raise compliance costs, which can increase barriers to entry for smaller fintechs and favor incumbents unless mitigated by supporting measures.
Economic reasoning and policy analysis in the AI economics section; theoretical projection based on compliance cost effects (no empirical trial reported).
medium · negative · Diego Saucedo Portillo, Sauceport Research · compliance costs for fintechs; barriers to market entry (market structure effect...
Human-in-the-loop controls formalize supervisory labor and create persistent oversight costs even after automation scales.
Pattern design and governance lifecycle recommendations highlighting human checkpoints; qualitative reasoning without measurement of oversight hours or costs.
medium · negative · Governed Hyperautomation for CRM and ERP: A Reference Patter... · ongoing human oversight hours/costs per automated transaction
Perceived manipulation exerts a significant negative (direct) effect on purchase intention.
PLS-SEM results from the experimental study show a direct negative path from measured perceived manipulation to measured purchase intention.
Empathetic, personalized conversational tone reduces perceived manipulation among young consumers (UAE, ages 18–25).
2 × 2 between-subjects experiment manipulating tone; perceived manipulation measured; effects estimated via PLS-SEM.
Transparent AI identity disclosure reduces perceived manipulation among young consumers (UAE, ages 18–25).
2 × 2 between-subjects experiment manipulating identity disclosure; perceived manipulation was measured as an outcome; PLS-SEM used to estimate effects.
Environmental costs of large-scale model training and inference may become economically significant and should be accounted for (sustainable compute/carbon accounting).
Systems and sustainability measurement literature referenced in the paper; no new lifecycle energy/carbon dataset reported here.
medium · negative · Artificial Intelligence for Personalized Digital Advertising... · energy/carbon costs of model training and inference
Privacy externalities and potential for manipulation (microtargeted persuasive messaging) impose social costs that are not currently captured in market prices.
Welfare economics framing and literature on privacy harms/manipulation; conceptual synthesis rather than a quantified social-cost accounting in this paper.
medium · negative · Artificial Intelligence for Personalized Digital Advertising... · unpriced social costs (privacy harms, manipulation)
Investments are flowing toward first-party data architectures (retail media, walled gardens) and generative creative systems; smaller publishers face incentives to join platform networks or accept lower yields.
Industry trend observation and economic argument presented in the paper; not backed by a cited comprehensive investment dataset in this summary.
medium · negative · Artificial Intelligence for Personalized Digital Advertising... · investment flows and publisher yields
Opaque ML policies can distort bidding strategies and reduce market transparency.
Theoretical auction analysis and industry examples of black-box policies; no controlled empirical quantification provided in the paper.
medium · negative · Artificial Intelligence for Personalized Digital Advertising... · bidding behavior distortion and market transparency
Distributed training introduces novel incentive issues (free-riding, poisoning incentives, misreporting of local metrics) that require contractual and cryptographic solutions and may create demand for trusted intermediaries or certification markets.
Mechanism/incentive analysis within the paper; threat modeling and proposed governance solutions. No experimental evaluation of incentive mechanisms or market responses.
medium · negative · Privacy-Aware AI Advertising Systems: A Federated Learning F... · incidence of strategic behaviors (free-riding, misreporting, poisoning) and effe...
Federated infrastructures redistribute informational power — moving custody away from centralized platforms reduces their exclusive access to behavioral data and can lower their data-based market power.
Economic and institutional analysis (conceptual), discussion of informational rents and bargaining positions. This is a theoretical economic claim without empirical market measurement in the paper.
medium · negative · Privacy-Aware AI Advertising Systems: A Federated Learning F... · distribution of informational rents/market power indicators (conceptual; no empi...
Fairness constraints (e.g., disparate ad delivery) and monitoring become more challenging to enforce and audit without centralized raw data, requiring new governance and measurement mechanisms.
Policy and governance analysis describing limitations of decentralized data for fairness monitoring; proposed policy-aware governance layer and attestation/audit mechanisms. No empirical validation of governance effectiveness provided.
medium · negative · Privacy-Aware AI Advertising Systems: A Federated Learning F... · ability to detect and correct disparate outcomes (fairness metrics) under decent...
AI-enabled platforms can increase market concentration and platform power, creating competition and data-governance risks and uneven distributional effects across regions and worker skill levels.
Observational platform-concentration indicators and distributional analyses in the case material; scenario and sensitivity checks on distributional outcomes under alternative adoption/policy regimes.
medium · negative · Artificial Intelligence–Enabled E-Commerce Systems and Autom... · market concentration measures (e.g., platform market share), distributional outc...
Prevailing reskilling strategies assume access to stable employment, time and funds for training, certification systems, and institutional support — conditions that are weak or absent for informal platform workers; therefore standard reskilling policies are poorly suited to this context.
Qualitative synthesis of policy analyses and literature on reskilling programs and labour-market institutions; conceptual critique rather than new empirical testing.
medium · negative · Who Loses to Automation? AI-Driven Labour Displacement and t... · effectiveness of reskilling programs in producing stable employment outcomes for...
Algorithmic management (opaque algorithms for assignment, pricing, and performance metrics) restructures platform work in ways that both change task composition and intensify precarity, reducing workers' ability to adapt to automation.
Draws on prior empirical studies and policy analyses of algorithmic management cited in the literature review; no new empirical data collected in this paper.
medium · negative · Who Loses to Automation? AI-Driven Labour Displacement and t... · worker precarity and adaptability (e.g., job security, ability to transition to ...
Task versus job displacement operate differently across institutional contexts: in formal labour markets, task automation can be accommodated through reallocation or protections, while in informal platform work task loss typically becomes outright job loss.
Argument built from secondary literature comparing formal and informal labour-market institutions and existing empirical studies on reallocation mechanisms; conceptual analysis in the paper (qualitative synthesis only).
medium · negative · Who Loses to Automation? AI-Driven Labour Displacement and t... · rate of worker reallocation vs complete job loss following task automation
AI-driven automation in platform-based informal work in India primarily displaces tasks, but because workers lack job security, institutional protections, and access to alternative labour tracks, task-level automation often manifests as full job displacement.
Synthesis of prior empirical studies, policy analyses, and theoretical work on platform-based labour and automation focused on India and comparable developing-country settings; conceptual framing distinguishing task-level vs job-level effects; no primary data or new empirical analysis in this paper.
medium · negative · Who Loses to Automation? AI-Driven Labour Displacement and t... · job displacement / employment loss among platform-based informal workers
Reduced labor shares disproportionately harm lower- and middle-skill workers relative to higher-skill workers, increasing distributional inequality.
Micro and firm-case analyses linking K_T exposure to occupation- and skill-level wage/employment outcomes; regressions showing heterogeneous effects across skill groups; supporting evidence from sectoral studies.
medium · negative · The Macroeconomic Transition of Technological Capital in the... · employment and wages by skill group; inequality indicators across skill deciles
The loss of labor share and payrolls materially undermines PAYG pension sustainability and payroll-tax revenue bases under realistic adoption trajectories.
Dynamic general equilibrium overlapping-generations model calibrated and simulated to incorporate substitution between labor and K_T and a PAYG pension sector; fiscal simulations show declining contributor bases and pressure on pension balances; sensitivity analyses across adoption speeds.
medium · negative · The Macroeconomic Transition of Technological Capital in the... · PAYG pension sustainability metrics (e.g., contribution-revenue ratios, projecte...
Wages for workers in K_T‑intensive firms/industries fall or grow more slowly relative to less-exposed counterparts, compressing wage contributions to income.
Panel regressions estimating wage outcomes conditional on K_T intensity measures, with controls and robustness specifications; supported by matched employer‑employee microdata in case studies and industry-level decompositions.
medium · negative · The Macroeconomic Transition of Technological Capital in the... · wage levels and wage growth
Significant implementation hurdles—chronic infrastructure gaps, weak data governance, severe digital skills shortages, high initial investment costs, and organizational inertia—create a 'pilot trap' that prevents successful AI pilots from scaling.
Qualitative findings from interviews/case studies in the mixed-methods research detailing recurring barriers to scaling AI projects in large enterprises and across the sector.
medium · negative (barrier) · AI-Based Technological Transformation as a Driver for Develo... · ability to scale AI projects (incidence of pilots failing to scale; presence of ...
Strict oversight requirements for GLAI could raise fixed compliance costs (audit, certification, human-in-the-loop processes), benefiting incumbent firms and potentially reducing competition and barriers to entry.
Regulatory economics argument drawing on compliance-cost logic and market structure effects; no empirical entry-cost analysis or case studies.
medium · negative (for competition), positive (for incumbents) · Why Avoid Generative Legal AI Systems? Hallucination, Overre... · barriers to entry and market competition metrics in legal-AI markets
Perception of increased legal risk and regulatory uncertainty may slow adoption of GLAI and redirect investment toward safer subfields (verification tools, retrieval-augmented systems, formal-reasoning hybrids).
Economic reasoning and market-design argumentation based on risk/uncertainty dynamics; no econometric or survey data presented.
medium · negative (for generative adoption), positive (for verification subfields) · Why Avoid Generative Legal AI Systems? Hallucination, Overre... · adoption rates of GLAI and relative investment flows across AI subfields
Divergent regulatory regimes (e.g., strict EU rules vs. looser regimes elsewhere) may produce regulatory arbitrage, influencing where GLAI companies locate, invest, and trade internationally.
Cross-jurisdictional regulatory analysis and economic inference about firm behavior under differential regulation; no firm-level relocation data provided.
medium · negative (for regulatory harmonization), neutral for firms (strategic outcome) · Why Avoid Generative Legal AI Systems? Hallucination, Overre... · firm location/investment decisions and cross-border trade in legal-AI services
The positive macroeconomic effects of AI are severely limited by structural issues, notably large petroleum import volumes and the fiscal burden of incomplete fuel subsidy reforms.
Integrated quantitative analysis showing that operational savings are outweighed by import volumes and subsidy fiscal costs; contextual fiscal data cited (fuel subsidy reform peak).
medium · negative (limits positive effect) · AI-Based Technological Transformation as a Driver for Develo... · net macroeconomic impact of AI on GDP/trade balance after accounting for import ...
Evaluations that measure outcomes only via official-language channels risk underestimating impacts where vernacular mediation is central.
Argument based on the discrepancy between vernacular-mediated comprehension/adoption observed in the sample and the likely invisibility of those effects in official-language measurement channels; supported by questionnaire and qualitative data.
medium · negative (regarding official-language-only evaluation validity) · From Linguistic Hybridity to Development Sovereignty: Pidgin... · measurement bias / underestimation of program impacts
DPPs raise privacy and surveillance risks if personal data are linked to product use; economic regulation should incentivize privacy-preserving analytics (e.g., federated learning, differential privacy) and data minimality to maintain trust.
Risk assessment and governance recommendation grounded in stakeholder concerns and standard privacy literature; not empirically measured in the surveys.
medium · negative (risk) · Integrating knowledge management and digital product passpor... · privacy/surveillance risk and recommended governance/technical mitigations
Interpretive, ad-hoc human-centered evaluation practices (e.g., “vibe checks”, team sense-making) are rational adaptations to LLM behavior rather than merely sloppy or inferior methodological choices.
Authors' interpretive argument based on interview evidence where practitioners explained why such practices persist and how they serve sense-making for unpredictable model behavior.
medium · neutral · Results-Actionability Gap: Understanding How Practitioners E... · characterization of interpretive evaluation practices (rational adaptation vs. m...
The possibility of strategic argument construction (gaming) motivates governance needs: standards for provenance, certification, and liability rules.
Policy recommendation based on anticipated incentive problems; no empirical governance evaluations.
medium · neutral · Argumentative Human-AI Decision-Making: Toward AI Agents Tha... · existence and effectiveness of governance mechanisms (standards, certification, ...
Standard GDP statistics can mask AI-driven demand shortfalls; central banks and statistical agencies should therefore monitor labor-share–velocity links, distributional income measures, and consumption by income quantile in addition to headline GDP.
Theoretical Ghost GDP channel and calibration results showing divergence between measured GDP and consumption-relevant income; policy recommendation follows from those model results.
medium · neutral · Abundant Intelligence and Deficient Demand: A Macro-Financia... · detection of demand shortfalls (labor-share–velocity relationship and consumptio...
Health technology assessment (HTA) frameworks should be adapted to evaluate models trained on synthetic or hybrid data, incorporating metrics for fidelity, domain generalization, and economic impact (cost-effectiveness, budget impact, distributional effects).
Recommendation from the review synthesizing HTA literature and gaps identified when applying existing HTA to AI models trained on non-traditional data sources; based on policy analysis rather than empirical HTA trials of synthetic-data models.
medium · neutral · On the use of synthetic data for healthcare AI in Africa: Te... · HTA evaluation metrics (fidelity scores, generalization performance, cost-effect...
Technical fixes alone are insufficient: governance, validation pipelines (e.g., health technology assessment), and capacity building are needed for safe, effective uptake of synthetic-data–trained AI.
Cross-disciplinary synthesis of governance analyses, health technology assessment literature, and implementation studies in the review arguing for combined technical and institutional interventions; recommendation-based evidence rather than new empirical trials.
medium · neutral · On the use of synthetic data for healthcare AI in Africa: Te... · safe/effective uptake operationalized via validated deployment, regulatory compl...
AI changes the nature of capital (digital/algorithmic assets) and complicates productivity accounting; researchers should decompose firm-level productivity gains into AI technology, complementary organizational capital, and human capital effects.
Theoretical proposal grounded in productivity accounting literature and conceptual discussion; no single decomposition empirical result presented.
medium · neutral · Modern Management in the Age of Artificial Intelligence: Str... · components of multifactor productivity attributable to AI assets versus organiza...
Policy and governance issues become salient: liability, IP, security, and certification of AI-generated code require new standards for provenance, testing, and accountability.
Argument based on practitioner-raised concerns about security, IP, and provenance in the Netlight study; authors recommend policy attention; no legal/regulatory analysis or empirical policy evaluation provided.
medium · neutral · Rethinking How IT Professionals Build IT Products with Artif... · need for regulatory standards and governance mechanisms for AI-assisted developm...
Time-series metrics (e.g., derivatives like d/dt(student enrollment)) are useful monitoring signals for validation and system oversight.
Methodological suggestion in the paper proposing time-series analysis of enrollment and other administrative data; no empirical demonstration or threshold criteria provided.
medium · neutral · Establishes a technical and academic bridge between the educ... · sensitivity of monitoring to enrollment changes, anomaly detection lead time
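The derivative-based monitoring idea in this entry can be sketched simply: approximate d/dt(student enrollment) by first differences and flag changes that break sharply from recent history. This is an illustrative sketch only; the example series, the window size, and the 3-sigma threshold are assumptions, not parameters from the paper.

```python
def first_difference(series):
    """Discrete approximation of the time derivative d/dt."""
    return [b - a for a, b in zip(series, series[1:])]

def flag_anomalies(series, window=4, n_sigmas=3.0):
    """Flag indices where the latest change deviates from the rolling
    mean of prior changes by more than n_sigmas rolling std devs."""
    deltas = first_difference(series)
    flags = []
    for i in range(window, len(deltas)):
        recent = deltas[i - window:i]
        mean = sum(recent) / window
        var = sum((d - mean) ** 2 for d in recent) / window
        std = var ** 0.5 or 1.0  # avoid divide-by-zero on flat series
        if abs(deltas[i] - mean) > n_sigmas * std:
            flags.append(i + 1)  # index into the original series
    return flags

# Hypothetical enrollment counts with a sudden drop in the last period.
enrollment = [1000, 1010, 1025, 1030, 1042, 1050, 700]
print(flag_anomalies(enrollment))  # → [6]
```

The same check applies to any administrative time series the entry has in mind; the paper proposes the signal but, as noted, provides no threshold criteria, so any cutoff would have to be calibrated in deployment.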
Five interaction mechanisms were identified, with the majority propagating across the subsystem boundary.
Authors' thematic analysis and STS mapping identifying five cross- or within-subsystem interaction mechanisms; qualitative assessment that most propagate across subsystem boundary.
medium · null result · BARRIERS TO AGENTIC AI ENTERPRISE TRANSFORMATION · interaction_mechanisms_and_propagation
The operative risk for legislators is not stable ideological bias in LLMs but contextual ignorance shaped by training data coverage.
Authors argue from observed model behavior on the 15 proposals (good performance on well-covered standardized templates; failures on idiosyncratic items) and interpret this as evidence that errors are driven by training-data coverage rather than consistent ideological bias.
medium · null result · Can Commercial LLMs Be Parliamentary Political Companions? C... · source of systematic risk (ideological bias vs contextual ignorance)
Most action tools support medium-stakes tasks like editing files.
Classification of action tools by task consequentiality using O*NET mapping and inspection of tool functions (paper states majority are medium-stakes, e.g., file editing).
medium · null result · How are AI agents used? Evidence from 177,000 MCP tools · consequentiality / stakes of action tools (proportion medium-stakes)
CAFTA spillovers stabilized import volumes from third countries (reduced volatility) for Chinese agricultural imports.
Analysis of import volume volatility metrics over 2000–2014 using customs data within DID framework; volatility/variance decline identified as an outcome in the mechanisms/secondary channel tests.
medium · null result · How regional trade policy uncertainty affects agricultural i... · import volume volatility/stability (variance or coefficient of variation of impo...
The report provides scenario-based forecasts for HACCA emergence across near-, mid-, and long-term timelines, identifying capability thresholds to monitor.
Capability trajectory assessment combining trends in AI capabilities, automation of software tasks, computation availability, and diffusion dynamics; scenario and expert-judgment approach (qualitative forecasting).
medium · null result · Highly Autonomous Cyber-Capable Agents: Anticipating Capabil... · projected timelines to HACCA emergence and associated capability thresholds
A Sankey diagram of thematic evolution shows lexical convergence over time and indicates that a small set of authors has disproportionate influence in structuring the discourse.
Thematic evolution analysis visualized with a Sankey diagram; author influence inferred from performance trends (citations/publication counts) in the bibliometric data.
medium · null result · Generative AI and the algorithmic workplace: a bibliometric ... · lexical convergence across themes and concentration of author influence (disprop...
CID does not significantly mediate the relationship between SCD and strategic green innovation.
Mediation tests showing that while CID is related to substantive innovation, the indirect effect via CID on strategic green innovation was statistically insignificant.
medium · null result · Supply Chain Digitalization and its Impact on Green Innovati... · strategic green innovation (signaling/compliance-oriented measures) and CID as m...
This paper is one of the first systematic reviews focused specifically on NLP in bank marketing, organizing findings along the customer journey and the marketing mix to provide a practical taxonomy.
Authors' stated novelty claim based on the scoped literature search (2014–2024) and topical focus; novelty inferred from the small number of prior papers identified at the intersection.
medium · null result · Natural language processing in bank marketing: a systematic ... · existence of prior systematic reviews specifically on NLP in bank marketing
There is a need to develop new trade statistics that capture AI‑enabled services and platform‑mediated cross‑border transactions.
Methodological gap identified across reviewed literature and statistical analyses; recommendation based on descriptive assessment (no development of such statistics in the paper).
medium · null result · Analysis of Digital Services Trade and Export Competitivenes... · availability and quality of trade statistics for AI/platform‑mediated services
Productivity gains from AI may be under- or mis-measured if national accounts and tax systems do not adjust for AI-driven quality changes in services.
Analytic observation in the paper's measurement and externalities discussion; not empirically tested within the study.
medium · null result · Explore the Impact of Generative AI on Finance and Taxation · accuracy of productivity measurement and GDP accounting for AI-enabled quality i...
Distributed agency (Problem C) complicates classical principal–agent models; economists should develop models that capture multiple, overlapping agents and ambiguous attribution of outcomes.
Conceptual implication for economic modeling derived from the paper’s diagnosis of distributed agency; recommendation for formal modeling and simulations but none provided.
medium · null result · Examining ethical challenges in human–robot interaction usin... · adequacy of classical principal–agent models to represent distributed agency (th...
An orchestrator coordinates components with intent-aware routing and layered safety checks, enabling multi-step workflows and productized services.
Paper describes an agentic tool-calling framework and multi-layer orchestrator used for intent-aware routing, defense-in-depth safety validation, and multi-step workflows.
medium · null result · Fanar 2.0: Arabic Generative AI Stack · system orchestration capability (intent-aware routing, layered safety)
Aura is a long-form ASR system capable of handling hours-long audio.
Paper lists Aura in the product stack as 'long-form ASR handling hours-long audio.' Specific evaluation metrics or training data for ASR are not provided in the summary.
medium · null result · Fanar 2.0: Arabic Generative AI Stack · ASR capability (long-form/hours-long audio handling)