The Commonplace

Evidence (7953 claims)

Adoption
5539 claims
Productivity
4793 claims
Governance
4333 claims
Human-AI Collaboration
3326 claims
Labor Markets
2657 claims
Innovation
2510 claims
Org Design
2469 claims
Skills & Training
2017 claims
Inequality
1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
The study is limited by being a single‑country case; contextual factors (regulatory regime, infrastructure capacity, procurement practices) may limit generalizability, and the study emphasizes institutional and ethical analysis rather than quantitative measurement of economic impacts.
Explicit limitations reported in the paper summarizing scope and emphasis.
high null result Emerging ethical duties in AI-mediated research: A case of d... generalizability and scope limitations
Methods used include qualitative interviews with researchers and administrators, observation/documentation of tool use, mapping of data flows and third‑party dependencies, and normative/legal analysis contrasting local practices with GDPR principles.
Methods section of the paper as reported in the provided summary.
The study's empirical basis is a qualitative case study centered on environmental science research in Chile that adopts the GDPR as an organizing normative framework.
Paper description of study scope and normative framing (methods and focus described in Data & Methods).
high null result Emerging ethical duties in AI-mediated research: A case of d... study design / empirical basis
There is a need for validated administrative and firm-level data on AI adoption, workplace monitoring, and worker outcomes, and for evaluation of policy interventions (mandated impact assessments, transparency requirements, worker representation rules) using randomized or quasi-experimental designs where feasible.
Research and measurement priorities set out in the commentary based on identified gaps; prescriptive recommendation rather than evidence-based finding.
high null result AI governance under the second Trump administration: implica... availability of validated administrative and firm-level AI adoption data; existe...
The paper is a policy and legal commentary/synthesis and not an empirical causal study; it does not provide microdata on employment or wage effects but identifies plausible channels and institutional dynamics.
Author-stated methodology and limitations section describing type of study and data sources; explicitly reports lack of primary empirical data.
high null result AI governance under the second Trump administration: implica... study type / presence of primary empirical data
The federal U.S. approach to AI governance combines export controls for key AI hardware/software with a relatively permissive domestic regulatory stance that relies on executive guidance, voluntary standards, and sector-specific measures rather than comprehensive federal worker protections.
Comparative policy and legal review of federal-level instruments (export control lists, executive orders, agency guidance, proposed/final rules) described in the commentary; no primary empirical data or sample size.
high null result AI governance under the second Trump administration: implica... regulatory posture / governance instruments at federal level (export controls; p...
The report has limited primary quantitative impact evaluation and relies on policy texts and secondary sources rather than large-scale empirical measurement of AI’s economic effects.
Explicit limitations section in the report describing methods and data constraints.
high null result AI Governance and Data Privacy: Comparative Analysis of U.S.... presence/absence of primary quantitative impact evaluation of AI's economic effe...
The paper's empirical and policy conclusions are limited by its jurisdictional sample size (eleven) and reliance on available empirical/operational data, which the authors note is increasingly patchy due to declining transparency.
Methods and limitations sections explicitly noting sample size (eleven jurisdictions) and data availability constraints.
high null result The Global Landscape of Environmental AI Regulation: From th... limitations in generalizability (scope of jurisdictional mapping) and data compl...
Methodological needs for AI-era labor models include dynamic skill taxonomies, high-frequency labor data (job postings, firm-level automation measures), and uncertainty quantification.
Paper's Research & policy recommendations and Methodological needs section (explicit recommendations).
high null result AI-Based Predictive Skill Gap Analysis for Workforce Plannin... requirements for model inputs and design (dynamic taxonomies, data frequency, un...
The scenario analysis framework varies economic growth, automation rates, policy interventions, and investment to produce probabilistic demand–supply gaps.
Methods description of scenario analysis components and the variables varied in scenario experiments (explicit in Data & Methods).
high null result AI-Based Predictive Skill Gap Analysis for Workforce Plannin... probabilistic demand–supply gap distributions produced under varied scenario par...
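The probabilistic scenario mechanics described in this entry can be sketched as a small Monte Carlo simulation. Everything below (parameter names, functional forms, magnitudes) is a hypothetical illustration of varying growth, automation, policy, and investment to produce gap distributions, not the paper's actual model.

```python
import random

def simulate_gap(growth, automation, policy, investment, n_draws=10_000, seed=0):
    """Monte Carlo sketch: draw skill demand and supply under one scenario
    and summarize the distribution of demand-supply gaps (units hypothetical)."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(n_draws):
        # Demand rises with economic growth and automation-driven re-skilling needs.
        demand = 100 * (1 + rng.gauss(growth, 0.02)) * (1 + 0.5 * automation)
        # Supply rises with training investment and policy intervention strength.
        supply = 100 * (1 + rng.gauss(investment, 0.03)) * (1 + 0.3 * policy)
        gaps.append(demand - supply)
    gaps.sort()
    return {"median": gaps[n_draws // 2],
            "p10": gaps[int(0.1 * n_draws)],
            "p90": gaps[int(0.9 * n_draws)]}

# Compare a rapid-automation scenario against a baseline.
baseline = simulate_gap(growth=0.02, automation=0.1, policy=0.5, investment=0.02)
rapid    = simulate_gap(growth=0.02, automation=0.4, policy=0.5, investment=0.02)
```

The p10/p90 spread is what makes the output "probabilistic" rather than a point forecast.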
Intended users of the Hub include organizations, educational institutions, and policymakers to inform reskilling/education strategies, regional economic policy, and labor-market interventions.
Explicit statement of target users and use cases in the Key Points / Implications sections.
high null result AI-Based Predictive Skill Gap Analysis for Workforce Plannin... targeting of outputs to specified stakeholder groups (intended adoption/use-case...
The system produces interpretable outputs for stakeholders: demand–supply trend analysis, geospatial hotspot maps, skill-gap radar charts, and policy simulation dashboards.
Paper's description of outputs and interactive visual analytics (listed output modalities).
high null result AI-Based Predictive Skill Gap Analysis for Workforce Plannin... generation of interpretable visual/analytic artifacts (trend charts, hotspot map...
The core modeling approach uses probabilistic growth modeling combined with intelligent skill synthesis to estimate future workforce requirements under alternative economic and policy scenarios.
Methods section describing the modeling components: probabilistic growth modeling and intelligent skill synthesis (architectural description).
high null result AI-Based Predictive Skill Gap Analysis for Workforce Plannin... probabilistic forecasts of future workforce requirements by sector/region under ...
The platform integrates multiple indicators such as regional economic growth projections, automation velocity, policy intervention strength, investment intensity, and market volatility (macro- and micro-level indicators).
List of input indicators given in the Data & Methods section of the paper (explicit enumeration of macro and micro variables).
high null result AI-Based Predictive Skill Gap Analysis for Workforce Plannin... integration of listed macro- and micro-level indicators into the modelling pipel...
Significant empirical gaps remain on long-term impacts (wage trajectories, employment composition, firm-level returns), verification/remediation cost quantification, and public-good risks of insecure code proliferation.
Cross-study synthesis explicitly identifying missing longitudinal and firm-level empirical research in the reviewed literature.
high null result ChatGPT as a Tool for Programming Assistance and Code Develo... absence or paucity of longitudinal studies and firm-level quantitative measureme...
The paper's conclusions are limited by reliance on secondary sources, heterogeneous cross‑study comparisons, limited causal identification of long‑run macro effects, and measurement challenges for AI‑driven intangible capital.
Authors' stated limitations section summarizing the nature of evidence used (qualitative literature review, secondary macro indicators, sectoral examples); this is an explicit self‑reported methodological limitation rather than an external empirical finding.
high null result AI and Robotics Redefine Output and Growth: The New Producti... strength of causal inference and measurement validity
The paper's methodology is a narrative review relying on secondary sources (literature, legal cases, policy reports, empirical perception studies) and conceptual synthesis; no new primary data were collected.
Paper's Data & Methods section explicitly states narrative review and secondary-data analysis.
high null result Ethical and societal challenges to the adoption of generativ... study methodology (use of secondary sources; absence of primary data)
Important empirical research gaps remain (consumer willingness-to-pay for authenticated vs. synthetic content, labor-displacement elasticities, market concentration dynamics, and cost–benefit evaluations of regulatory options).
Explicit statement of limitations and research needs in the paper, based on the authors' narrative review and absence of primary empirical studies within the paper.
high null result Ethical and societal challenges to the adoption of generativ... identified gaps in empirical knowledge and priority research questions
The paper's methodology is a secondary-data, narrative (qualitative) literature review; it contains no original empirical data or primary quantitative analysis.
Explicit methodological statement in the paper describing secondary data analysis and narrative synthesis; absence of primary datasets or statistical analyses.
high null result Ethical and societal challenges to the adoption of generativ... presence or absence of original empirical data
This paper is conceptual/theoretical and does not conduct primary empirical data collection.
Explicit methodological statement in the paper's Data & Methods section.
high null result Continental shift: operations and supply chain management re... study type (conceptual vs empirical)
More granular firm- and household-level panel data are needed to empirically validate the dissertation's theoretical predictions about nonlinear effects and causal channels.
Author recommendation based on limitations noted in Essay 3 (no primary empirical estimation) and the conditional/simulation-based nature of other essays; this is a methodological claim about future research needs rather than an empirical result.
high null result MODELING HOSPITALITY AND TOURISM STRATEGIES empirical identification of nonlinear effects (research/data adequacy)
Further causal, experimental research (randomized deployments) is needed to precisely quantify net productivity and labor reallocation effects of AI agents.
Paper's stated research priorities and explicit acknowledgement of limitations from observational design; no randomized trials reported in the study.
high null result Artificial Intelligence Agents in Knowledge Work: Transformi... need for randomized causal estimates of productivity and labor reallocation
There are measurement challenges for quality-adjusted productivity—errors and downstream effects may reduce net benefits of agent automation and are under-measured in the study.
Authors' noted limitations and concerns about quality-adjusted productivity measurement (error rates, downstream externalities) based on observational deployment experience; no formal measurement of downstream costs reported.
high null result Artificial Intelligence Agents in Knowledge Work: Transformi... quality-adjusted productivity (including errors and downstream effects)
Small-scale, domain-specific deployments of Alfred AI limit external validity to other industries or larger firms.
Deployment context described as small-scale e-commerce; authors note generalizability limitations stemming from domain- and scale-specific nature of the experiments.
high null result Artificial Intelligence Agents in Knowledge Work: Transformi... external validity / generalizability
Because the study is observational and non-randomized, causal claims about the effect of AI agents on productivity and labor are limited.
Study design explicitly described as applied experimentation and observational deployments (non-randomized); potential confounding and selection biases acknowledged by the authors.
high null result Artificial Intelligence Agents in Knowledge Work: Transformi... causal identification ability (limits on attributing observed effects to the age...
Researchers and firms should measure generation throughput, verification throughput, defect accumulation rates, mean time to detection/fix, costs per incident, and the marginal value of additional verification capacity to evaluate the framework's claims.
Prescriptive measurement priorities listed in the paper as recommendations for empirical validation.
high null result Overton Framework v1.0: Cognitive Interlocks for Integrity i... set of recommended metrics (generation throughput, verification throughput, defe...
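The measurement program recommended in this entry can be made concrete with a small sketch. The data structure, field names, and numbers below are invented for illustration; the paper names the metrics but prescribes no implementation.

```python
from dataclasses import dataclass

@dataclass
class PeriodLog:
    items_generated: int     # units of output produced in the period
    items_verified: int      # units that passed through verification
    defects_found: int       # defects detected during verification
    detection_hours: float   # total hours from introduction to detection

def framework_metrics(log: PeriodLog) -> dict:
    """Compute throughput-and-defect metrics of the kind the framework recommends."""
    backlog = log.items_generated - log.items_verified
    return {
        "verification_coverage": log.items_verified / log.items_generated,
        "unverified_backlog": backlog,  # proxy for defect accumulation risk
        "defect_rate": log.defects_found / log.items_verified,
        "mean_time_to_detect": log.detection_hours / max(log.defects_found, 1),
    }

m = framework_metrics(PeriodLog(items_generated=200, items_verified=120,
                                defects_found=18, detection_hours=90.0))
# coverage 0.6, backlog 80, defect rate 0.15, mean time to detect 5.0 hours
```

Tracking the backlog over successive periods is one way to operationalize "defect accumulation rates."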
The abstract reports no empirical tests, simulations, or field experiments; empirical validation of the framework is left for future work.
Direct observation of the paper's abstract and methods description indicating lack of empirical validation.
high null result Overton Framework v1.0: Cognitive Interlocks for Integrity i... presence or absence of empirical validation in the paper
The paper's contribution is primarily conceptual/architectural rather than empirical.
Explicit statement in the paper and absence of reported empirical tests, simulations, or field experiments in the abstract and methods section.
high null result Overton Framework v1.0: Cognitive Interlocks for Integrity i... type of contribution (conceptual vs. empirical)
Priority research areas include evaluating long‑run distributional impacts of AI diffusion in agriculture, interactions between digital technologies and labor markets, inclusive financing models for adoption, and macroeconomic effects on food prices and trade.
Stated research agenda and gap analysis in the paper’s conclusions, derived from the review of existing literature and identified gaps.
high null result MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION research coverage (presence/absence of long‑run distributional studies, labor ma...
The current evidence base has gaps: more rigorous impact evaluations, long‑term soil and emissions accounting, and studies on distributional outcomes are needed.
Meta‑assessment within the paper noting limitations of existing literature (many short‑term pilots, limited long‑run soil/emissions data, few studies on who captures value); the claim is based on the review's appraisal of methods used in cited studies.
high null result MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION research evidence sufficiency (availability of long‑term causal estimates, soil/...
Economists and policymakers should fund long‑run evaluations (RCTs, quasi‑experimental designs) to estimate causal effects of AI interventions on productivity, welfare, and environmental outcomes.
Evidence‑gap analysis and policy recommendations in the paper; explicit call for rigorous impact evaluation methods given current paucity of long‑run causal evidence.
high null result MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION existence and number of long‑run RCTs/quasi‑experimental studies measuring produ...
There are limited long‑run randomized controlled trials (RCTs) on AI/IoT impacts for smallholders and scarce cross‑country data on distributional effects.
Literature review and evidence‑gap identification within the study; explicit statement that long‑run RCTs and cross‑country distributional data are scarce.
high null result MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION availability of long‑run RCT evidence, number of cross‑country distributional st...
Heterogeneous contexts mean impacts vary; careful piloting, monitoring, and adaptive policy are necessary to manage uncertainty in outcomes.
Synthesis and explicit discussion of uncertainties; evidence gaps section noting variable results across regions and interventions.
high null result MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION variation in intervention impacts across contexts (heterogeneity measures), need...
There are limited standardized measures of 'AI capital,' scarce data on firm-level AI investment and implementation quality, and few long-run causal estimates of AI’s effects on managerial productivity and labor outcomes.
Gap analysis based on literature review and methodological discussion within the book; observation about the state of available empirical evidence.
high null result Modern Management in the Age of Artificial Intelligence: Str... availability and standardization of AI investment/asset measures; existence of l...
The paper is primarily conceptual/architectural and does not present large empirical studies quantifying the phenomenon across firms or repositories.
Explicit methodological statement in the paper describing its use of thought experiments, mechanism reasoning, and illustrative examples rather than empirical datasets.
high null result Overton Framework v1.0: Cognitive Interlocks for Integrity i... presence/absence of empirical studies within the paper (binary)
Suggested empirical pathways include lab experiments measuring initiation probability/time-to-start with versus without conversational priming, and field A/B tests in productivity apps measuring task starts and completion conditional on start.
Methodological recommendations in the paper (proposed future empirical work); no data provided.
high null result A Model of Action Initiation Barrier Reduction through AI Co... proposed outcomes to measure in future work: initiation probability, time-to-sta...
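The lab design proposed in this entry (initiation rates with versus without conversational priming) could be analyzed with a standard two-proportion z-test. The counts below are hypothetical, not data from the paper.

```python
from math import sqrt

def two_proportion_z(starts_a, n_a, starts_b, n_b):
    """Two-sample z-test sketch for comparing task-initiation rates
    between a primed and an unprimed group."""
    p_a, p_b = starts_a / n_a, starts_b / n_b
    pooled = (starts_a + starts_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return p_a - p_b, (p_a - p_b) / se

# Hypothetical pilot: 70/100 tasks started with priming vs 55/100 without.
diff, z = two_proportion_z(70, 100, 55, 100)
```

A field A/B test in a productivity app would use the same estimand, with task starts as the binary outcome.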
The paper lacks quantitative validation; effects and magnitudes of the proposed initiation channel are unmeasured.
Methodological statement in the paper noting it is conceptual/theoretical and that it does not report systematic empirical analysis or randomized evaluation.
high null result A Model of Action Initiation Barrier Reduction through AI Co... absence of measured effect sizes or statistical estimates in the paper
The paper introduces the 'AI Conversation-Based Action Initiation Barrier Reduction Model' as a theoretical framework explaining how conversational AI reduces initiation frictions.
Descriptive/theoretical presentation in the paper (model specification and conceptual framing). No empirical validation provided.
high null result A Model of Action Initiation Barrier Reduction through AI Co... n/a (the claim is about the existence of a theoretical model)
The paper's conclusions are drawn from a mix of evidence types including literature review, surveys/interviews, case studies, usage-log or publication-metric analyses, and controlled experiments—although the abstract does not specify which of these were actually used or the sample sizes.
Explicitly noted in the Data & Methods summary as the likely underlying evidence types; the paper's abstract itself does not document original data or detailed methods.
high null result Artificial Intelligence for Improving Research Productivity ... methodological provenance (types of evidence used; presence/absence of original ...
There is a lack of large‑scale causal evidence on generative AI’s effects; the paper recommends RCTs, difference‑in‑differences, matched employer–employee panels, and longitudinal studies to fill empirical gaps.
Methodological critique and research agenda provided in the review; observation based on the authors' survey of the literature.
high null result The Use of ChatGPT in Business Productivity and Workflow Opt... n/a (research design recommendation; outcome is future evidence generation)
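One of the designs recommended above, difference-in-differences, reduces in its simplest 2x2 form to a one-line estimator. The productivity figures below are hypothetical, not estimates from the review.

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Canonical 2x2 difference-in-differences: the change among adopters
    minus the change among non-adopters (toy group-mean outcomes)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical mean productivity index before/after generative-AI adoption.
effect = did_estimate(treated_pre=100.0, treated_post=112.0,
                      control_pre=100.0, control_post=104.0)
# -> 8.0, valid only under the parallel-trends assumption
```

Matched employer-employee panels extend the same logic to many periods and units with unit and time fixed effects.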
Policy interventions are needed for data protection, bias mitigation, model transparency, accountability, and public investments in workforce retraining to smooth transitions and reduce inequality.
Normative policy recommendations grounded in the review's synthesis of risks and distributional concerns; not an empirical claim but a recommendation.
high null result The Use of ChatGPT in Business Productivity and Workflow Opt... policy adoption (existence of regulations, programs), outcomes: retraining parti...
New productivity metrics are needed to capture AI impacts, including time‑use changes, quality‑adjusted output, and accounting for intangible AI capital.
Methodological recommendation from the conceptual synthesis, motivated by limitations of existing measures discussed in the paper.
high null result The Use of ChatGPT in Business Productivity and Workflow Opt... n/a (recommendation for metrics: time use, quality‑adjusted output, AI capital a...
The paper is a policy-design and conceptual-architecture work and presents no original microdata or econometric estimates.
Methods section explicitly states absence of original empirical data; document contains policy proposals and modeling agenda only.
high null result Token Taxes: mitigating AGI's economic risks presence/absence of original empirical data in the paper
Token taxes are usage-based surcharges applied at the point of sale for model inference (i.e., charged per token or per inference request).
Paper's definitional specification and conceptual description; policy-design discussion (no empirical data).
high null result Token Taxes: mitigating AGI's economic risks tax charged per token / per inference request (tax base definition)
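Because the tax base defined above is per-token or per-request usage, the levy reduces to a surcharge on inference spend at the point of sale. The price and rate below are hypothetical, not figures proposed in the paper.

```python
def token_tax(tokens_in, tokens_out, price_per_1k=0.002, tax_rate=0.10):
    """Sketch of a usage-based token tax: a surcharge on the inference
    charge, computed per token (rates are illustrative only)."""
    base_charge = (tokens_in + tokens_out) / 1000 * price_per_1k
    return base_charge * tax_rate

# A request with 1,500 prompt tokens and 500 completion tokens:
surcharge = token_tax(1500, 500)
# base charge = 2.0k tokens * $0.002 = $0.004; 10% surcharge = $0.0004
```

A per-inference-request variant would simply replace the token count with a flat per-call base.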
Further empirical calibration and validation against observed behavioral and economic data are necessary; the framework primarily demonstrates method and emergent phenomena rather than being ready for predictive deployment.
Paper explicitly notes the necessity of further empirical calibration and frames results as demonstration of method and emergent phenomena. This is an explicit limitation statement in the summary.
high null result An LLM-Driven Multi-Agent Simulation Framework for Coupled E... level of empirical calibration/validation (current framework not yet empirically...
Static equilibrium and representative-agent models neglect dynamic reallocation, task re-bundling, and firm-level heterogeneity, limiting their realism for forecasting labour outcomes under AI adoption.
Theoretical critique offered in the paper and referenced critiques in the literature; evidence is conceptual and based on model assumptions identified across studies.
high null result Recent Methodologies on AI and Labour - a Desk Review completeness/realism of economic models used to forecast labour-market effects
Common empirical strategies (cross-sectional exposure correlations and panel-difference analyses) often lack strong causal identification due to endogeneity of adoption and unobserved confounders.
Surveyed analytical strategies and explicit critique in the paper noting endogeneity and confounding; evidence is methodological critique grounded in the literature's reliance on observational exposure measures.
high null result Recent Methodologies on AI and Labour - a Desk Review validity of causal estimates of AI adoption effects on labour outcomes
Researchers construct AI exposure indices at the task level to indicate susceptibility to AI automation or augmentation.
Cited examples (Felten et al., 2023; Eloundou et al., 2023) that develop task-level scores; evidence basis is methodological papers that publish indices and mapping procedures (often using O*NET tasks, expert labeling, or model-based scoring).
high null result Recent Methodologies on AI and Labour - a Desk Review task-level AI exposure scores
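The typical roll-up from task-level scores to an occupation-level exposure index is a weighted average over the occupation's tasks. The scores and weights below are invented for illustration, not values from Felten et al. (2023) or Eloundou et al. (2023).

```python
def occupation_exposure(task_scores, task_weights):
    """Weighted mean of task-level AI exposure scores, the common way
    such indices are aggregated to occupations (O*NET-style task lists)."""
    total = sum(task_weights)
    return sum(s * w for s, w in zip(task_scores, task_weights)) / total

# An occupation with three tasks: two highly exposed, one barely exposed.
idx = occupation_exposure(task_scores=[0.9, 0.8, 0.1],
                          task_weights=[0.5, 0.3, 0.2])
# ≈ 0.71
```

Weights usually reflect task importance or time share, which is itself a modeling choice that affects the resulting index.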
Commonly used data sources for measuring AI exposure include job postings and descriptions, occupational task databases (O*NET-style), employer/household surveys, administrative payroll data, and firm-level productivity measures.
List of data sources compiled in the paper; evidence is a methodological summary of datasets used across the cited literature rather than novel data collection.
high null result Recent Methodologies on AI and Labour - a Desk Review coverage and types of data used for AI exposure and labour-outcome measurement
Many studies rely on static assumptions (fixed comparative advantage, no adaptation) and theoretical models, which limits causal inference and makes projections model-dependent.
Methodological critique cited in the paper (e.g., critique of Acemoglu & Restrepo, 2022; Webb, 2020) and the paper's survey of common modeling choices (static equilibrium or representative-agent models); evidence basis is theoretical critique and literature review rather than new causal estimates.
high null result Recent Methodologies on AI and Labour - a Desk Review strength of causal identification and robustness of projected employment/wage ou...