The Commonplace

Evidence (2432 claims)

Adoption: 5126 claims
Productivity: 4409 claims
Governance: 4049 claims
Human-AI Collaboration: 2954 claims
Labor Markets: 2432 claims
Org Design: 2273 claims
Innovation: 2215 claims
Skills & Training: 1902 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 369 105 58 432 972
Governance & Regulation 365 171 113 54 713
Research Productivity 229 95 33 294 655
Organizational Efficiency 354 82 58 34 531
Technology Adoption Rate 277 115 63 27 486
Firm Productivity 273 33 68 10 389
AI Safety & Ethics 112 177 43 24 358
Output Quality 228 61 23 25 337
Market Structure 105 118 81 14 323
Decision Quality 154 68 33 17 275
Employment Level 68 32 74 8 184
Fiscal & Macroeconomic 74 52 32 21 183
Skill Acquisition 85 31 38 9 163
Firm Revenue 96 30 22 148
Innovation Output 100 11 20 11 143
Consumer Welfare 66 29 35 7 137
Regulatory Compliance 51 61 13 3 128
Inequality Measures 24 66 31 4 125
Task Allocation 64 6 28 6 104
Error Rate 42 47 6 95
Training Effectiveness 55 12 10 16 93
Worker Satisfaction 42 32 11 6 91
Task Completion Time 71 5 3 1 80
Wages & Compensation 38 13 19 4 74
Team Performance 41 8 15 7 72
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 17 15 9 5 46
Job Displacement 5 28 12 45
Social Protection 18 8 6 1 33
Developer Productivity 25 1 2 1 29
Worker Turnover 10 12 3 25
Creative Output 15 5 3 1 24
Skill Obsolescence 3 18 2 23
Labor Share of Income 7 4 9 20
Active filter: Labor Markets

Each entry below gives the claim, a note on its evidential basis, and a metadata line listing confidence, direction of finding, source paper, and the outcome measured.
Implementing RATs requires instrumentation at the browser or platform level (or via plugins) and must address privacy/consent, storage and ownership, sharing controls, and interoperable trace formats.
Design and implementation considerations enumerated in the paper; this is a requirements statement rather than an empirical claim.
high null result Chasing RATs: Tracing Reading for and as Creative Activity implementation requirements and privacy/governance needs
Analytical approaches compatible with RATs include sequence/trajectory mining, network analysis of associations/co-read graphs, embedding/clustering of trajectories, qualitative inspection of reflections, and experimental (A/B or RCT) evaluation of downstream effects.
Methods section of the paper listing suggested analytical techniques; these are proposed methods rather than applied analyses.
high null result Chasing RATs: Tracing Reading for and as Creative Activity analytical approaches applicable to RAT data
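A minimal sketch of one of the proposed techniques, a co-read association graph built from session traces (all trace data below is invented for illustration):

```python
from collections import Counter
from itertools import combinations

# Hypothetical reading traces: each session is the list of items a reader opened.
sessions = [
    ["paper_a", "blog_b", "paper_c"],
    ["paper_a", "paper_c", "novel_d"],
    ["blog_b", "paper_c"],
]

# Co-read graph: edge weight = number of sessions in which two items appear together.
edges = Counter()
for s in sessions:
    for u, v in combinations(sorted(set(s)), 2):
        edges[(u, v)] += 1

print(edges[("paper_a", "paper_c")])  # 2: co-read in two of the three sessions
```

The same edge list feeds directly into network-analysis tooling; the sequence-mining and embedding approaches the paper lists would operate on the ordered sessions rather than the unordered pairs.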
The paper is primarily theoretical and prescriptive: it synthesizes literature and proposes a framework and design guidelines rather than reporting large-scale empirical datasets or causal identification of economic outcomes.
Meta-claim about the paper's methods explicitly stated in the Data & Methods summary; based on the paper's methodological description.
high null result Toward a science of human–AI teaming for decision-making: A ... presence/absence of empirical datasets or causal identification studies in the p...
Key measurable outcomes to assess Human–AI teams include accuracy/efficiency, robustness to novel cases, decision consistency, trust/misuse rates, training costs, and inequity indicators.
Prescriptive list of metrics offered by the authors as part of the research agenda and evaluation guidance; not empirically derived from a dataset in the paper.
high null result Toward a science of human–AI teaming for decision-making: A ... accuracy, efficiency, robustness, consistency, trust/misuse rates, training cost...
Empirical evaluation strategies for Human–AI teams should include randomized interventions, field trials, lab experiments, phased rollouts (difference-in-differences), and structural models that allow interaction terms between human skill and AI quality.
Methodological recommendation in the paper; suggested study designs rather than implemented analyses.
high null result Toward a science of human–AI teaming for decision-making: A ... appropriate empirical identification of team-level complementarities and causal ...
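The phased-rollout (difference-in-differences) design mentioned above reduces, in its simplest two-group, two-period form, to a single contrast; a sketch with invented numbers:

```python
# Minimal two-group, two-period difference-in-differences sketch for a phased
# AI rollout (all numbers hypothetical). The DiD estimate is
# (treated_post - treated_pre) - (control_post - control_pre).
treated_pre, treated_post = 50.0, 58.0   # mean outcome, teams given the AI tool
control_pre, control_post = 49.0, 52.0   # mean outcome, teams not yet rolled out

did = (treated_post - treated_pre) - (control_post - control_pre)
print(did)  # 5.0, interpretable as causal only under parallel trends
```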
Research priorities include empirical measurement of task‑level automation rates, firm and industry productivity effects, wage impacts across occupations, and diffusion patterns.
Paper's stated research agenda and identification of measurement gaps; based on methodological critique of current evidence base.
high null result How AI Will Transform the Daily Life of a Techie within 5 Ye... future empirical research outputs on automation rates, productivity, wage impact...
Measuring these productivity gains will be challenging because quality improvements, faster iteration, and creative outputs are harder to price/observe than lines of code.
Methodological argument about measurement difficulty; based on conceptual considerations, not empirical validation.
high null result How AI Will Transform the Daily Life of a Techie within 5 Ye... observability and measurability of productivity gains (availability of suitable ...
Measuring AI's economic impact requires new metrics that account for decision-value uplift, reduced tail-risk exposures, and dynamic gains from continuous learning; causal identification will require experiments or staggered rollouts.
Methodological recommendation backed by conceptual discussion of measurement challenges; no implementation of such measurement approaches is reported in the paper.
high null result Next-Generation Financial Analytics Frameworks for AI-Enable... proposed measurement constructs (decision-value uplift, tail-risk reduction, lea...
Performance and evaluation should be measured using forecast accuracy, decision lift/value added, latency, and false positive/negative rates.
Paper-prescribed evaluation metrics; presented as recommended practice rather than derived from empirical testing within the paper.
high null result Next-Generation Financial Analytics Frameworks for AI-Enable... forecast accuracy, decision lift (value added), system latency, false positive/n...
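The listed error-rate metrics follow directly from a confusion matrix; a sketch with hypothetical counts for an alerting model:

```python
# Hypothetical confusion-matrix counts, used to compute the false positive/
# negative rates the paper recommends alongside forecast accuracy.
tp, fp, tn, fn = 80, 20, 890, 10

accuracy = (tp + tn) / (tp + fp + tn + fn)
false_positive_rate = fp / (fp + tn)   # share of true negatives wrongly flagged
false_negative_rate = fn / (fn + tp)   # share of true positives missed
print(accuracy, round(false_positive_rate, 4), round(false_negative_rate, 4))
```

Decision lift would additionally require a counterfactual baseline (decisions made without the model), which is why the paper pairs these metrics with experimental designs.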
Core AI techniques for these frameworks include supervised/unsupervised ML, NLP for unstructured text, anomaly detection for control/transaction monitoring, and reinforcement/prescriptive models for recommendations.
Methodological claim listing standard ML/NLP/anomaly-detection techniques and prescriptive approaches; statement of methods rather than an empirical comparison of alternatives.
high null result Next-Generation Financial Analytics Frameworks for AI-Enable... method adoption/type metrics (e.g., frequency of supervised vs. unsupervised met...
Next‑gen frameworks use large-scale structured (transactions, ledgers, KPIs) and unstructured sources (reports, news, contracts, call transcripts) to power models.
Descriptive claim listing data types the paper recommends; presented as design input requirements rather than empirically validated data-integration projects.
high null result Next-Generation Financial Analytics Frameworks for AI-Enable... data coverage and diversity (e.g., proportion of structured vs. unstructured inp...
There is a need for quantitative studies and microdata on firm-level RM practices, AI adoption, and performance outcomes to measure effect sizes and causal pathways.
Stated research gaps and limitations in the review (lack of primary empirical quantification; heterogeneity across contexts).
high null result The Role of Risk Management as an Organizational Management ... availability of quantitative evidence on RM effects (effect sizes, causal estima...
The review's conclusions are limited by reliance on published literature (potential bias toward successful implementations), lack of primary empirical quantification (no effect sizes), and heterogeneity across organizational contexts limiting direct generalizability.
Explicit limitations stated in the paper summarizing scope and method (qualitative literature review, secondary evidence only).
high null result The Role of Risk Management as an Organizational Management ... generalizability and empirical precision of review findings
Heterogeneity in system designs and deployment contexts complicates cross-site comparisons.
Limitations section and observed variation in platform architectures, degrees of automation, and governance across sites reported via descriptive data and interviews.
high null result The Role of Artificial Intelligence in Healthcare Complaint ... comparability across deployment sites (heterogeneity in systems and contexts)
Non-random selection of institutions limits causal inference and external generalizability of the study's findings.
Study limitations explicitly state non-random site selection and heterogeneous deployments; methodological note that causal claims are constrained.
high null result The Role of Artificial Intelligence in Healthcare Complaint ... generalizability and causal inference validity
Estimation/calibration, stability assessment, and global sensitivity methods used: parameters calibrated/estimated on 2016–2023 data; equilibrium located; Jacobian eigenvalues computed for local stability; variance-based global sensitivity analysis performed over parameter space.
Methods section: description of parameter estimation/calibration, equilibrium computation, Jacobian-based stability analysis, and variance-based global sensitivity analysis.
high null result Governance of Technological Transition: A Predator-Prey Anal... methodological procedures applied (estimation, stability analysis, GSA)
The main empirical conclusions are based on a short annual panel (2016–2023) and a stylized aggregate interaction model; results should be interpreted with caution due to potential omitted variables, aggregation bias, and limited sample size.
Explicit limitations listed in the paper: short time series (eight annual observations), national aggregate data, simplified model structure, no firm/sector heterogeneity, possible endogeneity/measurement issues.
high null result Governance of Technological Transition: A Predator-Prey Anal... validity/robustness of empirical conclusions (limitations)
The empirical analysis uses annual, national-level aggregate Chinese series for 2016–2023 as proxies for AI capital, physical capital stock, and labor compensation (wage bill).
Data description in Data & Methods: annual Chinese aggregate series 2016–2023. Implied sample length: 2016–2023 inclusive (8 annual observations); national-level aggregates, no firm-level heterogeneity modeled.
high null result Governance of Technological Transition: A Predator-Prey Anal... AI capital proxy; physical capital stock; labor compensation (wage bill)
The paper models interactions among AI capital, physical capital, and labor using a Lotka–Volterra (predator–prey type) system adapted to include self-limiting (saturation) terms.
Model specification described in Methods: deterministic Lotka–Volterra system with added self-limitation terms for three stocks (AI capital, physical capital, labor).
high null result Governance of Technological Transition: A Predator-Prey Anal... model structure / interaction specification (no single dependent variable)
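The self-limited Lotka–Volterra structure and the Jacobian-based stability check described for this paper can be sketched as follows; parameter values are invented for illustration, not the paper's calibrated 2016–2023 estimates:

```python
import numpy as np

# Illustrative three-stock interaction system with self-limitation:
#   dx_i/dt = x_i * (r_i + sum_j M_ij * x_j),  with M_ii < 0 (saturation terms)
# Stocks: AI capital, physical capital, labor. All values are made up.
r = np.array([0.30, 0.20, 0.10])
M = np.array([
    [-0.10,  0.02, -0.01],
    [ 0.01, -0.08,  0.02],
    [-0.02,  0.01, -0.05],
])

# Interior equilibrium solves r + M @ x = 0.
x_star = np.linalg.solve(M, -r)

# At that equilibrium the Jacobian of the system simplifies to diag(x*) @ M;
# local stability requires every eigenvalue to have negative real part.
J = np.diag(x_star) @ M
eigs = np.linalg.eigvals(J)
print(x_star, np.all(eigs.real < 0))
```

The variance-based global sensitivity analysis the paper reports would then re-run this solve over sampled parameter draws rather than a single (r, M).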
Instrumental-variable (IV) estimation is used to address endogeneity of AI adoption and to identify causal effects on employment and wages.
Paper states IV identification strategy applied to the 38-country panel; robustness checks and alternative specifications reported (paper refers to instrument details in full text).
high null result Artificial Intelligence and Labor Market Transformation: Emp... Causal estimate identification strategy for employment and wage outcomes
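The two-stage least squares logic behind such an IV strategy can be illustrated on simulated data; the instrument, coefficients, and sample below are invented, and the paper's 38-country panel is not used:

```python
import numpy as np

# 2SLS sketch: an instrument z shifts adoption but affects wages only through
# adoption, so projecting adoption on z purges the unobserved confounder u.
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                  # instrument
u = rng.normal(size=n)                  # unobserved confounder
adoption = 0.8 * z + u + rng.normal(size=n)
wages = 2.0 * adoption - 1.5 * u + rng.normal(size=n)   # true effect = 2.0

# Stage 1: fit adoption on the instrument; Stage 2: regress wages on the fit.
X1 = np.column_stack([np.ones(n), z])
adoption_hat = X1 @ np.linalg.lstsq(X1, adoption, rcond=None)[0]
X2 = np.column_stack([np.ones(n), adoption_hat])
beta_iv = np.linalg.lstsq(X2, wages, rcond=None)[0][1]
print(beta_iv)  # recovers roughly 2.0; plain OLS here is biased (plim about 1.4)
```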
The AI Adoption Index is constructed as a composite measure combining enterprise investment in AI, AI-related patent filings, and workforce/firm surveys on AI use across 38 OECD countries (2019–2025).
Paper's methodological description of the index construction; data sources enumerated as investment, patenting, and survey measures over the panel period.
high null result Artificial Intelligence and Labor Market Transformation: Emp... AI adoption intensity (composite index)
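A common way to build such a composite index is to standardize each component and average the z-scores; a sketch with invented country values (the paper's actual weighting scheme is not specified here):

```python
# Hypothetical component data for three countries.
countries = {
    "A": {"investment": 1.2, "patents": 300, "survey_share": 0.35},
    "B": {"investment": 0.4, "patents": 120, "survey_share": 0.20},
    "C": {"investment": 0.9, "patents": 210, "survey_share": 0.30},
}

def zscores(values):
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - m) / sd for v in values]

names = list(countries)
components = ["investment", "patents", "survey_share"]
z = {c: zscores([countries[n][c] for n in names]) for c in components}
index = {n: sum(z[c][i] for c in components) / len(components)
         for i, n in enumerate(names)}
print(max(index, key=index.get))  # "A": highest on every standardized component
```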
The paper is entirely theoretical/analytical and does not report an empirical dataset.
Paper methodology section and abstract state primary tool is an analytical economic model; no empirical data or sample sizes are reported.
high null result Janus-Faced Technological Progress and the Arms Race in the ... presence/absence of empirical dataset
The same formal framework can be interpreted as a firm-level model where human skill investment maps onto AI/chatbot investment decisions.
Paper provides an alternative interpretation and formally maps agent skill-investment choices into an analogous firm R&D/AI-capital decision problem within the same mathematical framework.
high null result Janus-Faced Technological Progress and the Arms Race in the ... conceptual mapping between individual skill investment and firm AI investment (m...
Research and monitoring priorities for economists include task-level analyses of substitutability/complementarity, modeling adoption as a function of regulatory costs and reimbursement incentives, and evaluating long-run welfare and distributional effects.
Explicit research recommendations stated in the narrative review, based on gaps identified in the literature and evolving empirical questions.
high null result Will AI Replace Physicians in the Near Future? AI Adoption B... research activity in recommended areas; quality of evidence informing policy
Policymakers and payers should consider liability reform, reimbursement models that reward safe human–AI collaboration, funding for independent clinical validation, and measures to prevent market concentration.
Policy recommendations and implications derived from the narrative review's synthesis of regulatory, economic, and implementation challenges.
high null result Will AI Replace Physicians in the Near Future? AI Adoption B... policy actions implemented (liability reform, reimbursement changes, funding all...
There is a need for validated administrative and firm-level data on AI adoption, workplace monitoring, and worker outcomes, and for evaluation of policy interventions (mandated impact assessments, transparency requirements, worker representation rules) using randomized or quasi-experimental designs where feasible.
Research and measurement priorities set out in the commentary based on identified gaps; prescriptive recommendation rather than evidence-based finding.
high null result AI governance under the second Trump administration: implica... availability of validated administrative and firm-level AI adoption data; existe...
The paper is a policy and legal commentary/synthesis and not an empirical causal study; it does not provide microdata on employment or wage effects but identifies plausible channels and institutional dynamics.
Author-stated methodology and limitations section describing type of study and data sources; explicitly reports lack of primary empirical data.
high null result AI governance under the second Trump administration: implica... study type / presence of primary empirical data
The federal U.S. approach to AI governance combines export controls for key AI hardware/software with a relatively permissive domestic regulatory stance that relies on executive guidance, voluntary standards, and sector-specific measures rather than comprehensive federal worker protections.
Comparative policy and legal review of federal-level instruments (export control lists, executive orders, agency guidance, proposed/final rules) described in the commentary; no primary empirical data or sample size.
high null result AI governance under the second Trump administration: implica... regulatory posture / governance instruments at federal level (export controls; p...
The report has limited primary quantitative impact evaluation and relies on policy texts and secondary sources rather than large-scale empirical measurement of AI’s economic effects.
Explicit limitations section in the report describing methods and data constraints.
high null result AI Governance and Data Privacy: Comparative Analysis of U.S.... presence/absence of primary quantitative impact evaluation of AI's economic effe...
Methodological needs for AI-era labor models include dynamic skill taxonomies, high-frequency labor data (job postings, firm-level automation measures), and uncertainty quantification.
Paper's Research & policy recommendations and Methodological needs section (explicit recommendations).
high null result AI-Based Predictive Skill Gap Analysis for Workforce Plannin... requirements for model inputs and design (dynamic taxonomies, data frequency, un...
The scenario analysis framework varies economic growth, automation rates, policy interventions, and investment to produce probabilistic demand–supply gaps.
Methods description of scenario analysis components and the variables varied in scenario experiments (explicit in Data & Methods).
high null result AI-Based Predictive Skill Gap Analysis for Workforce Plannin... probabilistic demand–supply gap distributions produced under varied scenario par...
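A probabilistic demand–supply gap of this kind is typically produced by Monte Carlo draws over the scenario parameters; a sketch with invented distributions and coefficients:

```python
import random

# Draw scenario parameters (growth, automation rate, policy strength) and
# propagate each draw to a skill-gap figure, yielding a gap distribution.
random.seed(1)
gaps = []
for _ in range(10_000):
    growth = random.gauss(0.02, 0.01)        # regional economic growth
    automation = random.uniform(0.01, 0.05)  # share of tasks automated per year
    policy = random.uniform(0.0, 1.0)        # intervention strength (0-1)
    demand = 100 * (1 + growth) * (1 + automation)  # workers needed (index)
    supply = 95 * (1 + 0.03 * policy)               # workers trained (index)
    gaps.append(demand - supply)

gaps.sort()
median_gap = gaps[len(gaps) // 2]
p90_gap = gaps[int(0.9 * len(gaps))]
print(round(median_gap, 1), round(p90_gap, 1))
```

Reporting quantiles of the sorted draws (median, 90th percentile) is what makes the output "probabilistic" rather than a point forecast.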
Intended users of the Hub include organizations, educational institutions, and policymakers to inform reskilling/education strategies, regional economic policy, and labor-market interventions.
Explicit statement of target users and use cases in the Key Points / Implications sections.
high null result AI-Based Predictive Skill Gap Analysis for Workforce Plannin... targeting of outputs to specified stakeholder groups (intended adoption/use-case...
The system produces interpretable outputs for stakeholders: demand–supply trend analysis, geospatial hotspot maps, skill-gap radar charts, and policy simulation dashboards.
Paper's description of outputs and interactive visual analytics (listed output modalities).
high null result AI-Based Predictive Skill Gap Analysis for Workforce Plannin... generation of interpretable visual/analytic artifacts (trend charts, hotspot map...
The core modeling approach uses probabilistic growth modeling combined with intelligent skill synthesis to estimate future workforce requirements under alternative economic and policy scenarios.
Methods section describing the modeling components: probabilistic growth modeling and intelligent skill synthesis (architectural description).
high null result AI-Based Predictive Skill Gap Analysis for Workforce Plannin... probabilistic forecasts of future workforce requirements by sector/region under ...
The platform integrates multiple indicators such as regional economic growth projections, automation velocity, policy intervention strength, investment intensity, and market volatility (macro- and micro-level indicators).
List of input indicators given in the Data & Methods section of the paper (explicit enumeration of macro and micro variables).
high null result AI-Based Predictive Skill Gap Analysis for Workforce Plannin... integration of listed macro- and micro-level indicators into the modelling pipel...
Significant empirical gaps remain on long-term impacts (wage trajectories, employment composition, firm-level returns), verification/remediation cost quantification, and public-good risks of insecure code proliferation.
Cross-study synthesis explicitly identifying missing longitudinal and firm-level empirical research in the reviewed literature.
high null result ChatGPT as a Tool for Programming Assistance and Code Develo... absence or paucity of longitudinal studies and firm-level quantitative measureme...
The paper's conclusions are limited by reliance on secondary sources, heterogeneous cross‑study comparisons, limited causal identification of long‑run macro effects, and measurement challenges for AI‑driven intangible capital.
Authors' stated limitations section summarizing the nature of evidence used (qualitative literature review, secondary macro indicators, sectoral examples); this is an explicit self‑reported methodological limitation rather than an external empirical finding.
high null result AI and Robotics Redefine Output and Growth: The New Producti... strength of causal inference and measurement validity
Methodology used in the paper is a narrative review relying on secondary sources (literature, legal cases, policy reports, empirical perception studies) and conceptual synthesis; no new primary data were collected.
Paper's Data & Methods section explicitly states narrative review and secondary-data analysis.
high null result Ethical and societal challenges to the adoption of generativ... study methodology (use of secondary sources; absence of primary data)
Important empirical research gaps remain (consumer willingness-to-pay for authenticated vs. synthetic content, labor-displacement elasticities, market concentration dynamics, and cost–benefit evaluations of regulatory options).
Explicit statement of limitations and research needs in the paper, based on the authors' narrative review and absence of primary empirical studies within the paper.
high null result Ethical and societal challenges to the adoption of generativ... identified gaps in empirical knowledge and priority research questions
The paper's methodology is a secondary-data, narrative (qualitative) literature review; it contains no original empirical data or primary quantitative analysis.
Explicit methodological statement in the paper describing secondary data analysis and narrative synthesis; absence of primary datasets or statistical analyses.
high null result Ethical and societal challenges to the adoption of generativ... presence or absence of original empirical data
This paper is conceptual/theoretical and does not conduct primary empirical data collection.
Explicit methodological statement in the paper's Data & Methods section.
high null result Continental shift: operations and supply chain management re... study type (conceptual vs empirical)
The paper is primarily conceptual/architectural and does not present large empirical studies quantifying the phenomenon across firms or repositories.
Explicit methodological statement in the paper describing its use of thought experiments, mechanism reasoning, and illustrative examples rather than empirical datasets.
high null result Overton Framework v1.0: Cognitive Interlocks for Integrity i... presence/absence of empirical studies within the paper (binary)
The paper's conclusions are drawn from a mix of evidence types including literature review, surveys/interviews, case studies, usage-log or publication-metric analyses, and controlled experiments—although the abstract does not specify which of these were actually used or the sample sizes.
Explicitly noted in the Data & Methods summary as the likely underlying evidence types; the paper's abstract itself does not document original data or detailed methods.
high null result Artificial Intelligence for Improving Research Productivity ... methodological provenance (types of evidence used; presence/absence of original ...
There is a lack of large‑scale causal evidence on generative AI’s effects; the paper recommends RCTs, difference‑in‑differences, matched employer–employee panels, and longitudinal studies to fill empirical gaps.
Methodological critique and research agenda provided in the review; observation based on the authors' survey of the literature.
high null result The Use of ChatGPT in Business Productivity and Workflow Opt... n/a (research design recommendation; outcome is future evidence generation)
Policy interventions are needed for data protection, bias mitigation, model transparency, accountability, and public investments in workforce retraining to smooth transitions and reduce inequality.
Normative policy recommendations grounded in the review's synthesis of risks and distributional concerns; not an empirical claim but a recommendation.
high null result The Use of ChatGPT in Business Productivity and Workflow Opt... policy adoption (existence of regulations, programs), outcomes: retraining parti...
New productivity metrics are needed to capture AI impacts, including time‑use changes, quality‑adjusted output, and accounting for intangible AI capital.
Methodological recommendation from the conceptual synthesis, motivated by limitations of existing measures discussed in the paper.
high null result The Use of ChatGPT in Business Productivity and Workflow Opt... n/a (recommendation for metrics: time use, quality‑adjusted output, AI capital a...
The paper is a policy-design and conceptual-architecture work and presents no original microdata or econometric estimates.
Methods section explicitly states absence of original empirical data; document contains policy proposals and modeling agenda only.
high null result Token Taxes: mitigating AGI's economic risks presence/absence of original empirical data in the paper
Token taxes are usage-based surcharges applied at the point of sale for model inference (i.e., charged per token or per inference request).
Paper's definitional specification and conceptual description; policy-design discussion (no empirical data).
high null result Token Taxes: mitigating AGI's economic risks tax charged per token / per inference request (tax base definition)
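One possible parameterization of such a surcharge, here read as an ad valorem charge on inference spend collected at the point of sale (the rate and price below are hypothetical, not the paper's):

```python
# Hypothetical provider price and tax rate.
BASE_PRICE_PER_1K_TOKENS = 0.002   # USD per 1,000 tokens of inference
TOKEN_TAX_RATE = 0.10              # surcharge on inference spend

def invoice(tokens: int) -> dict:
    base = tokens / 1000 * BASE_PRICE_PER_1K_TOKENS
    tax = base * TOKEN_TAX_RATE    # collected at the point of sale
    return {"base": round(base, 6), "tax": round(tax, 6), "total": round(base + tax, 6)}

print(invoice(1_000_000))  # {'base': 2.0, 'tax': 0.2, 'total': 2.2}
```

A flat per-token levy (tax = tokens × rate) is the other reading the paper's definition allows; only the tax base changes.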
Static equilibrium and representative-agent models neglect dynamic reallocation, task re-bundling, and firm-level heterogeneity, limiting their realism for forecasting labour outcomes under AI adoption.
Theoretical critique offered in the paper and referenced critiques in the literature; evidence is conceptual and based on model assumptions identified across studies.
high null result Recent Methodologies on AI and Labour - a Desk Review completeness/realism of economic models used to forecast labour-market effects
Common empirical strategies (cross-sectional exposure correlations and panel-difference analyses) often lack strong causal identification due to endogeneity of adoption and unobserved confounders.
Surveyed analytical strategies and explicit critique in the paper noting endogeneity and confounding; evidence is methodological critique grounded in the literature's reliance on observational exposure measures.
high null result Recent Methodologies on AI and Labour - a Desk Review validity of causal estimates of AI adoption effects on labour outcomes