Evidence (4793 claims)
Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding. For some outcomes the row total exceeds the sum of the four direction columns; the difference corresponds to claims whose direction was not categorized.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Productivity
Realizing macro gains requires complementary investments in classical compute, data infrastructure, workforce training, and hybrid classical–quantum integration tools.
Model sensitivity analyses showing that raising quantum adoption parameters without sufficient complementary inputs yields smaller macro impacts; calibration draws on historical complementary-investment patterns for enabling technologies.
Quantum offers sectoral advantages (optimization, materials discovery, cryptography-safe transitions, drug discovery, finance, logistics) that could raise productivity in targeted industries rather than producing uniform economy-wide shocks.
Productivity mapping that converts sectoral adoption into Hicks-neutral TFP shocks based on micro evidence and case studies (materials discovery, optimization deployments); diffusion models parameterized with sectoral heterogeneity.
Quantum computing has the potential to generate substantial long-run productivity gains across multiple sectors.
Scenario-based macroeconomic modeling that translates sectoral quantum adoption into TFP shocks and simulates outcomes in multi-sector CGE/growth models; parameters calibrated with micro evidence of quantum advantages and historical analogs (cloud, GPUs, AI toolchains); Monte Carlo / scenario ensembles.
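The scenario-ensemble logic described above can be sketched in a few lines: sample sectoral adoption, map it into an aggregate Hicks-neutral TFP uplift, and compound over a horizon. All sector weights and uplift ranges below are hypothetical placeholders, not the paper's calibration.

```python
import random

# Illustrative sketch (not the paper's model): map sectoral quantum adoption
# into an aggregate TFP shock and compound it over a horizon.
# name: (GDP share, (low, high) annual TFP uplift conditional on adoption)
SECTORS = {
    "optimization_logistics": (0.10, (0.001, 0.004)),
    "materials_discovery":    (0.05, (0.002, 0.006)),
    "finance":                (0.08, (0.001, 0.003)),
    "other":                  (0.77, (0.000, 0.001)),
}

def simulate_gdp_uplift(horizon=10, draws=1000, adoption_prob=0.5, seed=0):
    """Monte Carlo ensemble: each draw samples sectoral adoption and a TFP
    uplift, then compounds the aggregate shock over `horizon` years."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(draws):
        agg_uplift = 0.0
        for share, (lo, hi) in SECTORS.values():
            if rng.random() < adoption_prob:  # this sector adopts quantum
                agg_uplift += share * rng.uniform(lo, hi)
        outcomes.append((1.0 + agg_uplift) ** horizon - 1.0)  # cumulative gain
    return sum(outcomes) / len(outcomes)

print(f"mean 10-year GDP uplift: {simulate_gdp_uplift():.2%}")
```

A full CGE or growth model would replace the compounding step with sector-linked production functions; the point here is only the adoption-to-TFP-shock mapping and the scenario ensemble.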
The pilot policy is associated with increases in firm-level ESG scores and green-investment flows (direct effects of policy on the mediators).
Reduced-form DID estimates using ESG scores and green-investment flows as dependent variables show positive, statistically significant treatment effects.
When executives have both high green cognition and high digital cognition, the two cognitions reinforce each other, producing a significantly positive enabling effect on the policy's impact (facilitating integrated green+digital innovation and reducing adjustment frictions).
Triple-interaction or subgroup analysis combining high-green and high-digital executive cognition indicators within the DID framework, showing a significant positive effect larger than either cognition alone.
High executive green cognition strengthens the marginal positive effect of the green data center pilot policy on firms' energy utilization efficiency.
Moderation analysis interacting the policy treatment with an executive-level green-cognition measure in DID regressions; positive and significant interaction coefficients reported.
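The moderation logic behind these interaction terms can be illustrated with a toy 2x2 example: in a saturated two-period, two-group design, the DID estimate is a difference of group-mean changes, and the moderation (interaction) effect is the gap between DID estimates in the high- and low-cognition subsamples. All numbers below are hypothetical; this is not the paper's estimator.

```python
# Toy 2x2 difference-in-differences with a moderation comparison.

def did(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Canonical 2x2 DID from group means."""
    return (post_treat - pre_treat) - (post_ctrl - pre_ctrl)

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical energy-efficiency scores (cognition x treatment x period).
high_cog = did(mean([1.0, 1.1]), mean([1.5, 1.6]),   # treated pre/post
               mean([1.0, 1.0]), mean([1.1, 1.2]))   # control pre/post
low_cog  = did(mean([1.0, 1.1]), mean([1.2, 1.3]),
               mean([1.0, 1.0]), mean([1.1, 1.2]))

interaction = high_cog - low_cog  # moderation effect of green cognition
print(high_cog, low_cog, interaction)
```

In a regression framework the same quantity appears as the coefficient on the policy-by-cognition (or triple) interaction term.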
The policy effect on energy utilization efficiency is more pronounced for mature-stage firms than for early-stage firms.
Subsample analysis by firm life-cycle stage (firm-level lifecycle classification) showing statistically larger policy effects for mature firms in the DID estimates.
Firms operating in more competitive industries experience larger energy-efficiency gains from the green data center pilot policy.
Heterogeneity tests by industry competition (industry-level competition measure) within the DID framework, showing larger policy coefficients for firms in high-competition industries.
The policy's positive impact on energy utilization efficiency is stronger in resource-based cities than in non-resource-based cities.
Heterogeneity analysis splitting the sample by city type (resource-based indicator) and estimating DID effects separately; larger and statistically stronger coefficients reported for resource-based city subsample.
Policy-induced increases in firms' green investment constitute another primary channel through which the pilot policy improves energy utilization efficiency.
Mediation/channel analysis using firm green-investment flow measures in DID regressions; policy assignment is associated with increases in green investment and these increases account for part of the policy's effect on energy efficiency.
Improved firm ESG performance mediates part of the positive effect of the green data center pilot policy on corporate energy utilization efficiency.
Regression-based mediation tests within the DID framework using firm-level ESG scores as the mediator; inclusion of ESG reduces the estimated policy coefficient and mediator effects are reported as significant.
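A regression-based mediation check of this kind, in the Baron-Kenny spirit, can be sketched on simulated data: estimate the total policy effect, the policy-to-mediator path, and the direct effect controlling for the mediator. The variable names and effect sizes are hypothetical, not the paper's specification.

```python
import numpy as np

# Simulated data: policy -> ESG (mediator) -> energy efficiency (outcome).
rng = np.random.default_rng(0)
n = 2000
policy = rng.integers(0, 2, n).astype(float)          # pilot assignment
esg = 0.8 * policy + rng.normal(0, 1, n)              # mediator: ESG score
eff = 0.3 * policy + 0.5 * esg + rng.normal(0, 1, n)  # outcome

def ols(y, *xs):
    """Slopes from an OLS fit with intercept (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total, = ols(eff, policy)           # step 1: total policy effect
a, = ols(esg, policy)               # step 2: policy -> mediator
direct, b = ols(eff, policy, esg)   # step 3: effect controlling for mediator

print(f"total={total:.2f} direct={direct:.2f} indirect={a * b:.2f}")
```

The attenuation of the policy coefficient once the mediator is included (total = direct + a*b, an exact OLS identity) is the pattern the paper reports for ESG and green investment.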
Immediate research priorities for AI economists include: field experiments testing NLP‑driven acquisition/personalization (measuring CAC, LTV, retention, consumer welfare); structural/empirical models of adoption that include data access costs and complementarities; and analyses of privacy regulation impacts on external text data availability and value.
Authors' set of recommended research directions derived from identified gaps in the systematic review and implications for AI economics.
Unit costs for bookkeeping and compliance tasks are likely to fall, potentially affecting professional services pricing and leading to consolidation.
Analytic inference from case advantages and industry literature; no empirical market-wide cost study included.
Generative AI can raise labor productivity in finance and tax, shifting work from routine processing to oversight, exceptions handling, and higher-value analysis.
Analytical framing supported by case observations and literature; presented as an expected economic effect rather than measured across a population.
Successful deployment requires new human capital: finance professionals with AI literacy, data governance, model validation, and control expertise.
Paper's labor and skills implications derived from case examples and analytic framing; recommendation-based observation rather than measured workforce data.
Generative AI improved decision support, relative to conventional workflows, via scenario analysis and anomaly prioritization.
Descriptive case examples and literature indicating use of LLMs and RAG systems for drafting scenarios and prioritizing anomalies; evidence is qualitative and illustrative.
Generative AI adoption produced cost savings through labor reallocation and task automation.
Qualitative evidence from Xiaomi and Deloitte case analysis and analytic framing suggesting lower labor requirements for routine tasks; no standardized cost-accounting or sample-wide cost metrics provided.
Using generative AI led to higher consistency and reduced human error in repetitive finance/tax tasks.
Case-driven qualitative observations from the two organizational examples and literature synthesis indicating reduced variability in repetitive processes when AI-assisted.
Generative AI deployment increased processing speed and throughput for routine finance and tax tasks.
Observed improvements reported in case studies (Xiaomi and Deloitte) and corroborating industry/literature sources described in the paper; qualitative descriptions rather than standardized time-motion metrics.
Applying generative AI within corporate financial sharing centers (illustrated by Xiaomi’s Financial Sharing Center) and professional services firms (Deloitte) materially improves the efficiency and accuracy of finance and tax operations.
Qualitative case analysis of two organizations (Xiaomi Financial Sharing Center and Deloitte) supplemented by literature review and analytical mapping; no large-scale quantitative measurement reported.
Active participation by digital platforms (e.g., certification, audit trails) is required to operationalize technical standards and enable practical compliance mechanisms.
Argumentation from case studies and scenario analysis highlighting platforms' technical capabilities and governance roles; illustrative examples rather than systematic measurement.
Regional agreements and plurilateral initiatives are being used as testing grounds for harmonizing standards and procedures prior to broader adoption.
Case studies and institutional observations of regional/plurilateral policy experiments (specific agreements referenced in examples but not exhaustively quantified).
AI enables new forms of digital cross-border trade such as AI-as-a-service and algorithmic intermediaries.
Conceptual mapping/theoretical analysis and descriptive case examples drawn from policy and market literature; case study details and counts not specified.
AI lowers traditional trade frictions (search, matching, logistics, customs).
Theoretical/mechanism analysis supported by illustrative case studies and secondary literature on digital platforms and AI applications; no quantitative sample size or econometric estimates reported.
Phased deployment and regulatory sandboxes can lower barriers for startups to pilot lower-risk applications, thereby shaping innovation trajectories.
Comparative policy analysis of sandboxing and phased deployment approaches in other jurisdictions; prescriptive inference without empirical testing in Vietnam.
Properly governed AI can yield large efficiency gains (reduced processing time and lower per-case costs), but those gains depend on redesigning legal processes to accommodate algorithmic workflows.
Analytic synthesis of administrative-process characteristics and AI capabilities; no primary quantitative evidence or measured effect sizes provided.
Establishing a graduated implementation model and clear regulatory pathways reduces regulatory uncertainty and makes public-sector AI procurement and private-market participation more predictable and attractive.
Normative recommendation informed by comparative institutional analysis and economic reasoning; not empirically tested in the paper.
A graduated implementation model—phased deployment, differentiated safeguards by risk, and mandatory human oversight for high-stakes decisions—can balance innovation with rule-of-law protections.
Normative framework development combining doctrinal findings and comparative lessons; prescriptive recommendation rather than empirical validation.
Comparative analysis of international frameworks reveals a range of institutional responses and regulatory instruments that Vietnam could adapt.
Comparative institutional analysis synthesizing governance approaches from liberal and civil-law jurisdictions (review of secondary sources and policy frameworks).
AI can substantially modernize administrative decision-making in civil-law systems (speed, consistency, scalability).
Qualitative doctrinal and comparative institutional analysis using Vietnam as a focused case study; no primary quantitative field data or sample size.
Adoption of AI feedback could lower marginal costs of delivering high-quality feedback and change fixed vs. variable cost structures for instruction delivery.
Economic implication discussed by workshop participants (50 scholars) as a theoretical possibility; no quantitative cost estimates in the report.
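The fixed-versus-variable cost shift the participants describe can be made concrete with a back-of-envelope model; every number below is a hypothetical assumption, not data from the report.

```python
# Hypothetical cost comparison: human feedback is almost entirely variable
# cost (instructor hours per student); AI feedback trades a fixed setup cost
# for a small per-student marginal cost.

def cost_per_student(students, fixed, variable):
    """Average cost per student = amortized fixed cost + marginal cost."""
    return fixed / students + variable

human = cost_per_student(students=200, fixed=0.0,    variable=40.0)  # $/student
ai    = cost_per_student(students=200, fixed=5000.0, variable=1.0)

print(f"human: ${human:.2f}/student, AI: ${ai:.2f}/student")
```

At small enrollments the fixed cost dominates and human feedback is cheaper; past the crossover point the AI regime's low marginal cost wins, which is the scale argument behind the claim.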
Generative AI can enable new feedback modalities (text, hints, worked examples, formative prompts) adaptable to content and learner needs.
Thematic conclusions from the interdisciplinary meeting of 50 scholars, describing possible modality generation capabilities of current generative models; no empirical modality-comparison data provided.
Immediate AI-generated feedback may sustain learner momentum and improve formative assessment cycles (timeliness & engagement).
Expert-opinion synthesis from structured workshop (50 scholars) identifying timely feedback as a potential pedagogical benefit; no empirical trials reported.
Large language and generative models can tailor explanations, scaffolding, and practice to learners' current states and preferences (personalization).
Workshop expert consensus and thematic synthesis from 50 interdisciplinary scholars; illustrative examples discussed rather than empirical evaluation.
Generative AI can produce real-time, individualized feedback at scale, potentially reducing per-student feedback costs and increasing feedback frequency.
Synthesis of expert perspectives from an interdisciplinary workshop of 50 scholars (educational psychology, computer science, learning sciences); qualitative small-group activities and thematic extraction. No primary experimental or quantitative cost data presented.
SlideFormer generalizes beyond a single GPU vendor (the design achieves high utilization on both NVIDIA and AMD GPUs).
Reported experiments and utilization measurements on both NVIDIA (RTX 4090) and AMD GPUs showing sustained performance above 95% of peak, implying cross-vendor applicability. The summary does not specify which AMD models or the breadth of tested kernels.
Custom Triton kernels and advanced I/O integration remove key bottlenecks in single-GPU fine-tuning pipelines and contribute to the observed throughput gains.
Paper reports the use of custom Triton kernels for performance-critical primitives and improved I/O integration; throughput gains (1.40×–6.27×) are attributed in part to these optimizations. The summary does not isolate ablation results quantifying each optimization's contribution.
Heterogeneous memory management (multi-tier placement across GPU, CPU, and storage) materially reduces peak on-device memory requirements.
Authors describe an efficient memory layout and placement strategy across GPU, host RAM, and storage tiers and report lowered peak device memory use (≈2× reduction). The summary does not include low-level placement parameters or traces.
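The summary does not detail SlideFormer's placement algorithm, but the general shape of multi-tier placement can be sketched as a greedy assignment: hot tensors go to GPU first, and overflow spills to host RAM, then storage. Tier capacities, tensor names, and access frequencies below are invented for illustration.

```python
# Illustrative greedy multi-tier placement (an assumption, not SlideFormer's
# actual algorithm): sort tensors by access frequency so the hottest data
# lands on-device, spilling to slower tiers as capacity runs out.

TIERS = [("gpu", 24), ("cpu", 64), ("storage", 1024)]  # capacity in GB

def place(tensors):
    """tensors: list of (name, size_gb, access_frequency) -> {name: tier}."""
    remaining = {name: cap for name, cap in TIERS}
    placement = {}
    for name, size, _freq in sorted(tensors, key=lambda t: -t[2]):
        for tier, _cap in TIERS:
            if remaining[tier] >= size:
                remaining[tier] -= size
                placement[name] = tier
                break
        else:
            raise MemoryError(f"no tier can hold {name} ({size} GB)")
    return placement

demo = [("kv_cache", 8, 100), ("optimizer_state", 40, 10),
        ("weights_hot", 12, 90), ("weights_cold", 60, 1)]
print(place(demo))
```

In this toy run the activations and hot weights fill the 24 GB GPU, the optimizer state spills to host RAM, and cold weights land on storage, which is how a roughly 2x reduction in peak device memory can arise without shrinking the model.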
SlideFormer sustains more than 95% of peak performance (high utilization) on both NVIDIA and AMD GPUs.
Reported sustained utilization measurements from experiments on NVIDIA (e.g., RTX 4090) and AMD GPUs; the summary states >95% of peak performance but does not give the per-workload utilization measurement methodology.
SlideFormer supports up to 8× larger batch sizes and up to 6× larger models on the same GPU relative to prior single-GPU baselines.
Reported comparisons to prior single-GPU baselines measuring achievable batch size and model-size capacity on the same GPU; exact baselines, workloads, and experimental configurations are not detailed in the summary.
SlideFormer reduces peak CPU and GPU memory usage by approximately 2× (roughly halving memory requirements).
Authors report peak memory measurements showing about a 2× reduction in both GPU and CPU memory compared to baselines; memory accounting method and baselines are not fully specified in the summary.
SlideFormer achieves 1.40×–6.27× higher throughput versus baseline systems.
Quantitative evaluation comparing throughput (reported as tokens/sec or updates/sec) against state-of-the-art single-GPU and multi-GPU fine-tuning pipelines (baselines are unnamed in the summary). Measurements reported across single-GPU experiments (hardware includes RTX 4090 and AMD GPUs).
SlideFormer enables fine-tuning very large LLMs (reported up to 123B+ parameters) on a single GPU (e.g., RTX 4090).
Authors report experiments and capability claims for single-GPU setups including an NVIDIA RTX 4090; model size stated as 123B+ in the paper summary. Details on exact model family, sequence length, or batch size used for the 123B+ claim are not enumerated in the summary.
The core findings (harm from ToM order mismatches and benefits from A-ToM) are robust to partners beyond LLM-driven agents.
Paper reports robustness checks testing generalization to non-LLM agent classes (details summarized in robustness section); comparisons use the same coordination metrics.
A-ToM recovers coordination performance by aligning its effective ToM depth with partners across a range of multiagent tasks.
Experimental results showing A-ToM achieves coordination levels closer to matched fixed-order pairings across the repeated matrix game, grid navigation tasks, and Overcooked when facing partners with different fixed ToM depths.
An adaptive ToM (A-ToM) agent that infers its partner's ToM order from prior interactions and conditions its predictions and actions on that estimate restores alignment and improves coordination.
Implemented A-ToM (estimation from interaction history + conditioning of partner-action predictions) and evaluated it against fixed-order agents in the four environments; reported improvements in coordination metrics when A-ToM paired with partners of varying ToM orders.
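The estimation-from-history step can be sketched with a toy model: score each candidate partner ToM depth by how often it would have predicted the partner's observed actions, and adopt the best-scoring estimate. The partner models below are stand-ins, not the paper's environment-specific predictors.

```python
from collections import Counter

# Minimal sketch of the adaptive-ToM idea (an assumption, not the paper's
# estimator): infer the partner's ToM depth from its action history.

CANDIDATE_DEPTHS = [0, 1, 2]

def predicted_action(depth, history):
    """Toy partner model over binary actions: depth-0 repeats its last
    action; each higher depth best-responds (here: flips) to what a
    depth-(d-1) reasoner would do."""
    if not history:
        return 0
    if depth == 0:
        return history[-1]
    return (predicted_action(depth - 1, history) + 1) % 2

def estimate_depth(history):
    """Score each candidate depth by how often it would have predicted the
    partner's next action from the preceding history; return the argmax."""
    scores = Counter()
    for t in range(1, len(history)):
        for d in CANDIDATE_DEPTHS:
            if predicted_action(d, history[:t]) == history[t]:
                scores[d] += 1
    return max(CANDIDATE_DEPTHS, key=lambda d: scores[d])

# A partner that alternates 0,1,0,1 looks like a depth-1 reasoner here.
print(estimate_depth([0, 1, 0, 1, 0, 1]))
```

An A-ToM-style agent would then condition its own partner-action predictions on this estimate (e.g., reason at depth `estimate + 1`), which is the alignment mechanism the paper evaluates.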
The clarification protocol elicits missing premises or confirms intent rather than producing a misaligned response.
Paper describes structured clarification templates (binary checks, multi-choice scaffolds, short clarifying questions) intended to elicit missing information; this is a design assertion without reported user-study evidence.
There are potential welfare gains from improved decision quality and trust in automation, particularly where human oversight remains required.
Conceptual welfare analysis; no welfare quantification or simulations provided.
Structured argumentation frameworks (AFs) can reduce information asymmetry by making reasoning traceable, thereby lowering search and verification costs in transactions and contracting.
Economic reasoning drawing on information-asymmetry theory; no empirical transaction-cost measurements given.
Firms offering argumentatively transparent AI can obtain competitive advantage and charge premium prices for verifiability and auditability.
Economic reasoning and market-structure inference; no empirical pricing or demand elasticity studies provided.