Evidence (8066 claims)

- Adoption: 5586 claims
- Productivity: 4857 claims
- Governance: 4381 claims
- Human-AI Collaboration: 3417 claims
- Labor Markets: 2685 claims
- Innovation: 2581 claims
- Org Design: 2499 claims
- Skills & Training: 2031 claims
- Inequality: 1382 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 417 | 113 | 67 | 480 | 1091 |
| Governance & Regulation | 419 | 202 | 124 | 64 | 823 |
| Research Productivity | 261 | 100 | 34 | 303 | 703 |
| Organizational Efficiency | 406 | 96 | 71 | 40 | 616 |
| Technology Adoption Rate | 323 | 128 | 74 | 38 | 568 |
| Firm Productivity | 307 | 38 | 70 | 12 | 432 |
| Output Quality | 260 | 71 | 27 | 29 | 387 |
| AI Safety & Ethics | 118 | 179 | 45 | 24 | 368 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 75 | 37 | 19 | 312 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 74 | 34 | 78 | 9 | 197 |
| Skill Acquisition | 98 | 36 | 40 | 9 | 183 |
| Innovation Output | 121 | 12 | 24 | 13 | 171 |
| Firm Revenue | 98 | 35 | 24 | — | 157 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 87 | 16 | 34 | 7 | 144 |
| Inequality Measures | 25 | 76 | 32 | 5 | 138 |
| Regulatory Compliance | 54 | 61 | 13 | 3 | 131 |
| Task Completion Time | 89 | 7 | 4 | 3 | 103 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 33 | 11 | 7 | 98 |
| Wages & Compensation | 54 | 15 | 20 | 5 | 94 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 27 | 26 | 10 | 6 | 72 |
| Job Displacement | 6 | 39 | 13 | — | 58 |
| Hiring & Recruitment | 40 | 4 | 6 | 3 | 53 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 11 | 6 | 2 | 41 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 6 | 9 | — | 27 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
The relationship between IR and IWE is nonlinear: marginal effects vary with the level of robotization and with other moderating factors (threshold effects, with diminishing or accelerating returns).
Nonlinearity/threshold analysis reported in the paper (models testing nonlinear functional forms or interaction/threshold terms), showing varying marginal effects of IR on IWE across levels of IR or moderators.
The pollution‑reduction effect of IR operates primarily through higher technical (R&D/technology) expenditure.
Mechanism/mediation tests showing IR is positively associated with provincial technical/R&D expenditure, and that technical expenditure is linked to lower IWE; stepwise regressions used to establish the mediating channel.
The pollution‑reduction effect of IR operates primarily through increased green innovation (measured by green patents).
Mechanism (mediation/stepwise) regressions: IR positively predicts green patenting at the provincial level, and inclusion of green patents in IWE regressions attenuates the IR effect, consistent with mediation.
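The stepwise mediation logic described in the two mechanism claims above can be sketched numerically. This is a minimal illustration with simulated data and invented coefficients, not the paper's provincial panel; variable names (`ir`, `patents`, `iwe`) are placeholders for robotization, green patenting, and the emissions outcome.

```python
# Illustrative sketch of stepwise (mediation) regressions.
# Data, names, and coefficients are hypothetical, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n = 300
ir = rng.normal(size=n)                  # industrial robotization
patents = 0.6 * ir + rng.normal(size=n)  # mediator: green patenting
iwe = -0.5 * patents - 0.1 * ir + rng.normal(size=n)

def ols(y, X):
    """OLS coefficients with an intercept prepended to X."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_step1 = ols(patents, ir)[1]   # Step 1: IR -> mediator (positive)
b_total = ols(iwe, ir)[1]       # Step 2: total effect of IR on IWE
b_direct = ols(iwe, np.column_stack([ir, patents]))[1]  # Step 3: direct effect

# Attenuation of the IR coefficient once the mediator enters the model
# is the pattern read as evidence of mediation.
print(f"step1={b_step1:.2f} total={b_total:.2f} direct={b_direct:.2f}")
```

The attenuation check (|direct| < |total| alongside a positive step-1 coefficient) is the informal criterion such stepwise designs rely on.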
Green-technology innovation acts as a threshold moderator: DE produces direct carbon-reduction effects (reducing PCE) only after green-technology innovation exceeds a critical threshold; below that threshold DE does not reduce PCE.
Threshold-regression models (panel threshold estimation) using a measured index of green-technology innovation as the threshold variable on the 278-city panel (2011–2022). Results show different coefficient regimes for DE on PCE depending on whether green-innovation is below/above the estimated threshold.
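Single-threshold estimation of the kind described above is typically done by grid-searching the threshold value that minimizes the residual sum of squares. The sketch below uses simulated cross-sectional data (not the 278-city panel) with an invented break at 0.5 in a toy green-innovation index.

```python
# Minimal sketch of single-threshold estimation (Hansen-style grid search).
# Data and the true threshold are simulated, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
n = 500
de = rng.normal(size=n)               # digital economy index
green = rng.uniform(0, 1, size=n)     # threshold variable: green innovation
true_gamma = 0.5
# Below the threshold DE has no effect on PCE; above it, DE reduces PCE.
pce = np.where(green > true_gamma, -0.8 * de, 0.0) + rng.normal(scale=0.5, size=n)

def ssr_at(gamma):
    """Residual sum of squares of the two-regime model at candidate gamma."""
    lo, hi = de * (green <= gamma), de * (green > gamma)
    X = np.column_stack([np.ones(n), lo, hi])
    resid = pce - X @ np.linalg.lstsq(X, pce, rcond=None)[0]
    return resid @ resid

# Search candidate thresholds over interior quantiles of the threshold variable.
grid = np.quantile(green, np.linspace(0.1, 0.9, 81))
gamma_hat = grid[np.argmin([ssr_at(g) for g in grid])]
print(f"estimated threshold: {gamma_hat:.2f}")
```

Restricting candidates to interior quantiles is the usual trimming step, ensuring both regimes retain enough observations to estimate separate slopes.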
The digital economy (DE) exhibits a U-shaped relationship with carbon emission efficiency (CEE): at early stages of DE development CEE worsens (declines) with DE, but beyond a certain DE level CEE improves as DE expands further.
Panel fixed-effects regressions using the same sample of 278 cities (2011–2022) with DE and DE^2 terms; the estimated coefficients on DE and DE^2 are statistically significant and imply a U-shaped relationship.
The digital economy (DE) exhibits an inverted-U relationship with per capita carbon emissions (PCE): at low levels of DE, PCE initially rises with DE, but after a turning point further DE expansion is associated with falling PCE.
Panel fixed-effects regressions on a balanced panel of 278 Chinese prefecture-level cities observed annually from 2011–2022. Models include DE and DE^2 terms; coefficients on DE and DE^2 are statistically significant in the pattern consistent with an inverted-U and a turning point is estimated from those coefficients.
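The quadratic specification behind both the U-shaped and inverted-U claims above, and the turning point implied by its coefficients, can be sketched as follows. The data are simulated with an invented turning point at DE = 1; this is not the paper's panel, and city fixed effects are omitted for brevity.

```python
# Sketch of the DE + DE^2 specification and the implied turning point.
# Simulated data; the true turning point is set to 1.0 for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
de = rng.uniform(0, 2, size=n)
# Inverted-U: PCE rises with DE up to the turning point, then falls.
pce = 2.0 * de - 1.0 * de**2 + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), de, de**2])
b0, b1, b2 = np.linalg.lstsq(X, pce, rcond=None)[0]

# With b1 > 0 and b2 < 0 the fitted curve is an inverted U; the turning
# point solves d(PCE)/d(DE) = b1 + 2*b2*DE = 0.
turning_point = -b1 / (2 * b2)
print(f"b1={b1:.2f} b2={b2:.2f} turning point={turning_point:.2f}")
```

A U-shape (the CEE claim) is the same calculation with the signs reversed: b1 < 0, b2 > 0, and the turning point marks the minimum rather than the maximum.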
Evidence of labour reallocation within rural economies following AI-driven productivity changes was observed in the reviewed literature.
Reported findings across several reviewed studies noting shifts in labour allocation and task composition on farms and in related value-chain activities.
Paper‑based regulatory environments slow DT diffusion; digitised compliance and standardised data schemas can accelerate adoption and enable AI‑driven oversight.
Findings in the review noting regulatory friction and proposed solutions; supported by case evidence where digitisation of compliance facilitated digital workflows.
DT adoption is a socio‑technical transformation that requires governance, standards, collaborative delivery models, and workforce capability building — not just technology deployment.
Conceptual synthesis and cross‑study recommendations in the reviewed literature emphasizing organizational, contractual, and governance changes alongside technology.
Both initial trust and inertia have statistically significant effects on GAICS adoption decisions.
Inferential statistical tests reported in the quantitative phase indicating significant pathways from initial trust and from inertia to adoption outcome (exact effect sizes and sample size not provided in the abstract).
Organizations’ adoption of Generative AI–enabled CRM systems (GAICS) is driven by initial trust and inertia.
Quantitative inferential analysis in the study's second phase testing the conceptual model (paper reports statistically significant relationships between initial trust, inertia, and GAICS adoption). Sample size and sector/country scope not reported in the abstract.
Better predictive models can shrink asymmetric‑information wedges and potentially reduce interest spreads for high‑quality but thin‑file borrowers; however, model errors or biased features can systematically exclude certain groups.
Conceptual analysis of model performance, bias risk, and implications for pricing; supported by literature on algorithmic bias and selective case evidence but not empirical causal tests within the paper.
Blockchain applications (tokenization, smart contracts) have potential for transparent, programmable financing and lower transaction costs but remain nascent and face legal and market adoption barriers.
Qualitative synthesis of emerging blockchain use cases and legal/regulatory analysis; characterization is forward‑looking and based on current maturity levels rather than empirical measurement of outcomes.
Crowdfunding is useful for market validation and early‑stage capital but has limited ticket sizes and is not scalable for growth capital needs.
Comparative assessment of financing models and illustrative examples; conclusion based on typical crowdfunding ticket sizes and market practice rather than new representative data.
Revenue‑based financing offers flexible repayments tied to cash flow and suits startups with recurring revenues, but can be more expensive over time and is less regulated.
Qualitative evaluation of product features in the comparative framework and literature synthesis; based on product design characteristics rather than primary quantitative pricing analysis in the paper.
FinTech lending platforms provide high accessibility and speed through alternative data and automated underwriting, with variable costs and scalability but raise regulatory and data‑privacy concerns.
Comparative qualitative assessment and illustrative case studies demonstrating faster approvals and broader reach for thin‑file borrowers; evidence is descriptive and not nationally representative or causally identified.
Traditional sources (bank loans, government schemes) offer lower nominal cost for creditworthy borrowers and regulatory protections, but suffer from collateral requirements, slow processes, and limited outreach to informal/small firms.
Comparative framework evaluation across five variables and institutional/regulatory synthesis; findings are qualitative and built on established banking characteristics rather than new representative quantitative data in the paper.
AI‑driven protein structure prediction will reallocate economic value across the biotech R&D stack—compressing early discovery costs, increasing returns to downstream validation/optimization, and favoring actors combining data, compute, and domain expertise.
Paper summarizes this as an overarching implication in the 'Overall' paragraph, integrating prior methodological and economic arguments; no quantitative economic model or empirical measurement is provided.
Labor demand will shift away from low‑throughput experimental structure determination toward ML model engineers, computational biologists, and integrative experimentalists, requiring retraining in experimental groups.
Paper states this in 'Labor and skill shifts'; it is an inferred labor market consequence without workforce surveys or models in the text.
Single‑sequence protein language models (e.g., ESMFold) trade some accuracy for much higher speed and scalability compared with MSA/template‑based models.
Paper describes single‑sequence approaches that remove MSA dependence and rely on very large pretrained language models, stating they trade accuracy for speed/scalability; no head‑to‑head metrics are presented in the text.
AI transforms learning conditions by enabling on-demand problem-solving help for students.
Review of recent literature on AI tutoring/assistive tools and policy documents describing technology adoption; illustrated in comparative case studies (secondary sources).
There are incentives to develop privacy‑preserving ML (federated learning, split learning) and lightweight secure hardware for edge VR devices; public funding or prizes could accelerate adoption, whereas strict data‑localization constraints might slow innovation or shift R&D to lenient jurisdictions.
Policy and innovation incentives discussion synthesized from reviewed studies and economic reasoning; no empirical innovation rate or funding‑impact analysis presented.
EU coherence (or lack thereof) will influence where firms locate AI R&D and scale platform services, shaping long-term competitiveness in global AI markets.
Qualitative international competitiveness reasoning and scenario analysis; no firm-level relocation or investment data presented.
Changes in platform governance or data-sharing obligations affect availability of training and operational data, with direct impacts on AI model performance and productivity gains.
Policy analysis and scenario reasoning linking governance changes to data access and downstream model performance; no empirical performance metrics provided.
Stricter or fragmented regulation can dampen investment in AI and platform features, while coherent, predictable frameworks can support competition and trustworthy AI deployment.
Scenario/impact reasoning and policy analysis drawing on economic logic; no primary quantitative investment data in the brief.
The Digital Omnibus initiative could materially reshape the coherence and implementation of existing EU digital regulation—notably the Digital Services Act (DSA)—with important consequences for platform governance and AI policy.
Policy and legal review of the Omnibus proposal in relation to the DSA and related EU instruments; scenario/impact reasoning; no primary quantitative data reported.
The EU’s stringent rules may raise compliance costs for firms but can create trustworthy‑AI market advantages.
Policy analysis linking observed EU regulatory stringency to expected economic effects (theoretical inference; not empirically tested in the paper).
Algeria’s emphasis on capacity and technological independence suggests an inward‑looking industrial policy and potential state support for domestic AI firms.
Interpretation of Algeria’s strategy documents and policy signals identified in the document analysis.
Differences in institutional capacity, civil–military interfaces, and normative priorities explain divergent regulatory outcomes between jurisdictions.
Comparative case‑based literature review synthesizing institutional descriptions and normative orientations across the three jurisdictions.
Personalized AI can increase consumer surplus but also enable discriminatory pricing and welfare losses for vulnerable groups; consent design affects distribution of benefits and risks.
Economic theory and ethical analysis discussed during the workshop and in position papers; no empirical welfare analysis provided in the summary.
Strict consent regimes increase compliance costs but may increase user trust and long-run demand; lax regimes favor short-term data capture but expose firms to legal and reputational risk.
Theoretical trade-off described in the workshop's economic implications and policy discussion; presented as a conceptual equilibrium analysis without empirical estimation in the summary.
Effectiveness of ChatGPT varied by discipline; not all course contexts showed significant gains from allowing its use.
Heterogeneous treatment effects observed across the six courses; GLM and non-parametric tests indicated variation in effect sizes and statistical significance by course/discipline.
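Heterogeneity of this kind, where the same treatment shows significant uplift in some courses and none in others, can be illustrated with per-course non-parametric tests. The sketch below invents three courses and effect sizes and uses a permutation test in place of the paper's GLM and non-parametric battery.

```python
# Toy illustration of heterogeneous treatment effects across courses.
# Courses, effect sizes, and outcomes are invented for the sketch.
import numpy as np

rng = np.random.default_rng(3)
effects = {"stats": 0.8, "writing": 0.4, "history": 0.0}  # true per-course uplift

def perm_pvalue(treated, control, n_perm=2000):
    """Two-sided permutation test for a difference in means."""
    obs = treated.mean() - control.mean()
    pooled = np.concatenate([treated, control])
    k = len(treated)
    diffs = np.empty(n_perm)
    for i in range(n_perm):
        rng.shuffle(pooled)
        diffs[i] = pooled[:k].mean() - pooled[k:].mean()
    return np.mean(np.abs(diffs) >= abs(obs))

results = {}
for course, eff in effects.items():
    control = rng.normal(size=120)
    treated = rng.normal(loc=eff, size=120)
    results[course] = (treated.mean() - control.mean(),
                       perm_pvalue(treated, control))

for course, (diff, p) in results.items():
    print(f"{course}: uplift={diff:.2f}, p={p:.3f}")
```

Averaging over the three courses would report a positive pooled effect while masking that one course shows no gain, which is exactly why the claim stresses course-level variation.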
AI adoption acts as a site of power reconfiguration: roles, relationships, and accountability structures shift as AI is integrated into workflows.
Qualitative workshop data from 15 UX designers describing anticipated or observed shifts in accountability and role boundaries; cross-scale thematic synthesis.
Discourses of efficiency carry ethical and social dimensions—responsibility, trust, and autonomy become central concerns when tools shift who does what and who is accountable.
Recurring themes from the 15 UX designers' discussions and design choices during workshops; thematic coding emphasized responsibility, trust, autonomy linked to efficiency claims.
At the team scale, adoption triggers negotiations over collaboration patterns, division of responsibility, and maintaining design rigor.
Group workshop activities and discussions among UX designers (n=15) where participants described team negotiation scenarios; team-level themes identified in analysis.
At the individual scale, designers expressed trade-offs among efficiency gains, opportunities for skill development, and feelings of professional value.
Individual- and small-group reflections in the 15-person workshop study; thematic coding highlighted these three recurring themes at the individual level.
Organizations frame AI adoption around competitiveness and efficiency, while workers (UX designers) weigh those efficiency framings against professional worth, learning, and autonomy.
Participants' reports during the qualitative design workshops (n=15) showing differences between organizational rhetoric and worker concerns.
Adoption outcomes depend on interactions among individual, team, and organizational incentives and norms (three analytic scales).
Cross-scale coding and synthesis of workshop data from 15 UX designers; analyses grouped themes into individual, team, and organizational scales.
Designers’ decisions about integrating AI reflect trade-offs between efficiency and social/ethical concerns (skill development, autonomy, accountability).
Workshop prompts and group discussions with 15 UX designers; thematic analysis identified recurring trade-off narratives between efficiency and professional/ethical considerations.
AI adoption reconfigures roles, responsibilities, trust, and power within organizations.
Qualitative data from design workshops with 15 UX designers; participants' reflections and group discussions coded using cross-scale thematic analysis (individual, team, organizational).
Analytical inequalities derived in the model delineate parameter regions (functions of AI capability growth rate, diffusion speed, and reinstatement elasticity) that separate stable/convergent adjustments from explosive demand-driven crises.
Closed-form analytical derivations presented in the model section of the paper, supplemented by numerical exploration of parameter space (phase diagrams).
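The paper's closed-form inequalities are not reproduced here; purely as an illustration of how such a phase diagram is built numerically, the sketch below scans a grid over two of the named parameters against a placeholder stability condition. The condition `eta > g * d` is invented for the example and is not the paper's derived inequality.

```python
# Purely illustrative phase-diagram scan. The stability condition is a
# placeholder, NOT the paper's inequality: a point is marked stable when
# reinstatement elasticity exceeds diffusion-amplified capability growth.
import numpy as np

g_grid = np.linspace(0.0, 1.0, 50)   # AI capability growth rate
d_grid = np.linspace(0.0, 1.0, 50)   # diffusion speed
eta = 0.3                            # reinstatement elasticity (fixed slice)

G, D = np.meshgrid(g_grid, d_grid)
stable = eta > G * D                 # boolean phase diagram on the (g, d) slice

share_stable = stable.mean()
print(f"stable share of the (g, d) slice at eta={eta}: {share_stable:.2f}")
```

Sweeping `eta` and stacking such slices recovers the full three-parameter region boundary that the analytical inequalities describe in closed form.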
AI-to-AI communities on Moltbook exhibit discourse that is disproportionately introspective, ritualized in interaction, and affectively redirective, distinguishing it from typical human conversation.
Synthesis of empirical findings from topic modeling (concentrated self-reference), lexical/structural analyses (high formulaic comment rate), coherence metrics (rapid decay with depth), and emotion classification (low alignment, frequent affective redirection) on the 23-day Moltbook dataset.
Heterogeneous and changing users (skill, mental models, incentives) produce heterogeneous and time-varying treatment effects, complicating inference from average uplift estimates.
Practitioner descriptions from 16 interviews highlighting user heterogeneity and learning/adaptation over time; authors' implication that averages may be insufficient.
Human uplift studies (typically RCTs measuring how AI changes human performance relative to a status quo) are a useful tool for informing deployment and policy decisions but face systematic validity challenges when applied to frontier AI systems.
Qualitative thematic synthesis of semi-structured interviews with 16 experienced practitioners across biosecurity, cybersecurity, education, and labor; authors' analytic mapping of interview themes to research lifecycle stages.
Governance constraints induce measurable trade-offs between efficiency and compliance; the magnitude of these trade-offs depends on topology and system load.
Simulation experiments in the ablation study varied governance constraint parameters and load, measuring compliance rates and efficiency (value/throughput). Results show systematic reductions in efficiency as compliance constraints tighten, with the effect size modulated by graph topology and load levels.
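An ablation of this shape, tightening a compliance parameter and measuring the efficiency cost, can be sketched with a toy micro-simulation. Everything below (task costs, overhead scaling, compliance probabilities) is invented and much simpler than the paper's agent/graph model; it only reproduces the qualitative trade-off.

```python
# Toy governance ablation: tighter compliance checks raise per-task
# overhead (lower throughput) while raising the compliance rate.
import numpy as np

rng = np.random.default_rng(4)

def run(tightness, load=200, budget=80.0):
    """Process up to `load` tasks within a fixed time budget.
    Each task costs base work plus a compliance overhead growing with
    constraint tightness; returns (throughput, compliance_rate)."""
    base = rng.uniform(0.3, 0.7, size=load)
    overhead = tightness * rng.uniform(0.5, 1.5, size=load)
    done = np.cumsum(base + overhead) <= budget
    # Tasks pass compliance with probability rising in tightness.
    compliant = rng.random(load) < (0.6 + 0.35 * tightness)
    return done.sum(), compliant[done].mean()

results = {}
for t in (0.0, 0.5, 1.0):
    results[t] = run(t)
    print(f"tightness={t:.1f} throughput={results[t][0]} "
          f"compliance={results[t][1]:.2f}")
```

Varying `load` and the cost distributions would play the role of the topology and load moderators reported in the study.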
AI agents are useful as breadth tools and for pre-deployment checks but lack the protocol-specific and adversarial reasoning required to replace human auditors; human-in-the-loop workflows are the most effective way to deploy them.
Study observations: agents reliably flag well-known patterns and respond to human-provided context, but fail to perform robust end-to-end exploit generation and are sensitive to scaffolding and configuration.
NFD can raise productivity in expert-heavy tasks by capturing tacit process knowledge and reducing repetitive cognitive effort, but the effect on employment is nuanced—routine parts may be automated while humans remain central to oversight and knowledge contribution.
Claims drawn from implications and the case study where analyst effort per task decreased and practitioners reported value; employment impact discussion is conceptual and speculative.
Highly personalized agents developed via NFD create stronger switching costs because crystallized knowledge assets are sticky, and economies of scale depend on the transferability of those assets across users or firms.
Conceptual reasoning in the paper's market structure and returns sections; supported by qualitative observations from the case study about personalization and reuse limits. No large-scale market data.
NFD shifts the economic tradeoff from large up-front engineering investment to ongoing human-in-the-loop investment; marginal cost of improving an agent becomes tied to practitioner time and crystallization efficiency rather than purely engineering labor.
Implications for AI economics section—conceptual analysis drawing on the NFD model and case study observations. No large-scale economic data provided.
The particular statement’s wording/ambiguity is a dominant source of labeling variability (statement dependence outweighs annotator-level effects).
Variance observed across repeated labeling of the same statements and strong statement-level effects in GEE models that account for repeated observations per statement and per participant.
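The statement-versus-annotator decomposition behind this claim can be illustrated with a fully crossed design: every statement labeled by every annotator, with moment estimates of the two variance components. The simulation below builds in dominant statement-level variance by construction; sizes and scales are invented, and it stands in for, rather than reproduces, the paper's GEE models.

```python
# Illustrative variance decomposition for repeated labels: 100 statements,
# each labeled by all 20 annotators. Simulated so that statement-level
# variance dominates, matching the pattern the claim describes.
import numpy as np

rng = np.random.default_rng(5)
n_stmt, n_ann = 100, 20
stmt_eff = rng.normal(scale=1.0, size=n_stmt)  # statement wording/ambiguity
ann_eff = rng.normal(scale=0.3, size=n_ann)    # annotator leniency
labels = (stmt_eff[:, None] + ann_eff[None, :]
          + rng.normal(scale=0.5, size=(n_stmt, n_ann)))

# Moment estimates in a crossed design: variance of row means picks up the
# statement component; variance of column means picks up the annotator one
# (each inflated slightly by averaged residual noise).
var_stmt = labels.mean(axis=1).var()
var_ann = labels.mean(axis=0).var()
print(f"statement-level variance ~ {var_stmt:.2f}, annotator-level ~ {var_ann:.2f}")
```

A GEE with clustering on statement (and repeated observations per participant), as the evidence line describes, is the regression analogue of this comparison.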