Evidence (7156 claims)

Claims by category (categories overlap, so the per-category counts sum to more than the 7156 total):

- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
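As a quick worked example of reading the matrix, the share of positive findings per outcome can be computed from the reported rows. The snippet below is a hypothetical helper, not part of the source; note that in some rows the four direction counts do not sum exactly to the reported total (e.g. Other: 369 + 105 + 58 + 432 = 964 vs 972), so shares here use the reported totals as the denominator.

```python
# Hypothetical helper: share of positive findings per outcome, taken from the
# matrix above. Rows are (positive, negative, mixed, null, total); table "—"
# entries are treated as 0.
rows = {
    "Firm Productivity": (273, 33, 68, 10, 389),
    "Job Displacement": (5, 28, 12, 0, 45),
    "Task Completion Time": (71, 5, 3, 1, 80),
}
positive_share = {
    name: pos / total for name, (pos, neg, mixed, null, total) in rows.items()
}
```

For example, Task Completion Time is among the most one-sidedly positive outcomes (71 of 80 claims), while Job Displacement skews negative.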
The region remains reactive, positioned as a 'recipient' rather than a 'creator' or an effective partner in the AI ecosystem.
Characterization reported by the authors based on their regional research and field study (qualitative findings from leaders across public/private sectors).
This gap hinders many governments in the region from moving their countries into the ranks of those benefiting from the AI revolution, both in developing the public sector and in supporting economic growth and social development.
Authors' analysis and interpretation based on the regional research/field study described in the report.
The Arab region’s capacity for Artificial Intelligence (AI) governance remains limited relative to the accelerating pace of global AI developments and associated challenges.
Stated conclusion in the executive report based on a regional field study (authors' analysis of interviews/surveys and research across the region).
These harms increasingly translate into financial loss through litigation, enforcement penalties, brand erosion, and failed deployments.
Paper argues this linkage using conceptual reasoning and illustrative examples/case vignettes; cites regulatory and market incidents but does not provide systematic empirical estimates or a sample size.
AI systems can create material harms: discriminatory outcomes, privacy and security failures, opacity in decision logic, and regulatory noncompliance.
Paper lists these harms as core risks based on prior literature, regulatory developments, and conceptual risk analysis. Presented as well-documented categories rather than as new empirical findings; no sample size reported.
As artificial intelligence assumes cognitive labor, no existing quantitative framework predicts when human capability loss becomes catastrophic.
Introductory/background claim asserted by authors motivating the study (literature gap claim).
Broader (more general) AI scope lowers the critical threshold K* at which capability collapse occurs.
Model sensitivity analysis / simulations showing K* varies with assumed scope of AI (reported in model calibration discussion).
The model identifies a critical, scope-dependent threshold K* ≈ 0.85 (broader AI scope lowers K*) beyond which capability collapses abruptly, the 'enrichment paradox.'
Model analysis and simulations calibrated across domains (paper reports computed threshold K* ≈ 0.85 and notes dependence on AI scope).
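The reported threshold behavior can be sketched with a purely illustrative toy. The functional forms below are invented for illustration and are not the paper's model; only the value K* ≈ 0.85 and the direction of the scope effect come from the claims above.

```python
# Purely illustrative toy of the reported threshold behavior (invented forms).
def k_star(scope):
    """Hypothetical: broader AI scope (0..1) lowers the collapse threshold
    from the reported baseline of 0.85."""
    return 0.85 * (1.0 - 0.3 * scope)

def capability(reliance, scope):
    """Gradual decline in human capability below the threshold, then an
    abrupt collapse once reliance crosses k_star(scope)."""
    if reliance < k_star(scope):
        return 1.0 - 0.2 * reliance
    return 0.1
```

Crossing K* produces a discontinuous drop rather than a smooth decline, which is the qualitative shape the 'enrichment paradox' claim describes.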
Reliance on massive, schema-heavy prompts results in prohibitive per-token API costs and high latency, hindering scalable production deployment.
Introductory problem statement in the paper arguing that large context prompts increase per-token API costs and latency for API-based LLMs; no quantitative study or sample size provided for this claim within the excerpt.
Fabrication risk is not an anomalous glitch but a foreseeable consequence of the technology's design, with direct implications for the evolving duty of technological competence.
Conclusion drawn from the paper's theoretical/physics-based analysis and the simulated scenario; stated in the abstract as the authors' interpretation and policy/legal implication.
The paper presents the physics-based analysis in a legal-industry setting by walking through a simulated brief-drafting scenario.
Methodological claim explicitly stated in the abstract: use of a simulated brief-drafting scenario to demonstrate the analysis.
Although commonly dismissed as random 'hallucination', recent physics-based analysis of the Transformer's core mechanism reveals a deterministic component: the AI's internal state can cross a calculable threshold, causing its output to flip from reliable legal reasoning to authoritative-sounding fabrication.
Paper cites/relies on 'recent physics-based analysis' of Transformer mechanisms and states that it demonstrates a calculable threshold; the paper also purports to present this science in a legal setting (via simulation). No numeric experimental sample provided in the excerpt.
Courts confront a novel threat to the integrity of the adversarial process due to fabricated authorities produced by generative AI.
Asserted in the abstract as a consequence of fabricated outputs; supported by the paper's conceptual argument and simulation reference rather than empirical court-case analysis.
Attorneys who unknowingly file such fabrications face professional sanctions, malpractice exposure, and reputational harm.
Stated as a legal/consequential claim in the abstract; no empirical evidence, case counts, or legal-statistics provided in the excerpt.
For law in particular, generative AI introduces a perilous failure mode in which the AI fabricates fictitious case law, statutes, and judicial holdings that appear entirely authentic.
Claimed in the paper; supported by the paper's analytic argument and a simulated brief-drafting scenario referenced in the abstract (no numeric sample provided).
AI-enabled, democratised production is more likely to intensify competition and produce winner-take-most outcomes than to generate broadly distributed entrepreneurial success.
Synthesised theoretical prediction based on the unified framework (attention scarcity + free-entry dilution + superstar/preferential attachment dynamics) developed in the paper; no empirical validation provided.
When the framework is extended to include quality heterogeneity and reinforcement dynamics, equilibrium outcomes exhibit declining average payoffs.
Analytical extension of the baseline formal model to incorporate heterogeneous quality and reinforcement (preferential attachment) dynamics; theoretical derivation in the paper; no empirical sample.
In markets with near-zero marginal costs and free entry, increases in the number of producers dilute average attention and returns per producer.
Formal theoretical model introduced in the paper (Builder Saturation Effect) that assumes near-zero marginal costs, free entry, and finite human attention; no empirical sample or experimental data reported.
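The dilution logic can be made concrete with a minimal free-entry sketch. All numbers below are hypothetical; only the structure (fixed attention pool, near-zero marginal cost, entry until average profit hits zero) follows the claims above.

```python
# Minimal free-entry sketch of the Builder Saturation Effect (all numbers
# hypothetical). A fixed pool of human attention A is split evenly across
# N producers.
A = 1_000_000          # total attention units
value_per_unit = 0.01  # revenue per attention unit

def profit(n, cost):
    """Average per-producer profit with n producers and a fixed cost each."""
    return (A / n) * value_per_unit - cost

def n_star(cost):
    """Free-entry equilibrium: entry continues until average profit is zero,
    so N* = A * value_per_unit / cost."""
    return A * value_per_unit / cost
```

Lowering the entry cost (e.g. AI-enabled production) raises N*, so per-producer attention A/N* and per-producer returns both fall: with these numbers, cutting the cost from 50 to 5 multiplies the producer count tenfold and divides each producer's attention share by ten.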
Agent memories currently remain private and non-transferable because there is no way to validate their value.
Descriptive assertion in the paper about current state of agent memories; no empirical survey or measurement cited.
Insufficient organizational resources significantly inhibit AI adoption in procurement (β = -0.19, p < 0.05).
Same questionnaire survey (n=326) and multiple linear regression analysis; reported coefficient β=-0.19 with p<0.05.
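The kind of estimate reported here can be illustrated with a synthetic regression sketch. The data below are simulated, not the study's n = 326 survey; the variable names are hypothetical, and the true coefficient on the resource-shortfall predictor is set to -0.19 by construction so the OLS estimate recovers a value near it.

```python
import numpy as np

# Synthetic sketch of a multiple linear regression like the one reported
# (NOT the study's data): simulated sample of n = 326 with a true
# coefficient of -0.19 on the resource-shortfall predictor.
rng = np.random.default_rng(42)
n = 326
resource_shortfall = rng.normal(size=n)  # hypothetical standardized predictor
firm_size = rng.normal(size=n)           # hypothetical control variable
adoption = (-0.19 * resource_shortfall
            + 0.30 * firm_size
            + rng.normal(scale=1.0, size=n))

# OLS via least squares: columns are intercept, predictor, control.
X = np.column_stack([np.ones(n), resource_shortfall, firm_size])
beta_hat, *_ = np.linalg.lstsq(X, adoption, rcond=None)
```

`beta_hat[1]` is the estimated coefficient on the resource-shortfall predictor; at this sample size its standard error is roughly 1/sqrt(n) ≈ 0.06, so the estimate lands close to the true -0.19.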
Measuring only technical model performance (such as predictive accuracy) is insufficient for assessing the strategic impact of AI in drug discovery.
Argued in the paper as a critique of current evaluation practices; presented as a conceptual point rather than supported by new empirical data in the excerpt.
Pressure remains high to increase the probability of success to improve the effectiveness of pharmaceutical R&D.
Asserted in the paper as motivational context for the work; framed as an industry pressure point rather than backed by a specific empirical sample or quantified survey in the excerpt.
Costs and failure rates in the pharmaceutical R&D process have kept rising and have not fundamentally improved over the last decade.
Stated as a contextual observation in the paper's opening paragraph; presented as a summary of industry trends (no specific dataset, sample size, or citation included in the excerpt).
Without support, performance stays stable up to three issues but declines as additional issues increase cognitive load.
Empirical study / human-AI negotiation case study in a property rental scenario that varied the number of negotiated issues; the paper reports observed performance across different numbers of issues (no sample size for this specific comparison stated in the abstract).
Reliance on automated content generation introduces risks of cognitive overreliance, algorithmic bias, and strategic misalignment.
The paper articulates these risks as conceptual/qualitative concerns in its discussion; no quantitative estimates or empirical tests of these specific risks are reported in the provided excerpt.
Wide disagreement among AIs created confusion and undermined appropriate reliance on advice.
Reported experimental finding from the paper: manipulating within-panel disagreement across tasks produced wide disagreement conditions that, according to the abstract, led to confusion and reduced appropriate reliance. No quantitative metrics reported in abstract.
High within-panel consensus fostered overreliance on AI advice.
Experimental manipulation of within-panel consensus across the three tasks; the abstract reports that high consensus increased participants' reliance on AI (interpreted as overreliance). Specific measures and sample size not provided in abstract.
Current (pay-upfront) models impose a financial barrier to entry for developers, limiting innovation and excluding actors from emerging economies.
Analytical argument in the paper based on cost-structure reasoning and literature on barriers to entry; no empirical sample or causal estimate provided.
Improvements in AI ('better' AI) amplify the excess automation as well.
Model comparative statics: increased AI capabilities raise private incentives to automate, leading to more displacement than is socially optimal; theoretical analysis only.
More competition amplifies the excess automation (the automation arms race).
Comparative-statics result in the competitive task-based theoretical model showing increased competition raises firms' incentives to automate; no empirical sample.
The resulting loss from excess automation harms both workers and firm owners.
Welfare comparisons from the model showing negative payoff changes for workers (lower wages/less employment) and reduced owner returns when automation is excessive; theoretical analysis, no empirical data.
In a competitive task-based model, demand externalities trap rational firms in an automation arms race, displacing workers well beyond what is collectively optimal.
Formal equilibrium analysis in the paper's theoretical competitive task-based model; comparative statics and welfare analysis (no empirical sample).
Knowing that AI-driven displacement can erode demand is not enough for firms to stop automating.
Analytical result from the paper's competitive task-based model showing firms' incentives do not internalize demand externalities; no empirical sample.
If AI displaces human workers faster than the economy can reabsorb them, it risks eroding the very consumer demand firms depend on.
Theoretical statement in the paper's motivating premise; no empirical sample reported (conceptual argument about aggregate demand effects when displacement outpaces reabsorption).
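The arms-race logic in these claims can be illustrated with a toy 2x2 game. All payoff numbers are invented, and the paper's model is a task-based continuum, not this game; the sketch only shows the prisoner's-dilemma structure the claims describe: automating is always individually better, but when both firms automate, the demand externality leaves both worse off than mutual restraint.

```python
# Toy 2x2 payoff table for the automation arms race (numbers invented).
# Keys: (firm1_action, firm2_action) -> (firm1_profit, firm2_profit).
# When both automate, displaced workers' lost income erodes aggregate
# demand, so both firms earn less than if both had held.
payoffs = {
    ("hold", "hold"): (10, 10),
    ("automate", "hold"): (14, 7),
    ("hold", "automate"): (7, 14),
    ("automate", "automate"): (8, 8),
}

def best_response(other_action):
    """Firm 1's best reply to firm 2's action (the game is symmetric)."""
    return max(("hold", "automate"),
               key=lambda a: payoffs[(a, other_action)][0])
```

Because "automate" is the best reply to either action, the unique equilibrium is mutual automation at (8, 8), strictly worse for both firms than (10, 10): knowing about the demand erosion does not change any single firm's incentive to automate.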
Fukui is Japan's least-visited prefecture.
Descriptive claim in the paper specifying the study site (Fukui) as the country's least-visited prefecture; no supporting national rankings provided in the excerpt.
We quantify an annual opportunity gap of 865,917 unrealized visits, equivalent to approximately 11.96 billion yen (USD 76.2 million) in lost revenue.
Model-based estimate produced by the DSS using the analyzed datasets and the DHDE-informed optimization; figure reported directly in the paper.
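The reported figures can be cross-checked with simple arithmetic. The two derived quantities below (revenue per unrealized visit and the implied exchange rate) are not stated in the paper; they follow directly from the numbers quoted above.

```python
# Back-of-envelope check on the reported Fukui figures (derived quantities,
# not reported in the paper).
visits_gap = 865_917          # unrealized visits per year
lost_revenue_yen = 11.96e9    # reported lost revenue, JPY
lost_revenue_usd = 76.2e6     # reported lost revenue, USD

# Implied revenue per unrealized visit and implied JPY/USD conversion rate.
per_visit_yen = lost_revenue_yen / visits_gap
implied_jpy_per_usd = lost_revenue_yen / lost_revenue_usd
```

The figures are internally consistent: roughly 13,800 yen of revenue per unrealized visit, at an implied exchange rate of about 157 JPY/USD.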
For regions experiencing demographic decline and structural stagnation, the primary risk is 'under-vibrancy', a condition where low visitor density suppresses economic activity and diminishes satisfaction.
Conceptual claim and problem framing provided by the authors (theoretical/qualitative argument in the paper).
Most research in urban informatics and tourism focuses on mitigating overtourism in dense global cities.
Author statement in introduction positioning the paper relative to existing literature; no quantitative literature review or citation counts reported in the excerpt.
Developers and experts still lack a shared view, resulting in repeated coordination, clarification rounds, and error-prone handoffs.
Observational/qualitative claim in paper describing current MSD practice (no numeric sample reported).
Even with AI coding assistants like GitHub Copilot, individual coding tasks are semi-automated, but the workflow connecting domain knowledge to implementation is not.
Qualitative observation/comparative statement in paper (no empirical sample reported).
Multidisciplinary Software Development (MSD) requires domain experts and developers to collaborate across incompatible formalisms and separate artifact sets.
Conceptual/argument in paper framing the problem (no empirical sample reported).
Strict data sovereignty laws fragment regional collaboration between African Union member states and hinder AI development.
Stated in the paper as a policy barrier; supported by the authors' policy review of data sovereignty rules and their implications for cross-border data sharing.
Restricted cloud access due to payment system mismatches and volatile exchange rates is a barrier to AI adoption in Africa.
Claim made in the paper as part of the list of barriers; based on the authors' qualitative and quantitative review and reference to policy/financial constraints across African countries.
Important barriers include limited access to high-performance computing (HPC).
Paper identifies limited HPC access as a key barrier; supported by the authors' collection and consolidation of HPC availability data via the Africa AI Compute Tracker (ACT).
Africa's participation in modern AI development is constrained by severe infrastructural and policy gaps.
Stated as a central argument in the paper; supported by the paper's synthesis of qualitative and quantitative evidence and reference to official declarations on AI adoption across the continent.
Only 12% of AI market value maps to physical activities.
Descriptive aggregate: authors categorize and report that 12% of estimated AI market value maps to physical activities.
Off-the-shelf implementations of DRL have seen mixed success, often plagued by high sensitivity to the hyperparameters used during training.
Statement in the paper's abstract describing observed/prior performance issues with standard DRL implementations; implies literature/empirical experience but no specific experiment/sample given in the abstract.
Coal-based energy consumption structure and a secondary-industry-dominated industrial structure significantly inhibit regional TFCP and have strong negative spatial spillovers.
Control-variable coefficients from Spatial Durbin Model on panel data (30 provinces, 2010–2023) showing statistically significant negative direct and indirect effects for coal-dominant energy structure and secondary-industry share.
Applying them to hardware-in-the-loop (HIL) embedded and Internet-of-Things (IoT) systems remains challenging due to the tight coupling between software logic and physical hardware behavior; code that compiles successfully may still fail when deployed on real devices because of timing constraints, peripheral initialization requirements, or hardware-specific behaviors.
Conceptual/engineering reasoning stated in the paper describing known HIL/IoT failure modes (no experimental quantification provided in this excerpt).
The occupational groups most vulnerable to AI-driven transformation are office workers, data-entry operators, call-center workers, accountants, and administrative staff with routine analytical and administrative tasks.
Results of the envelope-model assessment for the sampled European Union countries that identify occupations with high exposure/vulnerability to AI-driven change; occupations are listed explicitly in the paper.