Evidence (7395 claims)

Claims by topic:

- Adoption: 7395
- Productivity: 6507
- Governance: 5877
- Human-AI Collaboration: 5157
- Innovation: 3492
- Org Design: 3470
- Labor Markets: 3224
- Skills & Training: 2608
- Inequality: 1835
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 736 | 1615 |
| Governance & Regulation | 664 | 329 | 160 | 99 | 1273 |
| Organizational Efficiency | 624 | 143 | 105 | 70 | 949 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 348 | 109 | 48 | 322 | 836 |
| Output Quality | 391 | 120 | 44 | 40 | 595 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 275 | 143 | 62 | 34 | 521 |
| AI Safety & Ethics | 183 | 241 | 59 | 30 | 517 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 105 | 40 | 6 | 187 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 78 | 8 | 1 | 151 |
| Regulatory Compliance | 69 | 64 | 14 | 3 | 150 |
| Training Effectiveness | 81 | 15 | 13 | 18 | 129 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
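The directional shares implied by the matrix can be read off directly. A minimal sketch, using counts copied from three rows above; note that some row totals in the table exceed the sum of the four listed directions, so shares here are computed over the listed cells only:

```python
# Share of positive findings per outcome, with counts copied from the
# evidence matrix above. Some row totals include directions not listed,
# so shares are taken over the four listed cells only.
rows = {
    "Job Displacement":    {"positive": 11,  "negative": 71,  "mixed": 16, "null": 1},
    "Inequality Measures": {"positive": 36,  "negative": 105, "mixed": 40, "null": 6},
    "Firm Productivity":   {"positive": 385, "negative": 46,  "mixed": 85, "null": 17},
}

def positive_share(counts: dict) -> float:
    """Fraction of listed claims whose direction of finding is positive."""
    return counts["positive"] / sum(counts.values())

for name, counts in rows.items():
    print(f"{name}: {positive_share(counts):.1%} positive")
```

For example, only about 11% of Job Displacement claims are positive, versus roughly 72% for Firm Productivity over the listed cells.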
Adoption
Because deception effectiveness declines with transparency and attacker learning, strategic externalities can arise across actors (e.g., disclosures by one actor can reduce deception value for others), suggesting roles for coordination or insurance markets.
Conceptual implication and economic argument in the discussion section; not supported by explicit multi-actor modeling or empirical market analysis in the paper (argumentative/theoretical).
More granular and auditable credentials may shift signaling dynamics and risk credential inflation; regulators should monitor credential proliferation and market value.
Conceptual warning in paper (theoretical); no empirical credential-market study included.
These infrastructural and access constraints create unequal starting points that can amplify later disparities in labor-market preparedness.
Inference drawn from observed survey disparities in access, hands-on training, and preparedness; the study did not directly measure labor-market outcomes but links preparedness to potential labor-market effects in the discussion.
Top-down AI guidance from institutions is common, while grassroots input from educators and students is often missing, which reduces policy relevance and uptake.
Survey items and thematic coding indicating the origin and participatory nature of institutional AI guidelines; comparative prevalence reported across open- and closed-ended responses.
Overreliance on GenAI CDS may lead to deskilling of clinicians, eroding judgment over time and increasing systemic vulnerability.
The paper cites theoretical risk and references limited longitudinal concerns; empirical longitudinal studies demonstrating deskilling are scarce per the paper’s stated evidence gaps.
Commercial structural biology services for routine solved folds may be commoditized, pushing firms toward complex validation, novel targets, or high‑value contract research.
Paper suggests this in 'Disruption of service markets' as a projected industry response; it is a strategic implication rather than an empirically demonstrated trend in the text.
Organizational compliance, governance, and transaction costs shape which AI uses are feasible, producing heterogeneity in adoption across firms; trust and accountability frictions can slow adoption even when productivity gains exist.
Workshop participants (n=15) reported compliance and governance considerations; authors infer broader organizational heterogeneity and friction effects from these qualitative data.
Designers’ expressed concerns about skill development suggest potential long-term effects on human capital accumulation; adoption that reduces learning opportunities could lower future wages or employability.
Participants' concerns captured in qualitative workshops (n=15); claim is an extrapolation to labor-market outcomes rather than direct measurement in the study.
Legacy systems and siloed incentives create switching frictions that slow diffusion of AI-enabled ISP; early adopters may achieve sustained cost and service advantages and vendors bundling technology with change management could capture large rents.
Authors' argument informed by case observations of switching costs and vendor roles; no causal market-level evidence provided.
Returns to AI investments may exhibit increasing returns to scale, reinforcing winner‑take‑most dynamics unless offset by platformization or open‑source diffusion.
Economic scenario reasoning on capital intensity and platform effects; no empirical calibration or econometric evidence provided.
Job insecurity rises when FDI is short‑term, footloose, or concentrated in capital‑intensive extractive projects.
Conceptual arguments and empirical examples in the review linking investment temporariness and capital intensity to higher job instability; the empirical evidence is less comprehensive and context-specific.
Private governance and firm-level solutions (internal standards, bargaining with unions) may proliferate, but these can entrench firm-specific norms and increase market power asymmetries.
Conceptual argument drawing on governance and industrial organization literature; no empirical measurement of prevalence or market-power effects included.
Inadequate protections reduce public trust in mobile-AI services, which can slow diffusion and undercut the growth trajectories that policy narratives anticipate.
Inferred from stakeholder commentary and policy discourse combined with communication-rights theory; the paper does not present survey or adoption-rate data.
Low-wage and platform workers are particularly exposed to algorithmic management and surveillance, with potential downward pressure on wages, bargaining power, and job quality.
The paper's qualitative analysis of stakeholder comments and policy omissions, combined with literature-based inference about platform labor dynamics; no primary labor-market survey or quantitative wage data provided.
Soft‑law governance and growth-first narratives risk concentrating benefits (investment, productivity gains) while externalizing costs (privacy harms, biased decisioning) onto vulnerable populations, exacerbating inequality and reducing inclusive economic development.
Analytic inference from qualitative review of governance instruments and policy narratives combined with communications-ecology and political-economy reasoning; not based on quantitative economic measurement in the paper.
Legal liability and cyber-insurance markets will need to adapt as machine-generated code becomes pervasive, with pricing internalizing risk from inadequate verification processes.
Speculative legal/economic implication discussed in the paper; no actuarial or legal-case data provided.
Individual developers or firms may underinvest in verification because defect accumulation imposes external costs on downstream actors, creating market failures that can justify standards, certifications, or regulation mandating interlocks or minimum verification practices.
Policy and market-failure argument based on externalities presented conceptually; no modeling or empirical evidence of such externalities provided.
Short-run productivity gains from generative AI may be offset by longer-run increases in maintenance, security breaches, and reliability costs if verification lags.
Economic reasoning and forward-looking implications discussed in the paper; no empirical cost-benefit or longitudinal data presented.
Small, unverified errors, insecure patterns, and brittle interactions accumulate over time (latent accumulation), increasing operational fragility and long-run maintenance costs.
Theoretical argument and illustrative examples in the paper; no longitudinal defect accumulation studies or empirical cost analysis provided.
Time pressure and productivity incentives lead developers to accept plausible AI outputs without full validation, a behavioral/institutional failure mode called the 'micro-coercion of speed' that effectively reverses the burden of proof.
Behavioral diagnosis and incentive analysis presented conceptually in the paper; no behavioral experiments, surveys, or observational data reported.
Hallucination and error risk introduce potential liabilities in client engagements and may change contracting, insurance, and pricing practices in consulting services.
Derived from practitioner concerns reported in interviews and authors' normative discussion; no contractual or insurance-market data presented.
Effective deployment requires governance, verification processes, and liability management to manage hallucination risk, creating adoption costs that may advantage larger firms and affect market concentration and pricing power.
Argument based on interviews about necessary organizational safeguards and the resource requirements to implement them; speculative market-structure implications are not empirically tested in the paper.
Widespread GenAI use may accelerate skill obsolescence for routine competencies and increase the premium on monitoring, critical evaluation, and AI‑integration skills, shifting investment toward retraining and upskilling.
Projection based on qualitative interviews and the authors' economic interpretation of TGAIF; no longitudinal or wage/skill data provided.
Upfront integration and recurring governance costs mean smaller firms may face higher relative costs — potentially increasing scale advantages for larger incumbents.
Deployment case studies and cost reports indicating significant fixed integration and governance costs; inference to market structure is speculative.
There is a risk of deskilling through excessive reliance on AI, implying a need for continuous training and certification to preserve human judgment.
Qualitative interview evidence and observed concerns about overreliance; the authors recommend training/governance based on the identified risks; no direct longitudinal measurement of deskilling is provided in the summary.
Recommendation algorithms and widespread automated advice can induce herding or increase common exposures across retail investor portfolios, with potential macroprudential implications.
Theoretical discussion supported by examples from retail trading episodes and algorithmic amplification literature referenced in the review (conceptual and anecdotal evidence; limited systematic empirical quantification).
Vendors offering integrated governed hyperautomation stacks may capture premium pricing and increase switching costs, potentially widening adoption gaps between large incumbents and SMEs.
Market-structure and competitive dynamics discussed theoretically in the Implications section; no market-share or pricing data provided.
Higher compliance and liability costs may be passed to districts, potentially affecting the affordability of EdTech for underfunded schools unless federal guidance or subsidies offset costs — a distributional concern.
Economic distributional reasoning (theoretical), not supported by empirical pricing or budget impact data in the Article.
Exposure to AI and platform work produces psychosocial effects for workers, including increased job insecurity, stress, and changing task content in surviving occupations.
Surveys, qualitative case studies, and workplace studies summarized in the review reporting worker‑reported insecurity and stress; the review also highlights inconsistent measurement and limited systematic evidence on psychosocial outcomes.
Regulators and standard-setters who value transparency and auditability will need to account for the gap between evaluation results and actionable fixes; firms may require incentives or rules to ensure evaluation leads to remediation, not just documentation.
Authors' policy implication derived from the study's finding of a results-actionability gap and discussion of auditability concerns; speculative recommendation rather than empirical finding.
Delegation of oversight and reallocation of monitoring tasks due to AI integration changes transaction costs and affects organizational design and governance needs (e.g., more verification/audit effort or specialist oversight roles).
Based on participants' reported shifts in who performed monitoring/oversight tasks in the 40 interviews and the authors' interpretation of those shifts in organizational/economic terms.
The paper likely includes ablation studies and standard metrics (task success rate, step-wise error, plan coherence) to isolate contributions of the two training stages and to evaluate performance.
The summary describes these analyses as 'likely additional methods' (i.e., typical for such work but not fully detailed in the abstract); no direct confirmation or results appear in the available text.
This study claims to be the first comprehensive, literature-based evaluation of artificial intelligence (AI) and its influence on job displacement.
Author assertion in the paper; the excerpt provides no external verification (no citation of prior reviews/meta-analyses to justify the 'first attempt' claim).
This research is one of the first large-scale quantitative studies to empirically validate the mediating pathways through which GenAI influences business performance in the UK market.
Positioning/originality claim in the paper's literature review and contribution statement asserting relative novelty and sample size (n = 312) compared to prior studies.
Results are robust across the authors' reported robustness checks.
Author statement that multiple robustness checks were performed and the main findings persist (the summary does not enumerate the checks or report their outcomes).
This study is the first systematic presentation of factual data describing employment outcomes of Russian university AI graduates.
Authors' stated novelty claim in the paper (asserted uniqueness of systematic institutional-level employment outcome data for Russian AI graduates).
Pidgin should not be treated as 'broken English' but as necessary linguistic infrastructure for repaired, sustainable development; failures often reflect language-sovereignty crises requiring political solutions.
Normative claim supported by mixed-methods findings on comprehension, adoption, and legitimacy, and Critical Discourse Analysis of institutional language hierarchies.
The paper advances a new conceptual framework called 'Developmental Sociolinguistics' and formalizes Three Laws of Linguistic Justice (Epistemic Access, Discursive Parity, Sovereignty), operationalized via a proposed 'Pidgin Protocol' for decolonized development practice.
Conceptual/theoretical contribution based on synthesis of field results and literature; proposal of framework and laws as normative prescriptions rather than empirically tested policy interventions.
Standards for provenance, labeling of AI-generated content, and interoperable evidence formats would lower verification costs and create beneficial network effects.
Policy recommendation derived from identified verification frictions and the study's analysis of data/model governance needs.
There is growing market demand for AI-assisted fact-checking tools, creating opportunities for software, monitoring services, and labeled datasets.
Analytic implication drawn from findings about increasing AI use and needs for automation/labeling; based on interviews and market inference in the study.
Hybrid agency implies complementarity between GenAI and managerial/knowledge‑worker skills (curation, evaluation, coordination), potentially increasing returns to those skills while automating routine cognitive tasks—consistent with skill‑biased technological change.
Synthesis of recurring themes linking GenAI capabilities with managerial skill topics in the thematic clusters; positioned as an implication for labour demand and skill composition rather than an empirically tested effect.
Policy prescriptions for developing countries to mitigate these vulnerabilities include: diversify supply sources, invest in local human capital and mid-stream capabilities, create legal/regulatory flexibility to navigate competing standards, and pursue regional cooperation to build bargaining leverage.
Policy analysis and recommendations grounded in the mechanisms identified via process tracing and comparative cases; intended as prescriptive synthesis rather than empirically demonstrated interventions in the paper. (Based on inferred best-practice interventions; no empirical evaluation/sample size provided.)
There is demand for tooling that bridges evaluation outputs to actionable fixes (e.g., failure-mode libraries, standardized remediation templates, evaluation-to-priority mapping), signaling economic opportunities for third-party tools and consulting services.
Authors' inference based on the documented results-actionability gap and participants' descriptions of pain points; presented as a market implication rather than direct market measurement.
Firms that invest in instrumentation, cross-functional processes, and remediation levers capture more value from LLMs; organizations with better evaluation-to-action pipelines will obtain higher productivity gains and market edge.
Authors' inference from observed heterogeneity among teams in the interviews and comparison of practices in teams that reported more success converting evaluations into changes.
Structured errors (SERF) enable automated recovery, reducing human-in-the-loop remediation and the marginal cost of scaling agent fleets.
Reasoned implication from the design of SERF; proposed as an expected operational benefit rather than demonstrated quantitative result in the summary.
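The recovery mechanism this claim envisions can be sketched minimally. The field names and recovery rules below are illustrative assumptions, not the paper's SERF specification:

```python
# Hypothetical sketch of a structured-error record of the kind SERF
# describes: a machine-readable error lets a supervisor pick a recovery
# action without human triage. Field names and rules are illustrative.
from dataclasses import dataclass

@dataclass
class StructuredError:
    code: str        # stable, machine-readable error class
    retryable: bool  # whether the failed action can be retried safely
    context: dict    # tool name, arguments, attempt count, etc.

def recover(err: StructuredError, max_attempts: int = 3) -> str:
    """Choose a recovery action from the error's structured fields."""
    if err.retryable and err.context.get("attempt", 1) < max_attempts:
        return "retry"
    if err.code == "AUTH_EXPIRED":
        return "refresh_credentials"
    return "escalate_to_human"
```

The point of the sketch is the claim's mechanism: because the error carries its class and retryability explicitly, the retry/refresh/escalate decision is automated, and human remediation is reserved for the residual cases.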
Adaptive budgeting (ATBA) can reduce wasted latency and cost by optimizing timeouts and retries across tool chains, improving throughput and reducing per-interaction resource spend.
Algorithmic claim supported by theoretical framing and proposed reproducible benchmarks; no concrete field-level cost/throughput numbers provided in the summary.
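A minimal sketch of the kind of adaptive timeout budgeting ATBA describes, assuming a percentile-based rule; the rule and its parameters are illustrative, not the paper's algorithm:

```python
# Illustrative adaptive timeout: derive each tool call's timeout from
# recent observed latencies instead of a fixed worst case, so fast
# tools are not held to slow-tool budgets. The 95th-percentile rule
# and the constants are assumptions for illustration.
import statistics

def adaptive_timeout(latencies: list, slack: float = 1.5, floor: float = 0.1) -> float:
    """Timeout (seconds) = slack * a high quantile of recent latencies."""
    if len(latencies) < 2:
        return max(floor, 5.0)  # conservative default with no history
    p95 = statistics.quantiles(latencies, n=100)[94]  # ~95th percentile
    return max(floor, slack * p95)
```

A chain of tool calls budgeted this way spends waiting time proportional to each tool's actual behavior, which is the throughput-and-cost mechanism the claim points to.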
Improved identity propagation (via CABP) reduces risk and compliance costs by lowering misattributed actions and improving audit trails, thereby reducing expected liability and incident-resolution overhead.
Analytical / economic argument in the implications section; no reported quantitative field results in the summary to directly measure cost reduction.
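A hedged sketch of identity propagation in the spirit of CABP: each hop extends a delegation chain rather than replacing the caller, so audit trails keep the originating principal. The field names and the `delegate` helper are hypothetical, not the paper's protocol:

```python
# Illustrative identity-propagation record: downstream calls append
# themselves to a delegation chain instead of overwriting the caller,
# so the originating principal is never lost from the audit trail.
def delegate(ctx: dict, actor: str) -> dict:
    """Extend the delegation chain without losing the principal."""
    return {"principal": ctx["principal"], "chain": ctx["chain"] + [actor]}

root = {"principal": "alice", "chain": []}          # human principal
via_agent = delegate(root, "planner-agent")          # agent acts for alice
via_tool = delegate(via_agent, "billing-tool")       # tool acts for the agent
```

Because `via_tool` still records `"alice"` as principal alongside the full chain, an action at the tool layer is attributable end to end, which is the misattribution-and-liability mechanism the claim describes.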
Humans who configure and teach agents gain understanding and skills themselves — learning-by-teaching generates human capital accumulation endogenous to agent deployment (bidirectional scaffolding).
Qualitative, naturalistic observations and comparative documentation of users configuring/teaching agents during the one-month study; no randomized assignment or pre/post quantitative skill testing reported.
By lowering single-GPU resource requirements and improving throughput, SlideFormer can democratize domain adaptation and fine-tuning of large models on commodity single-GPU hardware (reducing the need for multi-GPU clusters).
Argumentative implication based on reported throughput, memory, and capacity improvements (e.g., enabling 123B+ models on a single RTX 4090 and reducing memory usage). This is an extrapolation from experimental results rather than a directly measured socio-economic outcome.
Regulators may prefer systems that support contestability and audit trails and could mandate argumentation-style explainability in certain sectors.
Speculative policy prediction; no regulatory statements or empirical policy adoption evidence cited.