The Commonplace

Evidence (5539 claims)

Adoption (5539) · Productivity (4793) · Governance (4333) · Human-AI Collaboration (3326) · Labor Markets (2657) · Innovation (2510) · Org Design (2469) · Skills & Training (2017) · Inequality (1378)

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
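Once transcribed, matrix rows are easy to query programmatically. A minimal sketch computing one derived view (the share of claims coded positive, my own summary statistic, not one the dashboard reports) from four rows copied out of the table above:

```python
# A few rows transcribed from the Evidence Matrix:
# outcome -> (positive, negative, mixed, null)
MATRIX = {
    "Output Quality":       (256, 66, 25, 28),
    "Task Completion Time": (88, 5, 4, 3),
    "Wages & Compensation": (53, 15, 20, 5),
    "Inequality Measures":  (25, 77, 32, 5),
}

def positive_share(counts):
    """Fraction of claims in a row coded as positive."""
    return counts[0] / sum(counts)

for outcome, counts in MATRIX.items():
    print(f"{outcome}: n={sum(counts)}, positive share={positive_share(counts):.0%}")
```

The contrast is stark: 88% of Task Completion Time claims are positive versus 18% for Inequality Measures.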
A structured three-stage framework (input/process/output) clarifies where different risks and regulatory rules apply to generative audiovisual systems.
Framework presented in the paper as a conceptual synthesis of reviewed literatures; supported by cross-references to legal, technical, and ethical sources within the review.
high positive Ethical and societal challenges to the adoption of generativ... clarity and mapping of risk types to development/use stages
The paper introduces IJOPM’s Africa Initiative (AfIn) to support Africa-based OSCM research, outlining motivation, objectives, review process, and researcher support mechanisms.
Descriptive account within the paper (administrative/initiative description rather than empirical evidence).
high positive Continental shift: operations and supply chain management re... institutional support mechanisms for Africa-based OSCM research and publication ...
Cognitive interlocks include concrete mechanisms such as policy-enforced gates, automated verification thresholds, role-based checks, and mandatory rebuttal workflows to force verification before outputs are trusted or deployed.
Design details and enumerated mechanisms within the Overton Framework as presented in the paper; no implementation case studies reported.
high positive Overton Framework v1.0: Cognitive Interlocks for Integrity i... existence and configuration of interlock mechanisms; number of outputs blocked u...
The Overton Framework is an architectural remedy that embeds 'cognitive interlocks' into development environments to enforce verification boundaries and restore system integrity.
Prescriptive architectural proposal described in the paper (design specification and principles); presented conceptually without empirical validation.
high positive Overton Framework v1.0: Cognitive Interlocks for Integrity i... presence/implementation of cognitive interlocks in dev environments; intended re...
High‑frequency sensor and satellite data, processed with AI, improve precision in measuring yields, input use, and environmental externalities, enhancing the quality of economic impact evaluations and policy targeting.
Methodological and validation studies using high‑resolution satellite imagery and field sensors that show improved measurement accuracy versus traditional survey methods; referenced empirical demonstrations in the literature.
high positive MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION measurement precision for yields, input use, emissions/environmental externaliti...
Enhanced gross‑flows estimation using longitudinal microdata can better track transitions (job-to-job, upskilling, unemployment spells) and measure occupational churn and reallocation.
Established econometric practice cited in paper; recommendation to use panel/admin microdata (CPS longitudinal supplements, LEHD/LODES, UI records); no new empirical results but aligns with standard methods.
high positive Enhancing BLS Methodologies for Projecting AI's Impact on Em... transition rates, spell durations, occupation-to-occupation flows, upskilling in...
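The gross-flows idea — counting state transitions for the same workers across adjacent periods of a longitudinal panel — can be sketched with invented records (worker IDs, periods, and occupation labels below are illustrative, not from CPS/LEHD data):

```python
from collections import Counter

# Toy worker panel: (worker_id, period, occupation) — illustrative only.
panel = [
    (1, 0, "clerical"), (1, 1, "clerical"),
    (2, 0, "clerical"), (2, 1, "data"),
    (3, 0, "data"),     (3, 1, "data"),
    (4, 0, "clerical"), (4, 1, "data"),
]

def transition_rates(panel):
    """Occupation-to-occupation flow rates between adjacent periods."""
    by_worker = {}
    for wid, t, occ in panel:
        by_worker.setdefault(wid, {})[t] = occ
    flows, origins = Counter(), Counter()
    for spells in by_worker.values():
        for t in sorted(spells):
            if t + 1 in spells:  # worker observed in consecutive periods
                flows[(spells[t], spells[t + 1])] += 1
                origins[spells[t]] += 1
    return {pair: n / origins[pair[0]] for pair, n in flows.items()}

rates = transition_rates(panel)
print(rates)  # e.g. two of three clerical workers moved into data work
```

Real gross-flows estimation would additionally handle attrition, weights, and spell durations; this only shows the counting logic.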
DAR produces ten falsifiable propositions explicitly mapped to measurement constructs, making the framework empirically testable.
Derivation and listing of ten testable propositions in the paper, each linked to observable measures and prioritized by feasibility. Theoretical derivation, no empirical tests provided.
high positive Human–AI Handovers: A Dynamic Authority Reversal Framework f... testable_hypotheses_count; mapping_quality_to_measures
BERT-family encoders provide superior contextual understanding for sentiment analysis, intent detection, behavioural segmentation, and feature extraction from user signals compared to simpler feature pipelines.
Use of BERT encoders for classification tasks with offline metrics reported such as classification accuracy for intent/sentiment and user embedding quality for segmentation. (Specific datasets and sample sizes are not provided.)
high positive Personalized Content Selection in Marketing Using BERT and G... intent classification accuracy, sentiment scoring accuracy, quality of user embe...
Automated equivalency systems require algorithmic oversight features (audit trails, human-in-the-loop checks) to maintain trust and labor-market legitimacy.
Governance recommendation following best practices in algorithmic accountability; not supported by empirical testing of oversight mechanisms in this context.
high positive Establishes a technical and academic bridge between the educ... user trust metrics, appeal/review rates, correctness of overturned automated dec...
AI tools (automated document parsing/NLP, translation, equivalency-prediction classifiers, anomaly detection) can scale credential processing and reduce transaction costs and processing time.
Paper cites potential AI capabilities and application areas; the claim is inferential from known AI functionalities, with no implementation benchmark or throughput numbers provided.
high positive Establishes a technical and academic bridge between the educ... processing throughput, average processing time per credential, operational costs
Continuous monitoring and observability for performance, compliance, and drift are essential to maintain operational stability and detect model or process degradation.
Prescriptive claim grounded in engineering practice and comparative analysis of failure modes; supported by illustrative deployments; no quantitative evaluation of monitoring impact reported.
high positive Governed Hyperautomation for CRM and ERP: A Reference Patter... detection rate/time for performance degradation, compliance violations, model dr...
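One standard way to operationalize such drift monitoring — my illustration, not a statistic the paper prescribes — is a population stability index (PSI) over binned model scores, comparing live traffic against the training-time distribution:

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions
    (lists of counts over the same bins)."""
    e_tot, a_tot = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_tot, 1e-6)  # floor avoids log(0) on empty bins
        a_pct = max(a / a_tot, 1e-6)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

baseline = [200, 300, 300, 200]  # training-time score histogram (toy)
current  = [150, 250, 350, 250]  # live-traffic histogram (toy)
drift = psi(baseline, current)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
print(f"PSI = {drift:.3f}")
```

The same computation applied to prediction-error histograms or compliance-check pass rates gives the degradation signals the passage calls for.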
Core governance components should include policy enforcement integrated into development and deployment pipelines, risk controls for data/model behavior/automated actions, explicit human-in-the-loop and human-on-the-loop oversight, continuous monitoring/logging/incident-response, and role-based governance structures linking legal, compliance, IT, and business units.
Prescriptive design based on literature synthesis and practitioner experience; described as core components in the proposed reference pattern (conceptual, case-illustrated).
high positive Governed Hyperautomation for CRM and ERP: A Reference Patter... presence and integration of specified governance controls and organizational rol...
Research needs include empirically measuring prevalence and average loss from prompt fraud incidents, evaluating effectiveness and cost-effectiveness of technical mitigations (watermarking, provenance), and modeling firm-level investment decisions under varying regulatory/insurance regimes.
Authors' recommended agenda for further research based on identified gaps in the paper's qualitative analysis.
high positive Prompt Engineering or Prompt Fraud? Governance Challenges fo... existence and quality of empirical datasets and models addressing prevalence, lo...
The United States manages the openness–security trade-off through decentralized, rights‑based coordination emphasizing procedural transparency and public accountability.
Qualitative content analysis of national‑level policy texts: 18 U.S. policy documents coded across the same four analytical dimensions.
high positive Balancing openness and security in scientific data governanc... governance logic / institutional coordination type (decentralized, rights‑based)
If companies are treated as recipients, they would be required to comply with nondiscrimination obligations (e.g., Title VI, Title IX, Section 504) in education contexts and may be subject to enforcement actions, corrective requirements, and private suits where applicable.
Interpretation of recipient obligations under existing civil‑rights statutes and enforcement mechanisms; doctrinal analysis and illustrative case law.
high positive Civil Rights and the EdTech Revolution scope of compliance and enforcement obligations imposed on vendors
Systems biology, constraint‑based metabolic modeling (e.g., FBA), kinetic modeling, and hybrid models are effective tools to predict fluxes and identify metabolic bottlenecks.
Discussion and aggregation of modeling studies using COBRA/OptFlux frameworks, FBA simulations, and kinetic/dynamic modeling applied to engineered strains to predict flux changes and suggest genetic interventions; validated in multiple reported DBTL cycles.
high positive Harnessing Microbial Factories: Biotechnology at the Edge of... accuracy/usefulness of flux predictions and identification of bottlenecks leadin...
Engineered microorganisms are maturing into modular, programmable “microbial factories” capable of producing complex chemicals, specialty compounds, and next‑generation biofuels.
Synthesis of multiple experimental case studies reported in the literature (bench and pilot scale fermentations) demonstrating microbial production of natural products, specialty chemicals, and biofuel molecules using engineered strains and heterologous pathways; methods include pathway assembly, enzyme engineering, and fermentation optimization.
high positive Harnessing Microbial Factories: Biotechnology at the Edge of... demonstrated ability to produce target complex molecules (presence/identity of p...
Cluster-level interpretation can be performed via LLM-based semantic decoding to generate concise human-readable labels and descriptions for discovered themes.
Pipeline step implemented: use of an LLM to decode cluster content and produce labels/descriptions; reported in experimental workflow on ICML and ACL abstracts.
high positive Soft-Prompted Semantic Normalization for Unsupervised Analys... quality of cluster labels / human-readability of cluster descriptions
Normalized representations can be embedded into a continuous vector space and then clustered using density-based clustering to identify latent themes without pre-specifying the number of topics.
Methodological pipeline: embedding model applied to normalized representations followed by density-based clustering (algorithmic property: density-based methods do not require pre-specified cluster count). Demonstrated in experiments on ICML and ACL 2025 abstracts.
high positive Soft-Prompted Semantic Normalization for Unsupervised Analys... latent theme detection (cluster discovery) without predefining cluster count
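The density-based step can be illustrated with a from-scratch DBSCAN over toy 2-D points standing in for embedding vectors (the actual pipeline uses a real embedding model and high-dimensional vectors; `eps` and `min_pts` here are arbitrary). The key property the claim relies on — the cluster count is discovered, not pre-specified — falls out of the algorithm:

```python
def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN over 2-D points: returns one label per point
    (-1 = noise). The number of clusters emerges from density."""
    def neighbors(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1  # provisional noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: claimed, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:  # core point: keep expanding
                seeds.extend(j_nbrs)
    return labels

# Two dense groups plus one outlier; two clusters emerge on their own.
pts = [(0, 0), (0.3, 0.1), (0.1, 0.4),
       (5, 5), (5.2, 5.1), (4.9, 5.3),
       (10, 0)]
labels = dbscan(pts)
```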
The authors introduce clinical-model instruments such as the Model Temperament Index (behavioral profiling), Model Semiology (structured symptom lexicon), and M-CARE (standardized case reporting).
Proposed indices and reporting formats presented in the methods and applied in demonstrations/cases within the paper.
high positive Model Medicine: A Clinical Framework for Understanding, Diag... Availability and application of Model Temperament Index, Model Semiology, and M-...
The paper proposes a five-layer diagnostic framework: staged assessment from symptom description to mechanistic localization and prognosis.
Framework design documented in the paper and applied in case demonstrations (descriptive pipeline combining symptom elicitation, profiling, semiology, imaging/localization, and reporting).
high positive Model Medicine: A Clinical Framework for Understanding, Diag... Presence of a five-stage diagnostic assessment pipeline for model evaluation
Neural MRI (Model Resonance Imaging) maps five medical neuroimaging modalities to corresponding AI interpretability techniques (e.g., structural → weight-space maps, functional → activation dynamics, connectivity → representational similarity).
Methodological mapping and toolkit design described in the paper (conceptual mapping and implemented open-source toolkit).
high positive Model Medicine: A Clinical Framework for Understanding, Diag... Completeness of mapping between five neuroimaging modalities and corresponding i...
The authors present a discipline taxonomy comprising 15 subdisciplines grouped into four divisions: Basic Model Sciences, Clinical Model Sciences, Model Public Health, and Model Architectural Medicine.
Taxonomic synthesis produced by the authors from interpretability, reliability, governance, and architecture literatures (documented taxonomy in the paper).
high positive Model Medicine: A Clinical Framework for Understanding, Diag... Presence and organization of a 15-subdiscipline taxonomy into four divisions
The paper defines 'Model Medicine' as a unified research program treating AI models like organisms with diagnosable, classifiable, and treatable states.
Conceptual framing and theoretical synthesis presented in the paper (literature-driven argumentation; no empirical sample required).
high positive Model Medicine: A Clinical Framework for Understanding, Diag... Existence of a unified conceptual framework (Model Medicine) for treating AI mod...
China’s National Public Cultural Service System Demonstration Zone program raised employment in the cultural sector.
Multi-period difference-in-differences (DID) analysis exploiting staggered adoption of the Demonstration Zone designation across 280 prefecture-level Chinese cities, 2008–2021; primary outcome measured: city-level cultural-sector employment; models include city and year fixed effects.
high positive Redefining Policy Effectiveness in the Digital Era: From Cor... city-level cultural-sector employment
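The staggered multi-period DID design generalizes the basic two-period comparison. A toy sketch of that core 2×2 logic, with invented employment means (not the paper's 280-city estimates):

```python
# Invented group means of cultural-sector employment, before/after designation.
treated = {"before": 10.0, "after": 13.0}   # designated cities
control = {"before": 9.0,  "after": 10.5}   # never-designated cities

# DID estimate: treated change net of the control (common-trend) change.
did = ((treated["after"] - treated["before"])
       - (control["after"] - control["before"]))
print(f"DID estimate: {did:+.1f}")
```

The staggered version replaces the single before/after split with city and year fixed effects and treatment timing that varies across cities, but the identifying comparison is the same.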
Training improved exam scores by 0.27 grade points relative to optional access without training (p = 0.027).
Intent-to-treat comparison between the optional-access-with-training arm and the optional-access-without-training arm in the randomized trial (n = 164); reported effect size = +0.27 grade points and p-value = 0.027.
high positive Training for Technology: Adoption and Productive Use of Gene... Exam score (grade points) on a law-school issue-spotting exam
A brief, targeted training increased voluntary LLM use from 26% (optional access without training) to 41% (optional access with training).
Randomized experiment with 164 law students assigned to three arms (no access, optional access, optional access + ~10-minute training). Observed adoption rates in the two optional-access arms were 26% (untrained) vs. 41% (trained).
high positive Training for Technology: Adoption and Productive Use of Gene... LLM adoption (whether the student used the LLM)
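The adoption contrast above invites a simple two-proportion test. A generic sketch using the normal approximation — note the per-arm counts are hypothetical stand-ins (the source reports 26% vs. 41% and n = 164 across three arms, but not exact arm sizes, so 14/55 and 23/55 are invented to roughly match the rates), and this is not the paper's own estimator:

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (pooled SE, normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 14/55 untrained adopters vs. 23/55 trained adopters.
z, p = two_prop_ztest(14, 55, 23, 55)
print(f"z = {z:.2f}, p = {p:.3f}")
```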
A research agenda prioritizing empirical evaluation, model transparency, and rigorous impact assessment is required to translate conceptual promise into measurable public value.
Explicit recommendation in the blurb identifying research priorities; not an empirical claim but a proposed course of action.
high positive Governing The Future existence and uptake of empirical evaluations, transparency practices, and rigor...
Illustrative vignettes show AI in action: logistics optimization for trade, AI models for national fiscal decision-making, and algorithmic job-acceleration for individual labor market navigation.
Reference to specific case vignettes contained in the book; these are illustrative scenarios rather than empirical case studies with measured outcomes.
high positive Governing The Future demonstrated feasibility of AI applications in logistics, fiscal decision-making...
Ten defining policy questions structure the book’s approach, turning abstract AI capabilities into operational policy choices.
Descriptive claim about the book's organization; verifiable by inspecting the book's table of contents (no external empirical data).
high positive Governing The Future existence and use of ten policy questions as an organizing framework
International comparability in these analyses is achieved using PPP adjustments for monetary measures and standardized occupation/task classifications (ISCO/ISCO-08) with harmonized baseline years and variable definitions.
Described data harmonization procedures across multi-country firm and worker datasets, including PPP adjustments and use of ISCO classification for occupations.
high positive S-TCO: A Sustainable Teacher Context Ontology for Educationa... comparability/consistency of monetary and occupational measures across countries
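The monetary side of that harmonization reduces to dividing local-currency amounts by a PPP conversion factor for the common base year, alongside a shared ISCO-08 occupation code. A toy sketch — the conversion factors and wages below are invented placeholders; real work would pull the factors from World Bank/OECD tables:

```python
# PPP conversion factors (local currency units per international $) — invented.
ppp_factor = {"MX": 9.5, "DE": 0.75, "JP": 100.0}

workers = [
    {"country": "MX", "wage_lcu": 19000,  "isco08": "2511"},
    {"country": "DE", "wage_lcu": 4500,   "isco08": "2511"},
    {"country": "JP", "wage_lcu": 450000, "isco08": "2511"},
]

for w in workers:
    # Same base year assumed for all factors; wages become directly comparable.
    w["wage_ppp"] = w["wage_lcu"] / ppp_factor[w["country"]]

print([round(w["wage_ppp"]) for w in workers])
```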
Adoption of advanced AI tools (especially generative AI) raises firm-level productivity on average.
Meta-analysis of firm-level panel studies using administrative tax and manufacturing surveys and proprietary AI-usage logs; difference-in-differences and event-study estimates comparing adopters vs non-adopters with firm fixed effects and robustness checks.
high positive S-TCO: A Sustainable Teacher Context Ontology for Educationa... firm-level labor productivity (measured output per worker or per hour)
The compendium issues specific policy-design recommendations for economic policymakers: deploy proportional compliance obligations and regulatory sandboxes, subsidize or certify third‑party auditors, monitor credit availability and pricing post‑implementation, and coordinate cross‑border standards.
Explicit policy recommendations listed in the "Policy design recommendations" subsection; derived from the paper's interdisciplinary analysis.
high positive Diego Saucedo Portillo Sauceport Research adoption of recommended policy tools (proportional obligations, sandboxes, audit...
The protocol has been prepared/indexed across 15 strategic languages to facilitate international diffusion and comparative uptake.
Stated multilingual/global indexing claim in the compendium (15 languages).
high positive Diego Saucedo Portillo Sauceport Research number of languages in which the protocol is indexed (15)
The paper implements a "White Box" regulatory protocol for AI in Mexico's financial sector requiring algorithmic transparency, auditability, explainability, and non‑discrimination standards for credit/FinTech algorithms.
Output of the technical protocol described in the compendium; developed from a forensic audit of source materials and legal-methodological synthesis (doctrinal/comparative analysis).
high positive Diego Saucedo Portillo Sauceport Research presence and breadth of mandated transparency/auditability/explainability/non‑di...
The compendium proposes recognizing "Digital Sovereignty" as a new fundamental human right that protects individuals’ autonomy, data sovereignty, due process, and non-discrimination in algorithmic financial decision‑making.
Normative definitional claim in the protocol; grounded in the author's doctrinal and comparative legal analysis across 12 years (2014–2026).
high positive Diego Saucedo Portillo Sauceport Research legal recognition/status of a new fundamental right ("Digital Sovereignty") and ...
Recommended policy approach: run pilots to empirically measure trade‑offs, combine obligations with capacity building (technical assistance, shared datasets, sandboxes), harmonize with international frameworks, and use staged implementation with cost‑benefit analyses.
Policy recommendations derived from the compendium’s interdisciplinary synthesis and economic/policy analysis (prescriptive, not empirically validated within the paper).
high positive Diego Saucedo Portillo Sauceport Research existence and outcomes of pilot studies, capacity building programs, harmonizati...
Policy operationalization should include algorithmic impact assessments, audit logs, disclosure regimes to regulators/judiciary, redress/grievance mechanisms, and governance principles (open, transparent, accountable).
Prescriptive policy instruments and standards proposed in the compendium based on the forensic audit and normative design work; descriptive claim about the protocol’s recommended instruments.
high positive Diego Saucedo Portillo Sauceport Research presence/adoption of specified regulatory instruments (impact assessments, audit...
There is a need for standardized metrics to quantify benefits and costs of governed hyperautomation (e.g., ROI adjusted for compliance risk, incident rate per automation scale, oversight hours per automated transaction, model drift frequency and remediation cost).
Paper's recommendations and research agenda calling for standardized metrics and empirical studies; prescriptive statement rather than empirical finding.
high positive Governed Hyperautomation for CRM and ERP: A Reference Patter... availability of standardized metrics for evaluating governed automation outcomes
Researchers and policymakers should promote auditable, privacy-preserving attribution standards and independent audits while supporting randomized trials and field experiments under privacy constraints.
Policy/actionable takeaways informed by methodological challenges and literature on randomized trials and privacy-preserving methods; prescriptive guidance rather than an empirically tested program.
high positive Artificial Intelligence for Personalized Digital Advertising... feasibility and use of auditable privacy-preserving attribution and field experi...
There is a need for standardized benchmarks and privacy-preserving shared datasets to enable independent economic evaluation of ad-tech.
Methodological recommendation informed by stated data access asymmetries and reproducibility concerns; not accompanied by a new benchmark in the paper.
high positive Artificial Intelligence for Personalized Digital Advertising... availability of benchmarks and shared datasets for independent evaluation
Antitrust analysis of ad-tech should incorporate algorithmic effects such as endogenous use of ML to entrench platform position and data network effects.
Theoretical and policy argument drawing on platform economics and ML scale advantages; recommendation rather than empirical finding.
high positive Artificial Intelligence for Personalized Digital Advertising... scope of factors considered in antitrust analysis
The positive effect of digital rural development on AGTFP is robust to alternative variable constructions, sample adjustments, and endogeneity treatments (e.g., instrumental-variable/other methods).
Robustness exercises reported in the paper: re-specification of the digitalization measure, re-sampling/alternative sample specifications, and use of instrumental/other methods to address endogeneity.
Digital rural development in China significantly increases agricultural green total factor productivity (AGTFP).
Fixed-effects panel regression using provincial panel data for 30 Chinese provinces from 2012–2022 (≈330 province-year observations), with reported significance and robustness checks (alternative measures, sample adjustments, and endogeneity tests).
high positive Digital rural development and agricultural green total facto... Agricultural green total factor productivity (AGTFP)
There is a widespread consensus across the reviewed literature on the need for worker upskilling, active labor‑market policies, and targeted support for displaced workers.
Policy recommendations recurring in the majority of the 17 peer‑reviewed papers synthesized in the review.
high positive The role of generative artificial intelligence on labor mark... policy recommendations (upskilling / labor-market interventions)
Alternative training channels (self-education and professional retraining) are nontrivial contributors to the AI workforce supply.
Comparative analysis showing inclusion of self-education and retraining contributions in the aggregate coverage estimate (the 43.9% figure explicitly includes these channels); descriptive counts/estimates of non-degree trained entrants.
high positive Employment of Graduates of Educational Programs in the Field... Contribution of non-degree training channels to total AI-capable personnel (head...
A subset of universities performs markedly better on employment effectiveness, graduate wages, and placement into popular AI roles (i.e., identifiable high-performing institutions).
Comparative analysis across the 191 universities, including employment rates, observed wage outcomes, and placement distributions; identification and reporting of key/high-performing institutions and their metrics.
high positive Employment of Graduates of Educational Programs in the Field... University-level employment effectiveness (employment rate into AI roles), gradu...
Russian universities that run AI-related educational programs are contributing substantially to the national AI workforce supply.
Institutional-level monitoring data from n = 191 universities showing program enrollments, graduate counts and graduate employment into AI-related roles (descriptive analysis of supply from degree programs).
high positive Employment of Graduates of Educational Programs in the Field... Number of AI-capable graduates supplied by university programs (aggregate contri...
AI complements high-skill labor and raises returns to advanced cognitive and creative skills.
Microdata wage analyses and task-complementarity mappings that link AI-exposed tasks with skill groups, supported by panel regressions showing higher wages/earnings growth for higher-skill workers and by theoretical task-based models predicting complementarity.
high positive Intelligence and Labor Market Transformation: A Critical Ana... wages/earnings of high-skill workers
The platform's algorithmic content distribution mechanism can moderate the tension between AIGC's scale advantage and consumers' preference for HGC.
Deeper analysis of distribution mechanisms reported in the paper indicating that algorithmic ranking/distribution influences how AIGC and HGC are surfaced and can therefore affect their relative reach and engagement.
medium mixed Scale over Preference: The Impact of AI-Generated Content on... engagement allocation between AIGC and HGC as mediated by the content distributi...