The Commonplace

Evidence (4857 claims)

Adoption: 5586 claims
Productivity: 4857 claims
Governance: 4381 claims
Human-AI Collaboration: 3417 claims
Labor Markets: 2685 claims
Innovation: 2581 claims
Org Design: 2499 claims
Skills & Training: 2031 claims
Inequality: 1382 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 417 113 67 480 1091
Governance & Regulation 419 202 124 64 823
Research Productivity 261 100 34 303 703
Organizational Efficiency 406 96 71 40 616
Technology Adoption Rate 323 128 74 38 568
Firm Productivity 307 38 70 12 432
Output Quality 260 71 27 29 387
AI Safety & Ethics 118 179 45 24 368
Market Structure 107 128 85 14 339
Decision Quality 177 75 37 19 312
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 74 34 78 9 197
Skill Acquisition 98 36 40 9 183
Innovation Output 121 12 24 13 171
Firm Revenue 98 35 24 157
Consumer Welfare 73 31 37 7 148
Task Allocation 87 16 34 7 144
Inequality Measures 25 76 32 5 138
Regulatory Compliance 54 61 13 3 131
Task Completion Time 89 7 4 3 103
Error Rate 44 51 6 101
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 33 11 7 98
Wages & Compensation 54 15 20 5 94
Team Performance 47 12 15 7 82
Automation Exposure 27 26 10 6 72
Job Displacement 6 39 13 58
Hiring & Recruitment 40 4 6 3 53
Developer Productivity 34 4 3 1 42
Social Protection 22 11 6 2 41
Creative Output 16 7 5 1 29
Labor Share of Income 12 6 9 27
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
Active filter: Productivity
Observed AI techniques used in ERP contexts include supervised and unsupervised machine learning, predictive forecasting, anomaly/fraud detection, optimization, and explainable AI.
Systematic review of peer-reviewed articles, technical evaluations, and practitioner reports (2020–2025) documenting the methods applied in ERP and enterprise planning/control systems.
high positive Integrating Artificial Intelligence and Enterprise Resource ... presence and reporting of specific AI techniques within ERP implementations (fre...
Durable benefits require the co‑evolution of technology, people, and process capabilities rather than technology deployment alone.
Interpretive framing and synthesis of multiple empirical case studies and best-practice guidance included in the 2020–2025 literature review; recurring theme across studies.
high positive Integrating Artificial Intelligence and Enterprise Resource ... durability of performance improvements following AI deployment (e.g., sustained ...
Continuous monitoring and observability for performance, compliance, and drift are essential to maintain operational stability and detect model or process degradation.
Prescriptive claim grounded in engineering practice and comparative analysis of failure modes; supported by illustrative deployments; no quantitative evaluation of monitoring impact reported.
high positive Governed Hyperautomation for CRM and ERP: A Reference Patter... detection rate/time for performance degradation, compliance violations, model dr...
Core governance components should include policy enforcement integrated into development and deployment pipelines, risk controls for data/model behavior/automated actions, explicit human-in-the-loop and human-on-the-loop oversight, continuous monitoring/logging/incident-response, and role-based governance structures linking legal, compliance, IT, and business units.
Prescriptive design based on literature synthesis and practitioner experience; described as core components in the proposed reference pattern (conceptual, case-illustrated).
high positive Governed Hyperautomation for CRM and ERP: A Reference Patter... presence and integration of specified governance controls and organizational rol...
Research needs include empirically measuring prevalence and average loss from prompt fraud incidents, evaluating effectiveness and cost-effectiveness of technical mitigations (watermarking, provenance), and modeling firm-level investment decisions under varying regulatory/insurance regimes.
Authors' recommended agenda for further research based on identified gaps in the paper's qualitative analysis.
high positive Prompt Engineering or Prompt Fraud? Governance Challenges fo... existence and quality of empirical datasets and models addressing prevalence, lo...
The United States manages the openness–security trade-off through decentralized, rights‑based coordination that emphasizes procedural transparency and public accountability.
Qualitative content analysis of national‑level policy texts: 18 U.S. policy documents coded across the same four analytical dimensions.
high positive Balancing openness and security in scientific data governanc... governance logic / institutional coordination type (decentralized, rights‑based)
Systems biology, constraint‑based metabolic modeling (e.g., FBA), kinetic modeling, and hybrid models are effective tools to predict fluxes and identify metabolic bottlenecks.
Discussion and aggregation of modeling studies using COBRA/OptFlux frameworks, FBA simulations, and kinetic/dynamic modeling applied to engineered strains to predict flux changes and suggest genetic interventions; validated in multiple reported DBTL cycles.
high positive Harnessing Microbial Factories: Biotechnology at the Edge of... accuracy/usefulness of flux predictions and identification of bottlenecks leadin...
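Constraint-based flux balance analysis (FBA), as referenced above, reduces to a linear program: maximize an objective flux subject to steady-state mass balance and flux bounds. A minimal sketch on a toy three-reaction network (the network, bounds, and objective are illustrative, not from the cited study):

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix: rows = metabolites (A, B), cols = reactions.
# R1: uptake -> A, R2: A -> B, R3: B -> biomass
S = np.array([
    [1.0, -1.0, 0.0],   # metabolite A balance
    [0.0, 1.0, -1.0],   # metabolite B balance
])

# Flux bounds: uptake capped at 10 units; internal fluxes effectively unbounded.
bounds = [(0, 10), (0, 1000), (0, 1000)]

# FBA objective: maximize biomass flux v3 (linprog minimizes, so negate).
c = np.array([0.0, 0.0, -1.0])

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
biomass_flux = res.x[2]
print("optimal biomass flux:", biomass_flux)  # -> 10.0 (limited by uptake)
```

At steady state the mass balances force v1 = v2 = v3, so the optimum is pinned by the uptake bound, which is exactly the kind of bottleneck identification the claim describes.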
Engineered microorganisms are maturing into modular, programmable “microbial factories” capable of producing complex chemicals, specialty compounds, and next‑generation biofuels.
Synthesis of multiple experimental case studies reported in the literature (bench and pilot scale fermentations) demonstrating microbial production of natural products, specialty chemicals, and biofuel molecules using engineered strains and heterologous pathways; methods include pathway assembly, enzyme engineering, and fermentation optimization.
high positive Harnessing Microbial Factories: Biotechnology at the Edge of... demonstrated ability to produce target complex molecules (presence/identity of p...
Cluster-level interpretation can be performed via LLM-based semantic decoding to generate concise human-readable labels and descriptions for discovered themes.
Pipeline step implemented: use of an LLM to decode cluster content and produce labels/descriptions; reported in experimental workflow on ICML and ACL abstracts.
high positive Soft-Prompted Semantic Normalization for Unsupervised Analys... quality of cluster labels / human-readability of cluster descriptions
Normalized representations can be embedded into a continuous vector space and then clustered using density-based clustering to identify latent themes without pre-specifying the number of topics.
Methodological pipeline: embedding model applied to normalized representations followed by density-based clustering (algorithmic property: density-based methods do not require pre-specified cluster count). Demonstrated in experiments on ICML and ACL 2025 abstracts.
high positive Soft-Prompted Semantic Normalization for Unsupervised Analys... latent theme detection (cluster discovery) without predefining cluster count
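The embed-then-cluster step above can be sketched with DBSCAN, a standard density-based algorithm that does not take a cluster count (the excerpt does not name the paper's specific algorithm, so DBSCAN is an assumption; synthetic blobs stand in for embedded abstracts):

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import DBSCAN

# Stand-in for embedded, normalized abstracts: three synthetic "theme" blobs.
# In the real pipeline these vectors would come from an embedding model.
X, _ = make_blobs(
    n_samples=300, centers=[[0, 0], [5, 5], [0, 5]],
    cluster_std=0.4, random_state=0,
)

# Density-based clustering: no need to pre-specify the number of themes.
labels = DBSCAN(eps=0.7, min_samples=5).fit_predict(X)

n_themes = len(set(labels) - {-1})   # label -1 marks noise points
print("themes discovered:", n_themes)  # -> 3
```

The key algorithmic property the claim relies on is visible here: the number of themes (3) is an output of the density structure, not an input parameter.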
Training improved exam scores by 0.27 grade points relative to optional access without training (p = 0.027).
Intent-to-treat comparison between the optional-access-with-training arm and the optional-access-without-training arm in the randomized trial (n = 164); reported effect size = +0.27 grade points and p-value = 0.027.
high positive Training for Technology: Adoption and Productive Use of Gene... Exam score (grade points) on a law-school issue-spotting exam
A brief, targeted training increased voluntary LLM use from 26% (optional access without training) to 41% (optional access with training).
Randomized experiment with 164 law students assigned to three arms (no access, optional access, optional access + ~10-minute training). Observed adoption rates in the two optional-access arms were 26% (untrained) vs. 41% (trained).
high positive Training for Technology: Adoption and Productive Use of Gene... LLM adoption (whether the student used the LLM)
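A comparison of adoption rates like 26% vs. 41% could be assessed with a simple two-proportion test; a sketch using Fisher's exact test on hypothetical cell counts (the excerpt reports 164 students across three arms but not per-arm counts, so the numbers below are illustrative only):

```python
from scipy.stats import fisher_exact

# Hypothetical arm sizes chosen to roughly match the reported adoption
# rates (41% trained vs. 26% untrained); NOT the paper's actual counts.
trained_users, trained_n = 23, 56      # ~41% adoption
untrained_users, untrained_n = 14, 54  # ~26% adoption

table = [
    [trained_users, trained_n - trained_users],
    [untrained_users, untrained_n - untrained_users],
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio: {odds_ratio:.2f}, p = {p_value:.3f}")
```

With arms of this size, a ~15-point adoption gap sits near the edge of conventional significance, which is consistent with the modest sample the trial reports.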
A one standard deviation increase in regional AI exposure raises total factor energy efficiency (TFEE) by about 3.2% in Chinese cities.
Panel analysis of 274 Chinese cities over 2007–2021 using an AI exposure index and TFEE as outcome; causal estimation relies on an instrumental-variables strategy (instruments: U.S. robot-adoption patterns and geographic proximity to external AI clusters).
high positive Artificial intelligence, greening of occupational structure ... Total factor energy efficiency (TFEE) at the city/regional level
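The instrumental-variables logic behind an estimate like this can be sketched with a just-identified 2SLS on synthetic data (the paper instruments with U.S. robot-adoption patterns and proximity to AI clusters; here a generic instrument z stands in, and all coefficients are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: instrument z shifts AI exposure x but affects the
# outcome y (think TFEE) only through x; u is an unobserved confounder
# that biases naive OLS upward.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous exposure
y = 0.032 * x + 0.7 * u + rng.normal(size=n)  # true effect: 3.2% per SD

# Just-identified 2SLS collapses to beta = (z'x)^{-1} z'y on demeaned data.
zc, xc, yc = z - z.mean(), x - x.mean(), y - y.mean()
beta_iv = (zc @ yc) / (zc @ xc)
beta_ols = (xc @ yc) / (xc @ xc)
print(f"OLS: {beta_ols:.3f}  IV: {beta_iv:.3f}  (true effect 0.032)")
```

The gap between the OLS and IV estimates illustrates why the paper needs an instrument: confounded exposure would otherwise overstate the efficiency effect.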
A research agenda prioritizing empirical evaluation, model transparency, and rigorous impact assessment is required to translate conceptual promise into measurable public value.
Explicit recommendation in the blurb identifying research priorities; not an empirical claim but a proposed course of action.
high positive Governing The Future existence and uptake of empirical evaluations, transparency practices, and rigor...
Illustrative vignettes show AI in action: logistics optimization for trade, AI models for national fiscal decision-making, and algorithmic job-acceleration for individual labor market navigation.
Reference to specific case vignettes contained in the book; these are illustrative scenarios rather than empirical case studies with measured outcomes.
high positive Governing The Future demonstrated feasibility of AI applications in logistics, fiscal decision-making...
Ten defining policy questions structure the book’s approach, turning abstract AI capabilities into operational policy choices.
Descriptive claim about the book's organization; verifiable by inspecting the book's table of contents (no external empirical data).
high positive Governing The Future existence and use of ten policy questions as an organizing framework
International comparability in these analyses is achieved using PPP adjustments for monetary measures and standardized occupation/task classifications (ISCO/ISCO-08) with harmonized baseline years and variable definitions.
Described data harmonization procedures across multi-country firm and worker datasets, including PPP adjustments and use of ISCO classification for occupations.
high positive S-TCO: A Sustainable Teacher Context Ontology for Educationa... comparability/consistency of monetary and occupational measures across countries
Adoption of advanced AI tools (especially generative AI) raises firm-level productivity on average.
Meta-analysis of firm-level panel studies using administrative tax and manufacturing surveys and proprietary AI-usage logs; difference-in-differences and event-study estimates comparing adopters vs non-adopters with firm fixed effects and robustness checks.
high positive S-TCO: A Sustainable Teacher Context Ontology for Educationa... firm-level labor productivity (measured output per worker or per hour)
There is a need for standardized metrics to quantify benefits and costs of governed hyperautomation (e.g., ROI adjusted for compliance risk, incident rate per automation scale, oversight hours per automated transaction, model drift frequency and remediation cost).
Paper's recommendations and research agenda calling for standardized metrics and empirical studies; prescriptive statement rather than empirical finding.
high positive Governed Hyperautomation for CRM and ERP: A Reference Patter... availability of standardized metrics for evaluating governed automation outcomes
The positive effect of digital rural development on AGTFP is robust to alternative variable constructions, sample adjustments, and endogeneity treatments (e.g., instrumental-variable and related methods).
Robustness exercises reported in the paper: re-specification of the digitalization measure, re-sampling/alternative sample specifications, and use of instrumental/other methods to address endogeneity.
Digital rural development in China significantly increases agricultural green total factor productivity (AGTFP).
Fixed-effects panel regression using provincial panel data for 30 Chinese provinces from 2012–2022 (≈330 province-year observations), with reported significance and robustness checks (alternative measures, sample adjustments, and endogeneity tests).
high positive Digital rural development and agricultural green total facto... Agricultural green total factor productivity (AGTFP)
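The fixed-effects panel design described above can be sketched as a two-way within transformation on a balanced synthetic panel sized like the study's (30 provinces, 11 years); the data-generating coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_prov, n_years = 30, 11                 # ~330 province-year observations
prov = np.repeat(np.arange(n_prov), n_years)
year = np.tile(np.arange(n_years), n_prov)

# Synthetic panel: AGTFP depends on digitalization plus province and year
# effects; digitalization is correlated with the province effect, so
# pooled OLS would be biased.
alpha = rng.normal(size=n_prov)[prov]    # province fixed effects
gamma = rng.normal(size=n_years)[year]   # year fixed effects
digital = 0.5 * alpha + rng.normal(size=n_prov * n_years)
agtfp = 0.3 * digital + alpha + gamma + rng.normal(scale=0.5, size=n_prov * n_years)

def demean(v, groups):
    """Subtract group means (one-way within transformation)."""
    means = np.bincount(groups, weights=v) / np.bincount(groups)
    return v - means[groups]

# For a balanced panel, demeaning by province and then by year implements
# the two-way within transformation.
x = demean(demean(digital, prov), year)
y = demean(demean(agtfp, prov), year)
beta_fe = (x @ y) / (x @ x)
print(f"FE estimate: {beta_fe:.3f}  (true coefficient 0.3)")
```

The within transformation absorbs anything fixed within a province or a year, which is what lets the design separate digitalization from time-invariant provincial characteristics.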
VIS produces interrelated metrics that explicitly include indirect labor embodied throughout the supply chain rather than only direct labor employed in a reported sector.
Computation of vertically integrated sector vectors from input–output matrices and allocation of upstream labor inputs to final-sector output; reported construction of VIS-based labor input metrics.
high positive Measuring labor productivity dynamics in U.S. industrial and... VIS labor input metrics (direct + indirect labor embodied per final-sector outpu...
Adapting Pasinetti’s vertically integrated sectors framework enables production of time-series productivity measures at the subsystem level.
Methodological adaptation described and applied to annual data (2014–2023) to produce VIS-based productivity time series for subsystems (e.g., electric generation subsystem).
high positive Measuring labor productivity dynamics in U.S. industrial and... time-series labor productivity metrics at the subsystem (VIS) level
The VIS approach captures both direct and indirect (upstream) labor effects by attributing upstream labor requirements to final-sector outputs using Leontief-type inverses / vertically integrated sector vectors.
Methodology constructs annual input–output matrices (BEA + IMPLAN mapping) and computes Leontief-type inverses/vertically integrated sector vectors to allocate direct and indirect requirements; upstream labor is attributed to final output using BLS employment/hours data.
high positive Measuring labor productivity dynamics in U.S. industrial and... attribution of upstream (indirect) labor embodied per unit of final-sector outpu...
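The Leontief-inverse construction described in the three claims above can be sketched on a toy three-sector economy (the coefficients matrix and labor vector below are made-up numbers, not the paper's BEA/IMPLAN data):

```python
import numpy as np

# Toy technical coefficients matrix A (inputs per unit of gross output):
# rows = supplying sector, columns = purchasing sector.
A = np.array([
    [0.10, 0.20, 0.05],
    [0.05, 0.10, 0.30],
    [0.02, 0.15, 0.10],
])

# Direct labor coefficients: hours per unit of each sector's gross output.
l_direct = np.array([0.5, 0.8, 0.3])

# Leontief inverse: total (direct + indirect) requirements per unit of
# final demand -- the core of the vertically integrated sector (VIS)
# construction.
L = np.linalg.inv(np.eye(3) - A)

# Vertically integrated labor coefficients: total labor embodied per unit
# of final output, including upstream supply-chain labor.
l_vis = l_direct @ L

print("direct labor:", l_direct)
print("VIS labor   :", l_vis.round(3))
```

Because L = I + A + A^2 + ..., each VIS coefficient strictly exceeds its direct counterpart whenever there are upstream inputs, which is precisely the "indirect labor embodied throughout the supply chain" the claim refers to.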
There is a widespread consensus across the reviewed literature on the need for worker upskilling, active labor‑market policies, and targeted support for displaced workers.
Policy recommendations recurring in the majority of the 17 peer‑reviewed papers synthesized in the review.
high positive The role of generative artificial intelligence on labor mark... policy recommendations (upskilling / labor-market interventions)
The framework supports counterfactual scenario simulations that vary capability diffusion, adoption rates, policy interventions, and firm behavior to explore how exposures might translate into outcomes.
Description of scenario and simulation capabilities in the methods: Agent-based experiments run with parameterized counterfactuals for diffusion, adoption, and policy levers.
high positive The Iceberg Index: Measuring Workforce Exposure in the AI Ec... simulated labor-market trajectories under alternative counterfactual parameteriz...
We present, to the best of our knowledge, the first large-scale study of real-world conversational programming in IDE-native settings.
Authors' assertion about novelty; study scope described (analysis of messages from Cursor and GitHub Copilot across public repositories).
medium mixed Programming by Chat: A Large-Scale Behavioral Analysis of 11... existence/novelty of a large-scale empirical study of IDE-native conversational ...
The observed behaviors stem from a root cause: current models are trained as monolithic agents, so splitting them into director/worker roles conflicts with their training distribution; retaining each model close to its trained mode (text generation for the manager, tool use for the worker) and externalizing organizational structure to code enables the pipeline to succeed.
Qualitative analysis and interpretation of experimental results and pipeline design choices reported in the paper (comparison of different pipeline structures and model modes).
medium mixed Can AI Models Direct Each Other? Organizational Structure as... compatibility between model training distribution and assigned role (qualitative...
This abstraction (logical compute) helps explain both why the laws travel so well across settings and why they give rise to a persistent efficiency game in hardware, algorithms, and systems.
Paper-provided explanatory argument connecting abstraction to empirical observations of cross-setting regularity and continued efficiency-focused innovation (no numerical evidence in excerpt).
medium mixed The Unreasonable Effectiveness of Scaling Laws in AI extent of efficiency-driven innovation and cross-setting generality of scaling l...
Keyword-style queries persist even among experienced users.
Analysis of query types across experience levels in the Asta dataset showing continued presence of keyword-style queries among users labeled as experienced.
medium mixed Understanding Usage and Engagement in AI-Powered Scientific ... prevalence of keyword-style queries by user experience level
Prior research has primarily focused on automating user actions through clicks and keystrokes; this paradigm overlooks human intention, as users value the ability to explore, iterate, and refine their ideas while maintaining agency.
Literature characterization and conceptual argument presented in the paper's introduction (qualitative claim based on authors' synthesis of prior work and user values).
medium mixed GUIDE: A Benchmark for Understanding and Assisting Users in ... alignment of prior research focus with user values (automation vs. intention-pre...
Macroeconomic effects remain hard to observe because of a 'productivity J-curve': firms often must invest in organizational changes first and only later realize measurable financial/productivity gains from AI.
Conceptual synthesis supported by firm-level case studies and empirical papers in the reviewed literature indicating implementation lags; the brief frames this as an interpretation of mixed short-run macro evidence rather than a single causal estimate.
medium mixed AI, Productivity, and Labor Markets: A Review of the Empiric... timing/lags in firm productivity and realization of financial gains from AI inve...
The rapid adoption of big data and AI is transforming economies and raises ethical concerns such as data privacy breaches and algorithmic bias.
Framing/background statements in the paper referencing broader literature and policy discourse on big data/AI adoption and associated ethical issues.
medium mixed How Big Data Enhances Firm Value Under Data Privacy Regulati... economic transformation; ethical risk indicators (conceptual)
AI coding agents can resolve real-world software issues, yet they frequently introduce regressions, breaking tests that previously passed.
Stated as background/motivation in the paper; references general observations about agent behavior and prior work (no specific dataset/sample cited in the provided excerpt).
medium mixed TDAD: Test-Driven Agentic Development - Reducing Code Regres... ability to resolve issues (resolution rate) and regression rate (tests broken)
PIER is forecast‑independent: unlike A* path optimization, whose wave protection degrades 4.5× under realistic forecast uncertainty, PIER maintains constant performance using only local observations.
Controlled experiments simulating realistic forecast uncertainty comparing A* path optimization and PIER; reported 4.5× degradation for A* and constant PIER performance when using local observations only (details of uncertainty model and sample sizes in paper).
medium mixed Physics-informed offline reinforcement learning eliminates c... robustness of routing performance under forecast uncertainty (degradation factor...
Organisational rules, regulatory constraints, and transparency requirements materially shape micro-level human–AI interactions and can alter adoption incentives and accountability outcomes.
Conceptual governance argument linking institutional constraints to human–AI design choices; theoretical reasoning, no empirical policy evaluation provided.
medium mixed Comparative analysis of strategic vs. computational thinking... human–AI interaction patterns, algorithm adoption incentives, and accountability...
Potential productivity gains from automating routine informational tasks are conditional: net gains depend on managerial capacity to integrate AI outputs into systemic decision-making and on governance structures.
Conceptual conditional claim derived from integration of systems thinking and algorithmic optimisation literatures; no empirical measurement of productivity effects.
medium mixed Comparative analysis of strategic vs. computational thinking... firm-level productivity gains conditional on managerial integration capacity and...
Information-processing and optimisation tasks exhibit clear substitution pressure (are most automatable), whereas relational and normative tasks remain complementary to human labour.
Theory-driven claim combining managerial role analysis with general automation/complementarity logic from AI economics; conceptual prediction without empirical quantification.
medium mixed Comparative analysis of strategic vs. computational thinking... automation potential/substitution pressure vs complementarity of different task ...
Human–algorithm architectures can take three forms—augment (assist), displace (replace), or reconfigure (redistribute) cognitive tasks—and their design depends on organisational design, regulation, and decision-structure rules.
Taxonomic conceptualization derived from cross-framework analysis; prescriptive mapping rather than empirical classification; no sample.
medium mixed Comparative analysis of strategic vs. computational thinking... distribution of human–algorithm architectures (augment/displace/reconfigure) con...
Interpersonal coordination roles (disturbance handler, liaison, leader) retain strong human elements (influence, ethics, legitimacy) that are difficult to fully algorithmise.
Conceptual argument based on the nature of relational and legitimacy-based tasks within Mintzberg’s framework and limits of algorithmic substitution; theoretical only.
medium mixed Comparative analysis of strategic vs. computational thinking... degree of algorithmisability (substitutability) of interpersonal coordination ta...
Entrepreneurial and disturbance-handling roles become hybrid decision zones requiring integrated strategic and computational reasoning (modelling, simulation, anomaly detection plus contextual interpretation and values-based trade-offs).
Analytical synthesis of role demands and computational affordances; cross-framework analysis producing a hybrid strategic–computational characterization; no primary data.
medium mixed Comparative analysis of strategic vs. computational thinking... hybridity of decision processes in entrepreneurial and disturbance-handler roles...
Roles that rely on relational intelligence, ethical judgement, and influence (leader, liaison, figurehead, negotiator) remain primarily strategic but are increasingly supported by predictive and diagnostic analytics.
Role-specific effects derived from cross-framework conceptual mapping (Mintzberg roles × computational thinking); theoretical argumentation rather than empirical measurement.
medium mixed Comparative analysis of strategic vs. computational thinking... degree of strategic primacy vs algorithmic support for relational/ethical manage...
AI systematically reconfigures managerial work by augmenting, displacing, or reconfiguring cognitive tasks across Mintzberg’s ten managerial roles.
Conceptual synthesis and comparative role mapping integrating Mintzberg’s ten managerial roles with Senge’s Five Disciplines and computational thinking; theoretical analysis only (no primary empirical data; no sample).
medium mixed Comparative analysis of strategic vs. computational thinking... pattern of task reconfiguration across Mintzberg's ten managerial roles (augment...
Commercial platforms' incentives may not align with public-interest verification, so economic policies (transparency mandates, data portability, competition policy) can reshape incentives and improve information ecosystems.
Policy implication drawn from the study's analysis of platform governance and incentive misalignment, supported by interviews and documents discussing platform interactions.
medium mixed Fact-Checking Platforms in the Middle East: A Comparative St... alignment of platform incentives with public-interest verification
Platforms selectively adopt automated tools for triage, detection, and monitoring while keeping human judgment central to verification.
Interviews and workflow analyses indicating selective automation (for triage/monitoring) combined with human-led verification steps.
medium mixed Fact-Checking Platforms in the Middle East: A Comparative St... degree of automation in verification workflows and reliance on human judgment
Each platform (Akeed, Teyit, Factnameh) adapts its scope and tactics according to national constraints.
Platform-level descriptions derived from interviews with staff/editors and analysis of platform outputs and workflows for each of the three organizations.
medium mixed Fact-Checking Platforms in the Middle East: A Comparative St... scope of investigation and tactical choices
Fact-checking platforms in Jordan (Akeed), Turkey (Teyit), and Iran (Factnameh) face similar operational constraints—censorship, limited access to data, and difficulties engaging audiences—but respond with different strategies shaped by local politics.
Comparative interpretive analysis based on document analysis of platform outputs/guidelines and semi-structured interviews with staff, editors, and stakeholders from the three platforms (Akeed, Teyit, Factnameh).
medium mixed Fact-Checking Platforms in the Middle East: A Comparative St... operational constraints (censorship, data access, audience engagement) and adapt...
Better aligned systems can enhance productivity and decision quality, but misaligned systems can displace or harm workers unevenly; justice‑oriented deployment and active redistribution/retraining policies are needed to manage distributional impacts.
Argument synthesizing literature on technology's labor effects and distributive justice; the paper does not present original empirical labor-market analysis.
medium mixed LLM Alignment should go beyond Harmlessness–Helpfulness and ... productivity/decision quality improvements and differential labor displacement o...
Firms face tradeoffs between customization (to capture users) and pluralism (serving diverse values); market competition may either improve or degrade alignment depending on incentives.
Conceptual economic analysis and literature synthesis on market incentives and product differentiation; presented as theorized tradeoffs rather than empirically resolved.
medium mixed LLM Alignment should go beyond Harmlessness–Helpfulness and ... market-level alignment quality under differing competitive incentive structures
Operational choices (data selection, reward modeling, deployment constraints) are strategic decisions by firms balancing cost, speed to market, and risk, and these choices materially affect alignment outcomes.
Analytical argument supported by examples and literature on product development tradeoffs; no new firm‑level empirical analysis is provided.
medium mixed LLM Alignment should go beyond Harmlessness–Helpfulness and ... alignment outcomes as a function of firm operational choices (e.g., data curatio...