The Commonplace

Evidence (2954 claims)

Adoption (5126 claims)
Productivity (4409 claims)
Governance (4049 claims)
Human-AI Collaboration (2954 claims)
Labor Markets (2432 claims)
Org Design (2273 claims)
Innovation (2215 claims)
Skills & Training (1902 claims)
Inequality (1286 claims)

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome                   Positive  Negative  Mixed  Null  Total
Other                          369       105     58   432    972
Governance & Regulation        365       171    113    54    713
Research Productivity          229        95     33   294    655
Organizational Efficiency      354        82     58    34    531
Technology Adoption Rate       277       115     63    27    486
Firm Productivity              273        33     68    10    389
AI Safety & Ethics             112       177     43    24    358
Output Quality                 228        61     23    25    337
Market Structure               105       118     81    14    323
Decision Quality               154        68     33    17    275
Employment Level                68        32     74     8    184
Fiscal & Macroeconomic          74        52     32    21    183
Skill Acquisition               85        31     38     9    163
Firm Revenue                    96        30     22     –    148
Innovation Output              100        11     20    11    143
Consumer Welfare                66        29     35     7    137
Regulatory Compliance           51        61     13     3    128
Inequality Measures             24        66     31     4    125
Task Allocation                 64         6     28     6    104
Error Rate                      42        47      6     –     95
Training Effectiveness          55        12     10    16     93
Worker Satisfaction             42        32     11     6     91
Task Completion Time            71         5      3     1     80
Wages & Compensation            38        13     19     4     74
Team Performance                41         8     15     7     72
Hiring & Recruitment            39         4      6     3     52
Automation Exposure             17        15      9     5     46
Job Displacement                 5        28     12     –     45
Social Protection               18         8      6     1     33
Developer Productivity          25         1      2     1     29
Worker Turnover                 10        12      3     –     25
Creative Output                 15         5      3     1     24
Skill Obsolescence               3        18      2     –     23
Labor Share of Income            7         4      9     –     20
(– = no count shown in the source for that cell)
Active filter: Human-AI Collaboration
Labor complementarities with agentic AI will shift resources toward oversight, interpretation, and coordination roles rather than routine task execution.
Economic and organizational reasoning; literature synthesis on skill complementarities; no empirical labor-market data analyzed in the paper.
medium positive Visioning Human-Agentic AI Teaming: Continuity, Tension, and... allocation of labor hours/roles toward oversight and coordination tasks
Principal–agent contracting frameworks must be extended to account for evolving agent objectives and open-ended action spaces; contracts should be dynamic and include continuous renegotiation and monitoring.
Theoretical extension and recommendations based on economic reasoning; proposed formal models for future work.
medium positive Visioning Human-Agentic AI Teaming: Continuity, Tension, and... adequacy of static contracting frameworks vs. proposed dynamic contracts
Projection congruence — alignment of forecasts/plans across heterogeneous agents — becomes a central metric for assessing alignment in agentic human–AI teams.
Conceptual modeling and proposal in the paper; introduced as a new measurable construct (projection congruence indices) for future empirical work.
medium positive Visioning Human-Agentic AI Teaming: Continuity, Tension, and... degree of congruence in projected trajectories between human and AI teammates
The DAR framework reframes human oversight as a dynamic, auditable process whose micro-level mechanics and macro-level legitimacy have direct economic consequences for productivity, contracting, regulation, and welfare.
Synthesis claim based on the conceptual framework, formal modeling, derived propositions, and policy/economics implications sections. The claim is theoretical and synthesizing rather than empirically validated.
medium positive Human–AI Handovers: A Dynamic Authority Reversal Framework f... productivity_metrics; contracting_outcomes; regulatory_costs; welfare_measures (...
The Reversal Register will create granular, time-stamped administrative data valuable for structural estimation of trust, error externalities, and productivity comparisons between automation and human judgment.
Design claim linking register contents to potential econometric uses; no empirical data shown—claim about potential data utility.
medium positive Human–AI Handovers: A Dynamic Authority Reversal Framework f... data_granularity (timestamped_entries per decision); suitability_for_structural_...
Reversal Register logs can enable descriptive and causal analyses of handovers and support experimental/quasi-experimental tests (e.g., randomized hysteresis thresholds, A/B override policies).
Implied empirical strategies and instrumentation described; paper outlines how register data would be used for experiments and causal inference. No empirical implementation or sample reported.
medium positive Human–AI Handovers: A Dynamic Authority Reversal Framework f... feasibility_of_experiments; causal_identification_quality; availability_of_time-...
Operationalizing reversible AI leadership via DAR can preserve human accountability while enabling AI-led decisions where appropriate.
Conceptual argument supported by the combined use of authority states, Reversal Register logging, and override mechanisms; no field validation provided.
medium positive Human–AI Handovers: A Dynamic Authority Reversal Framework f... human_accountability_metrics (e.g., attribution clarity); reversibility_rate; co...
DAR incorporates stabilizing mechanisms—hysteresis bands and safe-exit timers—to reduce rapid oscillation of authority and improve stability of handovers.
Formal model components and design proposals (hysteresis and timers) with conceptual argument that these damp oscillation; no empirical validation reported.
medium positive Human–AI Handovers: A Dynamic Authority Reversal Framework f... oscillation_frequency / authority_state_stability; handover_rate; dwell_time
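The hysteresis band this claim describes can be sketched as a two-threshold switch: authority flips to the AI only when its confidence crosses an upper bound and reverts only below a lower bound, so fluctuations inside the band cannot cause oscillation. Thresholds and state names below are illustrative, and the safe-exit timer is omitted for brevity:

```python
from dataclasses import dataclass

@dataclass
class AuthorityController:
    """Toy hysteresis band for human/AI authority handover."""
    lower: float = 0.4   # revert to human only once confidence <= lower
    upper: float = 0.7   # hand over to AI only once confidence >= upper
    holder: str = "human"

    def update(self, ai_confidence: float) -> str:
        if self.holder == "human" and ai_confidence >= self.upper:
            self.holder = "ai"      # upward crossing: AI takes authority
        elif self.holder == "ai" and ai_confidence <= self.lower:
            self.holder = "human"   # downward crossing: human resumes
        return self.holder          # inside the band: no change
```

With a single threshold at, say, 0.55, confidence bouncing between 0.5 and 0.6 would flip authority on every update; the band absorbs that noise.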
Continuous human-in-the-loop oversight, monitoring, and retraining are required to maintain quality and prevent model drift.
Practitioner reports and conceptual literature synthesized in the review advocating monitoring and retraining; no longitudinal empirical study provided here.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... model performance over time, incidence of drift, quality-control metrics
Transparent disclosure to customers about AI involvement helps preserve trust.
Conceptual analyses and referenced empirical/regulatory discussions in the literature aggregated by the review; this paper presents no new experimental evidence on disclosure effects.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... consumer trust/satisfaction as a function of disclosure of AI use
Hybrid designs that automate low-risk, high-volume tasks while routing complex, judgment-sensitive cases to humans produce the best operational outcomes.
Inferred best-practice from aggregated empirical studies, industry examples, and conceptual reasoning; no controlled comparative trials presented in this review.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... operational outcomes including cost, resolution quality, customer trust, and esc...
Agent augmentation via suggested responses, summarization, and information retrieval improves agent productivity.
Aggregated evidence from prior empirical research and practitioner reports cited in the review; no new measurements or sample sizes presented here.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... agent productivity metrics (e.g., response time, task throughput, resolution rat...
Generative AI enables personalization at scale through automated tailoring of messaging and recommendations.
Qualitative synthesis of empirical studies and industry reports showing automated personalization use-cases; no systematic effect-size estimates or new quantitative data in this review.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... degree of message personalization/recommendation relevance and scale (number of ...
Generative AI provides 24/7 availability and cost-effective scaling of routine interactions.
Industry case examples and prior empirical studies aggregated in the review; no original data or quantified sample sizes provided in this paper.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... availability (hours of operation), cost per interaction, throughput for routine ...
Generative AI can materially transform customer service and strategic communication by enabling continuous automation, scalable hyper-personalization, and effective agent augmentation.
Narrative review: qualitative aggregation and synthesis of existing empirical studies, industry case examples, and conceptual analyses. No novel primary data or sample size; conclusion drawn from heterogeneous secondary sources and practitioner reports (not a systematic meta-analysis).
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... degree of automation, personalization scale, and agent productivity in customer ...
There is a need for standards around evaluation, bias mitigation, provenance, and accountability in AI-assisted ideation and design.
Policy recommendation motivated by documented biases, errors, and provenance issues in the reviewed studies; grounded in the synthesis's critique of existing practice.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... existence and adoption of evaluation/mitigation/provenance/accountability standa...
There will likely be complementarity-driven increases in demand for evaluative, integrative, and domain-expert roles (curators, synthesizers, implementation experts).
Inference from task-level studies and economic reasoning about complementarities between AI generative capability and human evaluative skills; empirical labor-market evidence is limited in the reviewed literature.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... employment demand for evaluative/integrative/domain-expert roles
Lower search and idea-generation costs enabled by LLMs may speed early-stage R&D and increase the gross flow of candidate innovations.
Theoretical economic interpretation supported by empirical findings of increased idea volumes in experimental/field studies summarized in the review; no long-run causal firm-level evidence presented.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... volume/rate of candidate ideas generated and pace of early-stage R&D activity
Generative AI accelerates early-stage hypothesis and prototype development by providing scaffolded prompts and procedural suggestions.
Applied case evidence and experimental studies summarized in the review showing reduced time or increased productivity in early-stage experimental/design tasks when using LLM assistance; no pooled effect size presented.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... time-to-hypothesis or prototype, number of prototype iterations in early-stage d...
Empirical studies document that AI-assisted tools can help break cognitive fixation and generate cross-domain analogies.
Cited experimental tasks and lab studies in the literature showing higher incidence of analogical or cross-domain suggestions from LLMs and improvements on fixation-related task metrics; heterogeneity across tasks and measures.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... frequency/quality of cross-domain analogies and fixation-related performance met...
Generative AI provides scaffolded, structured support that aids systematic hypothesis formation, prototyping steps, and decomposition of complex problems.
Review of design/ideation studies and applied case evidence where LLMs produced stepwise plans, decomposition prompts, or hypothesis scaffolds; evidence drawn from multiple short-term experimental and applied studies, sample sizes and exact designs vary by study.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... speed and/or quality of early-stage hypothesis generation and prototype developm...
Generative models rapidly produce many candidate ideas, analogies, and associative prompts that help overcome cognitive fixation.
Synthesis of experimental ideation and design studies reporting increases in number of ideas and examples of reduced fixation when participants used LLM outputs; heterogeneous sample sizes across cited studies (not reported in review).
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... idea quantity and measures of fixation (e.g., fixation errors, number of distinc...
Generative AI can raise per-worker productivity for tasks involving brainstorming, drafting, and prototyping, but realized gains depend on downstream filtering and implementation costs.
User studies showing higher output on specific tasks (brainstorming/drafting), combined with qualitative reports of filtering/implementation effort; many studies measure immediate task output but not net realized productivity after implementation.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... task output (ideas/drafts) per worker; downstream filtering effort; implemented ...
Generative AI can increase creative output in both lab and field tasks as judged by external raters.
Controlled experiments and field studies reporting higher judged creativity/novelty scores for AI-assisted outputs versus controls; judged creativity/novelty is typically assessed by human raters using rubric-based scoring.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... rated creativity/novelty scores; externally judged idea quality
AI assistance helps people overcome fixation and produces cross-domain analogies that they might not generate alone.
Experimental studies and qualitative analyses documenting reductions in fixation effects and increases in cross-domain analogical suggestions when participants use generative models.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... measures of fixation (e.g., repetition of prior solutions); count/quality of cro...
Generative AI supports systematic problem breakdown and early-stage prototyping, accelerating hypothesis generation and prototype development.
Field case studies of AI-supported prototyping and lab/user studies reporting reduced time-to-prototype and generated hypotheses; measures include time-to-prototype and user-reported usefulness.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... time-to-prototype; number/quality of generated hypotheses/prototypes; user-perce...
Generative AI boosts ideational fluency—the quantity and diversity of ideas produced in brainstorming tasks.
Controlled experiments and user studies measuring number and diversity of ideas with and without AI assistance; typical study designs compare participant idea counts/uniqueness across conditions (note: many studies use small or convenience samples).
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... number of ideas generated; diversity indices of ideas
When used as a 'cognitive co-pilot' that expands the solution space and challenges assumptions while humans curate and evaluate, generative AI generates economic value.
Inferred from experimental and field findings showing increased idea quantity/diversity and faster prototyping combined with qualitative studies showing human curation is needed; economic interpretation drawn from the review rather than direct macroeconomic measurement.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... idea space breadth; time-to-prototype; downstream implemented/valued ideas (larg...
Generative AI serves a dual cognitive role: (1) a high-volume catalyst for divergent idea generation and cross-domain analogy-making, and (2) a structured assistant for deconstructing complex problems and scaffolding hypotheses and prototypes.
Synthesis of controlled experiments, lab studies, field case studies, and qualitative analyses summarized in the review; evidence includes measures of idea fluency/diversity, examples of analogy production, and observations of AI-assisted problem decomposition in prototyping tasks. (Note: underlying studies are heterogeneous and often short-term or convenience samples.)
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... ideational fluency/diversity; incidence of cross-domain analogies; quality/speed...
Agent augmentation (drafting replies, summarizing histories, suggesting actions) raises frontline productivity and can improve response consistency.
Pilot deployments and internal A/B tests cited that measure time saved by agents and improvements in draft quality/consistency; mostly short-run and firm-specific reports.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... agent productivity (time per case saved), consistency of responses
Hyper-personalization at scale can increase relevance of responses and customer engagement when fed high-quality signals.
Case studies and pilot deployments that applied personalization signals (customer history, behavioral data) and reported improved relevance/engagement metrics; evidence conditional on availability and quality of signals and largely non-randomized.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... response relevance; customer engagement (clicks, session length, follow-up conta...
24/7 automation reduces routine handling time and operational costs for simple, repetitive queries.
Operational deployments and pilot studies reporting reduced handling times and cost-per-interaction for routine queries; some vendor-supplied before/after or A/B comparisons, but heterogeneous measurements and limited randomized evidence.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... routine handling time; operational cost per interaction
Perceptions—specifically trust and perceived accuracy—are central frictions in AI adoption within finance; interventions that raise perceived and demonstrable accuracy (e.g., explainability, transparent validation) will increase uptake and productivity gains.
Study finds correlations between perceptions and adoption/productivity proxies from questionnaire and performance data; authors combine these empirical associations with qualitative insights to recommend explainability/validation as interventions. Evidence is correlational and inferential (causal impact of interventions not estimated in summary).
medium positive Human-AI Synergy in Financial Decision-Making: Exploring Tru... AI uptake/adoption; productivity gains
Higher perceived accuracy of AI outputs is associated with increased perceived utility of AI for forecasting and risk-management tasks.
Survey items measuring perceived accuracy and perceived utility for specific tasks (forecasting, risk management) and quantitative association analysis; supported by interview excerpts illustrating task-specific utility; exact effect sizes and sample counts not provided in summary.
medium positive Human-AI Synergy in Financial Decision-Making: Exploring Tru... perceived utility for forecasting and risk-management tasks
Greater trust in AI correlates with greater willingness to adopt AI tools and to incorporate AI recommendations into decisions.
Correlational findings from structured questionnaires linking measures of trust with adoption intentions and self-reported incorporation of AI recommendations; supported by qualitative interview evidence; sample across multinational financial institutions (size not specified).
medium positive Human-AI Synergy in Financial Decision-Making: Exploring Tru... willingness to adopt AI tools; incorporation of AI recommendations into decision...
When trust and accuracy are high, human–AI collaboration improves organizational agility, enabling faster, data-driven strategic pivots and better risk management.
Quantitative analysis estimating relationships between perceived trust/accuracy and organizational agility indicators (speed of strategic pivots, risk-management metrics) augmented by interview accounts describing faster responses; sample: finance professionals across multinational financial institutions (sample size and exact agility metrics not specified).
medium positive Human-AI Synergy in Financial Decision-Making: Exploring Tru... organizational agility (speed of strategic pivots, risk management performance)
Perceived accuracy of AI-generated insights increases decision confidence and perceived utility for forecasting and risk management.
Quantitative questionnaire measures of perceived accuracy correlated with self-reported decision confidence and perceived utility for forecasting/risk management, with qualitative interviews used to explain mechanisms; sample: finance professionals across multinational financial institutions (sample size not specified).
medium positive Human-AI Synergy in Financial Decision-Making: Exploring Tru... decision confidence; perceived utility for forecasting and risk management
Perceived trust in AI tools is a key driver of finance professionals' willingness to use AI and their confidence in AI-assisted decisions.
Mixed-methods: quantitative analysis of structured questionnaires measuring perceived trust together with measures of willingness to use AI and decision confidence, supplemented by semi-structured interview evidence; sample described as finance professionals across multinational financial institutions (sample size not specified in summary).
medium positive Human-AI Synergy in Financial Decision-Making: Exploring Tru... willingness to use AI tools; confidence in AI-assisted decision-making
The Adaptive Agent Routing and Coordination (AARC) module performs intent recognition with confidence scoring, triggers proactive clarification dialogues on low confidence, and provides a planning feedback loop to refine plans during execution.
System design description: AARC includes intent classifier confidence thresholds, clarification dialogue behavior, and a feedback loop. Its role is supported by routing/coordination performance improvements and ablation experiments, but the summary lacks quantitative measures of clarification frequency or confidence calibration.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Agent Routing Success Rate; frequency and effectiveness of clarifications; plan ...
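The AARC behavior described above reduces to confidence-thresholded routing: pick the best-scoring intent, but fall back to a clarification dialogue when the top score is too low. A minimal sketch (intent names and the threshold value are illustrative, not from the paper):

```python
def route_intent(scores: dict[str, float],
                 threshold: float = 0.6) -> tuple[str, float]:
    """Route to the highest-scoring intent; below `threshold`,
    trigger a proactive clarification dialogue instead."""
    intent, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return "clarify", confidence
    return intent, confidence
```

The planning feedback loop would then feed execution outcomes back into these scores, refining routing over the course of a task.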
The Multi-Modal Contextual Memory (MMCM) stores multi-modal (visual, linguistic, temporal) contextual memory units in a relational graph and uses an advanced retrieval mechanism with temporal decay weighting to support multi-hop reasoning.
System design and implementation description: MMCM encodes modality, timestamp, and relational links; retrieval uses similarity plus temporal decay. Its effectiveness for multi-hop QA is supported by the reported improvement in Knowledge Base Response Validity and ablation results, though quantitative retrieval performance metrics are not provided in the summary.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Multi-hop question-answering validity (Knowledge Base Response Total Validity); ...
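The temporal decay weighting in MMCM's retrieval can be sketched as similarity multiplied by an exponential recency factor, so that of two equally similar memory units the more recent one scores higher. The half-life form below is one common instantiation, assumed here rather than taken from the paper:

```python
import math

def retrieval_score(similarity: float, age_seconds: float,
                    half_life_seconds: float = 3600.0) -> float:
    """Weight a memory unit's similarity score by exponential
    temporal decay (score halves every `half_life_seconds`)."""
    decay = math.exp(-math.log(2) * age_seconds / half_life_seconds)
    return similarity * decay
```

Ranking candidate memory units by this score lets multi-hop reasoning prefer fresh context without discarding older but highly relevant units outright.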
The Semantic-Enhanced Task Planning (SETP) module enriches LLM-generated plans with object-relationship graphs, hierarchical task decomposition, and implicit physical/affordance constraints to improve plan plausibility.
System design description: SETP augments LLM plans with semantic object graphs and hierarchy enforcement. Its contribution is supported indirectly by ablation results showing performance drop when SETP is removed; direct quantitative attribution to specific SETP mechanisms not detailed in the summary.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Plan plausibility/validity and Task Planning Accuracy
An ablation study shows that removing any of the three core modules (SETP, MMCM, AARC) degrades CRAEA's performance; each module contributes meaningfully to overall gains.
Ablation experiments reported in the paper where SETP, MMCM, and AARC were each removed in turn and performance degradation was observed across metrics. The summary describes the qualitative outcome but omits numerical ablation results and sample sizes.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Change in performance metrics (Task Planning Accuracy, KB Response Validity, Rou...
Human evaluators rate CRAEA higher on perceived coherence, naturalness, and user satisfaction compared to baselines.
Subjective human evaluation studies reported in the paper—comparative ratings on coherence, naturalness, and satisfaction. The summary does not specify number of human raters, rating scales, or statistical significance.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Human subjective ratings: coherence, naturalness, user satisfaction
CRAEA improves Agent Routing and Coordination success relative to baseline agents.
Objective metric 'Agent Routing Success Rate' measured in simulation; CRAEA compared to baseline LLM-driven agents (e.g., memoryless or statically routed controllers) with reported higher routing success. Exact task counts and effect sizes not included in the summary.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Agent Routing Success Rate
CRAEA yields higher Knowledge Base Response Total Validity (improved multi-hop question answering from memory) than baselines.
Simulated multi-hop QA evaluations using the system's memory; comparisons to baseline agents reported improved 'Knowledge Base Response Total Validity'. Experimental details (number of QA items, statistical tests) not provided in the summary.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Knowledge Base Response Total Validity (multi-hop QA accuracy/validity)
CRAEA outperforms baseline LLM-driven embodied agents on Task Planning Accuracy in simulated household tidying tasks.
Objective metric 'Task Planning Accuracy' measured in simulation and compared against baseline LLM-driven agents lacking one or more CRAEA components. The summary reports consistent improvements but does not provide sample size or effect magnitude.
CRAEA substantially improves home-robot performance on long-horizon, high-level natural language instructions by combining semantic task planning, multi-modal contextual memory, and adaptive routing/coordination.
Experimental evaluation in a simulated household tidying environment comparing CRAEA to baseline LLM-driven embodied agents; reported consistent improvements across multiple objective metrics (Task Planning Accuracy, Knowledge Base Response Validity, Agent Routing Success Rate). Specific task counts, effect sizes, and statistical details not provided in the summary.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Overall home-robot performance on long-horizon, high-level NL instructions (aggr...
With appropriate policies and ecosystem building, AI offers strategic opportunities for 'leapfrogging' in service delivery (for example, healthcare diagnostics and precision agriculture) that can raise productivity and welfare.
Synthesis of case studies and prior empirical work showing promising AI applications; the assertion remains inferential and the paper calls for pilots and empirical validation.
medium positive Towards Responsible Artificial Intelligence Adoption: Emergi... service delivery performance (diagnostic rates, agricultural yields), productivi...
Investing in human capital—technical skills, digital literacy, and institutional capacity—is critical for African actors to capture value from AI and to design culturally aligned systems.
Policy and academic literature synthesis linking human capital investment to technology adoption and innovation; no primary training program evaluation in the paper.
medium positive Towards Responsible Artificial Intelligence Adoption: Emergi... number of trained AI professionals, digital literacy rates, local innovation out...
Context-sensitive interventions—stronger governance, capacity building, multi-stakeholder collaboration, and locally tailored strategies—are necessary to steer AI toward inclusive outcomes in Africa.
Policy and literature synthesis recommending interventions; recommendations are normative and inferential without empirical pilots in this paper.
medium positive Towards Responsible Artificial Intelligence Adoption: Emergi... local capacity metrics (skills, institutions), stakeholder participation rates, ...