The Commonplace

Evidence (5157 claims)

Adoption: 7395 claims
Productivity: 6507 claims
Governance: 5877 claims
Human-AI Collaboration: 5157 claims
Innovation: 3492 claims
Org Design: 3470 claims
Labor Markets: 3224 claims
Skills & Training: 2608 claims
Inequality: 1835 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome | Positive | Negative | Mixed | Null | Total
Other | 609 | 159 | 77 | 736 | 1615
Governance & Regulation | 664 | 329 | 160 | 99 | 1273
Organizational Efficiency | 624 | 143 | 105 | 70 | 949
Technology Adoption Rate | 502 | 176 | 98 | 78 | 861
Research Productivity | 348 | 109 | 48 | 322 | 836
Output Quality | 391 | 120 | 44 | 40 | 595
Firm Productivity | 385 | 46 | 85 | 17 | 539
Decision Quality | 275 | 143 | 62 | 34 | 521
AI Safety & Ethics | 183 | 241 | 59 | 30 | 517
Market Structure | 152 | 154 | 109 | 20 | 440
Task Allocation | 158 | 50 | 56 | 26 | 295
Innovation Output | 178 | 23 | 38 | 17 | 257
Skill Acquisition | 137 | 52 | 50 | 13 | 252
Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252
Employment Level | 93 | 46 | 96 | 12 | 249
Firm Revenue | 130 | 43 | 26 | 3 | 202
Consumer Welfare | 99 | 51 | 40 | 11 | 201
Inequality Measures | 36 | 105 | 40 | 6 | 187
Task Completion Time | 134 | 18 | 6 | 5 | 163
Worker Satisfaction | 79 | 54 | 16 | 11 | 160
Error Rate | 64 | 78 | 8 | 1 | 151
Regulatory Compliance | 69 | 64 | 14 | 3 | 150
Training Effectiveness | 81 | 15 | 13 | 18 | 129
Wages & Compensation | 70 | 25 | 22 | 6 | 123
Team Performance | 74 | 16 | 21 | 9 | 121
Automation Exposure | 41 | 48 | 19 | 9 | 120
Job Displacement | 11 | 71 | 16 | 1 | 99
Developer Productivity | 71 | 14 | 9 | 3 | 98
Hiring & Recruitment | 49 | 7 | 8 | 3 | 67
Social Protection | 26 | 14 | 8 | 2 | 50
Creative Output | 26 | 14 | 6 | 2 | 49
Skill Obsolescence | 5 | 37 | 5 | 1 | 48
Labor Share of Income | 12 | 13 | 12 | - | 37
Worker Turnover | 11 | 12 | 3 | - | 26
Industry | 1 | - | - | - | 1
Filtered by topic: Human-AI Collaboration
The AR-MLLM system achieved high measurement/feature-activity accuracy (participants performed correct measurements under AR-MLLM guidance).
Measurement/feature activity correctness was measured in the CMM case study; authors report high measurement accuracy under the AR-MLLM condition. (Exact rates and sample size not provided in the summary.)
medium positive Augmented Reality-Based Training System Using Multimodal Lan... Measurement/feature activity accuracy (correctness of performed measurements)
The AR-MLLM system achieved high task-recognition accuracy (the system correctly identified the current task/step).
Measured task recognition accuracy in the CMM case study; authors report 'high' recognition accuracy for the system. (Exact numeric accuracy and sample size not specified in the summary.)
medium positive Augmented Reality-Based Training System Using Multimodal Lan... Task recognition accuracy (system correctly identifying current task/step)
An AR + multimodal LLM (AR-MLLM) training system can substantially improve training and execution in complex machine operations (demonstrated on a Coordinate Measuring Machine).
Case-study experiment in the paper where human participants performed CMM measurement tasks both with and without the AR-MLLM system; metrics collected included task recognition accuracy, measurement activity correctness, task completion time, and subjective workload/usability. (Participant sample size not specified in the provided summary.)
medium positive Augmented Reality-Based Training System Using Multimodal Lan... Overall training and execution performance (aggregated: task accuracy, task comp...
Labor complementarities with agentic AI will shift resources toward oversight, interpretation, and coordination roles rather than routine task execution.
Economic and organizational reasoning; literature synthesis on skill complementarities; no empirical labor-market data analyzed in the paper.
medium positive Visioning Human-Agentic AI Teaming: Continuity, Tension, and... allocation of labor hours/roles toward oversight and coordination tasks
Principal–agent contracting frameworks must be extended to account for evolving agent objectives and open-ended action spaces; contracts should be dynamic and include continuous renegotiation and monitoring.
Theoretical extension and recommendations based on economic reasoning; proposed formal models for future work.
medium positive Visioning Human-Agentic AI Teaming: Continuity, Tension, and... adequacy of static contracting frameworks vs. proposed dynamic contracts
Projection congruence — alignment of forecasts/plans across heterogeneous agents — becomes a central metric for assessing alignment in agentic human–AI teams.
Conceptual modeling and proposal in the paper; introduced as a new measurable construct (projection congruence indices) for future empirical work.
medium positive Visioning Human-Agentic AI Teaming: Continuity, Tension, and... degree of congruence in projected trajectories between human and AI teammates
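The proposed projection-congruence construct could be operationalized in many ways. One minimal, hypothetical sketch scores alignment between a human teammate's and an AI agent's projected trajectories as cosine similarity of their forecast vectors; the metric choice and function name are assumptions, not the paper's definition:

```python
import math

def projection_congruence(human_forecast, ai_forecast):
    """Hypothetical congruence index: cosine similarity between a human's
    and an AI agent's projections, each encoded as a numeric vector
    (e.g., forecast values over future time steps).
    Returns a value in [-1, 1]; 1.0 means perfectly aligned projections."""
    dot = sum(h * a for h, a in zip(human_forecast, ai_forecast))
    norm_h = math.sqrt(sum(h * h for h in human_forecast))
    norm_a = math.sqrt(sum(a * a for a in ai_forecast))
    if norm_h == 0 or norm_a == 0:
        return 0.0  # one side offers no informative projection
    return dot / (norm_h * norm_a)

# Identical projections are maximally congruent; orthogonal ones score 0.
print(projection_congruence([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
print(projection_congruence([1.0, 0.0], [0.0, 1.0]))
```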
The DAR framework reframes human oversight as a dynamic, auditable process whose micro-level mechanics and macro-level legitimacy have direct economic consequences for productivity, contracting, regulation, and welfare.
Synthesis claim based on the conceptual framework, formal modeling, derived propositions, and policy/economics implications sections. The claim is theoretical and synthesizing rather than empirically validated.
medium positive Human–AI Handovers: A Dynamic Authority Reversal Framework f... productivity_metrics; contracting_outcomes; regulatory_costs; welfare_measures (...
The Reversal Register will create granular, time-stamped administrative data valuable for structural estimation of trust, error externalities, and productivity comparisons between automation and human judgment.
Design claim linking register contents to potential econometric uses; no empirical data shown—claim about potential data utility.
medium positive Human–AI Handovers: A Dynamic Authority Reversal Framework f... data_granularity (timestamped_entries per decision); suitability_for_structural_...
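As an illustration of the kind of granular, time-stamped rows such a register might hold, the sketch below defines an append-only log; the schema, field names, and state labels are hypothetical, not the paper's specification:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReversalRegisterEntry:
    """One illustrative time-stamped register row (field names assumed)."""
    decision_id: str
    from_authority: str          # e.g. "AI_LEAD" or "HUMAN_LEAD"
    to_authority: str
    trigger: str                 # why authority reversed: override, timeout, ...
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ReversalRegister:
    """Append-only log; granular rows like these are what would feed the
    structural estimation the claim above describes."""
    def __init__(self):
        self._entries = []

    def log(self, entry: ReversalRegisterEntry) -> None:
        self._entries.append(entry)

    def handover_count(self) -> int:
        return len(self._entries)

reg = ReversalRegister()
reg.log(ReversalRegisterEntry("loan-42", "AI_LEAD", "HUMAN_LEAD", "human_override"))
print(reg.handover_count())  # 1
```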
Reversal Register logs can enable descriptive and causal analyses of handovers and support experimental/quasi-experimental tests (e.g., randomized hysteresis thresholds, A/B override policies).
Implied empirical strategies and instrumentation described; paper outlines how register data would be used for experiments and causal inference. No empirical implementation or sample reported.
medium positive Human–AI Handovers: A Dynamic Authority Reversal Framework f... feasibility_of_experiments; causal_identification_quality; availability_of_time-...
Operationalizing reversible AI leadership via DAR can preserve human accountability while enabling AI-led decisions where appropriate.
Conceptual argument supported by the combined use of authority states, Reversal Register logging, and override mechanisms; no field validation provided.
medium positive Human–AI Handovers: A Dynamic Authority Reversal Framework f... human_accountability_metrics (e.g., attribution clarity); reversibility_rate; co...
DAR incorporates stabilizing mechanisms—hysteresis bands and safe-exit timers—to reduce rapid oscillation of authority and improve stability of handovers.
Formal model components and design proposals (hysteresis and timers) with conceptual argument that these damp oscillation; no empirical validation reported.
medium positive Human–AI Handovers: A Dynamic Authority Reversal Framework f... oscillation_frequency / authority_state_stability; handover_rate; dwell_time
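A minimal sketch of how a hysteresis band plus a dwell-time (safe-exit) timer damps authority oscillation, assuming a scalar risk signal in [0, 1]; the thresholds, state names, and timer semantics are illustrative, not the DAR formalization:

```python
class AuthorityController:
    """Minimal sketch of hysteresis-banded authority switching over a
    scalar risk signal; all parameter values are illustrative."""
    def __init__(self, hand_to_human=0.8, hand_to_ai=0.4, min_dwell_steps=3):
        # hand_to_human > hand_to_ai creates the hysteresis band: risk must
        # fall well below the escalation point before the AI regains authority.
        self.hand_to_human = hand_to_human
        self.hand_to_ai = hand_to_ai
        self.min_dwell_steps = min_dwell_steps  # crude safe-exit timer
        self.state = "AI"
        self.dwell = min_dwell_steps            # allow an immediate first switch

    def step(self, risk: float) -> str:
        self.dwell += 1
        if self.dwell < self.min_dwell_steps:
            return self.state                   # too soon to reverse again
        if self.state == "AI" and risk >= self.hand_to_human:
            self.state, self.dwell = "HUMAN", 0
        elif self.state == "HUMAN" and risk <= self.hand_to_ai:
            self.state, self.dwell = "AI", 0
        return self.state

ctrl = AuthorityController()
# A risk spike hands authority to the human; mid-band values then keep the
# state stable instead of flapping, and only a sustained low risk hands back.
states = [ctrl.step(r) for r in (0.85, 0.6, 0.6, 0.3)]
print(states)  # ['HUMAN', 'HUMAN', 'HUMAN', 'AI']
```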
Continuous human-in-the-loop oversight, monitoring, and retraining are required to maintain quality and prevent model drift.
Practitioner reports and conceptual literature synthesized in the review advocating monitoring and retraining; no longitudinal empirical study provided here.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... model performance over time, incidence of drift, quality-control metrics
Transparent disclosure to customers about AI involvement helps preserve trust.
Conceptual analyses and referenced empirical/regulatory discussions in the literature aggregated by the review; this paper presents no new experimental evidence on disclosure effects.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... consumer trust/satisfaction as a function of disclosure of AI use
Hybrid designs that automate low-risk, high-volume tasks while routing complex, judgment-sensitive cases to humans produce the best operational outcomes.
Inferred best-practice from aggregated empirical studies, industry examples, and conceptual reasoning; no controlled comparative trials presented in this review.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... operational outcomes including cost, resolution quality, customer trust, and esc...
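A toy triage rule in the spirit of this hybrid design; the thresholds, tier labels, and middle-ground routing category are assumptions, not a validated policy:

```python
def route_case(risk: float, volume_tier: str) -> str:
    """Automate low-risk, high-volume queries; escalate judgment-sensitive
    cases to humans; let AI draft with human review in between.
    All cut-offs here are illustrative."""
    if risk < 0.3 and volume_tier == "high":
        return "automate"
    if risk >= 0.7:
        return "human"
    return "ai_draft_human_review"  # middle ground: AI drafts, human approves

print(route_case(0.1, "high"))  # automate
print(route_case(0.9, "low"))   # human
```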
Agent augmentation via suggested responses, summarization, and information retrieval improves agent productivity.
Aggregated evidence from prior empirical research and practitioner reports cited in the review; no new measurements or sample sizes presented here.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... agent productivity metrics (e.g., response time, task throughput, resolution rat...
Generative AI enables personalization at scale through automated tailoring of messaging and recommendations.
Qualitative synthesis of empirical studies and industry reports showing automated personalization use-cases; no systematic effect-size estimates or new quantitative data in this review.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... degree of message personalization/recommendation relevance and scale (number of ...
Generative AI provides 24/7 availability and cost-effective scaling of routine interactions.
Industry case examples and prior empirical studies aggregated in the review; no original data or quantified sample sizes provided in this paper.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... availability (hours of operation), cost per interaction, throughput for routine ...
Generative AI can materially transform customer service and strategic communication by enabling continuous automation, scalable hyper-personalization, and effective agent augmentation.
Narrative review: qualitative aggregation and synthesis of existing empirical studies, industry case examples, and conceptual analyses. No novel primary data or sample size; conclusion drawn from heterogeneous secondary sources and practitioner reports (not a systematic meta-analysis).
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... degree of automation, personalization scale, and agent productivity in customer ...
There is a need for standards around evaluation, bias mitigation, provenance, and accountability in AI-assisted ideation and design.
Policy recommendation motivated by documented biases, errors, and provenance issues in the reviewed studies; grounded in the synthesis's critique of existing practice.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... existence and adoption of evaluation/mitigation/provenance/accountability standa...
There will likely be complementarity-driven increases in demand for evaluative, integrative, and domain-expert roles (curators, synthesizers, implementation experts).
Inference from task-level studies and economic reasoning about complementarities between AI generative capability and human evaluative skills; empirical labor-market evidence is limited in the reviewed literature.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... employment demand for evaluative/integrative/domain-expert roles
Lower search and idea-generation costs enabled by LLMs may speed early-stage R&D and increase the gross flow of candidate innovations.
Theoretical economic interpretation supported by empirical findings of increased idea volumes in experimental/field studies summarized in the review; no long-run causal firm-level evidence presented.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... volume/rate of candidate ideas generated and pace of early-stage R&D activity
Generative AI accelerates early-stage hypothesis and prototype development by providing scaffolded prompts and procedural suggestions.
Applied case evidence and experimental studies summarized in the review showing reduced time or increased productivity in early-stage experimental/design tasks when using LLM assistance; no pooled effect size presented.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... time-to-hypothesis or prototype, number of prototype iterations in early-stage d...
Empirical studies document that AI-assisted tools can help break cognitive fixation and generate cross-domain analogies.
Cited experimental tasks and lab studies in the literature showing higher incidence of analogical or cross-domain suggestions from LLMs and improvements on fixation-related task metrics; heterogeneity across tasks and measures.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... frequency/quality of cross-domain analogies and fixation-related performance met...
Generative AI provides scaffolded, structured support that aids systematic hypothesis formation, prototyping steps, and decomposition of complex problems.
Review of design/ideation studies and applied case evidence where LLMs produced stepwise plans, decomposition prompts, or hypothesis scaffolds; evidence drawn from multiple short-term experimental and applied studies, sample sizes and exact designs vary by study.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... speed and/or quality of early-stage hypothesis generation and prototype developm...
Generative models rapidly produce many candidate ideas, analogies, and associative prompts that help overcome cognitive fixation.
Synthesis of experimental ideation and design studies reporting increases in number of ideas and examples of reduced fixation when participants used LLM outputs; heterogeneous sample sizes across cited studies (not reported in review).
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... idea quantity and measures of fixation (e.g., fixation errors, number of distinc...
Generative AI can raise per-worker productivity for tasks involving brainstorming, drafting, and prototyping, but realized gains depend on downstream filtering and implementation costs.
User studies showing higher output on specific tasks (brainstorming/drafting), combined with qualitative reports of filtering/implementation effort; many studies measure immediate task output but not net realized productivity after implementation.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... task output (ideas/drafts) per worker; downstream filtering effort; implemented ...
Generative AI can increase creative output in both lab and field tasks as judged by external raters.
Controlled experiments and field studies reporting higher judged creativity/novelty scores for AI-assisted outputs versus controls; judged creativity/novelty is typically assessed by human raters using rubric-based scoring.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... rated creativity/novelty scores; externally judged idea quality
AI assistance helps people overcome fixation and produces cross-domain analogies that they might not generate alone.
Experimental studies and qualitative analyses documenting reductions in fixation effects and increases in cross-domain analogical suggestions when participants use generative models.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... measures of fixation (e.g., repetition of prior solutions); count/quality of cro...
Generative AI supports systematic problem breakdown and early-stage prototyping, accelerating hypothesis generation and prototype development.
Field case studies of AI-supported prototyping and lab/user studies reporting reduced time-to-prototype and generated hypotheses; measures include time-to-prototype and user-reported usefulness.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... time-to-prototype; number/quality of generated hypotheses/prototypes; user-perce...
Generative AI boosts ideational fluency—the quantity and diversity of ideas produced in brainstorming tasks.
Controlled experiments and user studies measuring number and diversity of ideas with and without AI assistance; typical study designs compare participant idea counts/uniqueness across conditions (note: many studies use small or convenience samples).
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... number of ideas generated; diversity indices of ideas
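One common way to compute such a diversity index is mean pairwise Jaccard distance over the ideas' word sets. The sketch below is illustrative; the metric and tokenization are not the cited studies' specific instruments:

```python
def idea_diversity(ideas):
    """Mean pairwise Jaccard distance between ideas' word sets:
    0.0 = all identical, approaching 1.0 = fully distinct."""
    sets = [set(i.lower().split()) for i in ideas]
    if len(sets) < 2:
        return 0.0
    dists, pairs = 0.0, 0
    for a in range(len(sets)):
        for b in range(a + 1, len(sets)):
            inter = len(sets[a] & sets[b])
            union = len(sets[a] | sets[b])
            dists += 1.0 - (inter / union if union else 0.0)
            pairs += 1
    return dists / pairs

print(idea_diversity(["use solar panels", "use solar panels"]))    # 0.0
print(idea_diversity(["use solar panels", "plant urban gardens"]))  # 1.0
```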
When used as a 'cognitive co-pilot' that expands the solution space and challenges assumptions while humans curate and evaluate, generative AI creates economic value.
Inferred from experimental and field findings showing increased idea quantity/diversity and faster prototyping combined with qualitative studies showing human curation is needed; economic interpretation drawn from the review rather than direct macroeconomic measurement.
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... idea space breadth; time-to-prototype; downstream implemented/valued ideas (larg...
Generative AI serves a dual cognitive role: (1) a high-volume catalyst for divergent idea generation and cross-domain analogy-making, and (2) a structured assistant for deconstructing complex problems and scaffolding hypotheses and prototypes.
Synthesis of controlled experiments, lab studies, field case studies, and qualitative analyses summarized in the review; evidence includes measures of idea fluency/diversity, examples of analogy production, and observations of AI-assisted problem decomposition in prototyping tasks. (Note: underlying studies are heterogeneous and often short-term or convenience samples.)
medium positive ChatGPT as an Innovative Tool for Idea Generation and Proble... ideational fluency/diversity; incidence of cross-domain analogies; quality/speed...
Agent augmentation (drafting replies, summarizing histories, suggesting actions) raises frontline productivity and can improve response consistency.
Pilot deployments and internal A/B tests cited that measure time saved by agents and improvements in draft quality/consistency; mostly short-run and firm-specific reports.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... agent productivity (time per case saved), consistency of responses
Hyper-personalization at scale can increase relevance of responses and customer engagement when fed high-quality signals.
Case studies and pilot deployments that applied personalization signals (customer history, behavioral data) and reported improved relevance/engagement metrics; evidence conditional on availability and quality of signals and largely non-randomized.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... response relevance; customer engagement (clicks, session length, follow-up conta...
24/7 automation reduces routine handling time and operational costs for simple, repetitive queries.
Operational deployments and pilot studies reporting reduced handling times and cost-per-interaction for routine queries; some vendor-supplied before/after or A/B comparisons, but heterogeneous measurements and limited randomized evidence.
medium positive The Effectiveness of ChatGPT in Customer Service and Communi... routine handling time; operational cost per interaction
Perceptions—specifically trust and perceived accuracy—are central frictions in AI adoption within finance; interventions that raise perceived and demonstrable accuracy (e.g., explainability, transparent validation) will increase uptake and productivity gains.
Study finds correlations between perceptions and adoption/productivity proxies from questionnaire and performance data; authors combine these empirical associations with qualitative insights to recommend explainability/validation as interventions. Evidence is correlational and inferential (causal impact of interventions not estimated in summary).
medium positive Human-AI Synergy in Financial Decision-Making: Exploring Tru... AI uptake/adoption; productivity gains
Higher perceived accuracy of AI outputs is associated with increased perceived utility of AI for forecasting and risk-management tasks.
Survey items measuring perceived accuracy and perceived utility for specific tasks (forecasting, risk management) and quantitative association analysis; supported by interview excerpts illustrating task-specific utility; exact effect sizes and sample counts not provided in summary.
medium positive Human-AI Synergy in Financial Decision-Making: Exploring Tru... perceived utility for forecasting and risk-management tasks
Greater trust in AI correlates with greater willingness to adopt AI tools and to incorporate AI recommendations into decisions.
Correlational findings from structured questionnaires linking measures of trust with adoption intentions and self-reported incorporation of AI recommendations; supported by qualitative interview evidence; sample across multinational financial institutions (size not specified).
medium positive Human-AI Synergy in Financial Decision-Making: Exploring Tru... willingness to adopt AI tools; incorporation of AI recommendations into decision...
When trust and accuracy are high, human–AI collaboration improves organizational agility, enabling faster, data-driven strategic pivots and better risk management.
Quantitative analysis estimating relationships between perceived trust/accuracy and organizational agility indicators (speed of strategic pivots, risk-management metrics) augmented by interview accounts describing faster responses; sample: finance professionals across multinational financial institutions (sample size and exact agility metrics not specified).
medium positive Human-AI Synergy in Financial Decision-Making: Exploring Tru... organizational agility (speed of strategic pivots, risk management performance)
Perceived accuracy of AI-generated insights increases decision confidence and perceived utility for forecasting and risk management.
Quantitative questionnaire measures of perceived accuracy correlated with self-reported decision confidence and perceived utility for forecasting/risk management, with qualitative interviews used to explain mechanisms; sample: finance professionals across multinational financial institutions (sample size not specified).
medium positive Human-AI Synergy in Financial Decision-Making: Exploring Tru... decision confidence; perceived utility for forecasting and risk management
Perceived trust in AI tools is a key driver of finance professionals' willingness to use AI and their confidence in AI-assisted decisions.
Mixed-methods: quantitative analysis of structured questionnaires measuring perceived trust together with measures of willingness to use AI and decision confidence, supplemented by semi-structured interview evidence; sample described as finance professionals across multinational financial institutions (sample size not specified in summary).
medium positive Human-AI Synergy in Financial Decision-Making: Exploring Tru... willingness to use AI tools; confidence in AI-assisted decision-making
The Adaptive Agent Routing and Coordination (AARC) module performs intent recognition with confidence scoring, triggers proactive clarification dialogues on low confidence, and provides a planning feedback loop to refine plans during execution.
System design description: AARC includes intent classifier confidence thresholds, clarification dialogue behavior, and a feedback loop. Its role is supported by routing/coordination performance improvements and ablation experiments, but the summary lacks quantitative measures of clarification frequency or confidence calibration.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Agent Routing Success Rate; frequency and effectiveness of clarifications; plan ...
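The confidence-gated routing behavior described for AARC can be sketched as follows; the keyword-based classifier, threshold value, and return shapes are stand-in assumptions, not the system's implementation:

```python
def recognize_intent(utterance: str):
    """Stand-in intent classifier; a real system would return calibrated
    model confidences. The keyword rules here are purely illustrative."""
    if "tidy" in utterance:
        return "tidy_room", 0.92
    if "find" in utterance:
        return "locate_object", 0.55
    return "unknown", 0.10

def route(utterance: str, threshold: float = 0.7):
    """Mirrors the behavior described above: dispatch on high-confidence
    intents, ask a proactive clarification question on low confidence."""
    intent, conf = recognize_intent(utterance)
    if conf >= threshold:
        return ("dispatch", intent)
    return ("clarify", f"Did you mean '{intent}'? Please confirm or rephrase.")

print(route("please tidy the living room"))  # ('dispatch', 'tidy_room')
print(route("find my keys")[0])              # clarify
```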
The Multi-Modal Contextual Memory (MMCM) stores multi-modal (visual, linguistic, temporal) contextual memory units in a relational graph and uses an advanced retrieval mechanism with temporal decay weighting to support multi-hop reasoning.
System design and implementation description: MMCM encodes modality, timestamp, and relational links; retrieval uses similarity plus temporal decay. Its effectiveness for multi-hop QA is supported by the reported improvement in Knowledge Base Response Validity and ablation results, though quantitative retrieval performance metrics are not provided in the summary.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Multi-hop question-answering validity (Knowledge Base Response Total Validity); ...
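The similarity-plus-temporal-decay retrieval weighting can be sketched as below; the exponential-decay form, one-hour half-life, and tuple layout are assumptions rather than MMCM's documented implementation:

```python
def retrieval_score(similarity: float, age_seconds: float,
                    half_life: float = 3600.0) -> float:
    """Content similarity discounted by exponential temporal decay; the
    functional form and half-life are illustrative assumptions."""
    return similarity * 0.5 ** (age_seconds / half_life)

def retrieve(memories, now, k=2):
    """memories: (memory_id, similarity_to_query, stored_at_seconds) tuples.
    Returns the top-k memory ids ranked by similarity x temporal decay."""
    ranked = sorted(memories,
                    key=lambda m: retrieval_score(m[1], now - m[2]),
                    reverse=True)
    return [m[0] for m in ranked[:k]]

now = 10_000.0
memories = [
    ("old_but_exact", 0.9, now - 7200.0),  # two half-lives old -> weight 0.25
    ("fresh_partial", 0.5, now),           # brand new -> weight 1.0
]
print(retrieve(memories, now, k=1))  # ['fresh_partial'] (0.5 beats 0.9 * 0.25)
```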
The Semantic-Enhanced Task Planning (SETP) module enriches LLM-generated plans with object-relationship graphs, hierarchical task decomposition, and implicit physical/affordance constraints to improve plan plausibility.
System design description: SETP augments LLM plans with semantic object graphs and hierarchy enforcement. Its contribution is supported indirectly by ablation results showing performance drop when SETP is removed; direct quantitative attribution to specific SETP mechanisms not detailed in the summary.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Plan plausibility/validity and Task Planning Accuracy
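A toy version of an affordance-constraint check in the spirit of SETP's plan-plausibility filtering; the affordance table and plan encoding are illustrative assumptions, not the module's actual mechanism:

```python
def check_plan(plan, affordances):
    """Reject plan steps whose action is not afforded by the target object.
    plan: list of (action, object) pairs; returns a list of violations as
    (step_number, action, object) tuples, empty if the plan is plausible."""
    violations = []
    for step_num, (action, obj) in enumerate(plan, start=1):
        if action not in affordances.get(obj, set()):
            violations.append((step_num, action, obj))
    return violations

# Illustrative affordance table for a household-tidying domain.
affordances = {
    "cup":    {"pick_up", "place", "pour_from"},
    "table":  {"place_on", "wipe"},
    "fridge": {"open", "close", "place_in"},
}
plan = [("pick_up", "cup"), ("open", "fridge"), ("pour_from", "table")]
print(check_plan(plan, affordances))  # [(3, 'pour_from', 'table')]
```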
An ablation study shows that removing any of the three core modules (SETP, MMCM, AARC) degrades CRAEA's performance; each module contributes meaningfully to overall gains.
Ablation experiments reported in the paper where SETP, MMCM, and AARC were each removed in turn and performance degradation was observed across metrics. The summary describes the qualitative outcome but omits numerical ablation results and sample sizes.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Change in performance metrics (Task Planning Accuracy, KB Response Validity, Rou...
Human evaluators rate CRAEA higher on perceived coherence, naturalness, and user satisfaction compared to baselines.
Subjective human evaluation studies reported in the paper—comparative ratings on coherence, naturalness, and satisfaction. The summary does not specify number of human raters, rating scales, or statistical significance.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Human subjective ratings: coherence, naturalness, user satisfaction
CRAEA improves Agent Routing and Coordination success relative to baseline agents.
Objective metric 'Agent Routing Success Rate' measured in simulation; CRAEA compared to baseline LLM-driven agents (e.g., memoryless or statically routed controllers) with reported higher routing success. Exact task counts and effect sizes not included in the summary.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Agent Routing Success Rate
CRAEA yields higher Knowledge Base Response Total Validity (improved multi-hop question answering from memory) than baselines.
Simulated multi-hop QA evaluations using the system's memory; comparisons to baseline agents reported improved 'Knowledge Base Response Total Validity'. Experimental details (number of QA items, statistical tests) not provided in the summary.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Knowledge Base Response Total Validity (multi-hop QA accuracy/validity)
CRAEA outperforms baseline LLM-driven embodied agents on Task Planning Accuracy in simulated household tidying tasks.
Objective metric 'Task Planning Accuracy' measured in simulation and compared against baseline LLM-driven agents lacking one or more CRAEA components. The summary reports consistent improvements but does not provide sample size or effect magnitude.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Task Planning Accuracy
CRAEA substantially improves home-robot performance on long-horizon, high-level natural language instructions by combining semantic task planning, multi-modal contextual memory, and adaptive routing/coordination.
Experimental evaluation in a simulated household tidying environment comparing CRAEA to baseline LLM-driven embodied agents; reported consistent improvements across multiple objective metrics (Task Planning Accuracy, Knowledge Base Response Validity, Agent Routing Success Rate). Specific task counts, effect sizes, and statistical details not provided in the summary.
medium positive Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Overall home-robot performance on long-horizon, high-level NL instructions (aggr...