The Commonplace

Evidence (2954 claims)

Claims by topic:
Adoption: 5126 claims
Productivity: 4409 claims
Governance: 4049 claims
Human-AI Collaboration: 2954 claims
Labor Markets: 2432 claims
Org Design: 2273 claims
Innovation: 2215 claims
Skills & Training: 1902 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome                     Positive  Negative  Mixed  Null  Total
Other                            369       105     58   432    972
Governance & Regulation          365       171    113    54    713
Research Productivity            229        95     33   294    655
Organizational Efficiency        354        82     58    34    531
Technology Adoption Rate         277       115     63    27    486
Firm Productivity                273        33     68    10    389
AI Safety & Ethics               112       177     43    24    358
Output Quality                   228        61     23    25    337
Market Structure                 105       118     81    14    323
Decision Quality                 154        68     33    17    275
Employment Level                  68        32     74     8    184
Fiscal & Macroeconomic            74        52     32    21    183
Skill Acquisition                 85        31     38     9    163
Firm Revenue                      96        30     22     –    148
Innovation Output                100        11     20    11    143
Consumer Welfare                  66        29     35     7    137
Regulatory Compliance             51        61     13     3    128
Inequality Measures               24        66     31     4    125
Task Allocation                   64         6     28     6    104
Error Rate                        42        47      6     –     95
Training Effectiveness            55        12     10    16     93
Worker Satisfaction               42        32     11     6     91
Task Completion Time              71         5      3     1     80
Wages & Compensation              38        13     19     4     74
Team Performance                  41         8     15     7     72
Hiring & Recruitment              39         4      6     3     52
Automation Exposure               17        15      9     5     46
Job Displacement                   5        28     12     –     45
Social Protection                 18         8      6     1     33
Developer Productivity            25         1      2     1     29
Worker Turnover                   10        12      3     –     25
Creative Output                   15         5      3     1     24
Skill Obsolescence                 3        18      2     –     23
Labor Share of Income              7         4      9     –     20
Active filter: Human-AI Collaboration
A culturally grounded responsible‑AI governance framework based on Afro‑communitarianism (Ubuntu) and stakeholder theory—emphasizing collective well‑being and participatory governance—can help align AI deployment with inclusive and sustainable economic outcomes.
Theoretical integration and framework development based on normative literature in ethics, Afro‑communitarian thought, and stakeholder governance; framework is conceptual and not empirically validated in this paper.
Strength: low-medium · Direction: positive · Paper: Towards Responsible Artificial Intelligence Adoption: Emergi... · Outcomes: governance inclusivity, alignment of AI outcomes with communal values, perceived...
Firms with large, integrated datasets and standardized processes can gain disproportionate returns, creating potential scale economies and winner-take-most dynamics.
Resource-based theoretical interpretation and illustrative patterns in the reviewed literature; the paper notes empirical evidence is limited and calls for further study.
Strength: speculative · Direction: positive · Paper: Integrating Artificial Intelligence and Enterprise Resource ... · Outcomes: scale-dependent returns (e.g., differential ROI by firm data scale/integration l...
Explainable EEG tools can shift clinician workflows by enabling faster decision-making and reducing the requirement for specialized interpretation, with implications for training, staffing, and productivity.
Projected operational impacts discussed as implications of improved explainability; no longitudinal workflow study provided in the reviewed literature.
Strength: speculative · Direction: positive · Paper: Explainable Artificial Intelligence (XAI) for EEG Analysis: ... · Outcomes: clinician workflow efficiency, training/staffing needs, productivity
Suggested policy and managerial implication: investing in short, targeted onboarding/training for GenAI tools (rather than only providing access) may deliver measurable performance gains and increase voluntary adoption.
Authors derive this implication from the randomized trial results showing increased adoption and improved scores with brief training (n = 164); this is an extrapolation from the trial findings.
Strength: speculative · Direction: positive · Paper: Training for Technology: Adoption and Productive Use of Gene... · Outcomes: organizational adoption and productivity (extrapolated from student trial outcom...