The Commonplace

AI in energy delivers only when organisations invest in people, process, and fit: broad non-specialist upskilling, transparent assurance with appeal rights, and workflow-friendly design are the three levers that convert AI tools into trusted, sustained use with credible reliability and emissions benefits.

Overcoming Resistance to Change: Artificial Intelligence in the Energy Sector
Jerome Lambert · April 16, 2026 · Suranaree Journal of Social Science
Source: OpenAlex · Type: descriptive · Evidence: low · Relevance: 7/10
In safety-critical energy organisations, durable AI value arises not from model accuracy alone but from broad-based workforce capability building, communicative governance (transparency plus contestability), and tight workflow integration that together increase trust, reduce shadow workarounds, and link adoption to reliability and emissions outcomes.

Background and Objectives: Artificial Intelligence (AI) promises productivity, safety, and sustainability gains in asset-intensive sectors; however, outcomes in the energy sector remain uneven. The sector's safety-critical operations, capital intensity, and stringent regulatory requirements make it a particularly demanding context for AI adoption, where technical performance alone is insufficient to ensure value. This study treats adoption as a socio-technical process rather than a tooling decision. It addresses three research questions: (RQ1) how workforce development and change leadership shape acceptance and sustained use; (RQ2) which organisational and governance conditions mitigate resistance and enable legitimate deployment; and (RQ3) under what conditions adoption yields operational reliability and environmental performance aligned with decarbonisation goals.

Methodology: A qualitative, multi-case design triangulated semi-structured interviews with senior managers, Likert-scale surveys of mid-level managers and technical staff, and analysis of internal policies and strategy documents. Data were anonymised, thematically coded using a blended inductive–deductive approach, organised in a shared codebook, and synthesised across cases to map convergences and divergences in readiness, workforce development, and governance. Intercoder reliability was assessed, and disagreements were resolved through adjudication and iterative refinement of the codebook across cases. Triangulation maintained a transparent chain of evidence. Ethical safeguards included informed consent, confidentiality, and prior approval from the relevant institutional authorities.

Main Results: Three reinforcing levers shape adoption outcomes. First, broad-based capability building beyond specialist teams prevents benefits from concentrating in expert enclaves and avoids brittle scaling. Second, communicative governance that couples transparency with contestability, through model cards, bias tests, validation reports, and explicit appeal rights, earns trust, curbs shadow workarounds, and improves safety culture. Third, a tight workflow fit that minimises cognitive overhead at the decision point accelerates legitimate use and strengthens links to emissions monitoring and predictive-maintenance outcomes. Thin training coverage fosters anxiety about substitution and slows diffusion; structured upskilling and precise recourse mechanisms are associated with higher confidence, productivity, and clearer sustainability pathways.

Discussion: Algorithmic accuracy alone does not determine value; legitimacy and uptake hinge on people and process readiness. The three levers translate literature on dynamic capabilities, AI readiness, and human responses to automation into operational guidance: invest in non-specialist literacy, institutionalise assurance and recourse, and engineer for workflow ergonomics in safety-critical contexts. Environmental gains materialise where oversight intensity, data quality, and targeted use cases align, indicating that governance quality conditions the conversion of adoption into credible emissions reductions. A pragmatic path to responsible scale follows: build organisation-wide competence, communicate for legitimacy, and design for workflow fit.

Conclusions: Leaders should fund training coverage and design rather than headline hours, equip non-specialists to interpret model outputs, pair performance artefacts with participatory routines, and treat explainability as a usability requirement. Policymakers can reinforce these conditions by shifting from technology-neutral principles to auditable process standards that couple AI investment with reskilling and data-quality obligations. Future research should extend the design longitudinally and incorporate behavioural metrics to test causal links. The contribution is a field-tested playbook linking human capability, assurance, and workflow design to durable, auditable value in safety-critical energy contexts.

Summary

Main Finding

AI adoption in safety-critical, asset-intensive energy firms succeeds only when three reinforcing organizational levers are in place: (1) broad-based capability building beyond specialist teams, (2) communicative governance that couples transparency with contestability (e.g., model cards, validation reports, appeal rights), and (3) tight workflow fit that minimizes cognitive overhead at decision points. Algorithmic accuracy alone is insufficient; workforce readiness, institutional assurance, and workflow ergonomics together determine whether AI delivers reliable operational performance and credible emissions reductions.

Key Points

  • Three reinforcing levers determine adoption outcomes:
    • Workforce capability at scale (non-specialist literacy prevents benefits from pooling in expert enclaves and speeds diffusion).
    • Communicative governance (transparency + contestability) builds trust, reduces shadow workarounds, and improves safety culture.
    • Workflow ergonomics (design for minimal additional cognitive load) increases legitimate, sustained use.
  • Thin or narrow training coverage fosters substitution anxiety, slows diffusion, and concentrates benefits in specialists.
  • Structured upskilling and explicit recourse mechanisms correlate with higher confidence, productivity, and clearer paths to sustainability outcomes.
  • Environmental gains from AI are conditional: they materialize where oversight intensity, data quality, and targeted use cases align; governance quality mediates conversion of AI investment into credible emissions reductions.
  • Practical managerial guidance: invest in organization-wide competence (not just specialist teams), institutionalize assurance and appeal mechanisms, and treat explainability as a usability requirement integrated into workflows.
  • Policy implication: shift from high-level, technology-neutral principles to auditable process standards linking AI deployment to reskilling and data-quality obligations.
  • Research gap: need for longitudinal, behavioral, and causal studies to verify pathways from governance and workforce practices to measurable operational and environmental outcomes.
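
The assurance artefacts the paper names (model cards, bias tests, validation reports, appeal rights) can be made concrete with a minimal sketch. Every field name, metric value, and route below is a hypothetical illustration, not a schema from the paper or from any standard:

```python
# A minimal, hypothetical model card for a predictive-maintenance model.
# All fields and values are illustrative assumptions.
model_card = {
    "model_name": "turbine-bearing-failure-predictor",
    "intended_use": "Rank inspection priority; never auto-dispatch crews.",
    "training_data": "Vibration and temperature telemetry, anonymised.",
    "validation": {"auroc": 0.91, "recall_at_top_decile": 0.78},
    "known_limitations": ["Degrades on turbines with retrofitted sensors"],
    "bias_tests": "Error parity checked across plant sites and asset vintages.",
    "appeal_route": "Operators may contest a flag via the shift-lead review queue.",
    "review_cadence_months": 6,
}

def is_contestable(card: dict) -> bool:
    """Governance check: an artefact is 'communicative' in the paper's sense
    only if it pairs transparency fields with an explicit appeal route."""
    return bool(card.get("appeal_route")) and "validation" in card

print(is_contestable(model_card))  # → True
```

The check encodes the paper's pairing requirement directly: a validation report without a recourse mechanism (or vice versa) fails, mirroring the claim that transparency alone does not curb shadow workarounds.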

Data & Methods

  • Design: Qualitative, multi-case comparative study of three energy organisations operating under different ownership structures and market logics.
  • Data sources:
    • Semi-structured interviews with senior managers.
    • Likert-scale surveys of mid-level managers and technical staff.
    • Internal policy and strategy documents (e.g., governance artefacts, model documentation).
  • Analysis:
    • Blended inductive–deductive thematic coding organized via a shared codebook.
    • Intercoder reliability assessed; disagreements resolved by adjudication and iterative codebook refinement.
    • Triangulation across interviews, surveys, and documents to maintain a transparent chain of evidence.
  • Ethics: informed consent, confidentiality, and institutional approvals obtained.
  • Limitations: qualitative multi-case design provides field-tested operational insights but does not establish causal magnitudes; sample specifics (e.g., exact N) are not reported, and longitudinal/behavioral metrics were not included.
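
The intercoder-reliability step above can be illustrated with Cohen's kappa, a standard chance-corrected agreement statistic for two coders sharing one codebook; the code labels and segment data below are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is agreement expected by chance from each coder's
    marginal label frequencies."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: share of segments both coders labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the two coders' marginal label distributions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes applied to ten interview segments by two coders.
coder_1 = ["trust", "workflow", "trust", "training", "workflow",
           "trust", "training", "workflow", "trust", "training"]
coder_2 = ["trust", "workflow", "training", "training", "workflow",
           "trust", "training", "trust", "trust", "training"]
print(round(cohens_kappa(coder_1, coder_2), 3))  # → 0.697
```

Disagreeing segments (here two of ten) are exactly the cases the study routes to adjudication and codebook refinement before kappa is re-assessed.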

Implications for AI Economics

  • Complementarities and returns to AI:
    • Human capital, governance capacity, and workflow integration are essential complements to algorithmic investment; returns to AI are endogenous to these firm-level investments.
    • Models of AI productivity should treat governance and training as input factors that shift the production frontier rather than peripheral costs.
  • Diffusion and intra-firm distribution:
    • Narrow, specialist-only upskilling produces expert enclaves and brittle scale—implications for within-firm inequality of benefits and for aggregate productivity growth from AI.
    • Broader training coverage increases diffusion speed and likely raises aggregate firm-level returns.
  • Labor-market effects:
    • Structured reskilling and clear recourse reduce substitution anxiety and turnover intent; labor-cost models should incorporate the complementarity of reskilling investments in moderating displacement and in supporting productivity gains.
    • Task redesign that clarifies human–AI boundaries can preserve autonomy buffers and encourage adaptive upskilling.
  • Environmental and policy economics:
    • The efficacy of AI as a climate tool is conditional on governance quality, oversight intensity, and data infrastructure; policy that neglects these will overstate AI’s abatement potential.
    • Policymakers should consider auditable process standards (reskilling mandates, data-quality obligations, documentation/audit trails) that change firms’ cost structures but raise the credibility of reported emissions reductions.
  • Empirical research directions:
    • Economists should measure governance maturity, training breadth, and workflow fit as moderators in empirical studies of AI’s productivity and environmental impacts.
    • Longitudinal and causal designs (e.g., staggered rollouts, instrumentation of governance changes) and behavioral metrics (trust, contestation incidents, shadow-work frequency) are needed to estimate causal pathways and heterogeneous treatment effects across jurisdictions and ownership forms.
  • Macro- and regulatory implications:
    • Country- and sector-level capacity (data stewardship, procurement expertise, regulatory oversight) will shape national-level returns to AI investments in energy; heterogeneity in state capacity implies uneven progress toward decarbonisation that depends on institutional complements.
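
The call above to treat governance maturity as a moderator can be sketched as an interaction-term regression on simulated data. All variables, coefficients, and magnitudes are illustrative assumptions, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 5_000

# Simulated firm-level data (illustrative only).
ai = rng.integers(0, 2, size=n).astype(float)   # AI adoption indicator (0/1)
gov = rng.uniform(0.0, 1.0, size=n)             # governance-maturity index
# Assumed data-generating process: a small main effect of adoption and a
# large AI x governance interaction, i.e. returns conditional on governance.
y = 1.0 + 0.2 * ai + 0.5 * gov + 1.5 * ai * gov + rng.normal(0.0, 0.5, size=n)

# OLS with an explicit interaction term recovers the moderation structure.
X = np.column_stack([np.ones(n), ai, gov, ai * gov])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, b in zip(["intercept", "ai", "governance", "ai x governance"], beta):
    print(f"{name:>16}: {b:+.2f}")
```

With the seed fixed, the recovered coefficients land near the assumed (+1.0, +0.2, +0.5, +1.5): most of AI's measured return loads on the interaction term, which is the empirical signature of the conditionality claim a naive specification without the interaction would miss.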

Actionable takeaway for economists and policymakers: when evaluating or promoting AI in energy, treat governance, workforce development, and workflow design as core investment inputs that materially alter the economic returns and environmental credibility of AI deployments.

Assessment

Paper Type: descriptive
Evidence Strength: low — Findings are based on a qualitative multi-case design combining interviews, surveys, and document analysis, which supports rich, contextual inference but does not establish causal effects; sample size, case selection, and outcome measurement (largely self-reported and perceptual) limit claims about causal impact on productivity, safety, or emissions.
Methods Rigor: medium — The study uses good qualitative practice (triangulation across data sources, a shared codebook, blended inductive–deductive coding, intercoder reliability assessment, and adjudication), improving trustworthiness; however, rigour is constrained by an unclear sampling strategy and number of cases, potential selection and response biases, and the absence of longitudinal or behavioural outcome data.
Sample: Multi-case field study in the energy sector (number of firms/cases not specified); data comprised a semi-structured interview with a senior manager per case, Likert-scale surveys of mid-level managers and technical staff, and internal policies/strategy documents; data were anonymised and thematically coded (sample composition, firm size, and geographic/regulatory scope not reported).
Themes: human_ai_collab · org_design · skills_training · adoption · governance · productivity
Generalizability:
  • Limited to safety-critical, capital-intensive energy organisations; findings may not transfer to non-energy or less regulated sectors.
  • Unclear geographic and regulatory scope reduces applicability across jurisdictions.
  • Non-random, likely purposive case selection and a small/unspecified sample limit external validity.
  • Reliance on self-reported perceptions and internal documents may not generalise to measured productivity or emissions outcomes.
  • Cross-sectional design prevents inference about long-term adoption dynamics.

Claims (10)

  • Three reinforcing levers shape adoption outcomes: (1) broad-based capability building beyond specialist teams, (2) communicative governance that couples transparency with contestability, and (3) a tight workflow fit that minimises cognitive overhead at the decision point.
    Outcome: Adoption Rate (adoption outcomes / legitimate use) · Direction: positive · Confidence: high · Details: 0.18
  • Broad-based capability building beyond specialist teams prevents benefits from concentrating in expert enclaves and reduces brittle scale.
    Outcome: Organizational Efficiency (distribution of benefits across the organisation and scalability of AI use) · Direction: positive · Confidence: high · Details: 0.18
  • Communicative governance — e.g. model cards, bias tests, validation reports, and explicit appeal rights — earns trust, curbs shadow workarounds, and improves safety culture.
    Outcome: Worker Satisfaction (trust, incidence of shadow workarounds, and safety culture) · Direction: positive · Confidence: high · Details: 0.18
  • A tight workflow fit that minimises cognitive overhead at the decision point accelerates legitimate use and strengthens links to emissions monitoring and predictive-maintenance outcomes.
    Outcome: Firm Productivity (rate of legitimate use and effectiveness of emissions monitoring and predictive maintenance) · Direction: positive · Confidence: high · Details: 0.18
  • Thin training coverage fosters anxiety about substitution and slows diffusion of AI tools.
    Outcome: Adoption Rate (worker anxiety and speed of diffusion/adoption) · Direction: negative · Confidence: high · Details: 0.18
  • Structured upskilling and precise recourse mechanisms are associated with higher confidence, productivity, and clearer sustainability pathways.
    Outcome: Developer Productivity (worker confidence and productivity; clarity of sustainability pathways) · Direction: positive · Confidence: high · Details: 0.18
  • Algorithmic accuracy alone does not determine value; legitimacy and uptake hinge on people and process readiness.
    Outcome: Adoption Rate (value realised / uptake of AI systems) · Direction: null_result · Confidence: high · Details: 0.18
  • Environmental gains materialise where oversight intensity, data quality, and targeted use cases align; governance quality conditions the conversion of adoption into credible emissions reductions.
    Outcome: Firm Productivity (emissions reductions / environmental performance) · Direction: positive (conditional) · Confidence: high · Details: 0.18
  • Leaders should fund training coverage and design (not just headline hours), equip non-specialists to interpret model outputs, pair performance artefacts with participatory routines, and treat explainability as a usability requirement to achieve durable, auditable value in safety-critical energy contexts.
    Outcome: Organizational Efficiency (durable, auditable value / legitimacy and sustained use) · Direction: positive · Confidence: high · Details: 0.09
  • Policymakers can reinforce these conditions by shifting from technology-neutral principles to auditable process standards that couple AI investment with reskilling and data-quality obligations.
    Outcome: Governance And Regulation (policy effectiveness in reinforcing safe, equitable AI adoption) · Direction: positive · Confidence: high · Details: 0.03

Notes