LLMs reframe strategic thinking but don't boost foresight: in a randomized 2×2 experiment, both time pressure and LLM assistance changed participants' mental representations in a startup evaluation task, yet neither improved forecasting accuracy—LLM use also increased reported information overload and reduced psychological ownership.
Strategic foresight, the ability to predict strategic outcomes, depends on how decision-makers represent strategic problems. Time constraints and large language models (LLMs) are increasingly salient factors shaping this process. We study how both jointly affect mental representations and strategic foresight in a startup evaluation task (N = 348). Using a 2 × 2 experimental design, we show that both time constraints and LLM use significantly alter the characteristics of mental representations. Despite these representational shifts, neither time constraints nor LLM use is found to significantly change strategic foresight. Additional analyses indicate that LLM use increases information overload and reduces psychological ownership. Our findings serve as a cautionary case for the effectiveness of LLM use in strategic decision-making and suggest several avenues for future research on LLM use and strategic foresight, particularly regarding the interplay between individual cognitive processes and the contextual factors of strategic decisions.
History: Accepted for the Special Issue: Can AI Do Strategy?
Funding: This study received partial funding from Freunde und Förderer der TU Bergakademie Freiberg e.V., Faculty of Business Administration at the TU Bergakademie Freiberg.
Summary
Main Finding
Both time pressure and the use of large language models (LLMs) systematically change how decision-makers mentally represent a startup evaluation task, but these representational changes do not translate into measurable improvements or declines in strategic foresight (the ability to predict strategic outcomes). LLM use additionally increases reported information overload and reduces psychological ownership of decisions.
Key Points
- Experimental design: 2 × 2 between-subjects manipulation of (1) time constraint vs. no time constraint and (2) LLM use vs. no LLM use in a startup evaluation task.
- Sample: N = 348 participants.
- Representations: Time pressure and LLM assistance both significantly altered the characteristics of participants’ mental representations of the strategic problem.
- Strategic foresight: Neither time constraints nor LLM use produced a significant change in participants’ ability to forecast strategic outcomes.
- Secondary effects: LLM use was associated with higher information overload and lower psychological ownership of the task/decision.
- Interpretation: LLMs can reshape cognitive framing without reliably improving strategic prediction — a cautionary result for relying on LLMs for strategic decision-making.
Data & Methods
- Task: Participants evaluated startups and made strategic forecasts (startup evaluation task).
- Design: 2 × 2 experimental design crossing time constraint (present vs. absent) with LLM use (present vs. absent).
- Sample size: 348 participants (between-subjects allocation across four conditions).
- Outcomes measured:
  - Characteristics of mental representations (changes observed across both manipulations).
  - Strategic foresight (forecasting accuracy or a comparable prediction measure; no significant effects found).
  - Additional psychological measures: information overload (increased by LLM use) and psychological ownership (reduced by LLM use).
- Publication/funding note: Accepted for the Special Issue "Can AI Do Strategy?"; partial funding from Freunde und Förderer der TU Bergakademie Freiberg e.V., Faculty of Business Administration at the TU Bergakademie Freiberg.
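The 2 × 2 between-subjects design described above would typically be analyzed with a two-way ANOVA testing the two main effects and their interaction. The paper's actual analysis pipeline is not reported here, so the following is a minimal sketch on simulated data, assuming a balanced allocation of 87 participants per cell (348 / 4); the data, effect sizes, and variable names are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 87  # participants per cell, assuming balanced allocation of N = 348

# Simulated foresight scores for the four conditions (hypothetical data):
# factor A = time constraint (0/1), factor B = LLM use (0/1)
cells = {
    (a, b): rng.normal(loc=50, scale=10, size=n)
    for a in (0, 1) for b in (0, 1)
}

grand = np.concatenate(list(cells.values()))
grand_mean = grand.mean()

# Marginal means for each factor level (each level spans two cells)
mean_a = {a: np.concatenate([cells[(a, 0)], cells[(a, 1)]]).mean() for a in (0, 1)}
mean_b = {b: np.concatenate([cells[(0, b)], cells[(1, b)]]).mean() for b in (0, 1)}

# Sums of squares for a balanced 2 x 2 design
ss_a = 2 * n * sum((mean_a[a] - grand_mean) ** 2 for a in (0, 1))
ss_b = 2 * n * sum((mean_b[b] - grand_mean) ** 2 for b in (0, 1))
ss_cells = n * sum((c.mean() - grand_mean) ** 2 for c in cells.values())
ss_ab = ss_cells - ss_a - ss_b                      # interaction
ss_err = sum(((c - c.mean()) ** 2).sum() for c in cells.values())

df_err = 4 * (n - 1)
for name, ss in [("time constraint", ss_a), ("LLM use", ss_b), ("interaction", ss_ab)]:
    F = ss / (ss_err / df_err)                      # each effect has df = 1
    p = stats.f.sf(F, 1, df_err)
    print(f"{name}: F(1, {df_err}) = {F:.2f}, p = {p:.3f}")
```

With the null data simulated here, all three F-tests should come out non-significant, mirroring the paper's null result on strategic foresight.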
Implications for AI Economics
- Caution on performance claims: LLMs can alter cognitive framing but do not automatically improve strategic forecasting performance — economic models assuming straightforward productivity gains from LLM adoption in strategy may be overstated.
- Productivity vs. cognition: LLMs may produce nonmonotonic effects on decision quality (e.g., increased information processing burden, reduced ownership) that should be modeled when estimating returns to AI in managerial tasks.
- Adoption externalities: Lower psychological ownership and higher information overload could affect downstream behaviors (risk-taking, accountability, implementation effort), implying second-order economic effects of AI tools beyond immediate task outcomes.
- Research priorities:
  - Identify task types and assistance modalities where LLMs yield positive foresight returns (vs. those that mainly change representations).
  - Examine interaction effects (team decision-making, repeated use, training, interface design) to understand when representational shifts translate into improved outcomes.
  - Model dynamic adoption: account for learning, changes in reliance, and organizational processes that mediate LLM impacts on firm-level strategic performance.
  - Measure broader welfare effects: consider implementation costs, accountability, and organizational incentives when evaluating AI's economic value in strategic roles.
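The point that overload and ownership effects should enter estimates of returns to LLM adoption can be made concrete with a toy accounting identity. This is not a model from the paper; the function and all parameter names below are hypothetical illustrations of how first-order framing benefits and second-order costs might net out.

```python
def net_llm_value(framing_gain: float,
                  overload_cost: float,
                  ownership_loss: float,
                  implementation_weight: float = 0.5) -> float:
    """Toy estimate of the change in decision quality from LLM adoption.

    framing_gain: direct benefit from an improved problem representation
    overload_cost: penalty from the added information-processing burden
    ownership_loss: drop in psychological ownership, which discounts
        downstream implementation effort (a second-order effect)
    implementation_weight: how strongly implementation effort matters
    All parameters are illustrative, not estimated from the study.
    """
    first_order = framing_gain - overload_cost
    second_order = -implementation_weight * ownership_loss
    return first_order + second_order

# The study's pattern (representational change but no foresight gain,
# plus overload and ownership costs) corresponds to a framing gain that
# fails to offset the costs, yielding a negative net value:
print(net_llm_value(framing_gain=0.2, overload_cost=0.2, ownership_loss=0.3))
```

The design choice worth noting is the separate `implementation_weight` term: even when first-order effects cancel, reduced ownership can push the net value negative through its effect on implementation, which is exactly the kind of second-order channel the bullet points above flag.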
Assessment
Claims (8)
| Claim | Category | Direction | Confidence | Outcome measure | N | Score |
|---|---|---|---|---|---|---|
| The study used a sample of N = 348 participants. | Other | null_result | high | Sample size / study participants | 348 | 0.6 |
| The study employed a 2 × 2 experimental design manipulating time constraints and LLM use. | Other | null_result | high | Experimental design (manipulations: time constraints; LLM use) | 348 | 0.6 |
| Both time constraints and LLM use significantly alter the characteristics of decision-makers' mental representations. | Decision Quality | mixed | high | Characteristics of mental representations (representation-related measures collected in the task) | 348 | 0.6 |
| Neither time constraints nor LLM use significantly changes strategic foresight in the startup evaluation task. | Decision Quality | null_result | high | Strategic foresight (performance/forecasts in the startup evaluation task) | 348 | 0.6 |
| LLM use increases information overload (additional analyses). | Decision Quality | positive | medium | Information overload (self-report or task-based overload measure) | 348 | 0.36 |
| LLM use reduces psychological ownership (additional analyses). | Worker Satisfaction | negative | medium | Psychological ownership (self-report or task-related ownership measure) | 348 | 0.36 |
| The findings constitute a cautionary case for the effectiveness of LLM use in strategic decision-making. | Decision Quality | negative | medium | Perceived/evaluated effectiveness of LLM use in strategic decision-making (interpretive conclusion) | 348 | 0.36 |
| The results suggest several avenues for future research on LLM use and strategic foresight, especially the interplay between individual cognitive processes and contextual factors of strategic decisions. | Research Productivity | mixed | low | Research agenda / suggested future research topics | — | 0.18 |