The Commonplace

A game-theoretic framework quantifies when cyber deception pays: deception raises defender utility versus matched non-deceptive baselines, but its value erodes as system observability increases, with closed-form break-even conditions and parameter regimes where simple heuristics nearly match the optimum.

Evaluating Synthetic Cyber Deception Strategies Under Uncertainty via Game Theory Approach: Linking Information Leakage and Game Outcomes in Cyber Deception
Mohammad Shahin, Mazdak Maghanaki, Fengshan Frank Chen · March 10, 2026 · Sensors
Source: OpenAlex · Paper type: theoretical · Evidence: n/a · Relevance: 7/10 · Links: DOI, Source, PDF
The paper develops a paired-game framework that quantifies the operational value of cyber deception and the marginal loss from increased observability, deriving defender-optimal allocations, bounds, and regimes where simple heuristics approach optimal performance.

The study develops a game-theoretic evaluation framework for cyber deception that quantifies deception benefit relative to an otherwise matched non-deceptive baseline and links strategic outcomes to information disclosure. A defender–attacker interaction is modeled through a paired design consisting of a baseline game without deception and a corresponding decoy-enabled deception game, enabling direct measurement of deception impact through two operational metrics: the value of deception, defined as the baseline-referenced change in defender equilibrium utility attributable to deception, and the price of transparency, defined as the marginal loss induced by increased observability of the true system state. The analysis characterizes defender-optimal deception strategies, derives interpretable bounds and break-even conditions under which deception becomes ineffective due to cost or detectability, and establishes approximation properties that support scalable allocation rules. To complement equilibrium-based evaluation, the study introduces an information-theoretic uncertainty construct that captures the extent to which deception preserves attacker uncertainty after observation, providing a mechanism-level interpretation of when and why the value of deception degrades as transparency increases. Computational experiments across heterogeneous scenarios demonstrate consistent cross-setting comparability, reveal tradeoffs among decoy realism, budget, and attacker rationality, and identify regimes in which simplified allocation heuristics approach optimal performance.

Summary

Main Finding

The paper provides a principled, game-theoretic framework to measure and compare the operational value of cyber deception relative to a matched non-deceptive baseline, and to quantify how that value degrades as the true system state becomes more observable. It introduces two operational metrics—value of deception and price of transparency—derives defender-optimal strategies and bounds (including break-even conditions), and links equilibrium outcomes to an information-theoretic measure of residual attacker uncertainty. Computational experiments show robust tradeoffs among decoy realism, budget, and attacker rationality, and identify parameter regimes where simple allocation heuristics approximate optimal policies.
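To make the paired-game metrics concrete, here is a toy numerical sketch, not the paper's actual model: two 2×2 zero-sum games with hypothetical payoffs stand in for the baseline and decoy-enabled games, and the value of deception is the gap between the defender's maximin (security-level) utility in each. All payoff numbers below are illustrative assumptions.

```python
import numpy as np

def maximin_value(A, grid=10001):
    """Defender's security level of a 2-row zero-sum game, found by a
    grid scan over the defender's mixed strategy (row 0 with prob p)."""
    p = np.linspace(0.0, 1.0, grid)[:, None]   # candidate mixes
    payoff = p * A[0] + (1 - p) * A[1]         # expected payoff per attacker column
    return payoff.min(axis=1).max()            # worst case over columns, best mix

# Hypothetical defender payoffs (attacker minimizes).
# Columns: attacker probes asset 1 / asset 2.
baseline  = np.array([[-4.0,  1.0],
                      [ 2.0, -3.0]])   # no decoys deployed
deception = np.array([[-1.0,  2.0],
                      [ 3.0, -0.5]])   # decoys raise the attacker's miss cost

u_base = maximin_value(baseline)
u_dec  = maximin_value(deception)
value_of_deception = u_dec - u_base    # baseline-referenced utility gain
print(f"VoD = {value_of_deception:.3f}")
```

The grid scan is a deliberately simple stand-in for an equilibrium solver; its accuracy is bounded by the grid resolution, which is fine for a two-strategy illustration.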

Key Points

  • Paired-game design: each defender–attacker interaction is represented by a baseline (no deception) game and a matched decoy-enabled deception game, enabling direct, causal measurement of deception impact.
  • Two operational metrics:
    • Value of deception: the change in defender equilibrium utility attributable to deception, measured relative to the baseline game.
    • Price of transparency: the marginal loss in deception value induced by increased observability of the true system state.
  • Analytical contributions:
    • Characterization of defender-optimal deception allocations.
    • Closed-form bounds and break-even conditions delineating when deception is ineffective due to cost or detectability.
    • Approximation-accuracy results that justify scalable allocation rules and heuristics.
  • Information-theoretic mechanism: an uncertainty construct (entropy-like) captures how much attacker uncertainty remains after observation, explaining mechanistically why deception value falls as transparency increases.
  • Computational results:
    • Experiments across heterogeneous scenarios produce consistent cross-setting comparability of the proposed metrics.
    • Tradeoffs highlighted between decoy realism (how convincing decoys are), defender budget constraints, and attacker rationality/modeling assumptions.
    • Identification of regimes where simple heuristics nearly match optimal allocations, supporting practical deployment.
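The information-theoretic mechanism above can be sketched with a minimal model (my own illustration, not the paper's construct): the attacker holds a prior over whether a target is real or a decoy, observes it through a symmetric binary channel whose accuracy `tau` stands in for system observability, and the residual uncertainty is the expected posterior entropy H(S | O). As `tau` rises, residual uncertainty falls, which is the mechanism behind the price of transparency.

```python
import numpy as np

def residual_uncertainty(prior, tau):
    """Expected posterior entropy H(S | O), in bits, of the true state S
    given an observation O through a symmetric binary channel of
    accuracy tau in [0.5, 1]. tau = 0.5 is uninformative."""
    like = np.array([[tau, 1 - tau],           # P(o | s): observation matches
                     [1 - tau, tau]])          # the state with probability tau
    joint = like * prior[None, :]              # P(o, s) = P(o | s) P(s)
    p_obs = joint.sum(axis=1, keepdims=True)   # P(o)
    post = joint / p_obs                       # P(s | o)
    ent = -(post * np.log2(post, where=post > 0,
                           out=np.zeros_like(post))).sum(axis=1)
    return float((p_obs.ravel() * ent).sum())

prior = np.array([0.5, 0.5])                   # attacker's prior: real vs decoy
for tau in (0.5, 0.7, 0.9, 0.99):
    print(tau, round(residual_uncertainty(prior, tau), 3))
```

With a uniform prior this prints 1.0 bit at `tau = 0.5` and decays toward 0 as observability grows, mirroring the paper's claim that deception value degrades with transparency.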

Data & Methods

  • Theoretical model:
    • Paired strategic games (baseline vs deception) modeling defender and attacker payoffs and informational structure.
    • Equilibrium solution concepts used to compute defender and attacker utilities under each game (equilibrium type stated in the paper).
    • Definition and formalization of value of deception and price of transparency as baseline-referenced equilibrium utility differences and derivatives with respect to observability.
  • Analytical work:
    • Derivation of defender-optimal strategies under resource constraints.
    • Proofs of bounds and break-even conditions tying deception costs, detectability, and observability to value.
    • Approximation guarantees showing when simple allocation rules achieve provable performance relative to optimal.
  • Information-theoretic component:
    • Construction of an uncertainty metric (entropy/mutual-information style) measuring residual attacker uncertainty after observations; used to connect mechanism-level effects to equilibrium utility changes.
  • Computational experiments:
    • Simulated heterogeneous scenarios sweeping parameters: decoy realism, budget levels, attacker rationality (degree of optimality/learning), and observability/transparency levels.
    • Evaluation metrics: value of deception, price of transparency, defender utility, and heuristic vs optimal performance gaps.
    • Sensitivity analyses to identify robust regimes and tradeoffs.
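The heuristic-vs-optimal comparison can be illustrated with a small allocation problem (hypothetical numbers, and a separable concave objective of my choosing, not the paper's): a marginal-gain greedy rule spends a decoy budget across assets and is compared against exhaustive enumeration. For this objective the greedy gap is zero, consistent with the regimes the paper identifies where simple heuristics match the optimum.

```python
import itertools
import numpy as np

values  = np.array([5.0, 3.0, 2.0])   # hypothetical asset values
deflect = np.array([0.6, 0.5, 0.4])   # per-decoy deflection probability
BUDGET = 4

def utility(alloc):
    """Expected protected value: each extra decoy at asset i
    independently deflects the attacker with probability deflect[i]."""
    alloc = np.asarray(alloc)
    return float((values * (1 - (1 - deflect) ** alloc)).sum())

def greedy(budget):
    alloc = [0, 0, 0]
    for _ in range(budget):                   # spend one decoy at a time
        gains = [utility([a + (j == i) for j, a in enumerate(alloc)])
                 - utility(alloc) for i in range(3)]
        alloc[int(np.argmax(gains))] += 1     # on the best marginal gain
    return alloc

def exhaustive(budget):
    best = max((a for a in itertools.product(range(budget + 1), repeat=3)
                if sum(a) == budget), key=utility)
    return list(best)

g, o = greedy(BUDGET), exhaustive(BUDGET)
gap = utility(o) - utility(g)                 # heuristic vs optimal gap
print(g, o, round(gap, 6))
```

With non-concave returns or coupled assets the gap can open up, which is exactly the kind of regime boundary the paper's approximation results delineate.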

Implications for AI Economics

  • Valuing defensive AI investments: the value-of-deception metric provides a direct, game-theoretic way to monetize the benefit of deception technologies relative to non-deceptive alternatives, supporting investment decisions and cost–benefit comparisons.
  • Pricing and incentives around transparency: the price-of-transparency formalizes how increased observability (from regulation, disclosure, or transparency tools) can reduce the effectiveness of deception-based defenses, informing policy tradeoffs between transparency and security.
  • Resource allocation and mechanism design: the approximation results and identified heuristic regimes enable scalable, economically efficient allocation of limited defensive resources in large-scale systems where computing exact optima is infeasible.
  • Heterogeneity and attacker modeling: results emphasize that defender returns depend critically on attacker rationality and information-processing; economic models of cybersecurity should incorporate strategic heterogeneity and bounded rationality for accurate valuation.
  • Externalities and market outcomes: because deception effectiveness declines with transparency and attacker learning, there can be strategic externalities across firms and platforms (e.g., if one actor’s disclosures reduce the market value of deception-based defenses elsewhere), suggesting roles for coordination or insurance markets.
  • Link to information economics: the information-theoretic uncertainty measure connects to classical value-of-information concepts, offering a bridge between mechanism-level security analysis and economic theories of information, signaling, and screening.
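The value-of-information connection in the last point can be sketched in the classical way (a toy decision problem with made-up payoffs, not taken from the paper): the attacker's expected gain from observing a signal of accuracy `tau` before acting, versus acting on the prior alone, rises with observability. That gain is the mirror image of the defender's price of transparency.

```python
import numpy as np

# Hypothetical two-state, two-action problem from the attacker's view:
# states: target is real (s=0) or a decoy (s=1); actions: attack or abort.
prior = np.array([0.5, 0.5])
payoff = np.array([[4.0, -6.0],    # attack: gain on real, loss on decoy
                   [0.0,  0.0]])   # abort: nothing either way

def best_expected(belief):
    """Best expected payoff over actions given a belief over states."""
    return float((payoff @ belief).max())

def value_of_information(tau):
    """Classical VoI: expected benefit of observing a signal of
    accuracy tau before acting, relative to acting on the prior."""
    like = np.array([[tau, 1 - tau], [1 - tau, tau]])   # P(o | s)
    joint = like * prior[None, :]                       # P(o, s)
    p_obs = joint.sum(axis=1)                           # P(o)
    with_signal = sum(p * best_expected(joint[o] / p)
                      for o, p in enumerate(p_obs))
    return with_signal - best_expected(prior)

for tau in (0.5, 0.7, 0.9):
    print(tau, round(value_of_information(tau), 3))
```

VoI is zero for an uninformative signal and grows with accuracy; every unit the attacker gains here is utility the defender's deception was previously protecting.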

Assessment

Paper Type: theoretical
Evidence Strength: n/a — The contribution is primarily analytical and simulation-based rather than empirical; effects are demonstrated within a formal model and synthetic experiments, so there is no real-world causal identification or observational/experimental evidence to rate.
Methods Rigor: high — The paper combines formal equilibrium analysis, analytic bounds and proofs, an information-theoretic uncertainty construct linking observability to value, approximation guarantees for scalable rules, and comprehensive simulated sensitivity analyses, demonstrating methodological thoroughness and internal consistency.
Sample: No observational sample; the paper uses a formal paired strategic-game model and extensive simulated scenarios that sweep parameters (decoy realism, defender budget, attacker rationality/optimality, and system observability) to evaluate value of deception, price of transparency, defender utility, and heuristic-vs-optimal gaps.
Themes: governance, adoption
Identification: Paired-game counterfactual — each defender–attacker interaction is modeled twice (baseline without deception and matched game with deception); the causal effect of deception is defined as the difference in equilibrium defender utility between these paired games under the model's informational and payoff assumptions.
Generalizability:
  • Results hold within the paper's game-theoretic payoff and information-structure assumptions; real-world payoffs or attacker objectives may differ.
  • Attacker models (degree of rationality, learning dynamics) are stylized; human or adaptive adversaries in the wild may behave differently.
  • Decoy realism and detectability are parameterized abstractly; operational costs and detection processes in deployed systems may not map cleanly to model parameters.
  • Simulations use synthetic parameter sweeps rather than field data, so empirical magnitudes and break-even points require validation in deployment.
  • Network effects, multi-stage attacks, defender signaling across firms, and market-level externalities are only partially modeled or abstracted.

Claims (17)

  • The paper provides a principled, game-theoretic framework to measure and compare the operational value of cyber deception relative to a matched non-deceptive baseline.
    Outcome area: Firm Productivity · Direction: null_result · Confidence: high · Measured outcome: value of deception (defender equilibrium utility difference between deception and baseline games)
  • The paper introduces two operational metrics: (1) value of deception (change in defender equilibrium utility attributable to deception relative to baseline) and (2) price of transparency (marginal loss in deception value induced by increased observability).
    Outcome area: Organizational Efficiency · Direction: null_result · Confidence: high · Measured outcome: value of deception; price of transparency (derivative of value of deception with respect to observability)
  • Defender-optimal deception allocations are characterized analytically (closed-form/structural characterization of optimal resource allocation under constraints).
    Outcome area: Organizational Efficiency · Direction: null_result · Confidence: high · Measured outcome: defender equilibrium utility (optimal allocation that maximizes it subject to constraints)
  • The paper derives closed-form bounds and break-even conditions that delineate when deception is ineffective due to cost or detectability.
    Outcome area: Organizational Efficiency · Direction: negative · Confidence: high · Measured outcome: value of deception (conditions where value ≤ 0 or falls below cost thresholds)
  • Approximation guarantees are provided that justify scalable allocation rules and heuristics (i.e., provable performance bounds versus optimal).
    Outcome area: Organizational Efficiency · Direction: positive · Confidence: high · Measured outcome: approximation ratio / performance gap between heuristic allocation and optimal defender utility
  • Equilibrium outcomes are linked to an information-theoretic uncertainty construct (entropy-like) that captures residual attacker uncertainty after observation.
    Outcome area: Decision Quality · Direction: null_result · Confidence: high · Measured outcome: residual attacker uncertainty (entropy-like quantity) and its relationship to defender utility/value of deception
  • The value of deception degrades (falls) as the true system state becomes more observable; this degradation is quantifiable via the price-of-transparency metric.
    Outcome area: Organizational Efficiency · Direction: negative · Confidence: high · Measured outcome: value of deception as a function of observability; price of transparency (marginal loss)
  • Paired-game design (baseline and matched decoy-enabled game per interaction) enables direct, causal measurement of deception impact.
    Outcome area: Other · Direction: null_result · Confidence: high · Measured outcome: causal effect on defender equilibrium utility (value of deception)
  • Computational experiments across heterogeneous simulated scenarios produce consistent cross-setting comparability of the proposed metrics (value of deception and price of transparency).
    Outcome area: Other · Direction: positive · Confidence: medium · Measured outcome: consistency of measured metrics (value of deception, price of transparency) across simulated scenarios
  • Computational results highlight tradeoffs among decoy realism, defender budget, and attacker rationality (attacker model), affecting deception value.
    Outcome area: Other · Direction: mixed · Confidence: medium · Measured outcome: value of deception and defender utility as functions of decoy realism, budget, and attacker rationality
  • There exist parameter regimes where simple allocation heuristics nearly match optimal allocations (heuristics are practically sufficient in some regimes).
    Outcome area: Organizational Efficiency · Direction: positive · Confidence: medium · Measured outcome: performance gap / approximation ratio between heuristics and optimal defender utility across parameter regimes
  • The information-theoretic uncertainty measure provides a mechanism-level explanation for why deception value falls as transparency increases (residual uncertainty explains utility changes).
    Outcome area: Decision Quality · Direction: negative · Confidence: high · Measured outcome: relationship between residual attacker uncertainty (entropy-like) and change in value of deception
  • The value-of-deception metric can be used to monetize the benefit of deception technologies relative to non-deceptive alternatives, supporting investment and cost–benefit comparisons.
    Outcome area: Firm Revenue · Direction: positive · Confidence: medium · Measured outcome: monetized benefit (value of deception mapped to economic decision criteria; no empirical calibration provided)
  • The price-of-transparency metric quantifies how increased observability (e.g., from disclosure or regulation) can reduce the effectiveness of deception-based defenses, informing policy tradeoffs.
    Outcome area: Governance And Regulation · Direction: negative · Confidence: medium · Measured outcome: marginal loss in value of deception due to increased observability
  • Defender returns depend critically on attacker rationality and information-processing; economic/security models should incorporate strategic heterogeneity and bounded rationality for accurate valuation.
    Outcome area: Organizational Efficiency · Direction: mixed · Confidence: medium · Measured outcome: variation in value of deception and defender utility under different attacker rationality models
  • Because deception effectiveness declines with transparency and attacker learning, strategic externalities can arise across actors (e.g., disclosures by one actor can reduce deception value for others), suggesting roles for coordination or insurance markets.
    Outcome area: Market Structure · Direction: negative · Confidence: low · Measured outcome: potential externality on value of deception across actors (not directly measured in the paper)
  • The proposed uncertainty measure connects to classical value-of-information concepts, bridging security mechanism analysis and economic theories of information, signaling, and screening.
    Outcome area: Decision Quality · Direction: null_result · Confidence: medium · Measured outcome: conceptual/analytical alignment between residual uncertainty metric and value-of-information measures
