
Scroll-and-click consent no longer suffices for AI systems: invisible inferences, model updates, and delegated decisions make traditional privacy disclosures ineffective; interdisciplinary design and a new research agenda are needed to create contextual, usable consent that shapes data flows, firm incentives, and market outcomes.

Moving Beyond Clicks: Rethinking Consent and User Control in the Age of AI
William Seymour, Florian Alt, Zinaida Benenson, Sophie Grimme, Farzaneh Karegar, Maija Poikela, Arianna Rossi, Mark Warner · Fetched March 12, 2026 · UCL Discovery (University College London)
Source: OpenAlex · Type: descriptive · Evidence: n/a · Relevance: 7/10 · Source PDF
Current scroll-and-click consent mechanisms fail to deliver meaningful user control in the AI era, and the workshop calls for contextual, adaptive, human-centered consent designs plus an interdisciplinary research agenda to evaluate their economic and welfare impacts.

Current privacy consent mechanisms often let users down: cookie banners violate informed-consent requirements, privacy policies remain difficult to understand, and transparency alone does not guarantee the protection of personal data. In other words, privacy controls are often not user-friendly, let alone experienced as mechanisms of empowerment. As AI processes ever more personal data and plays an increasingly important role in society, these challenges are becoming more acute. Emerging systems built on large-scale data and machine learning complicate the boundaries of user control and consent; invisible inferences, decisions delegated to AI agents, and opaque personalisation create new challenges. While prior HCI research has examined the usability of consent and explored ways to improve it, the community still lacks a systematic exploration of consent in the age of AI. This workshop therefore brings together experts from the fields of AI, HCI, privacy, social sciences, policy, and law to imagine how consent and control must evolve beyond "scroll-and-click" towards richer, contextual, and adaptive mechanisms that reflect human capabilities and values. It re-imagines consent and user control in the AI era, distinguishing between explicit decisions and the broader ways in which people can influence how their data is used. Using the Futures Design Toolkit, participants will develop future personas and create design provocations through prototyping. We are seeking position papers that address: novel consent mechanisms, the privacy impact of AI, privacy decision delegation models, and new interaction modalities for user consent and control. We will produce design artefacts and research directions for privacy control tools that are more effective, usable, and accessible than existing mechanisms.

Summary

Main Finding

Current privacy-consent mechanisms (cookie banners, dense policies, transparency-only approaches) fail to deliver meaningful user control. As AI systems increasingly rely on large-scale personal data, opaque personalization, invisible inferences, and delegated automated decisions make traditional “scroll-and-click” consent inadequate. The workshop proposes rethinking consent toward contextual, adaptive, and human-centered mechanisms via interdisciplinary design—producing personas, prototypes, and research directions that better align user capabilities and values with data-driven AI systems.

Key Points

  • Consent failures today:
    • Cookie banners and clickwrap routinely violate informed-consent principles.
    • Privacy policies remain hard to understand; transparency alone doesn’t ensure protection.
    • Existing controls are not user-friendly or empowering.
  • AI-specific complications:
    • Large-scale machine learning enables invisible inferences about users from seemingly innocuous data.
    • Decision delegation to AI agents and opaque personalization blur the scope of consent and control.
    • Dynamic behavior of models (continual learning, model updates) changes the meaning of past consent.
  • Gaps in research/practice:
    • HCI has explored usable consent, but there is no systematic framework for consent in the AI era.
    • Need to move beyond explicit, one-time decisions to broader ways users can influence data use (e.g., delegation, preferences over inference/usage).
  • Workshop approach and outputs:
    • Interdisciplinary participation (AI, HCI, privacy, social sciences, policy, law).
    • Futures Design Toolkit: create future personas, scenario-based design, and design provocations through prototyping.
    • Solicits position papers on novel consent mechanisms, privacy impacts of AI, delegation models, new interaction modalities.
    • Expected deliverables: design artefacts, prototypes, and a research agenda for more effective, usable, and accessible privacy controls.
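The "invisible inferences" point above can be made concrete with a toy simulation. This is my own illustrative sketch, not anything from the paper: all names and numbers (night-shift workers, activity hours, rates) are invented. It shows how even a trivial rule learned from "innocuous" behavioural data can recover a sensitive attribute well above the majority-class baseline — the kind of inference a scroll-and-click consent flow never surfaces.

```python
# Hypothetical illustration: inferring a sensitive attribute (night-shift
# work) from an innocuous signal (average hour of app activity).
import random

random.seed(0)

def make_user():
    """Night-shift workers (the sensitive attribute) tend to be active late."""
    night_shift = random.random() < 0.3
    # Innocuous signal: average hour of app activity (0-23).
    hour = random.gauss(2 if night_shift else 14, 3) % 24
    return hour, night_shift

data = [make_user() for _ in range(10_000)]

# A trivial "model": predict night-shift if activity centres on late night.
def predict(hour):
    return hour < 7 or hour > 22

correct = sum(predict(h) == y for h, y in data)
majority = sum(1 for _, y in data if not y)  # always guessing "not night-shift"

print(f"inference accuracy:      {correct / len(data):.2f}")
print(f"majority-class baseline: {majority / len(data):.2f}")
```

The gap between the two printed numbers is the "privacy risk from inferred attributes" the claims table refers to; no night-shift field was ever collected, only activity timestamps.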

Data & Methods

  • Methods used in the workshop:
    • Futures Design Toolkit: scenario planning, persona generation, speculative design.
    • Co-design and participatory prototyping with stakeholders across disciplines.
    • Position papers synthesizing conceptual, legal, and design proposals.
  • Empirical / evaluative methods implied for follow-up research:
    • Qualitative methods: interviews, focus groups, usability studies of consent interfaces.
    • Design prototyping and lab-based user testing of interaction modalities.
    • Field experiments and A/B tests on platforms to measure behavioral responses to consent designs.
    • Policy analysis and legal-technical assessments (compliance under GDPR-like regimes).
    • Mixed-methods: combine qualitative insights with quantitative measures (engagement, opt-in rates, inferred-privacy leakage).
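One of the quantitative methods listed above — an A/B test comparing behavioral responses to consent designs — can be sketched with a standard two-proportion z-test. The counts below are invented for illustration; the two "arms" (standard banner vs. contextual just-in-time prompt) are hypothetical conditions, not designs from the workshop.

```python
# Sketch of an A/B test on opt-in rates under two consent designs,
# using a two-sided two-proportion z-test. All counts are hypothetical.
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Return (z, two-sided p) for H0: p1 == p2, via the pooled estimate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal CDF from the error function: Phi(x) = (1 + erf(x/sqrt(2))) / 2.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Arm A: standard cookie banner; Arm B: contextual, just-in-time prompt.
z, p = two_prop_ztest(x1=312, n1=2000, x2=401, n2=2000)
print(f"opt-in A: {312/2000:.1%}  opt-in B: {401/2000:.1%}  z={z:.2f}  p={p:.4f}")
```

The same structure extends to the mixed-methods point: pair the opt-in rate difference with qualitative interviews explaining *why* the contextual prompt changed behavior.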

Implications for AI Economics

  • Market and informational failures:
    • Inadequate consent creates information asymmetries and negative externalities (privacy harms, loss of trust) that can distort demand for AI services.
    • High frictions or opaque consent reduce data supply, raising costs of training models and potentially reducing market competition (incumbent advantage via richer legacy data).
  • Value and pricing of data:
    • Better consent mechanisms (granular, transferable, delegable) can change the marginal value and liquidity of personal data in data markets—enabling new pricing/contracting models (subscriptions, pay-for-privacy, data dividends).
    • Modeling the willingness-to-pay for privacy versus personalization becomes central to product design and monetization strategies.
  • Incentives, firm behavior, and regulation:
    • Firms internalize different incentives depending on consent regimes: strict consent increases compliance costs but may increase user trust and long-run demand; lax regimes favor short-term data capture but expose firms to legal and reputational risk.
    • Regulation (e.g., consent standards) shifts equilibria—analysis needed on how rules affect innovation, entry, and welfare.
  • Effects on welfare and distribution:
    • Personalized AI can increase consumer surplus but also enable discriminatory pricing and welfare losses for vulnerable groups; consent design affects who benefits and who bears risks.
    • Delegation models (allowing agents to act on users’ behalf) change control and liability, with implications for insurance, liability allocation, and market structure.
  • Research agenda for AI economics emerging from the workshop:
    • Theoretical: formalize consent as a transaction/contracting problem—model consent friction, delegation, and dynamic consent under learning systems.
    • Empirical: measure how alternative consent designs affect data flows, model accuracy, consumer behavior, and firm profits using RCTs, natural experiments, and platform logs.
    • Mechanism design: design incentive-compatible contracts and market mechanisms for privacy-preserving data sharing (e.g., differential-privacy pricing, data trusts).
    • Policy evaluation: structural welfare analysis of consent regulations (GDPR-style requirements) on innovation, competition, and distributional outcomes.
    • Measurement tools: develop metrics for “meaningful consent,” privacy risk from inferences, and the economic value of control options.
  • Practical takeaways for economists:
    • Treat consent design as a lever that changes data availability and hence the economics of AI—incorporate consent frictions into demand and production-side models.
    • Collaborate with HCI and legal scholars to design experiments that capture both behavioral and welfare effects.
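The first practical takeaway — consent design as a lever on data availability and hence on the economics of AI — can be captured in a stylised model. Everything below is my own construction under loud assumptions (a logistic opt-in curve in consent friction and a concave learning curve in data volume); it is not a model from the paper, only a sketch of how consent frictions might enter the production side.

```python
# Toy model: consent friction -> opt-in rate -> data supply -> model quality.
# Functional forms and constants are assumptions chosen for illustration.
import math

N_USERS = 1_000_000

def opt_in_rate(friction):
    """Assumed: opt-in falls logistically as consent friction rises (0-1)."""
    return 1 / (1 + math.exp(4 * (friction - 0.5)))

def model_accuracy(n):
    """Assumed: accuracy rises concavely with training-set size (a learning curve)."""
    return 0.95 - 0.45 * (1000 / max(n, 1000)) ** 0.3

results = []
for friction in (0.2, 0.5, 0.8):
    n = N_USERS * opt_in_rate(friction)
    acc = model_accuracy(n)
    results.append((friction, n, acc))
    print(f"friction={friction:.1f}  data={n:,.0f}  accuracy={acc:.3f}")
```

Because the learning curve is concave, raising friction from 0.2 to 0.8 cuts the data supply far more than it cuts accuracy — which is exactly why incumbents with rich legacy data (see the market-structure point above) are advantaged under high-friction regimes.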

Assessment

Paper Type: descriptive
Evidence Strength: n/a — This is a workshop synthesis and design-position document rather than an empirical study; it presents expert judgments, prototypes, and research directions instead of causal estimates or validated empirical findings.
Methods Rigor: n/a — Methods are primarily speculative design, co‑design, scenario planning, and position-paper synthesis rather than systematic empirical or statistical methods that could be assessed for internal validity or robustness.
Sample: Outputs derive from an interdisciplinary workshop involving researchers and practitioners in AI, HCI, privacy, social sciences, law, and policy who produced personas, scenario-based design artifacts, prototypes, and position papers; no representative user sample, experimental data, or platform logs were collected or analyzed in this report.
Themes: governance, adoption, innovation
Generalizability:
  • Not empirically validated — recommendations are based on expert elicitation and design exercises, not field-tested across populations or platforms.
  • Participant selection and perspectives may be skewed toward academic and advocacy views and may not reflect industry or diverse user populations.
  • Legal and regulatory implications depend on jurisdiction (GDPR vs. other regimes), limiting cross-country generalizability.
  • AI system types vary widely (e.g., recommendation systems vs. autonomous agents), so design proposals may not map uniformly across technical architectures.
  • Prototypes and personas are illustrative and may not predict real-world behavioral responses or market outcomes without subsequent empirical testing.

Claims (20)

Claim · Category · Direction · Confidence · Outcome · Details

  • Current privacy-consent mechanisms (cookie banners, dense policies, transparency-only approaches) fail to deliver meaningful user control. — Regulatory Compliance · negative · medium · Outcome: meaningful user control (degree of user control over data use) · Details: 0.02
  • Cookie banners and clickwrap routinely violate informed-consent principles. — Regulatory Compliance · negative · medium · Outcome: adherence to informed-consent principles · Details: 0.02
  • Privacy policies remain hard to understand; transparency alone doesn’t ensure protection. — Regulatory Compliance · negative · medium · Outcome: user comprehension of privacy policies / protection outcomes · Details: 0.02
  • Existing controls are not user-friendly or empowering. — Regulatory Compliance · negative · medium · Outcome: usability / empowerment of privacy controls · Details: 0.02
  • Large-scale machine learning enables invisible inferences about users from seemingly innocuous data. — AI Safety and Ethics · negative · high · Outcome: privacy risk from inferred attributes (inference accuracy / presence of invisible inferences) · Details: 0.03
  • Decision delegation to AI agents and opaque personalization blur the scope of consent and control. — Governance and Regulation · negative · medium · Outcome: clarity/scope of consent and user control boundaries · Details: 0.02
  • Dynamic behavior of models (continual learning, model updates) changes the meaning of past consent. — Governance and Regulation · negative · medium · Outcome: stability of consent relevance over time · Details: 0.02
  • HCI has explored usable consent, but there is no systematic framework for consent in the AI era. — Governance and Regulation · null result · medium · Outcome: existence of a systematic AI-era consent framework · Details: 0.02
  • We need to move beyond explicit, one-time decisions to broader ways users can influence data use (e.g., delegation, preferences over inference/usage). — Governance and Regulation · positive · low · Outcome: feasibility/effectiveness of alternative consent modalities (delegation, preference controls) · Details: 0.01
  • The workshop produced interdisciplinary outputs including personas, prototypes, and a research agenda to better align user capabilities and values with data-driven AI systems. — Other · positive · high · Outcome: deliverables produced (personas, prototypes, research agenda) · Details: 0.03
  • The Futures Design Toolkit (scenario planning, persona generation, speculative design) was used as a primary method in the workshop. — Other · null result · high · Outcome: use of specified design methods · Details: 0.03
  • Follow-up empirical methods should include qualitative interviews, focus groups, usability studies, field experiments (A/B tests), and policy/legal-technical assessments. — Other · null result · high · Outcome: recommended empirical methods for future research · Details: 0.03
  • Inadequate consent creates information asymmetries and negative externalities (privacy harms, loss of trust) that can distort demand for AI services. — Consumer Welfare · negative · medium · Outcome: demand for AI services / trust / privacy harms · Details: 0.02
  • High frictions or opaque consent reduce data supply, raising costs of training models and potentially reducing market competition by advantaging incumbents with richer legacy data. — Market Structure · negative · medium · Outcome: data supply, model training costs, market competition · Details: 0.02
  • Better consent mechanisms (granular, transferable, delegable) can change the marginal value and liquidity of personal data — enabling new pricing/contracting models (subscriptions, pay-for-privacy, data dividends). — Market Structure · positive · low · Outcome: data market liquidity and pricing structures · Details: 0.01
  • Strict consent regimes increase compliance costs but may increase user trust and long-run demand; lax regimes favor short-term data capture but expose firms to legal and reputational risk. — Regulatory Compliance · mixed · medium · Outcome: compliance costs, user trust, data capture, legal/reputational risk · Details: 0.02
  • Personalized AI can increase consumer surplus but also enable discriminatory pricing and welfare losses for vulnerable groups; consent design affects distribution of benefits and risks. — Consumer Welfare · mixed · medium · Outcome: consumer surplus and distributional welfare outcomes · Details: 0.02
  • Delegation models (allowing agents to act on users’ behalf) change control and liability, with implications for insurance, liability allocation, and market structure. — Market Structure · mixed · low · Outcome: control, liability allocation, market structure outcomes · Details: 0.01
  • A research agenda for AI economics should include: formalizing consent as a transaction/contracting problem; empirical RCTs and natural experiments measuring effects of consent designs; mechanism design for privacy-preserving data sharing; and policy evaluation of consent regulations. — Other · null result · high · Outcome: proposed research topics and methodological approaches · Details: 0.03
  • Practical takeaway: economists should treat consent design as a lever that changes data availability and incorporate consent frictions into demand and production-side models; they should collaborate with HCI and legal scholars to design experiments capturing behavioral and welfare effects. — Other · positive · high · Outcome: integration of consent design into economic models and interdisciplinary collaboration · Details: 0.03

Notes