Scroll-and-click consent no longer suffices for AI systems: invisible inferences, model updates, and delegated decisions make traditional privacy disclosures ineffective; interdisciplinary design and a new research agenda are needed to create contextual, usable consent that shapes data flows, firm incentives, and market outcomes.
Current privacy consent mechanisms often let users down: cookie banners violate informed-consent requirements, privacy policies remain difficult to understand, and transparency alone does not guarantee the protection of personal data. In other words, privacy controls are often not user-friendly, let alone experienced as mechanisms for empowerment. As AI processes ever more personal data and plays an increasingly important role in society, these challenges are becoming more acute. Emerging systems built on large-scale data and machine learning complicate the boundaries of user control and consent: invisible inferences, decisions delegated to AI agents, and opaque personalisation create new challenges. While prior HCI research has examined the usability of consent and explored ways to improve it, the community still lacks a systematic exploration of consent in the age of AI. This workshop therefore brings together experts from AI, HCI, privacy, the social sciences, policy, and law to imagine how consent and control must evolve beyond "scroll-and-click" towards richer, contextual, and adaptive mechanisms that reflect human capabilities and values. It re-imagines consent and user control in the AI era, distinguishing between explicit decisions and the broader ways in which people can influence how their data is used. Using the Futures Design Toolkit, participants will develop future personas and create design provocations through prototyping. We seek position papers that address novel consent mechanisms, the privacy impact of AI, models for delegating privacy decisions, and new interaction modalities for user consent and control. We will produce design artefacts and research directions for privacy control tools that are more effective, usable, and accessible than existing mechanisms.
Summary
Main Finding
Current privacy consent and control mechanisms (e.g., cookie banners, dense privacy policies, permission prompts) are unusable and frequently manipulative; these failures are amplified as AI increases data collection, opaque personalization, and automated decision-making. The authors propose a focused, multidisciplinary workshop to systematically re-imagine consent and user control for an AI-driven future, using the Futures Design Toolkit to produce future personas and provocative design artefacts that chart research and design directions beyond “scroll-and-click” consent.
Key Points
- Problem statement
  - Common consent mechanisms violate informed-consent requirements or are effectively unusable (notice fatigue, dark patterns, incomprehensible policies).
  - Emerging modalities (voice, VR/XR/AR), invisible inferences, and delegated AI agents create novel and harder-to-observe privacy harms.
  - Transparency alone is insufficient to protect personal data or empower users.
- Workshop goals and deliverables
  - Bring together AI, HCI, privacy, social science, policy, and law experts to (re)design consent and control.
  - Produce future personas and “provotypes” (design provocations) that surface plausible positive and negative futures.
  - Produce an open repository (CC BY‑SA) of artifacts, reports, and position papers; pursue broader dissemination with regulators and policy communities.
- Guiding research questions
  - How should the relevant “context” for privacy decisions be defined, and how should required information thresholds adapt to it?
  - Under what conditions can privacy decision-making be delegated (legally, ethically, practically) to AI agents or cooperatives?
  - What taxonomy of control mechanisms and safeguards is needed to distinguish supportive guidance from manipulation?
  - How might alternative futures (e.g., limited personalization) affect consent, markets, and social outcomes?
- Process and structure
  - Pre-workshop: call for 3‑page position papers; clustering to form working groups.
  - In‑workshop methods: horizon scanning, the Time Traveler future-persona method, and “provotyping” to produce tangible artifacts; two 90-minute sessions with facilitators, scribes, and presenters.
  - Post-workshop: public repository, report submission, and follow-up meetings to seed collaborations and policy engagement.
Data & Methods
- Nature of the submission: workshop/proposal rather than empirical study—no primary quantitative dataset presented.
- Methods employed in the workshop and required from participants:
  - Futures Design Toolkit (horizon scanning, scenario building, persona development).
  - Time Traveler method for future-persona plausibility and trajectory mapping.
  - Provotyping: rapid prototyping of provocative artifacts to reveal consequences and trade-offs.
  - Solicitation of short position papers (peer-reviewed by organizers) to seed working groups.
  - Group facilitation practices: structured introductions, scribe/presenter assignments, artifact-based reporting.
- Intended outputs as reusable research resources (personas, provotypes, position papers, report) for later empirical evaluation and design iterations.
Implications for AI Economics
- Market failures and attention externalities
  - Consent fatigue, habituation, and opaque designs create attention-market failures that impede meaningful bargaining over data; economic models should incorporate limited attention and bounded rationality when treating data as a traded good.
  - Manipulative UI incentives (dark patterns) can generate welfare losses and distort demand for privacy-preserving alternatives.
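One way to make the limited-attention point concrete is a toy habituation model in which each additional notice lowers the probability that a user actually reads it. This is only an illustrative sketch: the geometric decay form and the `base` and `habituation` parameters are assumptions, not estimates from any study.

```python
def read_probability(n_prior_notices, base=0.6, habituation=0.25):
    """Toy model: probability a user reads a new notice, decaying
    geometrically with the number of notices already seen."""
    return base * (1 - habituation) ** n_prior_notices

def expected_read(k, base=0.6, habituation=0.25):
    """Expected number of notices actually read out of the first k shown."""
    return sum(read_probability(n, base, habituation) for n in range(k))

# Under these assumed parameters, showing many more notices yields
# almost no additional reading: the expected count plateaus below
# base / habituation = 2.4 read notices, no matter how many are shown.
for k in (1, 5, 20):
    print(k, round(expected_read(k), 2))
```

The plateau is the attention-market failure in miniature: beyond the first few notices, additional disclosure produces essentially no additional informed engagement, so "more notice" cannot substitute for better-designed consent.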
- Data markets and bargaining power
  - Delegation models (AI agents, data cooperatives) change the unit of transaction from the individual to representative actors, shifting bargaining power and potentially enabling scale efficiencies or collective bargaining over data value.
  - Such delegation introduces agency problems, principal–agent frictions, and new certification/reputation externalities to model and regulate.
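The principal–agent friction can be sketched with a minimal model: a delegated consent agent accepts a data request when its perceived value to the user exceeds the privacy cost, and a misalignment parameter tilts the agent's valuations toward the data requester. All numbers and the decision rule are illustrative assumptions.

```python
def agent_decisions(requests, misalignment=0.0):
    """Each request is (value_to_user, privacy_cost). A perfectly
    aligned agent (misalignment=0) accepts iff value > cost; a
    captured agent inflates perceived value by `misalignment`."""
    return [value + misalignment > cost for value, cost in requests]

# Hypothetical data requests: (value to the user, privacy cost).
requests = [(1.0, 0.5), (0.2, 0.6), (0.4, 0.45)]

aligned = agent_decisions(requests)                     # [True, False, False]
captured = agent_decisions(requests, misalignment=0.3)  # [True, False, True]

def user_welfare(decisions):
    """User welfare: sum of (value - cost) over accepted requests."""
    return sum(v - c for (v, c), accepted in zip(requests, decisions) if accepted)

# The captured agent accepts a request whose cost exceeds its value,
# so user welfare falls relative to the aligned agent.
print(user_welfare(aligned), user_welfare(captured))
```

Even a small misalignment flips marginal requests from rejection to acceptance, which is exactly the agency cost that certification or reputation mechanisms for delegated consent would need to bound.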
- Platform revenue, personalization, and externalities
  - Any shift away from current consent regimes (e.g., tighter necessary-data models or opt-in defaults) will affect personalization-driven ad markets, service prices, and platform business models; quantifying trade-offs between privacy, consumer surplus, and platform profits is crucial.
  - Alternative consent mechanisms could reduce market opacity, affecting competition and potentially leading to a reorganization of data intermediaries (e.g., paid vs. free models, rise of privacy-focused competitors).
- Policy and mechanism-design opportunities
  - Regulatory design: model how consent rules interact with strategic platform behavior; study compliance costs and enforcement incentives.
  - Mechanism design: create and test market mechanisms for delegating consent (contracts for agents, reputation systems, delegation marketplaces, certified privacy agents) that preserve user agency and mitigate misaligned incentives.
  - Metrics for evaluation: develop economic metrics beyond clicks, e.g., effective informed-consent rate, realized data flows, user welfare-adjusted ad revenue, incidence of manipulative practices, and social-welfare impacts.
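As an illustration of "metrics beyond clicks", a hedged sketch of how an effective informed-consent rate might be computed from interaction logs. The field names, the dwell-time threshold, and the comprehension-check criterion are all invented for illustration; a real metric would need validated operationalizations of each.

```python
from dataclasses import dataclass

@dataclass
class ConsentEvent:
    """One hypothetical consent interaction from a usage log."""
    consented: bool           # did the user click "accept"?
    seconds_on_notice: float  # dwell time on the notice
    quiz_correct: bool        # passed a short comprehension check

def effective_informed_consent_rate(events, min_dwell=5.0):
    """Share of ALL users whose consent was plausibly informed:
    they consented, spent at least `min_dwell` seconds on the
    notice, and answered a comprehension question correctly."""
    if not events:
        return 0.0
    informed = sum(
        e.consented and e.seconds_on_notice >= min_dwell and e.quiz_correct
        for e in events
    )
    return informed / len(events)

log = [
    ConsentEvent(True, 12.0, True),   # informed consent
    ConsentEvent(True, 0.8, False),   # reflexive click-through
    ConsentEvent(False, 30.0, True),  # informed refusal
    ConsentEvent(True, 9.0, True),    # informed consent
]
print(effective_informed_consent_rate(log))  # 0.5
```

Note how the metric diverges from the raw acceptance rate (0.75 here): the reflexive click-through counts toward clicks but not toward informed consent, which is the gap such a metric is meant to expose.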
- Empirical and modeling research directions
  - Field experiments and randomized controlled trials to measure welfare impacts of different consent UIs and delegation models on uptake, data sharing, and platform revenues.
  - Structural econometric models capturing attention constraints, UI-induced choice architecture, and long-run dynamic effects on data supply.
  - Laboratory studies to calibrate parameters for behavioral models (e.g., habituation rates, trust in AI agents).
  - Cost–benefit analyses comparing (a) current consent regimes, (b) AI-agent delegation, and (c) stricter “necessary-only” data collection regimes, including spillovers to innovation and competition.
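The RCT direction above can be sketched with a minimal difference-in-means estimator for a hypothetical consent-UI experiment, where treatment is a redesigned consent flow and the outcome is a binary data-sharing opt-in. The data are fabricated for illustration; a real analysis would add covariate adjustment and a properly powered sample.

```python
import math

def diff_in_means(treated, control):
    """Average treatment effect estimate with a Welch-style
    two-sample standard error."""
    n_t, n_c = len(treated), len(control)
    mean_t = sum(treated) / n_t
    mean_c = sum(control) / n_c
    var_t = sum((x - mean_t) ** 2 for x in treated) / (n_t - 1)
    var_c = sum((x - mean_c) ** 2 for x in control) / (n_c - 1)
    ate = mean_t - mean_c
    se = math.sqrt(var_t / n_t + var_c / n_c)
    return ate, se

# Hypothetical opt-in outcomes (1 = shared data, 0 = declined).
treated = [1, 0, 0, 1, 0, 0, 1, 0]   # redesigned consent UI
control = [1, 1, 0, 1, 1, 1, 0, 1]   # status-quo banner
ate, se = diff_in_means(treated, control)
print(f"ATE = {ate:.3f}, SE = {se:.3f}")  # ATE = -0.375, SE = 0.245
```

A negative ATE here would mean the redesigned flow reduces opt-in, which is precisely the revenue-versus-informed-choice trade-off that welfare analyses of consent design need to price.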
- Practical recommendations for AI economists
  - Incorporate bounded attention and interface-driven choice architecture into models of data transactions.
  - Treat delegation (AI agents / data cooperatives) as new market institutions with explicit agency constraints to model welfare and distributional outcomes.
  - Collaborate with HCI and legal researchers to design experimentally testable consent mechanisms; use workshop artifacts (personas, provotypes) as stimuli for economics experiments and policy simulations.
  - Engage regulators with economic impact assessments that translate design alternatives into quantifiable market and welfare outcomes.
Summary: This workshop frames consent failures as both design and economic problems. For AI economics, it provides research directions and testable artifacts to quantify how redesigned consent and delegation mechanisms will reshape data markets, platform incentives, and social welfare.
Assessment
Claims (20)
| Claim | Category | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|---|
| Current privacy-consent mechanisms (cookie banners, dense policies, transparency-only approaches) fail to deliver meaningful user control. | Regulatory Compliance | negative | medium | meaningful user control (degree of user control over data use) | 0.02 |
| Cookie banners and clickwrap routinely violate informed-consent principles. | Regulatory Compliance | negative | medium | adherence to informed-consent principles | 0.02 |
| Privacy policies remain hard to understand; transparency alone doesn’t ensure protection. | Regulatory Compliance | negative | medium | user comprehension of privacy policies / protection outcomes | 0.02 |
| Existing controls are not user-friendly or empowering. | Regulatory Compliance | negative | medium | usability / empowerment of privacy controls | 0.02 |
| Large-scale machine learning enables invisible inferences about users from seemingly innocuous data. | AI Safety and Ethics | negative | high | privacy risk from inferred attributes (inference accuracy / presence of invisible inferences) | 0.03 |
| Decision delegation to AI agents and opaque personalization blur the scope of consent and control. | Governance and Regulation | negative | medium | clarity/scope of consent and user control boundaries | 0.02 |
| Dynamic behavior of models (continual learning, model updates) changes the meaning of past consent. | Governance and Regulation | negative | medium | stability of consent relevance over time | 0.02 |
| HCI has explored usable consent, but there is no systematic framework for consent in the AI era. | Governance and Regulation | null_result | medium | existence of a systematic AI-era consent framework | 0.02 |
| We need to move beyond explicit, one-time decisions to broader ways users can influence data use (e.g., delegation, preferences over inference/usage). | Governance and Regulation | positive | low | feasibility/effectiveness of alternative consent modalities (delegation, preference controls) | 0.01 |
| The workshop produced interdisciplinary outputs including personas, prototypes, and a research agenda to better align user capabilities and values with data-driven AI systems. | Other | positive | high | deliverables produced (personas, prototypes, research agenda) | 0.03 |
| The Futures Design Toolkit (scenario planning, persona generation, speculative design) was used as a primary method in the workshop. | Other | null_result | high | use of specified design methods | 0.03 |
| Follow-up empirical methods should include qualitative interviews, focus groups, usability studies, field experiments (A/B tests), and policy/legal-technical assessments. | Other | null_result | high | recommended empirical methods for future research | 0.03 |
| Inadequate consent creates information asymmetries and negative externalities (privacy harms, loss of trust) that can distort demand for AI services. | Consumer Welfare | negative | medium | demand for AI services / trust / privacy harms | 0.02 |
| High frictions or opaque consent reduce data supply, raising costs of training models and potentially reducing market competition by advantaging incumbents with richer legacy data. | Market Structure | negative | medium | data supply, model training costs, market competition | 0.02 |
| Better consent mechanisms (granular, transferable, delegable) can change the marginal value and liquidity of personal data, enabling new pricing/contracting models (subscriptions, pay-for-privacy, data dividends). | Market Structure | positive | low | data market liquidity and pricing structures | 0.01 |
| Strict consent regimes increase compliance costs but may increase user trust and long-run demand; lax regimes favor short-term data capture but expose firms to legal and reputational risk. | Regulatory Compliance | mixed | medium | compliance costs, user trust, data capture, legal/reputational risk | 0.02 |
| Personalized AI can increase consumer surplus but also enable discriminatory pricing and welfare losses for vulnerable groups; consent design affects distribution of benefits and risks. | Consumer Welfare | mixed | medium | consumer surplus and distributional welfare outcomes | 0.02 |
| Delegation models (allowing agents to act on users’ behalf) change control and liability, with implications for insurance, liability allocation, and market structure. | Market Structure | mixed | low | control, liability allocation, market structure outcomes | 0.01 |
| A research agenda for AI economics should include: formalizing consent as a transaction/contracting problem; empirical RCTs and natural experiments measuring effects of consent designs; mechanism design for privacy-preserving data sharing; and policy evaluation of consent regulations. | Other | null_result | high | proposed research topics and methodological approaches | 0.03 |
| Practical takeaway: economists should treat consent design as a lever that changes data availability and incorporate consent frictions into demand and production-side models; they should collaborate with HCI and legal scholars to design experiments capturing behavioral and welfare effects. | Other | positive | high | integration of consent design into economic models and interdisciplinary collaboration | 0.03 |