Generative audiovisual AI promises large efficiency gains in content production, but legal, ethical and trust frictions across training data, deployment and output will both hamper and reshape it, concentrating advantage with data- and deployment-rich firms and creating new markets for provenance and verification.
This paper draws on secondary data analysis to build a narrative review of the challenges to adopting artificial intelligence tools for audiovisual communication, identifying issues of ethics, control, transparency and the legal framework. We critique the term itself and delimit the research to generative, neural-network-based computational processes. The problems are examined in three proposed areas: the input, which covers the training material; the process, which concerns the systems needed to run such services, their development and their control; and the output, where the use of generated artifacts is analyzed. International legal and judicial approaches are reviewed at each step, as they are fundamental to the field's future development. Finally, we sketch hypothetical future scenarios grounded in current AI debates to help readers anticipate possible situations involving generative systems. We conclude that although the adoption of such technologies is unstoppable, their use may come with limitations, regulations or conditions. More importantly, the general use of AI tools will bring a shift in the way we produce and consume information. While we can clearly foresee dramatic productivity increases, it is not clear how this production will be accepted by society. On one hand, studies have already shown that subjects rate AI-generated content higher than content made by humans. On the other hand, the abundance of artificial media might provoke an overall rejection of the digital world.
Summary
Main Finding
Adoption of generative neural-network–based audiovisual AI is likely inevitable and will significantly raise productivity in content creation, but it poses material ethical, control, transparency and legal challenges across three stages — input (training data), process (development & deployment), and output (use of artifacts). These challenges create regulatory, market, and social-acceptance frictions that will reshape how information is produced, valued and consumed.
Key Points
- Scope and definition
  - The paper restricts analysis to generative neural-network processes for audiovisual communication and critiques broader/ambiguous uses of the term "AI."
- Three-stage framework for risks and issues
  - Input: training-material concerns, including consent, copyright, representativeness, bias, provenance and data ownership.
  - Process: system-level issues, including governance of model development, control over deployment, transparency, auditing, and operational safety.
  - Output: downstream-use issues, including authenticity, deception, attribution, reuse rights, reputational harms and the societal impact of abundant generated media.
- Legal and judicial landscape
  - International and national legal approaches are surveyed for each stage; differences in IP, privacy, liability and evidence law create fragmentation and uncertainty.
- Social acceptance uncertainty
  - While some studies find people may rate AI-generated content equal or superior to human-created content, the proliferation of artificial media could also spur distrust of, or outright rejection of, digital media.
- Future scenarios
  - The paper proposes hypothetical trajectories to help anticipate regulatory choices, market responses and social reactions; outcomes depend on litigation, legislation and technological countermeasures (e.g., verification tools).
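The verification countermeasures mentioned above can be made concrete with a minimal sketch: a provenance record that binds a media file's content hash to creator and tool metadata, with a signature over the record. All names here (`make_provenance_record`, `gen-model-x`, etc.) are illustrative, and the HMAC over a shared demo key stands in for the public-key signatures that real provenance standards such as C2PA use.

```python
import hashlib
import hmac
import json

# Illustrative shared secret; real provenance systems (e.g. C2PA)
# use public-key signatures rather than a shared key.
SIGNING_KEY = b"demo-key"

def make_provenance_record(media_bytes: bytes, creator: str, tool: str) -> dict:
    """Bind a content hash to creator/tool metadata and sign the record."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "generator_tool": tool,  # e.g. a camera ID vs. a generative model name
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Check both record authenticity and content integrity."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )

media = b"\x89PNG...frame data..."
rec = make_provenance_record(media, creator="studio-a", tool="gen-model-x")
assert verify_provenance(media, rec)             # untouched file verifies
assert not verify_provenance(media + b"!", rec)  # any edit breaks the chain
```

The key property for the markets discussed in the paper is that verification is cheap for anyone holding the record, while forging a valid record for altered content is infeasible, which is what lets provenance act as a quality signal.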
Data & Methods
- Methodology: narrative review based on secondary data analysis.
- Sources: literature on generative neural networks, legal cases and statutes, policy reports, and empirical studies on content perception and trust (no new primary data collected).
- Analytical approach: conceptual synthesis using the three-stage input/process/output framework plus comparative review of legal/judicial responses and scenario construction.
- Limitations: reliance on secondary sources and narrative synthesis (limits causal claims); scope limited to audiovisual generative models, so findings may not generalize to other AI modalities.
Implications for AI Economics
- Productivity and labor
  - Large productivity gains in content production could reduce marginal costs and compress prices for many creative goods, potentially displacing some human labor while raising demand for high-skill oversight, curation and novel creative inputs.
- Market structure and firm strategy
  - Platforms and firms controlling model training data and deployment infrastructure gain strategic advantage, raising risks of vertical integration and concentration.
- Quality signaling and the market for authenticity
  - Abundant synthetic media may erode the signal value of standard digital content, creating demand for authentication services, certification markets and premium "human-made" labels.
- Attention economy and demand uncertainty
  - Proliferation of generated content may increase information supply but lower per-item attention and willingness-to-pay, potentially reducing monetization unless intermediaries solve discoverability and trust problems.
- Legal/regulatory costs and frictions
  - Compliance with IP, privacy and liability regimes will impose costs (monitoring, licensing, disclosure), possibly raising barriers for smaller entrants and affecting prices and diffusion.
- Externalities and public goods
  - Negative externalities (misinformation, reputational harm, verification costs) may justify public intervention: standards for provenance, mandatory labeling, penalties for malicious misuse, and public investment in verification infrastructure.
- Redistribution and valuation
  - Returns may shift toward owners of data, model capacity and verification technology; traditional creators may demand new compensation mechanisms (data-use royalties, collective licensing).
- Policy levers and economic effects
  - Potential interventions include mandatory provenance metadata, liability rules, taxes/subsidies to internalize harms, antitrust measures to limit concentration, and funding for public verification tools; each choice will shape incentives, innovation rates and market outcomes.
- Uncertainty & research needs
  - Empirical work is needed on consumer willingness-to-pay for authenticated vs. synthetic content, labor-displacement elasticities, market-concentration dynamics, and cost-benefit evaluations of regulatory options.
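The attention-dilution argument above can be sketched as a toy model: a fixed audience attention budget is split evenly across all published items, so per-item monetization falls as cheap synthetic supply grows. All parameter values are illustrative assumptions, not estimates from the paper.

```python
# Toy model: fixed attention budget shared across all items, so per-item
# revenue scales inversely with total supply. Numbers are illustrative.

def per_item_revenue(human_items: int, synthetic_items: int,
                     attention_budget: float = 1_000_000.0,
                     revenue_per_unit_attention: float = 0.01) -> float:
    """Revenue per item when attention is split evenly across all items."""
    total_items = human_items + synthetic_items
    attention_per_item = attention_budget / total_items
    return attention_per_item * revenue_per_unit_attention

baseline = per_item_revenue(human_items=10_000, synthetic_items=0)
flooded = per_item_revenue(human_items=10_000, synthetic_items=90_000)

# A tenfold increase in total supply cuts per-item revenue tenfold,
# even though human output is unchanged.
assert abs(flooded - baseline / 10) < 1e-12
```

In this sketch, the role of the trust intermediaries mentioned above would be to steer attention toward verified items, raising their effective share of the budget above the uniform split assumed here.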
Assessment
Claims (17)
| Claim | Topic | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|---|
| Adoption of generative neural-network–based audiovisual AI is likely inevitable and will significantly raise productivity in content creation. | Firm Productivity | positive | medium | productivity in audiovisual content creation | 0.02 |
| Generative audiovisual AI poses material ethical, control, transparency and legal challenges across three stages: input (training data), process (development & deployment), and output (use of artifacts). | AI Safety and Ethics | negative | high | presence and types of ethical, governance, transparency and legal risks across input/process/output stages | 0.04 |
| Input-stage risks include concerns about consent, copyright, representativeness, bias, provenance and data ownership for training material. | AI Safety and Ethics | negative | high | legal/ethical compliance and risk factors in training datasets | 0.04 |
| Process-stage risks include governance of model development, control over deployment, transparency, auditing, and operational safety. | Governance and Regulation | negative | high | governance and operational safety concerns in model development/deployment | 0.04 |
| Output-stage risks include authenticity/deception concerns, attribution and reuse-rights disputes, reputational harms, and broader societal impacts from abundant generated media. | AI Safety and Ethics | negative | high | authenticity, deception potential, attribution disputes, reputational and societal harms | 0.04 |
| International and national legal approaches to these stages are fragmented, creating uncertainty for IP, privacy, liability and evidence law. | Governance and Regulation | negative | high | degree of fragmentation and legal uncertainty across jurisdictions | 0.04 |
| Social acceptance is uncertain: some studies find people may rate AI-generated content equal or superior to human-created content, while proliferation of artificial media could also spur distrust or rejection of digital media. | AI Safety and Ethics | mixed | medium | perceived quality of AI-generated content and public trust/acceptance of digital media | 0.02 |
| Large productivity gains in content production could reduce marginal costs and compress prices for many creative goods, potentially displacing some human labor while raising demand for high-skill oversight, curation and novel creative inputs. | Firm Productivity | mixed | medium | marginal costs, prices of creative goods, labor displacement, demand for high-skill roles | 0.02 |
| Platforms and firms that control model training data and deployment infrastructure will gain strategic advantage, increasing risks of vertical integration and market concentration. | Market Structure | negative | medium | market concentration, vertical integration, strategic advantage for data/infrastructure owners | 0.02 |
| Abundant synthetic media may erode the signaling value of standard digital content and create demand for authentication services, certification markets and premium 'human-made' labels. | Adoption Rate | mixed | medium | demand for authentication/certification services and premiums for 'human-made' content | 0.02 |
| Proliferation of generated content may increase information supply but lower per-item attention and willingness-to-pay, potentially reducing monetization unless intermediaries solve discoverability and trust issues. | Firm Revenue | negative | medium | attention per item, willingness-to-pay, content monetization | 0.02 |
| Compliance with IP, privacy and liability regimes will impose costs (monitoring, licensing, disclosure) that may raise barriers for smaller entrants and affect prices and diffusion of generative audiovisual models. | Regulatory Compliance | negative | medium | compliance costs, market entry barriers, diffusion rates | 0.02 |
| Negative externalities from synthetic media (misinformation, reputational harm, verification costs) may justify public interventions such as provenance standards, mandatory labeling, penalties for malicious misuse, and public investment in verification infrastructure. | Governance and Regulation | negative | medium | existence of externalities and scope for public policy interventions | 0.02 |
| Economic returns may shift toward owners of data, model capacity and verification technology, while traditional creators may demand new compensation mechanisms (e.g., data-use royalties, collective licensing). | Inequality | mixed | medium | distribution of economic returns and emergence of compensation mechanisms | 0.02 |
| Potential policy levers include mandatory provenance metadata, liability rules, taxes/subsidies to internalize harms, antitrust actions to limit concentration, and funding for public verification tools; each policy choice will shape incentives, innovation rates and market outcomes. | Governance and Regulation | mixed | medium | policy impacts on incentives, innovation, market structure and social outcomes | 0.02 |
| Important empirical research gaps remain (consumer willingness-to-pay for authenticated vs. synthetic content, labor-displacement elasticities, market concentration dynamics, and cost–benefit evaluations of regulatory options). | Research Productivity | null result | high | identified gaps in empirical knowledge and priority research questions | 0.04 |
| Methodology used in the paper is a narrative review relying on secondary sources (literature, legal cases, policy reports, empirical perception studies) and conceptual synthesis; no new primary data were collected. | Other | null result | high | study methodology (use of secondary sources; absence of primary data) | 0.04 |