Fact‑checking groups in Jordan, Turkey and Iran find AI a double‑edged sword: it automates monitoring and triage but also fuels sophisticated misinformation, forcing hybrid human–AI workflows and stronger data, legal and safety measures. Local politics and data access shape how platforms adopt tools and where market and policy support is most needed.
Information disorders are a significant global issue, and they are particularly acute yet underexplored in the Middle East, where political instability accelerates their spread. Despite the critical role fact-checking platforms play in combating information disorders, little is known about how these platforms operate in such a complicated regional context. This study analyzes three fact-checking platforms: Akeed (Jordan), Teyit (Turkey), and Factnameh (Iran), to better understand the differences in how they approach fact-checking, the strategies they use, and the obstacles they face, from social and political conditions to the impact of AI. Using a multimethod qualitative approach based on document analysis and interviews, the study highlights recurring issues such as censorship, limited access to data, and audience engagement. The findings reveal how these platforms address these challenges and provide valuable insights into effective methodologies for fighting mis-/disinformation. The results carry broader implications for enhancing media literacy and strengthening the role of fact-checking platforms in the Middle East, and they yield recommendations for best practices that can be applied regionally.
Summary
Main Finding
Fact-checking platforms in Jordan (Akeed), Turkey (Teyit), and Iran (Factnameh) face similar operational constraints—censorship, limited access to data, and difficulties engaging audiences—but respond with different strategies shaped by local politics. AI is simultaneously a tool that can lower verification costs and scale reach, and a threat that amplifies misinformation (deepfakes, synthetic text) and complicates verification. Effective practice in this region relies on hybrid human–AI workflows, local expertise, cross-sector partnerships, and protective measures for staff and sources.
Key Points
- Regional context matters: political instability, legal pressure, and censorship strongly shape what platforms can investigate, publish, and access.
- Akeed (Jordan), Teyit (Turkey), and Factnameh (Iran) each adapt their scope and tactics according to national constraints.
- Recurring operational challenges:
  - Censorship and legal risks that constrain reporting and distribution.
  - Limited or asymmetric access to primary data (platform APIs, state data, archives).
  - Difficulty building and retaining audience trust and engagement, especially where public skepticism or polarization is high.
  - Resource constraints: staff time, funding, technical capacity.
- Strategies used by platforms:
  - Emphasis on local-language expertise and culturally grounded sourcing.
  - Transparent workflows and clear labeling to build credibility.
  - Partnerships with media outlets, academic institutions, and civil-society actors to amplify reach and secure data.
  - Community reporting and audience-focused formats to improve engagement.
  - Selective adoption of automated tools for triage, detection, and monitoring while keeping human judgment central.
- AI-specific dynamics:
  - Positive: automation (classification, clustering, alerting) helps prioritize claims, monitor spread, and translate content; AI can reduce some verification costs.
  - Negative: generative models increase the volume and sophistication of misinformation (deepfakes, fabricated documents), raise false-positive risks, and can be weaponized by state or nonstate actors.
  - Operational trade-offs: limited data access and censorship reduce the efficacy of AI tools (training/validation gaps); legal risks complicate the use of some proprietary platforms or cloud services.
- Broader outputs: the study distills context-sensitive best practices for fact-checking in restrictive environments (e.g., safety protocols, local partnerships, hybrid verification workflows).
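The hybrid human–AI triage idea above can be pictured as a loop in which an automated scorer ranks incoming claims and routes high scorers to a human fact-checker. The sketch below is purely illustrative: the keyword watch-list, weights, and review threshold are hypothetical placeholders, not the platforms' actual tooling.

```python
# Minimal sketch of hybrid triage: automation prioritizes, humans decide.
from dataclasses import dataclass

# Hypothetical watch-list of high-risk topics a monitoring tool might flag.
RISK_KEYWORDS = {"election": 3, "vaccine": 3, "deepfake": 2, "protest": 2}
REVIEW_THRESHOLD = 3  # claims scoring at or above this go to a human reviewer

@dataclass
class Claim:
    text: str
    score: int = 0
    needs_human_review: bool = False

def triage(claims):
    """Score each claim by keyword weight; flag high scorers for human review."""
    for claim in claims:
        words = claim.text.lower().split()
        claim.score = sum(RISK_KEYWORDS.get(w, 0) for w in words)
        claim.needs_human_review = claim.score >= REVIEW_THRESHOLD
    # Highest-scoring claims first, so reviewers see likely-viral items early.
    return sorted(claims, key=lambda c: c.score, reverse=True)

queue = triage([
    Claim("new deepfake video of the protest circulating"),
    Claim("weather will be sunny tomorrow"),
    Claim("viral vaccine rumor spreading on messaging apps"),
])
for c in queue:
    lane = "HUMAN REVIEW" if c.needs_human_review else "auto-archive"
    print(f"{c.score:>2}  {lane:<12} {c.text}")
```

A production system would replace the keyword scorer with a trained classifier or clustering step, but the design point survives: automation orders the queue, and nothing is published without human judgment.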
Data & Methods
- Multimethod qualitative approach:
  - Document analysis of platform outputs, guidelines, public reports, and policy statements.
  - Semi-structured interviews with staff, editors, and stakeholders from Akeed, Teyit, and Factnameh.
  - Comparative, interpretive analysis to identify shared challenges, country-specific constraints, and adaptive strategies.
- Strengths and limitations:
  - Strengths: in-depth, context-rich insights into operational realities and decision-making.
  - Limitations: qualitative design limits statistical generalizability; findings reflect the sampled platforms and interviewees and may not cover all actors or informal practices in the region.
Implications for AI Economics
- Market demand and supply:
  - Growing demand for AI-assisted fact-checking tools creates market opportunities (software, monitoring services, labeled datasets).
  - However, censorship, restricted data flows, and government interference can fragment markets and limit economies of scale—favoring well-resourced, internationally connected actors and widening capacity gaps.
- Public-good nature and externalities:
  - Information quality exhibits strong positive externalities; misinformation imposes negative externalities (polarization, economic disruption). Left to market forces, underprovision of verification and mitigation is likely—justifying public funding, subsidies, or regulation.
- Cost-benefit trade-offs of automation:
  - AI can lower marginal costs of monitoring and triage, increasing coverage. But false positives/negatives and adversarial use of AI impose error costs requiring sustained human oversight.
  - Investments should prioritize hybrid models: automation for scale and humans for contextual, adversarial, and legally sensitive judgment.
- Data and model governance:
  - Restricted data access (platform API limits, state censorship) reduces the effectiveness of AI models and can bias tools. Policy interventions (mandated data-sharing for research, secure access frameworks) would improve tool performance and fairness.
  - Standards for provenance, labeling of AI-generated content, and interoperable evidence formats would lower verification costs and create network effects.
- Labor and capacity:
  - AI will shift tasks (more monitoring, less rote verification), creating a need for reskilling and new roles (AI tool operators, analysts). Donor and public investments should fund capacity building for local organizations.
- Incentives and platform governance:
  - Commercial platforms’ incentives may not align with public-interest verification. Economic policies (platform transparency mandates, data portability, competition policy) can reshape incentives and improve information ecosystems.
- Policy recommendations relevant to AI economics:
  - Fund and subsidize development of locally relevant hybrid fact-checking tools and secure data access for civil-society actors.
  - Support creation of open, multilingual labeled datasets representative of the region to improve model accuracy and reduce centralization.
  - Design regulatory frameworks requiring provenance labeling for AI-generated content and greater API/data access for research under privacy and safety safeguards.
  - Promote international technical assistance and knowledge-sharing to reduce capacity asymmetries and counteract market concentration.
  - Treat fact-checking and misinformation mitigation partly as public goods; consider public funding, procurement, or coordinated donor programs to internalize positive externalities.
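One way to picture the provenance-labeling and interoperable-evidence recommendation is a shared record schema that any fact-checking tool validates before ingesting a finding. The sketch below is a hypothetical illustration: the field names (`claim`, `verdict`, `ai_generated`, `sources`) and verdict vocabulary are invented for this example; a real deployment would follow an agreed provenance/content-labeling standard.

```python
# Illustrative sketch: validating a hypothetical interoperable evidence record.
REQUIRED_FIELDS = {"claim", "verdict", "ai_generated", "sources"}
ALLOWED_VERDICTS = {"true", "false", "misleading", "unverified"}

def validate_evidence_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is ingestible."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if record.get("verdict") not in ALLOWED_VERDICTS:
        problems.append(f"unknown verdict: {record.get('verdict')!r}")
    if not isinstance(record.get("ai_generated"), bool):
        problems.append("ai_generated must be an explicit true/false label")
    if not record.get("sources"):
        problems.append("at least one source is required")
    return problems

ok_record = {
    "claim": "Video shows event X",
    "verdict": "false",
    "ai_generated": True,  # content carries an explicit AI-generated label
    "sources": ["https://example.org/original-footage"],
}
bad_record = {"claim": "Unlabeled clip", "verdict": "viral"}

print(validate_evidence_record(ok_record))   # []
print(validate_evidence_record(bad_record))
```

The economic point is the network effect: once records validate against a common schema, every organization's verdicts become machine-readable evidence for every other organization, lowering duplicate verification costs.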
Overall, the study underscores that AI can materially improve fact-checking efficiency in the Middle East but only if paired with investments in data access, local capacity, legal protections, and governance measures that address the political and economic frictions unique to the region.
Assessment
Claims (24)
| Claim | Category | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|---|
| Fact-checking platforms in Jordan (Akeed), Turkey (Teyit), and Iran (Factnameh) face similar operational constraints—censorship, limited access to data, and difficulties engaging audiences—but respond with different strategies shaped by local politics. | Organizational Efficiency | mixed | medium | operational constraints (censorship, data access, audience engagement) and adaptive strategies | 0.05 |
| Political instability, legal pressure, and censorship strongly shape what platforms can investigate, publish, and access in the region. | Regulatory Compliance | negative | medium | ability to investigate, publish, and access information | 0.05 |
| Each platform (Akeed, Teyit, Factnameh) adapts its scope and tactics according to national constraints. | Organizational Efficiency | mixed | medium | scope of investigation and tactical choices | 0.05 |
| Censorship and legal risks constrain reporting and distribution for these fact-checking platforms. | Regulatory Compliance | negative | medium | reporting frequency, distribution channels, and content choices | 0.05 |
| Platforms face limited or asymmetric access to primary data sources such as platform APIs, state data, and archives. | Organizational Efficiency | negative | medium | access to primary data sources | 0.05 |
| Platforms experience difficulty building and retaining audience trust and engagement, especially in contexts of high public skepticism or polarization. | Adoption Rate | negative | medium | audience trust and engagement levels | 0.05 |
| Resource constraints—limited staff time, funding, and technical capacity—are recurring operational challenges for these platforms. | Organizational Efficiency | negative | medium | staffing levels, funding availability, technical capacity | 0.05 |
| Platforms emphasize local-language expertise and culturally grounded sourcing as a strategy to improve verification and credibility. | Output Quality | positive | medium | verification quality and perceived credibility | 0.05 |
| Transparent workflows and clear labeling are used to build credibility with audiences. | Adoption Rate | positive | medium | audience perceptions of credibility/trust | 0.05 |
| Platforms form partnerships with media outlets, academic institutions, and civil-society actors to amplify reach and secure data. | Adoption Rate | positive | medium | audience reach and data access through partnerships | 0.05 |
| Community reporting and audience-focused formats are used to improve engagement. | Adoption Rate | positive | medium | audience engagement | 0.05 |
| Platforms selectively adopt automated tools for triage, detection, and monitoring while keeping human judgment central to verification. | Automation Exposure | mixed | medium | degree of automation in verification workflows and reliance on human judgment | 0.05 |
| AI can lower verification costs and scale reach by automating tasks such as classification, clustering, alerting, and translation. | Task Completion Time | positive | medium | verification cost/time and monitoring/translation capacity | 0.05 |
| Generative AI increases the volume and sophistication of misinformation (deepfakes, fabricated documents), raises false-positive risks, and can be weaponized by state or nonstate actors. | AI Safety and Ethics | negative | medium | misinformation volume/sophistication and verification error risk | 0.05 |
| Limited data access and censorship reduce the efficacy of AI tools by creating training and validation gaps; legal risks complicate use of proprietary platforms and cloud services. | Output Quality | negative | medium | AI tool effectiveness (training/validation quality) and deployability | 0.05 |
| The study distills context-sensitive best practices for fact-checking in restrictive environments, including safety protocols, local partnerships, and hybrid verification workflows. | Organizational Efficiency | positive | medium | recommended operational practices for safety and verification effectiveness | 0.05 |
| There is growing market demand for AI-assisted fact-checking tools, creating opportunities for software, monitoring services, and labeled datasets. | Adoption Rate | positive | low | market demand for AI tools and labeled datasets | 0.03 |
| Censorship, restricted data flows, and government interference fragment markets, limit economies of scale, and favor well-resourced, internationally connected actors—widening capacity gaps. | Market Structure | negative | medium | market fragmentation and distribution of capacity among actors | 0.05 |
| Underprovision of verification is likely if left to market forces because information quality has positive externalities and misinformation imposes negative externalities, justifying public funding, subsidies, or regulation. | Governance and Regulation | negative | medium | level of provision of verification services relative to social optimum | 0.05 |
| Investments should prioritize hybrid models where automation provides scale and humans handle contextual, adversarial, and legally sensitive judgments. | Output Quality | positive | medium | verification effectiveness and error mitigation in workflows | 0.05 |
| Standards for provenance, labeling of AI-generated content, and interoperable evidence formats would lower verification costs and create beneficial network effects. | Market Structure | positive | low | verification cost and interoperability/network effects | 0.03 |
| AI adoption will shift fact-checking tasks (more monitoring, less rote verification), creating a need for reskilling and new roles (AI tool operators, analysts); donor and public investments should fund capacity building for local organizations. | Skill Acquisition | positive | medium | changes in task allocation, workforce skills, and need for capacity-building investments | 0.05 |
| Commercial platforms' incentives may not align with public-interest verification, so economic policies (transparency mandates, data portability, competition policy) can reshape incentives and improve information ecosystems. | Governance and Regulation | mixed | medium | alignment of platform incentives with public-interest verification | 0.05 |
| Overall, AI can materially improve fact-checking efficiency in the Middle East but only if paired with investments in data access, local capacity, legal protections, and governance measures addressing political and economic frictions. | Organizational Efficiency | positive (conditional) | medium | fact-checking efficiency conditioned on complementary investments | 0.05 |