AI, Deepfakes and AR: Emerging Legal Areas in Media and Telecommunications
We used to say “seeing is believing.”
But in 2025, you believe everything you see at your own risk. Why? We live in a world where generative AI can make convincing videos of events that never happened, your face and voice can be replicated as a deepfake with scary precision, and the line between the physical and virtual worlds is gradually blurring.
If the future had three faces in media and telecom, they would be: the brain (AI), the mask (deepfakes), and the lens (AR). Each plays a different role, but all three raise powerful questions about truth, identity, and legality in the digital age.
Artificial Intelligence (AI) refers to the ability of machines to mimic human cognitive functions such as learning, reasoning and problem-solving through algorithms and data.1 Without knowing it, we all interact with AI: a personalized playlist on Spotify, a chatbot answering your service complaint, real-time captions during a broadcast. It is the unseen brain behind automated moderation tools, predictive text, voice assistants, and creative content too. In May 2025, TVC News introduced Nigeria’s first AI-powered news anchors, delivering bulletins in English, Yoruba, Hausa, Igbo and Pidgin to extend 24/7 coverage and help human journalists with real-time translations and updates.2
Deepfakes are synthetic media created using AI techniques, particularly deep learning, to manipulate or generate audio, video, or images. They can convincingly replace a person’s face or voice with another’s, often in a way that is indistinguishable to the human eye.3 Originally explored for entertainment, deepfakes are now used in marketing, satire and, worryingly, in misinformation campaigns, political propaganda, and non-consensual explicit content. This makes them potent and dangerous tools in the wrong hands.
Augmented Reality (AR) enhances the real world by overlaying digital information, such as images, sounds, or animations, through devices like smartphones, glasses, or AR headsets. Unlike virtual reality, which creates a fully immersive experience, AR blends the virtual with the physical. Media outlets use AR to offer immersive storytelling, while telecom companies use it in customer support, remote troubleshooting, and virtual showrooms.
With these emerging technologies come unprecedented challenges. Seeing is no longer believing.
Like a modern-day doubting Thomas, the viewer must now ask: what is real, who made it, and can it be trusted?
The law has always walked a few steps behind technology. Existing laws in Nigeria were written for a pre-AI, pre-AR world, not the one we currently live in.
It’s no secret that AI systems, AR applications, and deepfake generators rely heavily on neural networks that analyze large volumes of data, such as facial images, voice samples and behavioral patterns, to generate convincing mimics.4 Can they be trusted not to harvest the personal data of unsuspecting users? This sparks concerns about consent, surveillance, and digital identity. The Nigeria Data Protection Act (NDPA) attempts to address privacy issues,5 but enforcement remains weak, and many emerging technologies operate in grey zones.
Deepfakes have the power to distort reality. They can spread false political messages, manipulate public opinion, and destroy reputations. In a country like Nigeria, where misinformation already travels at lightning speed across platforms like WhatsApp and X (formerly Twitter), this technology poses a threat to democracy and public trust.6 In media and telecommunications, the line between satire, parody, and defamation becomes dangerously thin. While Nigerian law recognizes defamation as both a civil and criminal offence, it remains silent on synthetic media. The Cybercrimes Act7 penalizes false information, but it doesn’t contemplate the sophistication of AI-generated deception.
Beyond this, the question of intellectual property must also be considered. Who owns the copyright in AI-generated videos and deepfakes? Three candidates exist: the human who prompted the AI, the public domain, or the AI itself. Under the Nigerian Copyright Act, copyright protection applies only to works with human authorship, never to works authored by artificial intelligence. It is submitted that AI-generated videos should therefore fall into the public domain, free for anyone to use.
Imagine a deepfake video created in Australia going viral in Nigeria, and across the rest of the world, within minutes of posting. This illustrates the borderlessness of AI and media platforms. Yet laws remain local. Legal remedies become slow and ineffective, especially in nations with underdeveloped digital laws. Nigeria lacks the international agreements and technological infrastructure needed to trace or hold accountable many foreign-based actors. This creates loopholes through which tech companies and content creators escape liability, leaving victims without recourse.
Augmented Reality, while often overshadowed by the drama of deepfakes and AI deception, has its own challenges. Location-based AR apps,8 such as those that overlay digital content onto real-world streets, buildings, or public spaces, can infringe on privacy rights by collecting users’ geolocation data without adequate consent. They may also accidentally capture third parties who have not agreed to appear in recorded or live-streamed AR environments.
From privacy violations to intellectual property confusion, from misinformation to international enforcement gaps, it is clear that traditional legal structures are struggling to keep pace. But are we starting from scratch? Not entirely. Nigeria and other jurisdictions have made some moves that are worth examining.
The Nigeria Data Protection Act is one of the country’s first attempts to assert control over how personal data is processed, stored, and transferred; its structure borrows from the European Union’s GDPR. Another is the Cybercrimes (Prohibition, Prevention, etc.) Act of 2015. While primarily focused on fraud, cyberstalking, and internet-based crimes, this Act has been loosely applied to cases involving digital impersonation and misinformation. Yet, like the NDPA, it is silent on the legal implications of deepfakes or the weaponization of AI-generated content. Similarly, the recently updated Copyright Act of 2022 strengthens protection for creative works in the digital space but still centers on human authorship. It neither addresses the status of AI-generated works nor provides redress for the unauthorized use of a person’s image or voice.
It is painfully apparent that Nigeria is still solving tomorrow’s problems with yesterday’s laws. To fill this gap, I recommend amending existing laws or enacting new ones, drawing on global best practice.
The European Union’s Artificial Intelligence Act stands out as a trailblazer. It classifies AI systems by risk, from minimal to unacceptable. Deepfakes attract dedicated transparency and accountability obligations: the Act insists that synthetic content be clearly labeled. It also bans unacceptable AI uses such as social scoring, subliminal manipulation, and unrestricted biometric identification. Across the Atlantic, the United States is taking a sector-by-sector approach: California has passed laws governing the use of deepfakes in election campaigns and non-consensual pornography.9 China, too, has issued rules requiring visible watermarks on AI-generated audio-visual content.10 Nigeria should follow suit. I recommend introducing a labeling requirement: any media item (video, audio, image, news article) substantially generated or altered by AI/AR must carry a visible tag or disclosure. This would deter malicious deepfakes and bolster public trust.
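To make the proposal concrete, here is a minimal sketch, in Python with the Pillow imaging library, of what a visible disclosure might look like in practice. The `label_ai_image` helper, the banner format, and the file names are hypothetical illustrations for this article, not an existing legal or technical standard; real-world provenance schemes such as the C2PA Content Credentials initiative pursue the same goal with cryptographically signed metadata.

```python
# Hypothetical illustration of a visible AI-content disclosure.
# Assumes the Pillow imaging library: pip install Pillow
from PIL import Image, ImageDraw

def label_ai_image(path_in: str, path_out: str,
                   text: str = "AI-GENERATED CONTENT") -> None:
    """Stamp a visible disclosure banner onto an image that was
    substantially generated or altered by AI (hypothetical helper)."""
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    # A solid banner along the bottom edge keeps the tag legible
    # regardless of the underlying picture.
    banner = max(24, img.height // 20)
    draw.rectangle((0, img.height - banner, img.width, img.height), fill="black")
    draw.text((10, img.height - banner + 4), text, fill="white")
    img.save(path_out)

# Example use: label_ai_image("synthetic.jpg", "synthetic_labeled.jpg")
```

A statutory scheme would pair such a visible tag with a machine-readable equivalent (for example, a metadata field naming the generating tool), so that platforms and regulators could detect undisclosed synthetic media automatically.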
Malicious deepfakes and voice cloning are, at bottom, vehicles for fraud and defamation. The Cybercrimes Act should therefore be updated to create specific offences for malicious synthetic media, for example, criminalizing the creation or dissemination of AI-generated content that impersonates individuals or falsifies public information without consent. Penalties could mirror the existing identity-theft and impersonation provisions (currently up to seven years’ imprisonment).
Similarly, Nigeria’s election laws should forbid AI-manipulated content used to deceive voters, drawing on models like the proposed U.S. DEEPFAKES Accountability Act.
Immersive content like VR and AR should no longer sit in legal grey zones. Nigeria’s National Film and Video Censors Board (NFVCB) must recognize public AR/VR experiences as “video works” and apply existing content standards accordingly. The NFVCB Act should be amended to ensure creators register such content, especially when it involves entertainment or adult themes, and comply with rules on obscenity and violence.
On the telecom front, the Nigerian Communications Act 2003 needs a digital update. Under section 146 of the Act, the Nigerian Communications Commission (NCC) is empowered to issue regulations to ensure licensees prevent the transmission of harmful or unlawful content. That provision could be extended to address AI-generated scam calls and deepfake communications.
If we are to tackle offences related to emerging technologies efficiently, our courts must also evolve. Dedicated cybercrime tribunals or tech-focused divisions within the federal courts should be established. Judges and prosecutors need ongoing training in digital forensics and AI evidence; a Judicial Tech Fellowship could lead this effort. Inspired by the EU’s push for explainability, Nigerian courts must be equipped to interpret algorithmic evidence and synthetic media through expert input. Deepfakes are growing so sophisticated that even professionals struggle to spot them, which makes cross-sector training essential.
Effective enforcement requires agencies to work together. A high-level National AI Task Force or commission (bringing together NITDA, NDPC, NBC, NCC, police, and civil society) should be established to monitor AI use, recommend reforms, and respond swiftly to emerging threats.
Finally, beyond laws and policies, Nigeria must win the battle of awareness. Public education campaigns on deepfakes and augmented reality are essential, not optional. Media literacy should be added to school curricula, helping young Nigerians discern truth from trickery. Universities could lead the charge by exploring, for instance, why people believe what they see, even when what they see is a lie. International partners like UNESCO and European data-protection authorities can support with training for journalists, judges, and law enforcement, ensuring the human actors in our legal system are just as updated as the machines they now face.
The goal is not to fear innovation, but to govern it wisely. But let us not wait for the next deepfake scandal, viral lie, or virtual fraud before getting to work. The time to legislate, educate, and innovate is now!
Because in a world where seeing is no longer believing, it is the law that must help us believe again.
About the Author
Abisola Oyeniyi is a dedicated law student with training in problem-solving and legal-service innovation, seeking to contribute to legal research, compliance and client-focused solutions.

- Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (Penguin 2019). [↩]
- Leadership, ‘TVC Unveils Nigeria’s First AI News Anchors’ (1 May 2025) https://leadership.ng/tvc-unveils-nigerias-first-ai-news-anchors/ accessed 25 June 2025. [↩]
- ‘Deepfake’, Encyclopaedia Britannica (online, 17 June 2025) https://britannica.com/tke accessed 21 June 2025. [↩]
- Mika Westerlund, ‘The Emergence of Deepfake Technology: A Review’ (2019) 9(11) Technology Innovation Management Review 39 https://doi.org/10.22215/timreview/1282 accessed 21 June 2025. [↩]
- See s 24 of the Data Protection Act 2023, which guarantees data subjects’ rights over their personal data, including the rights to be informed, to access their data, to request correction or deletion, to restrict or object to processing, and not to be subject to solely automated decisions without human intervention. [↩]
- The Guardian, ‘#NUJ70: Deepfakes, Impostors Pose Threat to Journalism – Tinubu, Osoba Warn’ (The Guardian, 18 May 2024) https://guardian.ng/news/nuj70-deepfakes-impostors-pose-threat-to-journalism-tinubu-osoba-warn/ accessed 22 June 2025. [↩]
- Cybercrimes (Prohibition, Prevention, etc) Act 2015, s 24(1)(b). [↩]
- Examples are Pokémon GO, IKEA Place, Wikitude World Browser, Magical Park. [↩]
- The New York Times, ‘California Passes Election “Deepfake” Laws, Forcing Social Media Companies to Take Action’ (The New York Times, 17 September 2024) https://www.nytimes.com/2024/09/17/technm.html accessed 23 June 2025. [↩]
- Toby Bond and Emma Ren, ‘New AI Content Labelling Rules in China: What Are They and How Do They Compare to the EU AI Act?’ (Lexology, 20 May 2025) https://www.lexology.com/library/de78 accessed 23 June 2025. [↩]

