
The Rise of Deepfakes: How Communicators Can Combat Disinformation

In Hong Kong, a finance worker at a multinational firm was tricked into paying $25 million to fraudsters who used deepfake technology to pose as the company’s chief financial officer.

The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but in fact, all were deepfake recreations.

This recent example highlights a key issue facing corporate communicators in 2024 – the erosion of trust.

With the rapid proliferation of AI tools, communicators must now balance building and maintaining stakeholder trust with using artificial intelligence safely to bring efficiency and accuracy to corporate storytelling.

This new dynamic presents challenges, but it also creates opportunities for awareness, education and the promotion of ethical AI use.

AI Education and How To Shift the Conversation

AI has made our lives better in many ways, from life-saving medical diagnoses to the development of sustainable energy solutions.

However, as with any powerful tool, it's important to consider the ethics of artificial intelligence to ensure these tools are used responsibly.

Deepfakes, as demonstrated by the Hong Kong scam, offer a stark example of how AI can be misused. Such malicious applications erode trust in online interactions and highlight the need for strong safeguards.

In an open letter urging government attention to deepfake concerns, Andrew Critch, an AI researcher at the University of California, said: “Deepfakes are a huge threat to human society and are already causing growing harm to individuals, communities and the functioning of democracy. We need immediate action to combat the proliferation of deepfakes.”

It's important to remember that deepfakes are not representative of the full spectrum of AI applications. While there is the potential for misuse, it’s up to communicators to shift the conversation from fear to action.

As Gini Dietrich, the founder and CEO of Spin Sucks, said in one of her blogs showcasing ethical considerations: “You are responsible for ensuring that the way you use AI is ethical—and that you demand it of others, too.”

Here are four ways to begin shifting the conversation – both externally and within your organization:

1. Acknowledge the Potential of AI, Not Just the Threat

Instead of focusing solely on the negative, acknowledge the power and potential of AI for good. This creates a more balanced perspective and opens the door to solutions.

2. Focus on Collaboration, Not Blame

Instead of assigning blame, emphasize collaboration between developers, policymakers and users to establish ethical guidelines and regulations for AI development and deployment.

3. From "Is AI Ethical?" to "How Can We Make AI Ethical?"

Shift the question from a "yes or no" judgment to an action-oriented one. This encourages proactive solutions and empowers individuals to contribute to a more ethical AI future.

4. Empower Individuals with Knowledge and Tools

Equip employees and consumers with the knowledge and tools to identify and combat unethical AI practices. For example, this could include educational resources on deepfakes, fact-checking platforms and reporting mechanisms.

Remember: By focusing on the positive potential of AI, collaborating on solutions and empowering individuals, we can build a future where AI is a force for good.

Building Trust: Priorities and Solutions for Storytellers

As deepfake technology grows more accessible, corporate communicators must make trust-building an urgent priority while also providing solutions.

In a recent Forbes article, David Ford, Global Chief Communications Officer at Ogilvy, highlighted trust as a crucial trait for success.

He said, “PR is a field where value is driven by relationships, and meaningful relationships are fundamentally built through trust. Without trustworthiness, integrity and a commitment to strong moral leadership, longevity in the industry will be elusive. The best way to cultivate trust is through candor and curiosity—be open when you make mistakes and stay curious about how you can improve.”

With this in mind, here are the top priorities and solutions communicators should consider:

Priorities:

  • Communicate in advance about the policies and measures in place to prevent the misuse of AI.
  • Verify whether media is real or fake using content authentication standards such as C2PA (a minimal verification sketch follows this list).
  • Increase transparency around the usage of AI tools.
  • Create public awareness of deepfakes through education.
  • Build internal response plans for handling fake or manipulated content when it surfaces.
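
To make the verification point concrete, here is a minimal Python sketch of the simplest form of media authentication: comparing a file's checksum against one published by the original source. The file name and expected hash below are hypothetical placeholders, and full provenance standards such as C2PA go much further by embedding cryptographically signed metadata in the media itself.

    import hashlib

    # Hypothetical placeholders: in practice, the expected hash would be
    # published by the original source (e.g., a corporate newsroom).
    MEDIA_PATH = "ceo_statement.mp4"
    EXPECTED_SHA256 = "<hash published by the source>"

    def file_sha256(path: str) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if file_sha256(MEDIA_PATH) == EXPECTED_SHA256:
        print("Checksum matches the published original.")
    else:
        print("Checksum mismatch: treat this media as potentially altered.")

A checksum only proves a file is byte-for-byte identical to a published original; it cannot judge content that was never released with a reference hash, which is where signed provenance standards and deepfake detection tools come in.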

Solutions:

  • Implement advanced cybersecurity and ethical AI frameworks.
  • Perform regular external and third-party audits of AI systems.
  • Cultivate human-centric design principles in AI products.
  • Hire dedicated roles to oversee organizational AI ethics.
  • Adopt deepfake detection tools to verify media integrity.
  • Work closely with regulators shaping AI governance policies.
  • Choose technology partners who use AI ethically.

Building trust with audiences in 2024 requires a concerted effort to address the challenges posed by risks such as deepfakes.

By staying alert to, and responsible about, the uses and implications of new technologies, communicators can build and maintain trust through transparency and a human-focused approach.

Notified's Stance on Artificial Intelligence

At Notified, we’re committed to advancing AI technologies for public relations and investor relations responsibly, ethically, transparently and in compliance with applicable laws.

Our AI policy outlines the principles and guidelines that govern the development and use of AI in our organization and our client-facing products.

We’re committed to upholding the highest privacy and security standards and promoting the responsible use of AI technologies in accordance with our Privacy Policy.

Subscribe to our blog to receive the latest insights on how AI is impacting brand storytelling.
