The Rise of Deepfakes: How Communicators Can Combat Disinformation
by The Notified Team on Mar 26, 2024 5:55:32 PM
In Hong Kong, a finance worker at a multinational firm was tricked into paying $25 million to fraudsters after they used deepfake technology to pose as the company’s chief financial officer.
The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff - but in fact, all were deepfake recreations.
This recent example highlights a key issue being faced by corporate communicators in 2024 – the erosion of trust.
With the rapid proliferation of AI tools, communicators are now forced to juggle building and maintaining stakeholder trust while safely using artificial intelligence to bring efficiency and accuracy to corporate storytelling.
This new dynamic presents many challenges - but it also creates many opportunities when it comes to awareness, education and promoting the ethical use of AI.
AI Education and How To Shift the Conversation
AI has made our lives better in many ways, from life-saving medical diagnoses to the development of sustainable energy solutions.
However, as with any powerful tool, it's important to consider the ethics of artificial intelligence to ensure these tools are used responsibly.
Deepfakes, as demonstrated by the Hong Kong scam, offer a stark example of how AI can be misused. Such malicious applications erode trust in online interactions and highlight the need for strong safeguards.
In an open letter urging government attention to deepfake concerns, Andrew Critch, an AI researcher at the University of California, said: “Deepfakes are a huge threat to human society and are already causing growing harm to individuals, communities and the functioning of democracy. We need immediate action to combat the proliferation of deepfakes.”
It's important to remember that deepfakes are not representative of the full spectrum of AI applications. While there is the potential for misuse, it’s up to communicators to shift the conversation from fear to action.
As Gini Dietrich, the founder and CEO of Spin Sucks, said in one of her blogs showcasing ethical considerations: “You are responsible for ensuring that the way you use AI is ethical—and that you demand it of others, too.”
Here are four ways to begin shifting the conversation – both externally and within your organization:
1. Acknowledge the Potential of AI, Not Just the Threat
Instead of focusing solely on the negative, acknowledge the power and potential of AI for good. This creates a more balanced perspective and opens the door to solutions.
2. Focus on Collaboration, Not Blame
Instead of assigning blame, emphasize collaboration between developers, policymakers and users to establish ethical guidelines and regulations for AI development and deployment.
3. From "Is AI Ethical?" to "How Can We Make AI Ethical?"
Shift the question from a simple "yes or no" to an action-oriented one. This encourages proactive solutions and empowers individuals to contribute to a more ethical AI future.
4. Empower Individuals with Knowledge and Tools
Equip employees and consumers with the knowledge and tools to identify and combat unethical AI practices. For example, this could include educational resources on deepfakes, fact-checking platforms and reporting mechanisms.
Remember: By focusing on the positive potential of AI, collaborating on solutions and empowering individuals, we can build a future where AI is a force for good.
Building Trust: Priorities and Solutions for Storytellers
As deepfake technology grows more accessible, corporate communicators must make trust-building an urgent priority while also providing solutions.
In a recent Forbes article, David Ford, Global Chief Communications Officer at Ogilvy, highlighted trust as a crucial trait for success.
He said, “PR is a field where value is driven by relationships, and meaningful relationships are fundamentally built through trust. Without trustworthiness, integrity and a commitment to strong moral leadership, longevity in the industry will be elusive. The best way to cultivate trust is through candor and curiosity—be open when you make mistakes and stay curious about how you can improve.”
With this in mind, here are the top priorities and solutions communicators should consider:
Priorities:
- Communicate in advance about the policies and measures in place to prevent the misuse of AI.
- Verify whether media is real or fake using authentication standards.
- Increase transparency around the usage of AI tools.
- Create public awareness of deepfakes through education.
- Build internal plans for how to handle fake information when the situation arises.
Solutions:
- Implement advanced cybersecurity and ethical AI frameworks.
- Perform regular external and third-party audits of AI systems.
- Cultivate human-centric design principles in AI products.
- Hire dedicated roles to oversee organizational AI ethics.
- Deploy deepfake detection tools to verify media integrity.
- Work closely with regulators shaping AI governance policies.
- Choose technology partners who use AI ethically.
Building trust with audiences in 2024 requires a concerted effort to address the challenges posed by risks such as deepfakes.
By staying alert to the uses and implications of new technologies – and adopting a transparent, human-focused approach – communicators can build and maintain that trust.
Notified's Stance on Artificial Intelligence
At Notified, we’re committed to advancing AI technologies for public relations and investor relations responsibly, ethically, transparently and in compliance with applicable laws.
Our AI policy outlines the principles and guidelines that govern the development and use of AI in our organization and our client-facing products.
We’re committed to upholding the highest privacy and security standards and promoting the responsible use of AI technologies in accordance with our Privacy Policy.
Subscribe to our blog to receive the latest insights on how AI is impacting brand storytelling.