
SPECIAL REPORT: The rise and rise of deepfakes… and how to deal with them and similar future tech



A company’s top officer scheduled to give a presentation at a conference is unable to attend in person due to unforeseen circumstances. To communicate his message and maintain a professional image, the company decides to use deepfake technology to create a video of the officer delivering the presentation.

This scenario relies on deepfake technology: synthetic media, often in the form of video, audio, or images, generated through artificial intelligence (AI) and deep learning algorithms.

How are deepfakes created?

The process starts with collecting data: existing videos and audio recordings in which the officer is seen speaking. Deep learning algorithms then produce a deepfake video that mimics the officer’s appearance, voice, and mannerisms. The script for the presentation is fed into the deepfake model, and the video is edited to ensure a seamless delivery.

The development of deepfakes can be attributed to several advanced technologies and software that unlocked new methods of generating digital content. “Specifically focusing on generative artificial intelligence (GenAI) and the multitude of models available that let you converse fluently with a chatbot right on through to forging a realistic audio and video of a real life person are now possible,” said Aaron Bugal, Field CISO, APJ at Sophos, a cybersecurity platform provider.

“Using advanced AI, mainly deep learning models like GANs (Generative Adversarial Networks) and autoencoders, these models are trained on real videos and images to learn how people talk, move, and look – and then recreate or swap those features in fake videos that look incredibly real,” said Samuel Sathyajith, senior vice president at Seqrite, an Indian-owned security solutions provider.
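The adversarial idea behind GANs can be sketched on toy data. In this illustrative example (none of it drawn from any real deepfake tool), the "real" samples are just numbers near 4.0, the generator is a single learnable shift `theta`, and the discriminator is a one-variable logistic classifier; training alternates between the two until the discriminator can no longer tell real from fake:

```python
import numpy as np

# Toy sketch of GAN-style adversarial training on 1-D data.
# All parameter names and settings here are illustrative assumptions.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_toy_gan(steps=3000, batch=64, lr_d=0.05, lr_g=0.05):
    w, b = 0.1, 0.0   # discriminator: D(x) = sigmoid(w*x + b)
    theta = 0.0       # generator: G(z) = theta + z

    for _ in range(steps):
        real = rng.normal(4.0, 0.5, batch)          # samples from the real distribution
        fake = theta + rng.normal(0.0, 0.5, batch)  # generator's current samples

        # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
        d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
        grad_w = -np.mean((1 - d_real) * real) + np.mean(d_fake * fake)
        grad_b = -np.mean(1 - d_real) + np.mean(d_fake)
        w -= lr_d * grad_w
        b -= lr_d * grad_b

        # Generator step: shift theta so the fakes score higher under D.
        d_fake = sigmoid(w * fake + b)
        theta += lr_g * np.mean((1 - d_fake) * w)

    return theta

# With these settings theta typically drifts toward the real mean of 4.0.
print(round(train_toy_gan(), 2))
```

Real deepfake models apply the same push-and-pull dynamic to millions of pixel and audio parameters instead of a single scalar, which is why they need large training sets of the target person.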


What makes deepfakes different from traditional fake content (like edited photos or staged videos) is the realism and automation, according to Sathyajith. “Traditional media manipulation is usually manual and time-consuming. Deepfakes can be generated at scale and can mimic facial expressions, lipsyncing, and even voices.”

“These technical enhancements differ from traditional ‘photoshopped’ images and videos where instead of taking weeks to months to build, they can now be made in seconds with only a few directed prompts,” said Bugal.


Weaponizing deepfakes

Deepfakes have become a significant cybersecurity concern, increasingly weaponized in phishing attacks, misinformation campaigns, and identity theft. Used maliciously, the technology can cause real damage, with serious consequences for individuals, companies, and society.

Sathyajith mentioned that individuals might become victims of fake videos used for blackmail, harassment, or identity theft. He also said that companies can be targeted through impersonation of CEOs or executives to commit fraud, like convincing someone to wire money or approve fake deals. Society at large can also suffer when deepfakes are used to spread fake news or manipulate public opinion, especially around elections.

Sathyajith also said there have already been cases where deepfake audio was used to trick employees into transferring funds, or where fake political content went viral.

Deepfakes as a new type of social engineering

Deepfakes are becoming more accessible and easier to create, making them incredibly dangerous in the hands of cybercriminals. They are very convincing, with the subject moving and talking as if they were real. For those who are unaware, well-crafted messages delivered and/or endorsed by the faked ‘actor’ could enable a raft of fraudulent activity.


Bugal classified deepfakes as a new type of social engineering method, one that will take a great deal of security awareness training to counter, ensuring people who are subjected to it take a moment to validate that what they’re reading, hearing, and seeing is real and authentic.

Bugal said deepfakes are a serious issue that the public should be aware of. And although security awareness training can help educate users to spot and react appropriately to deepfake productions, it’s important to always evaluate the messages and content you are consuming, especially unsolicited messages on social media and messaging platforms. He added that misinformation and disinformation in this Internet age, powered by deepfakes, could become the weapon of choice of cybercriminals and activists.

How common are deepfakes in PH?

Deepfakes are becoming increasingly prevalent in the Philippines. According to a study by identity verification platform Sumsub, the Philippines experienced a staggering 4,500% increase in deepfake-related cases in 2023 compared with the previous year, highlighting the country’s heightened susceptibility to AI-generated scams and the attendant risks of misinformation, fraud, and reputational damage. This growth was part of a larger trend, with deepfake incidents increasing by an average of 1,530% across Asia Pacific over the same period.


Detection and prevention of deepfakes

With the growth in deepfake cases becoming alarming, security providers are finding ways to tackle deepfakes using technology. Sathyajith said there are AI-powered detection tools that look for visual inconsistencies like unnatural blinking, strange lighting, or mismatched lip movements. Tools can also cross-check video and audio to catch mismatches between what’s said and how it’s said.
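A very crude sketch of one such signal is blink-rate analysis. Assuming a per-frame "eye aspect ratio" (EAR) series where low values mean closed eyes, a detector can count blinks and flag clips whose blink rate looks non-human; the threshold and the 8–30 blinks-per-minute range below are illustrative assumptions, not values from any production tool:

```python
# Crude sketch of blink-rate analysis, one signal deepfake detectors use.
# EAR values, the 0.2 threshold, and the 8-30 blinks/minute range are
# illustrative assumptions for this example only.

def count_blinks(ear_series, closed_threshold=0.2):
    """Count transitions from open (EAR above threshold) to closed."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= closed_threshold:
            closed = False
    return blinks

def looks_synthetic(ear_series, fps=30, low=8, high=30):
    """Flag a clip whose blinks-per-minute falls outside a typical human range."""
    minutes = len(ear_series) / fps / 60
    rate = count_blinks(ear_series) / minutes
    return rate < low or rate > high

# One minute of video at 30 fps in which the eyes never close:
no_blinks = [0.3] * (30 * 60)
print(looks_synthetic(no_blinks))  # True: zero blinks in a minute is suspicious
```

Real detectors combine many such cues (lighting, lip sync, frequency artifacts) with learned models rather than relying on any single hand-set rule.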

However, Sathyajith warns that deepfakes are getting better and harder to detect, especially when shared on platforms like WhatsApp or Facebook, where compression and editing can blur the signs. Social media platforms are starting to step in by using AI to scan content for manipulation, adding warning labels to suspicious media, and improving transparency through metadata and content provenance tools (like the C2PA standards).
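The core idea behind content provenance can be sketched in a few lines: a publisher binds metadata to the exact bytes of a piece of media in a signed manifest, so any later edit breaks verification. This is a heavily simplified stand-in, not the actual C2PA protocol; real C2PA manifests use X.509 certificate signatures, whereas this toy uses a shared-secret HMAC for brevity:

```python
import hashlib
import hmac

# Simplified stand-in for C2PA-style content provenance. The shared secret,
# manifest layout, and metadata string are assumptions for illustration;
# real C2PA uses public-key certificate signatures.
SECRET = b"publisher-signing-key"

def make_manifest(media_bytes, metadata):
    """Bind metadata to the media's hash and sign the pair."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = (digest + "|" + metadata).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"sha256": digest, "metadata": metadata, "signature": signature}

def verify(media_bytes, manifest):
    """Check both the signature and that the media bytes are untouched."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = (manifest["sha256"] + "|" + manifest["metadata"]).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"]) and \
        digest == manifest["sha256"]

video = b"original camera footage"
manifest = make_manifest(video, "captured 2024-05-01 by NewsCam")
print(verify(video, manifest))                 # True: untouched media verifies
print(verify(b"deepfaked footage", manifest))  # False: altered bytes fail
```

A platform holding such a manifest can label any media whose bytes no longer match as edited or unverified.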

Bugal noted that social media platforms have a duty of care to ensure that any harmful and blatantly incorrect information is marked accordingly so that users can take appropriate action when presented with such content. Apart from this, deepfake-detection technology is now being built into open-source models so that generated content carries an invisible watermark that can be checked programmatically.
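The simplest form of an invisible watermark hides a bit pattern in the least significant bit of each pixel value, which a checker can later read back. Production watermarks in generative models are far more robust to compression and cropping; this pure-Python toy, with a made-up bit pattern and pixel values, only illustrates the embed-and-detect idea:

```python
# Toy "invisible watermark": hide a repeating bit pattern in the least
# significant bit of each 8-bit pixel value. The pattern and pixel values
# are arbitrary assumptions for illustration.
WATERMARK = [1, 0, 1, 1, 0, 1, 0, 1]

def embed(pixels):
    """Overwrite each pixel's lowest bit with the repeating watermark."""
    return [(p & ~1) | WATERMARK[i % len(WATERMARK)]
            for i, p in enumerate(pixels)]

def carries_watermark(pixels):
    """True if every pixel's lowest bit matches the expected pattern."""
    return all((p & 1) == WATERMARK[i % len(WATERMARK)]
               for i, p in enumerate(pixels))

plain = [200, 13, 77, 145, 90, 31, 64, 250]  # made-up pixel values
marked = embed(plain)
print(carries_watermark(marked))  # True
print(carries_watermark(plain))   # False: an unmarked image fails the check
```

Because flipping only the lowest bit changes each pixel by at most one intensity level, the mark is imperceptible to viewers yet trivially machine-checkable.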


Future of deepfakes

What does the future look like for deepfakes? “Like most technologies, deepfakes aren’t inherently good or bad – it’s all about how they’re used. In the future, we’ll likely see both sides,” said Sathyajith. “Positive applications of deepfakes include helping de-age actors or dub content in multiple languages in movies. In education, teachers can create personalized video content or avatars to explain complex topics, and in customer service, lifelike avatars may soon replace static chatbots.”

Despite these positive applications, the risks will also grow. Sathyajith cited voice-based fraud, impersonation in real-time calls, and deepfake-powered phishing, adding that synthetic identity fraud could be the next big thing in cybercrime.

“The development of deepfakes will evolve in the next five to 10 years as technology enhances and software capabilities also evolve, driving innovation, including the threats we will face,” said Bugal. “Not limited to deepfakes, but social engineering will continue to be a point of serious inflection between remaining safe and exposure to Internet threats. Education on misinformation and disinformation during this information age is paramount and will be pivotal in arming people with the skills to identify, and respond appropriately to, fraudulent productions.”
