Deep fakes are causing a buzz in cybersecurity circles. While relatively new on the scene and not always malicious, they have caused high-profile losses for organizations targeted by threat actors.
Deep fakes will be the biggest cybersecurity challenge of 2022, as the growth of distributed work arrangements makes face-to-face confirmation harder to come by. Here’s what companies need to know.
What are deep fakes?
Deep fakes use complex algorithms and machine learning to create audio and video that seem real. This could be anything from your boss’s voice over the phone to a video of the CIO asking for charity donations.
They seem so real, in fact, that someone working remotely may not even bother double-checking; the thought wouldn’t occur to them. Because of this new capability, companies need to prepare for more sophisticated breaches of their networks and attacks on their employees.
Phishing is a mainstay in cyberattacks, as well as exploiting the blurred lines between personal and company devices. With audio and video fakes, companies will need to build new policies to keep workers and the organization secure.
Recent deep fake cyber crimes have made the news precisely because of their strange nature and the amount of money stolen and damage done. These are not the photoshopped images of a grocery store gossip rag. Deep fake fraud is potentially big business, and it could cost companies dearly.
New technologies offer major support combating deep fakes
Since the first known case of deep fakes back in 2017, companies have stepped in to find new ways to combat misleading information. In 2018, SRI International was awarded three contracts from the Pentagon’s Defense Advanced Research Projects Agency (DARPA) to develop technology for fighting attacks like deep fakes. For now, though, the software that creates deep fakes remains more readily available than the software used to combat it.
Some of the newest technologies can help companies get a handle on deep fakes and other types of cyberattacks. As companies embrace new tech, they might have a better foundation for handling the challenges of remote work.
While artificial intelligence can’t simply flag a video or audio piece as inauthentic, it does have a few tricks up its sleeve for identifying possible deep fakes. Companies might be able to leverage AI to fight this new type of threat thanks to its complex processing capabilities.
First, AI can scan hundreds of millions of data points in seconds, far faster than any human could. A first line of defense could be analyzing a video or audio clip against hundreds of known originals to determine whether it’s spliced together from those recordings. A human would take days or weeks to accomplish the same task, making AI critical for addressing a threat before anyone acts on it.
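One simplified way to picture that comparison is segment hashing: break a suspect clip into fixed-size chunks and measure how many match a library of chunks from known originals. This is only an illustrative sketch with synthetic byte strings standing in for media; real detectors use perceptual fingerprints that survive re-encoding, not exact hashes.

```python
import hashlib

def segment_hashes(data: bytes, size: int = 4) -> set:
    """Hash fixed-size segments of a byte stream (a stand-in for audio frames)."""
    return {
        hashlib.sha256(data[i:i + size]).hexdigest()
        for i in range(0, len(data) - size + 1, size)
    }

def splice_score(suspect: bytes, originals: list) -> float:
    """Fraction of the suspect clip's segments found in any known original."""
    known = set()
    for original in originals:
        known |= segment_hashes(original)
    segs = segment_hashes(suspect)
    return len(segs & known) / len(segs) if segs else 0.0

# Synthetic example: a "clip" spliced together from two known recordings.
original_a = b"all-hands meeting recording "
original_b = b"quarterly earnings call audio"
spliced = original_a[:12] + original_b[:12]

score = splice_score(spliced, [original_a, original_b])
print(f"match score: {score:.2f}")  # → match score: 1.00
```

A high score suggests the clip was assembled from known source material and should be escalated for human review.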
Another way AI fights deep fakes is by scanning for keywords and images that might flag a video for intervention. For example, an audio message asking for donations might trigger a notification that the linked address does not lead to the charity’s website. AI could also summarize the content and check it against legal-compliance or policy frameworks before sending the clip to a human for approval.
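As a toy illustration of that kind of keyword-and-link screening, a policy check over a transcript might look like the sketch below. The trigger keywords and the domain allowlist are invented for the example; a real system would maintain these centrally and work from machine transcriptions.

```python
from urllib.parse import urlparse

# Hypothetical trigger terms and an allowlist of domains the company trusts.
TRIGGER_KEYWORDS = {"donation", "wire transfer", "gift cards", "urgent payment"}
APPROVED_DOMAINS = {"charity.example.org", "payments.example.com"}

def flag_message(transcript: str, links: list) -> list:
    """Return the reasons a transcribed clip should go to a human reviewer."""
    reasons = []
    text = transcript.lower()
    hits = [kw for kw in TRIGGER_KEYWORDS if kw in text]
    if hits:
        reasons.append(f"trigger keywords: {sorted(hits)}")
    for link in links:
        domain = urlparse(link).netloc
        if domain not in APPROVED_DOMAINS:
            reasons.append(f"unapproved link domain: {domain}")
    return reasons

reasons = flag_message(
    "The CIO asks everyone to make a donation today",
    ["https://charity-giving.example.net/give"],
)
print(reasons)  # flags both the donation keyword and the unapproved domain
```

An empty list would let the clip through; anything else routes it to a person before employees act on it.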
While most consumers continue to focus on the cryptocurrency and NFT side of blockchain, the industry is realizing its potential for reducing fraud. For audio and video files, blockchain can provide certificates of authenticity that are difficult, if not nearly impossible, to fake.
For example, users can leverage blockchain records to verify their identity before sending any type of file. Blockchain can also readily show whether content has been altered from its original state or forged entirely. In either case, companies can implement policies that check for these certificates, manually or through automation.
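The integrity half of that idea can be sketched without any actual chain: record a cryptographic hash of the file when it is published, and recompute it on receipt. In this minimal sketch a plain dictionary stands in for the on-chain certificate registry; the content IDs and byte strings are made up for the example.

```python
import hashlib

# Stand-in for an on-chain registry: content ID -> digest recorded at publication.
certificate_registry = {}

def register(content_id: str, data: bytes) -> None:
    """Record the file's SHA-256 digest as its certificate of authenticity."""
    certificate_registry[content_id] = hashlib.sha256(data).hexdigest()

def verify(content_id: str, data: bytes) -> bool:
    """True only if the received file matches the registered digest exactly."""
    recorded = certificate_registry.get(content_id)
    return recorded == hashlib.sha256(data).hexdigest()

video = b"original all-hands announcement video"
register("announcement-2022-q1", video)

print(verify("announcement-2022-q1", video))             # True: unaltered
print(verify("announcement-2022-q1", video + b" edit"))  # False: tampered
```

Storing the digest on a blockchain rather than in a local dictionary is what makes the record itself tamper-evident, since no single party can quietly rewrite it.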
Blockchain also allows the decentralization of authentication so that it’s more difficult for hackers to access an entire system. No single entity has complete power to confirm or deny files, providing an extra layer of security.
Deep fakes present more than monetary risk
The problem with deep fakes goes beyond financial risk, although that alone is reason enough to act. Deep fakes can cost companies millions and potentially run afoul of regulations like GDPR or CCPA.
They also present a risk to authenticity, making it difficult to distinguish what is real in the first place. Companies targeted in deep fake attacks may have trouble regaining the trust of their employees and customers in the future, a scenario that could spell the end in a highly competitive online world.
As deep fakes become more common, experts are urging companies to develop policies for managing authentication and handling deep fakes quickly if they occur. Thanks to greater access to technology like AI and blockchain, organizations may be able to fight deep fake threats and reduce this risk in the coming years.
Using tech and common sense
For now, many companies can train employees to spot deep fakes:
- Do subjects in the videos blink, or does something seem off?
- Is skin patchy?
- Are there blurry patches in videos or aberrations in the audio?
These checks aren’t foolproof, but deep fake technology doesn’t always produce convincing results. In some cases, just being aware of the kinds of requests a company would actually send over video or a phone call could alert a worker that a CEO personally soliciting funds is fishy.
The other good news is that many companies have stepped up to research and develop open-source tools for organizations to use in their cybersecurity efforts. While deep fakes may be poised to change the world as we know it, companies can be ready.