Real-Time Deepfake Detection to Combat Misinformation: A few years ago, deepfakes seemed a novel technology whose creation demanded substantial computing resources. Today they are pervasive and can be abused for disinformation, espionage, and other malicious purposes.
Intel Labs has developed real-time deepfake detection technology to combat this escalating problem. Ilke Demir, a senior research scientist at Intel, explains the technology underlying deepfakes, Intel's detection methods, and the ethical considerations involved in developing and deploying such tools.
Deepfakes are videos, audio recordings, or images in which the actor or action is created by artificial intelligence (AI) and is not real. Deepfakes use complex deep-learning architectures, such as generative adversarial networks, variational autoencoders, and other AI models, to generate content that is extremely realistic and credible. These models can produce synthetic personas, lip-synced videos, and even text-to-image generations, making it difficult to distinguish authentic content from fabricated content.
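To make the adversarial idea concrete, the sketch below shows a minimal generative adversarial network training step in PyTorch. Every dimension, layer, and hyperparameter here is an illustrative assumption; real deepfake generators are far larger and operate on images or video rather than toy vectors.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. a flattened 28x28 image

# Generator: maps random noise to synthetic samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Train the discriminator to separate real samples from fakes.
    fakes = generator(torch.randn(n, latent_dim))
    loss_d = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fakes.detach()), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    loss_g = bce(discriminator(fakes), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# One step on random stand-in "real" data, just to show the mechanics.
print(train_step(torch.rand(32, data_dim) * 2 - 1))
```

As the two networks compete, the generator's outputs become progressively harder for the discriminator, and eventually for humans, to tell apart from real data.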
The term deepfake is sometimes also applied to altered authentic content, such as the 2019 video of former House Speaker Nancy Pelosi that was manipulated to make her appear intoxicated.
Demir's team investigates computational deepfakes: synthetic forms of content created by machines. "It is called deepfake because all of that content is generated by a complex deep-learning architecture in generative AI," he explains.
Cybercriminals and other malicious actors frequently abuse deepfake technology. Use cases include political misinformation, adult content featuring celebrities or non-consenting individuals, market manipulation, and impersonation for financial gain. These harms underscore the need for effective deepfake detection techniques.
Intel Labs has created one of the first real-time deepfake detection platforms in the world. Instead of searching for signs of forgery, the technology concentrates on detecting what is genuine, such as a heart rate. Using a technique known as photoplethysmography, the system analyses colour changes in the veins caused by the blood's oxygen content, a shift that is computationally visible, to determine whether the person on screen is real or synthetic.
"We are attempting to examine what is true and genuine," Demir stated, citing heart rate as one of [the indications]. "When your heart circulates blood, it travels to your veins, and the oxygen content in the blood causes the veins to change colour. It is not visible to the naked eye; I cannot see your heart rate by simply watching this video. However, this colour shift is computationally discernible."
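As a rough illustration of the signal Demir describes, the sketch below extracts a crude remote-photoplethysmography trace from a video: it averages the green channel (which carries most of the pulse signal) over a facial region and finds the dominant frequency in the plausible heart-rate band. This is not Intel's actual detection pipeline; the file path and the fixed face box are hypothetical placeholders, and a real system would track the face and denoise the signal.

```python
import cv2
import numpy as np

def extract_ppg_signal(video_path, face_box):
    """Mean green-channel intensity of a skin region, per frame."""
    x, y, w, h = face_box  # assumed fixed face region, for simplicity
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]
        samples.append(roi[:, :, 1].mean())  # green channel of the skin patch
    cap.release()
    return np.array(samples), fps

def dominant_frequency_bpm(signal, fps):
    """Strongest periodic component in the plausible heart-rate band."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)  # roughly 42 to 240 beats per minute
    return freqs[band][np.argmax(spectrum[band])] * 60

# Hypothetical inputs: a video clip and a hand-picked face bounding box.
signal, fps = extract_ppg_signal("clip.mp4", face_box=(200, 120, 160, 160))
print(f"Estimated pulse: {dominant_frequency_bpm(signal, fps):.0f} bpm")
```

A detector built on such signals can then ask whether the recovered pulse is physiologically plausible and consistent across different regions of the face, properties that generated faces typically fail to reproduce.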
Intel’s deepfake detection technology is being implemented across multiple industries and platforms, including social media tools, news agencies, broadcasters, content creation tools, entrepreneurs, and non-profit organisations. By integrating the technology into their operations, these organisations can better identify deepfakes and misinformation and mitigate their spread.
Deepfake technology has legitimate applications, despite the possibility of abuse. The creation of avatars to better depict individuals in digital environments was one of the earliest applications. Demir refers to a specific use case dubbed “MyFace, MyChoice,” which employs deepfakes to improve online platform privacy.
This method enables individuals to control their appearance in online photos by replacing their face with a "quantifiably dissimilar deepfake" if they wish to avoid being recognised. These controls counteract face-recognition algorithms, giving individuals greater privacy and control over their identity.
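One way to make "quantifiably dissimilar" concrete is to compare face embeddings of the original photo and its replacement and require their distance to exceed a recognition threshold. The sketch below uses the open-source face_recognition library purely for illustration; the article does not specify Intel's actual metric, and the file names are hypothetical.

```python
import numpy as np
import face_recognition

def embedding(path):
    """128-dimensional face descriptor for the first face found in an image."""
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        raise ValueError(f"no face found in {path}")
    return encodings[0]

original = embedding("original_photo.jpg")          # hypothetical input
replacement = embedding("deepfake_replacement.jpg")  # hypothetical input

# Euclidean distance in embedding space; 0.6 is the library's usual
# same-person threshold, so a replacement well above it should no longer
# match the original under this recognition model.
distance = np.linalg.norm(original - replacement)
verdict = "likely unrecognisable" if distance > 0.6 else "still recognisable"
print(f"embedding distance: {distance:.2f} ({verdict})")
```

The design choice is to treat dissimilarity as a measurable property of a recognition model's embedding space rather than a visual judgment, so privacy claims can be checked automatically.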
It is essential to ensure the ethical development and deployment of AI technologies. The Trusted Media team at Intel collaborates with anthropologists, social scientists, and user researchers to evaluate and improve the technology. Additionally, the company has a Responsible AI Council, which reviews AI systems for compliance with responsible and ethical principles, including potential biases, limitations, and potentially harmful use cases. This multidisciplinary approach helps ensure that AI technologies, such as deepfake detection, benefit people rather than cause harm.
Demir states, "We have legal professionals, social scientists, and psychologists, and they are all working together to identify the limitations to determine if there is bias — algorithmic bias, systematic bias, data bias, or any other type of bias." The team scans the code for "any possible use cases of a technology that could be harmful to people."
As the prevalence and sophistication of deepfakes increase, developing and deploying detection technologies to combat misinformation and other harms becomes ever more essential. Intel Labs' real-time deepfake detection technology offers a scalable and effective solution to this growing problem.
By incorporating ethical considerations and collaborating with specialists from a variety of disciplines, Intel is working towards a future in which AI technologies are used responsibly and for the benefit of society.