Deepfakes are no longer new, yet they continue to cause havoc, with governments and authorities looking to impose strict action against misuse of the AI technology. Even though deepfakes have found legitimate applications in the film and advertising industries, it remains imperative to develop techniques that can identify them.
To help tackle deepfakes, a number of companies are working on AI algorithms that can detect such deceptive content. Here are a few of them that have been making waves:
DeepMedia AI
Yale graduates Rijul Gupta and Emma Brown co-founded DeepMedia to help unmask deepfake technology. DeepMedia offers two products: DubSync, a service utilising AI for translation and dubbing, and DeepIdentify.AI, a deepfake detection service. The latter is the company's main offering and has secured a notable three-year, $25 million contract with the U.S. Department of Defense (DoD), along with undisclosed agreements with allied forces.
We're honored to be recognized for our efforts in detecting deepfakes in @vivwalt's latest piece for @FortuneMagazine. Our collaborations with the UN, the DoD, and Big Tech reflect our dedication to tackling digital deception. https://t.co/4XNTCxRHia
— DeepMedia (@DeepMedia_AI) December 5, 2023
Sentinel
Estonian company Sentinel helps organisations defend against fake media content, using a Defence in Depth (DiD) approach to automate the authentication of digital media. The company works with democratic governments, defence agencies, and enterprises to mitigate the threat of deepfakes through its AI-based protection platform.
Kroop AI
Founded by AI scientist Jyoti Joshi along with IIT alumni Milan Chaudhari and Sarthak Gupta, Gujarat-based startup Kroop AI offers a deployable, AI-enabled platform designed for both businesses and individuals to identify deepfakes across audio, video, and image data. Its deep learning-based platform can detect and analyse deepfakes in detail across various platforms and mediums. The startup aims to become a global, affordable tool for detecting fake content, with a focus on the banking and finance sector and on cybersecurity.
In an age where reality can be manipulated, stay informed and protect yourself.
Recent news of a deepfake video involving actress @iamRashmika highlights the growing concern over the spread of misinformation and the potential harm it can cause. (1/4) pic.twitter.com/G27OGBtsqi
— Kroop AI (@kroop_ai) November 8, 2023
Sensity
Netherlands-based company Sensity provides a visual threat intelligence platform and API to detect and counter deepfakes. The company gathers visual threat intelligence and applies deep learning algorithms for detection.
Group Cyber ID
Group Cyber ID (GCID) is India’s first hi-tech cyber detection centre and provides extensive support to a number of organisations, including law enforcement. GCID offers a range of services, including advanced cyber security, IT audit, digital forensics, and threat intelligence. The company specialises in security solutions for government agencies, public sector bodies, and businesses, covering areas such as network security, crime scene investigation, financial fraud detection, and border protection consultation.
Intel FakeCatcher
FakeCatcher is a real-time deepfake detector developed by Intel in partnership with Umur Ciftci from the State University of New York at Binghamton. It runs on a web-based platform and uses Intel hardware and software to detect deepfakes by looking for subtle “blood flow” changes in video pixels. The technology detects fake videos with 96% accuracy, and when it was released last year it was considered the world’s first real-time deepfake detector, returning results in milliseconds.
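To make the “blood flow” idea concrete, here is a minimal, much-simplified sketch of the underlying principle: real faces show a faint periodic colour change driven by the pulse, while many synthesised faces do not. This is not Intel’s FakeCatcher pipeline; the fixed face box, frame rate, frequency band, and threshold below are illustrative assumptions, and it assumes OpenCV and NumPy are available.

```python
# Illustrative sketch only: checks whether a face region carries a pulse-like
# periodic signal (remote PPG), a cue that blood-flow-based detectors rely on.
import cv2
import numpy as np

def pulse_band_energy(video_path, face_box=(100, 100, 200, 200), fps=30.0):
    """Fraction of spectral energy in the 0.7-4 Hz (42-240 bpm) heart-rate band."""
    x, y, w, h = face_box                      # assumed fixed face region
    cap = cv2.VideoCapture(video_path)
    greens = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]
        greens.append(roi[:, :, 1].mean())     # green channel carries most PPG signal
    cap.release()

    signal = np.asarray(greens) - np.mean(greens)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)
    return spectrum[band].sum() / (spectrum.sum() + 1e-9)

if __name__ == "__main__":
    score = pulse_band_energy("sample_clip.mp4")   # hypothetical input file
    print("likely real" if score > 0.4 else "possibly synthetic", f"(band energy {score:.2f})")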
Q Integrity
Founded in Switzerland, Quantum Integrity utilises patented deep learning technology to detect deepfake image and video forgery, customisable for various use cases. The company recently moved to the US and rebranded itself as Q-Integrity.
Microsoft Video Authenticator
Microsoft’s tool generates a confidence score for images or videos to indicate if the media has been manipulated. It analyses media for indications of manipulation using sophisticated AI algorithms.
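Video Authenticator is not publicly documented as an API, so the following is purely a hypothetical sketch of the general pattern such tools expose: a per-frame manipulation probability rolled up into a single confidence score for the clip. The scorer inputs below are stand-in stub values, not output from Microsoft’s tool.

```python
# Hypothetical sketch: aggregate per-frame manipulation probabilities into one
# clip-level confidence score, the kind of output a detection tool reports.
from statistics import mean
from typing import List

def video_confidence(frame_scores: List[float], alert_threshold: float = 0.8) -> dict:
    """Turn per-frame manipulation probabilities into a clip-level verdict."""
    overall = mean(frame_scores)        # simple average; a real tool may weight face regions
    worst = max(frame_scores)           # one heavily edited frame is also worth flagging
    return {
        "overall_manipulation_probability": round(overall, 3),
        "most_suspicious_frame": round(worst, 3),
        "flagged": overall > alert_threshold or worst > 0.95,
    }

if __name__ == "__main__":
    demo_scores = [0.12, 0.15, 0.93, 0.91, 0.10]   # stub values for illustration
    print(video_confidence(demo_scores))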
The tool was released ahead of the 2020 US elections, and Microsoft partnered with the AI Foundation to provide Video Authenticator to news outlets and political campaigns as part of the Reality Defender 2020 initiative. Microsoft has also collaborated with media giants such as the BBC and the New York Times through Project Origin, which aims to standardise authenticity technology with support from the Trusted News Initiative.
#Deepfakes no more. Behold, the Microsoft Video Authenticator, a tool that can analyze a still photo or video and provide a percentage chance that the media is artificially manipulated. (1/2) pic.twitter.com/IINud4lWmE
— Microsoft On the Issues (@MSFTIssues) September 1, 2020