MeitY's IndiaAI Picks 5 Projects to Advance Deepfake Detection, Bias Mitigation & AI Security
IndiaAI, an initiative of the Ministry of Electronics and Information Technology (MeitY), has selected five projects under the second round of its Expression of Interest (EoI) for the 'Safe & Trusted AI' pillar, which aims to foster secure, transparent, and trustworthy AI systems.
A multi-stakeholder technical committee evaluated the submissions to identify initiatives with the highest potential for impact.
The selected projects span critical themes, including deepfake detection, bias mitigation, and penetration testing for generative AI, translating the vision of Safe & Trusted AI into practical solutions.
In deepfake detection, Saakshya, a multi-agent, RAG-enhanced framework developed by IIT Jodhpur and IIT Madras, focuses on both detection and governance.
AI Vishleshak, developed by IIT Mandi in collaboration with the Directorate of Forensic Services, Himachal Pradesh, aims to enhance audio-visual deepfake detection and handwritten-signature forgery detection with adversarial robustness, explainability, and domain generalisation. Additionally, IIT Kharagpur is developing a real-time voice deepfake detection system.
In bias mitigation, projects include evaluating gender bias in agriculture LLMs and creating Digital Public Goods (DPGs) for benchmarking and fair data practices, led by Digital Futures Lab and Karya.
In the area of penetration testing and AI evaluation, Anvil, developed by Globals ITES Pvt Ltd and IIIT Dharwad, serves as a tool for penetration testing and assessment of LLMs and generative AI systems.
These initiatives aim to enhance real-time deepfake detection, strengthen forensic analysis, mitigate bias in AI models, and develop robust tools to evaluate generative AI systems.
(KNN Bureau)
