
DETECTOR

Deepfake evidence and technology for forensic content oversight and research

With rapid advancements in AI, forensic institutes and law enforcement agencies (LEAs) can struggle to differentiate authentic evidence from AI fabrications and manipulations. Despite promising detection research, existing methods fall short: current models rely on limited, non-diverse datasets and struggle to keep pace with technological developments.

DETECTOR will address the growing challenges faced by forensic experts and LEA practitioners in detecting and analysing altered or synthetic media content by creating comprehensive datasets, conducting research and development, and providing specialised training. DETECTOR will offer an integrated approach to safeguarding the authenticity of digital evidence and strengthening the capabilities of forensic investigations at a European level.

CENTRIC will support DETECTOR in addressing these challenges by preparing a comprehensive landscape review of synthetic media detection together with AI researchers, LEAs, forensic scientists, and legal and ethical experts, leading to a series of user and technical requirements.

By directly engaging with these stakeholders, CENTRIC will develop several serious games to enhance practitioners’ understanding of synthetic media and detection approaches, organise yearly testing and evaluation exercises, and closely collaborate with the wider community to deliver results that counter AI-driven media manipulation and its impacts on crime.

Funding Statement


Funded by the European Union
