Editor’s Note
As deepfake technology advances, the legal and regulatory frameworks to combat its misuse remain a work in progress. This article examines the current state of deterrence measures, highlighting the ongoing development of key regulations like the EU’s AI Act and the challenges in establishing decisive enforcement.

Alongside technical countermeasures, legal and regulatory measures are also essential. The EU's AI Act (Editor's note: the world's first comprehensive AI regulation, governing AI technology broadly and characterized by requirements, obligations, and penalties tiered by risk level) aims to mandate clear labeling of content created by generative AI. In Europe, this is likely to be a crucial step in preventing the malicious use of deepfakes in elections.

Legal regulation of deepfakes in the United States, by contrast, is still at an early stage. A step forward seemed to come in September 2024, when the Governor of California signed a bill regulating election advertisements that use deepfake images, audio, or video.
Just one day after the signing, however, a lawsuit was filed to block the law on the grounds that it violated freedom of speech. A federal district court accepted this argument and issued a temporary restraining order, halting enforcement of the new law.

Ultimately, no federal-level legislation was enacted before the presidential election, and deterrence measures still lack decisive force.
The advancement of deepfake technology is significantly reshaping the information-warfare landscape of elections. As AI evolves, the quality of deepfake content continues to improve, and technical and legal countermeasures must advance in step. At present, there is no way to completely prevent the spread of misinformation via deepfakes; persistent effort on both the technical and regulatory fronts will be needed going forward.
