Safeguarding customers against deepfake harm
The era of deepfake artificial intelligence (AI) is marked by a profound erosion of trust. High-profile deceptions in recent years have led people to question the authenticity of what they see and hear. In January 2024, for example, voters received a robocall mimicking U.S. President Joe Biden's voice, urging them not to vote in the New Hampshire presidential primary. Around the same time, videos circulated of Elon Musk appearing to endorse free cryptocurrency giveaways. Both were fabricated with deepfake technology.
These manipulations spread misinformation and cause real harm. A recent report by the Canadian Security Intelligence Service (CSIS) describes deepfakes as media created with advanced AI that alters or generates images, voices, videos, or text. The technology lets perpetrators place individuals in situations they never experienced, often by splicing fabricated elements into genuine media content. Neal Jardine, global director of cyber risk intelligence at BOXX Insurance, says deepfakes erode public trust and notes that cyber insurance policies can cover the damages such deceptions cause.
Protecting clients against deepfake risk is much like defending against ransomware, Jardine emphasizes: education is vital to making clients aware of how prevalent these scams have become. AI deepfake scams generally fall into three categories. The first is a variation on the familiar grandparent scam, in which cybercriminals clone a relative's voice from recordings, then call posing as a grandchild in distress, exploiting emotional ties to extort money.
The second category targets the corporate realm, where finance executives receive doctored requests, deepfaked voice or video messages, from supposed CEOs directing urgent payments. The same techniques can serve market manipulation: a fabricated video of a company CEO making misleading statements can move the stock price and yield significant profits for cybercriminals. Given the damage AI deepfakes can inflict on reputations and financial stability, Jardine underscores the importance of cultivating awareness and skepticism to identify the telltale signs of fraudulent content.
The third category is non-consensual deepfake pornography, whose spread underscores the urgency of legislative measures to protect victims and curb the proliferation of such material. Here too, Jardine stresses education, and he points to the technology's current limitations: deepfakes still struggle to render facial features accurately, especially around the hairline and where glasses meet the face. By staying alert to subtle inconsistencies in visual and audio cues, individuals can detect deepfake manipulations and limit their impact. Ultimately, fostering a culture of digital literacy and skepticism is crucial to navigating the complex landscape of AI-generated content.