If Truth be Told: AI and its Distortion of Reality

As reported in a recent Washington Post article (AI is destabilizing ‘the concept of truth itself’ in 2024 election), we have already seen instances this year of people dismissing incriminating media by claiming it is fabricated. Such a response may not have been credible a few years ago, but the power of deepfake and generative AI technology now gives a person plausible deniability against almost any allegation. This places us in a grey area where claims can be refuted with no definitive truth. This article discusses how doctored media distorts reality, how AI enables plausible deniability, the impact on investigations and the chain of custody, and methods to combat misinformation.

Distorting Reality and Eroding Trust

One of the most significant impacts of fabricated media is its ability to distort reality and manipulate public perception. Individuals with malicious intent can use AI technology to create convincing media depicting fabricated scenarios, statements, or actions. As a result, public figures, politicians, and even everyday individuals may find themselves at the mercy of false narratives.

Most recently, an AI-generated robocall impersonating President Biden told New Hampshire voters not to vote in the state’s primary election, one of the first cases of AI-enabled voter suppression in the lead-up to the 2024 presidential election. The incident raises serious concerns and has led to calls for new laws and regulation to protect individuals from deepfake media.

The consequences of such manipulation are extreme, as trust in information sources is eroded. Discerning truth from fiction becomes increasingly challenging with the spread of fake media. This erosion of trust affects individuals and has broader implications, including a potential decline in public confidence in institutions and authorities.

Plausible Deniability: A Dangerous Consequence

Fabricated media has introduced a new dimension to plausible deniability, allowing people to distance themselves from their actions by claiming the content is fabricated. This trend poses a severe threat to accountability and the concept of truth, as individuals can exploit the uncertainty surrounding the authenticity of content to avoid consequences.

As discussed in the Washington Post article, AI creates what is known as a “liar’s dividend”: once fabricated content is commonplace, even genuine evidence can be dismissed as fake, and the truth becomes unclear.

In legal contexts, the use of deepfakes complicates matters for law enforcement and legal professionals. The plausible deniability of manipulated media challenges the traditional reliance on visual evidence, introducing skepticism that can obstruct the pursuit of justice and undermine the chain of custody.

Impact on Law Enforcement Investigations

Law enforcement agencies are grappling with the challenges posed by the rise of false media in criminal investigations. The FBI Cyber Division has warned that criminals were expected to use synthetic media for targeted ‘spear phishing’ attacks within 12 to 18 months of its advisory.

Furthermore, altering media to create alibis or false narratives can obstruct the investigative process. Criminals and their legal representatives may exploit the plausible deniability introduced by generative AI and deepfake technology, casting doubt on their involvement in a crime by challenging the prosecution’s digital evidence. Digital evidence plays an increasingly important role in the justice process, but unless its chain of custody can be proven, plausible deniability introduces an element of doubt, as the sketch below helps illustrate.
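
One common building block for a verifiable chain of custody is a hash chain: each custody event records a cryptographic hash of the evidence together with the hash of the previous log entry, so any later change to the file or to the log is detectable. The Python sketch below is a minimal illustration of that idea only; the event fields and function names are hypothetical, not taken from any specific evidence-management system.

```python
import hashlib
import json
import time

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def append_custody_event(log: list, evidence_path: str, actor: str, action: str) -> dict:
    """Append a custody event whose hash covers the evidence and the prior entry."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    event = {
        "timestamp": time.time(),
        "actor": actor,    # e.g. "Det. Smith" (illustrative)
        "action": action,  # e.g. "acquired", "transferred"
        "evidence_sha256": file_digest(evidence_path),
        "prev_hash": prev_hash,
    }
    # The entry hash chains this event to everything that came before it.
    event["entry_hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    log.append(event)
    return event

def verify_chain(log: list, evidence_path: str) -> bool:
    """Recompute every link; any edit to the file or the log breaks the chain."""
    prev_hash = "0" * 64
    for event in log:
        body = {k: v for k, v in event.items() if k != "entry_hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if event["prev_hash"] != prev_hash or event["entry_hash"] != expected:
            return False
        prev_hash = event["entry_hash"]
    return log[-1]["evidence_sha256"] == file_digest(evidence_path) if log else False
```

In practice, the final entry hash would also be anchored somewhere the custodian cannot alter, such as a signed timestamp or an external ledger, so the log itself cannot simply be rebuilt from scratch.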

Insurance Investigations and Misinformation

The insurance industry is not immune to the far-reaching implications of deepfakes. As insurers rely heavily on evidence, including visual documentation, to assess claims, the potential for fraudulent manipulation through synthetic media poses a significant threat. Individuals seeking to exploit insurance policies may use manipulated media to support false claims, making it challenging for insurance investigators to discern fact from fiction.

As previously reported, bad actors are already impersonating trusted voices in voicemails, requesting that invoices be issued or payments be made. This has dangerous implications for cyber insurance, especially now, as AI destabilizes the truth and undermines the integrity of digital media.

This new frontier of deception complicates the claims process and may result in increased costs for insurance companies. The industry must adopt advanced authentication measures and AI-driven tools to mitigate these risks and detect potential deepfake content within claims submissions.

Combating Disinformation and Finding Truth

As the prevalence of deepfakes continues to rise, there is an urgent need for a multifaceted approach to tackling the issue. Technological advancements in AI-based detection tools are essential for effectively identifying manipulated content. Additionally, education and awareness-raising at all levels about the existence and potential impact of doctored media can help individuals critically evaluate the information they encounter.

Aviv Ovadya, a Harvard University affiliate and AI expert, said that tech companies could prevent fake media by “watermarking audio to create a digital fingerprint or joining a coalition meant to prevent the spreading of misleading information online by developing technical standards that establish the origins of media content.”
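
The “technical standards that establish the origins of media content” Ovadya describes generally pair a cryptographic fingerprint of the media with a digital signature from the capture device or publisher (the C2PA content-provenance standard takes broadly this approach). The Python sketch below illustrates that idea using the third-party cryptography package; it is a minimal illustration of signed provenance under those assumptions, not an implementation of any particular standard, and the device key shown is purely hypothetical.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def fingerprint(media_bytes: bytes) -> bytes:
    """A digital fingerprint: the SHA-256 digest of the raw media bytes."""
    return hashlib.sha256(media_bytes).digest()

# Hypothetically, the capture device (camera, recorder) holds the private key.
device_key = ed25519.Ed25519PrivateKey.generate()
public_key = device_key.public_key()

# At capture time, the device signs the fingerprint, binding the content to
# its origin. The signature travels with the file as provenance metadata.
media = b"...raw audio or image bytes..."
signature = device_key.sign(fingerprint(media))

def verify_origin(media_bytes: bytes, sig: bytes) -> bool:
    """Anyone holding the public key can later check the file is unaltered."""
    try:
        public_key.verify(sig, fingerprint(media_bytes))
        return True
    except InvalidSignature:
        return False

print(verify_origin(media, signature))                # True: untouched
print(verify_origin(media + b"tampered", signature))  # False: altered
```

A true audio watermark goes further, embedding the mark imperceptibly in the signal itself so it can survive re-encoding; the signature approach sketched above only proves whether the exact bytes are unchanged.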

The impact of people using deepfakes and AI as a shield to distort reality and claim plausible deniability echoes across society, affecting trust, accountability, and various sectors, including law enforcement, justice, and professional investigations. As technology advances, an effort must be made to ensure that information is tamper-evident from the moment it is captured and that its integrity can be assured against challenges to its authenticity.

Get in touch today if you want to protect the authenticity and integrity of your digital assets and demonstrate a digital chain of custody.
