Artificial intelligence (AI) has become a part of nearly every industry, and the legal field is no exception. AI-generated evidence in particular is constantly evolving, and attorneys must keep learning about it so that we stay informed, prepared, and, most importantly, ahead of the curve.
AI-generated evidence consists of a variety of data, including but not limited to photographs, videos, and other documents or materials that are created or modified using AI technology to analyze data or generate new content. While this form of evidence can be an innovative tool, there is widespread, valid concern about its use. Unlike tangible, physical evidence, AI-generated materials have no clear point of origin, which raises concerns about authenticating such evidence and verifying its integrity.
To determine whether evidence is AI-generated, there are a few steps we can take to guide our assessment. The first is to trust our instincts. Attorneys are trained to evaluate the credibility, consistency, and plausibility of all evidence and information presented to them while investigating and developing their case strategy. That same instinct should be applied when reviewing potentially AI-generated evidence. If a document, text, image, or video appears too polished or inconsistent with the surrounding facts, it may warrant closer scrutiny. There is something to be said for thinking a piece of evidence is “too good to be true,” and there is no harm in following up on the validity of evidence that has raised questions or given pause.
For example, consider a scenario where opposing counsel produces a video allegedly depicting a plaintiff speaking in a measured, articulate manner with calculated pauses and minimal emotion. Yet during deposition testimony, the attorney sees that the same plaintiff speaks rapidly with an accent, displays natural hesitation, or is animated. Such discrepancies between the proffered evidence and real-world presentation should raise immediate concerns about manipulation or artificially generated content.
Depositions are therefore a critical investigative tool, not only for factual development but also for evaluating whether the evidence aligns with the witness’s authentic behavior, speech patterns, and demeanor. Attorneys may also investigate whether a reputable third party, such as an insurance company, took any pre-suit recorded statements of plaintiffs or relevant fact witnesses, which can be compared against the proffered digital evidence for discrepancies.
Once suspicions arise, the next step is to obtain the underlying metadata associated with the evidence. Metadata functions as a digital fingerprint, often revealing the creation date and time and the device or software used to generate the file.
For example, if a party claims a text message was sent five years earlier, metadata may show that the file was actually created recently and identify the software that created it. In many cases, metadata can serve as our smoking gun.
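To illustrate the kind of information at stake, the short script below is a minimal sketch of how embedded metadata can be read from an image file. It assumes Python with the Pillow imaging library, and the file name is hypothetical; any review intended for use in litigation should be performed by a qualified forensic examiner working from a preserved copy of the original.

```python
# Minimal sketch: reading basic metadata from an image file using Python
# and the Pillow library. The file name below is hypothetical.
import os
from datetime import datetime, timezone

from PIL import Image
from PIL.ExifTags import TAGS

path = "produced_exhibit.jpg"  # hypothetical file produced in discovery

# Filesystem timestamp (reflects the copy on this machine, which may
# differ from the date the file was originally created).
stat = os.stat(path)
print("Last modified:", datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc))

# Embedded EXIF metadata, which may record the capture date and the
# device or software used to generate the file.
with Image.open(path) as img:
    exif = img.getexif()
    for tag_id, value in exif.items():
        tag = TAGS.get(tag_id, tag_id)
        if tag in ("DateTime", "Make", "Model", "Software"):
            print(f"{tag}: {value}")
```

A file stamped with recent creation dates, or with editing software listed where a phone camera would be expected, is the kind of inconsistency that warrants the closer scrutiny described above.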
Nevertheless, attorneys must remain aware and vigilant that, in the age of AI, metadata can also be manipulated. It is just one piece of the puzzle used to evaluate the authenticity of proffered evidence.
If authenticity remains in question, retaining an appropriate expert is critical. Courts increasingly rely on technical experts to interpret complex digital evidence, particularly where AI tools may be involved. Relevant experts may include digital forensics specialists, data analysts, and cybersecurity professionals.
Courts have begun to confront the admissibility of AI-generated or allegedly AI-manipulated evidence. In Huang v. Tesla,[1] a California state court rejected an objection to video evidence premised on the generalized claim that the footage “could have been” a deepfake. The court made clear that the mere possibility of AI manipulation is insufficient to exclude evidence. Instead, parties must present concrete, technically grounded proof demonstrating that the evidence is inauthentic or unreliable. This ruling shows that challenges to AI-related evidence must be supported by specific facts, expert analysis, or forensic evidence.
This approach is consistent with longstanding authentication requirements under Rule 901 of the Federal Rules of Evidence. Rule 901 requires only that the proponent produce evidence “sufficient to support a finding that the item is what the proponent claims it is.” The standard is intentionally low: courts do not demand absolute proof of authenticity, but rather a prima facie showing through the testimony of a witness with knowledge, distinctive characteristics, metadata, or evidence describing the process or system that produced the item. Once that threshold is met, the burden shifts to the opponent to demonstrate a genuine issue as to authenticity. In the AI context, courts are making clear that hypothetical concerns about deepfakes do not, by themselves, defeat admissibility.
At the same time, attorneys must grapple with the potential consequences of improper use of AI-generated evidence in their own cases and the importance of identifying and challenging improper use by their adversaries. AI presents risks that we cannot ignore, such as hallucinated case law citations.
In Mendones v. Cushman & Wakefield, Inc.,[2] the Superior Court of California, Alameda County, dismissed the case with prejudice when it was discovered that pro se plaintiffs had used deepfake videos and altered photographs as exhibits to their motion for summary judgment. As previously discussed, the subject videos showed witnesses testifying in an unnatural manner, with unsynchronized mouth movements and other identifiable defects. Evaluation of the photographs revealed falsified content, such as altered screenshots of messages and a security guard superimposed into an image.
In its order, the court stated that it “remains suspicious of the other evidentiary submissions, but it does not have the time, funding, or technical expertise to determine the authenticity of Plaintiffs’ statements or conduct a forensic analysis.” This point leads to a deeper discussion about how improper use of altered, false, or distorted AI-generated evidence further burdens court time and resources. Courts have responded by imposing both monetary and non-monetary sanctions for AI-generated hallucinations in legal briefs. Recently, an Eastern District of Pennsylvania federal judge sanctioned two attorneys after they filed a brief that included such AI-hallucinated citations.
While these sanctions are one tool courts use to send a strong message to the legal community about the consequences of such misconduct, attorneys must also remember their obligations under the rules of professional conduct and their oath of fidelity, honesty, and lawful practice.
This is not to say that the use of AI-generated materials is strictly prohibited. This ever-developing technology will continue to be a part of legal practice. However, all attorneys must stay apprised of the rules, protocols, and guidelines the courts have outlined for the use of AI in legal practice, and must hold their adversaries to the same standards.
Since AI is here to stay, attorneys must approach it as a tool, never as the final product. AI cannot replace the professional judgment, ethical obligations, and strategic analysis essential to competent representation. There are many nuances that attorneys, as humans, discover in their cases that can never be replicated by AI technology. Thus, in this age of technology, attorneys should remember that this human aspect of practice is a strength, and we must hold ourselves and our adversaries accountable accordingly.
[1] https://www.thomsonreuters.com/en-us/posts/ai-in-courts/deepfakes-evidence-authentication/