By Daniel Rankin
“Success in creating effective AI could be the biggest event in the history of our civilization, or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.” —Stephen Hawking
Artificial intelligence is a concept few understand. Yet one need not wade through scientific parlance and technical minutiae to grasp its ramifications for our future. Artificial intelligence is here. And the Luddites aren’t the only ones feeling threatened by its potential for good and evil.
I. What is AI?
Artificial intelligence, or AI, refers to a computer’s ability to mimic human learning behavior. At the risk of oversimplifying, here’s how it works: Through data gathering and trial and error, computers “learn” by predicting which action to take to optimize a programmed goal.1
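To make the trial-and-error idea concrete, here is a minimal, purely illustrative sketch in Python of a program that “learns” which of two actions best serves a programmed goal. Everything in it (the action names, the payoffs, the learning rate) is hypothetical and chosen only for illustration.

    # Illustrative sketch only: a toy "agent" that learns, by trial and error,
    # which of two actions earns the most reward. All numbers are hypothetical.
    import random

    rewards = {"action_a": 0.2, "action_b": 0.8}    # hidden payoffs the program must discover
    estimates = {"action_a": 0.0, "action_b": 0.0}  # the program's learned predictions
    learning_rate = 0.1

    for trial in range(1000):
        # Mostly exploit the best-known action, but occasionally explore.
        if random.random() < 0.1:
            action = random.choice(list(rewards))
        else:
            action = max(estimates, key=estimates.get)
        # Observe a noisy reward and nudge the prediction toward it.
        observed = rewards[action] + random.gauss(0, 0.05)
        estimates[action] += learning_rate * (observed - estimates[action])

    print(estimates)  # after many trials, the estimates reveal that action_b pays more

No one tells the program the answer; after enough trials, its predictions converge on the better-paying action. That, in miniature, is the “learning” in machine learning.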
While the concept sounds simple, the technical capability is revolutionary. Historically, we have leveraged machines to maximize our muscle. Now we’re on our way to leveraging machines to maximize our minds. So how could this new wave of intelligent machines affect one of the most intellectually rigorous professions? This article explores three major ways AI could change legal practice.
II. Deep Fake Due Process, the End of Privacy, & Changes in Substantive Law
a. Deep Fake Due Process
Artificial intelligence can certainly help judges and juries with the decision-making process, insofar as the questions are purely legal. But it can also flip the entire fact-finding process on its head, resulting in extreme injustice. That injustice will likely stem from so-called deep-fake technology.
With the help of AI, ordinary people are beginning to create “deep fakes,” which are AI-enhanced fake pictures and videos. These deep fakes leverage “machine-learning algorithms to insert faces and voices into video and audio recordings of actual people,” enabling “the creation of realistic impersonations out of digital whole cloth.”2
Sure, the current, imperfect deep-fake technology can make for funny videos that caricature celebrities and political figures. But as the technology gets better—and it will—the human eye will soon find it impossible to distinguish between real and fake.3
This means anyone with a vendetta and a little chutzpah can create a deep fake that depicts someone doing something unsavory or illegal. And guess what: the courtroom is not immune to misleading evidence. This fake evidence will inevitably leak into the courtroom, and it could dupe juries into believing an innocent person committed a crime. This is what I call “deep-fake due process.” It’s a threat. It deserves attention. It deserves solutions.
b. The End of Privacy
In Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, Professors Bobby Chesney and Danielle Citron predict a development stemming from deep-fake due process: “immutable life logs” as an alibi service.4 Because deep-fake technology will be able to portray people saying and doing things they never said or did, alibis will become essential to proving innocence in the courtroom. If a defendant cannot convince the jury that he or she was not committing the alleged crime, the jury will fall back on the deep-fake evidence—and that’s bad news for the digitally ensnared defendant.
This is where immutable life logs as an alibi service come in. Deep fakes will create a heightened demand for proof of where you are and what you’re doing at all times, so companies—and maybe even the government—will enter the lifelogging business, through which wearable technology (like an Apple Watch, for example) could track you around the clock.
Lifelogging’s potent solution to deep-fake due process, however, dispenses privacy side effects that would be hard to swallow. Lifelogs will, in effect, destroy privacy. No longer will anyone be truly alone and off the grid—ever—unless they are willing to risk spending time behind bars in a snazzy orange jumpsuit.
c. Changes in Public Law
Because lifelogging will undermine privacy, its legal effects will likely manifest in Fourth Amendment law—and more specifically, Katz’s reasonable-expectation-of-privacy test. If people begin to accept lifelogging services as a solution to deep-fake technology, then the objective prong of Justice Harlan’s two-part privacy test (i.e., that the privacy expectation is one that society is prepared to recognize as reasonable) could deteriorate.5
One concerned with long-term tracking, however, may quickly point to the Supreme Court’s recent decision in Carpenter v. United States,6 which held that the government generally needs a warrant before it reviews an individual’s cell-site location information held by wireless carriers.7
While Carpenter provides a glimmer of hope for Fourth Amendment protection in the Digital Age, its precedential weight remains shaky. The Court made clear that Carpenter was a narrow decision. And as careful readers of Carpenter also know, the Court’s reasoning departed from traditional Fourth Amendment doctrine. For those reasons, coupled with the potential shift on the Court,8 one shouldn’t place too much faith in the Court expanding Carpenter’s groundbreaking holding.
d. Changes in Civil Law
Departing from public law for a moment, some may wonder what kind of civil liability awaits deep-fake creators. As Professors Bobby Chesney and Danielle Citron also explain, victims have a menu of potential tort claims: defamation, intentional infliction of emotional distress, copyright infringement, right of publicity, and false light.9
Of these torts, false light seems to be one of the few causes of action that offers a symmetrical fit for deep fakes. False light, a privacy tort, creates liability for an individual who spreads falsehoods about a person that are objectionable but not necessarily defamatory.10 This is often the case with deep fakes. Deep fakes falsely portray people saying or doing things they never said or did, and these fabricated portrayals may not fit neatly within actionable categories of defamation.
False light also offers a recovery mechanism in jurisdictions that might take a more formalistic approach to defamation. Does a fabricated video of someone even constitute a written or oral statement? The question is less straightforward than it seems: in the typical defamation case, the defendant writes or speaks the offending words, whereas a deep fake puts fabricated words in the plaintiff’s own mouth. False light may thus be the more apt tort for deep fakes.
Yet some jurisdictions have refused to recognize false light as a viable cause of action. New York, Texas, and Florida, for example, have all declined to recognize it as a tort. These states reason that because false light duplicates defamation in many ways, recognizing it would add unneeded tension with the First Amendment’s freedom-of-speech guarantee.11 But given the deep-fake threat, we may see those jurisdictions reconsider and pluck it back out of the tort wastebasket.
III. The AI Surveillance State
Privacy proponents will recoil upon hearing that AI is also increasing the effectiveness of state-surveillance techniques. Before artificial intelligence, cameras were useful only to the extent someone either observed a live feed or reviewed recorded footage. No more.
With the help of AI, cameras can now perceive scenes in three dimensions and make sense of what they see—no human assistance needed. Even more, AI-assisted cameras are beginning to operate beyond ordinary human capability: they can identify millions of faces and predict human behavior.
a. Facial Recognition
Facial recognition is nothing new. We see it on the new iPhone X with its face-scanning technology, and we see it on Facebook when its software analyzes faces to suggest tagging a friend in a photo.
The way it works is conceptually simple: The camera looks at a face, extracts distinguishing facial features (like the size and width of a nose, for example), and then compares those features against a database of pictures (sometimes taken from drivers’ license photos).
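For a sense of how little machinery the matching step requires, here is a purely hypothetical Python sketch. The photo names and the four-number “feature vectors” are invented; the point is only that, once a face has been reduced to numbers, matching is just a distance calculation against every stored record.

    # Illustrative sketch only: nearest-neighbor matching of a face's numeric
    # "feature vector" against a database of stored photos. All values are hypothetical.
    import math

    database = {
        "license_photo_1041": [0.42, 0.77, 0.31, 0.58],
        "license_photo_2208": [0.15, 0.62, 0.90, 0.24],
    }

    def distance(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def best_match(probe, db, threshold=0.25):
        # Return the closest stored photo, but only if it is close enough to count.
        name, dist = min(((n, distance(probe, v)) for n, v in db.items()),
                         key=lambda pair: pair[1])
        return name if dist <= threshold else None

    camera_measurements = [0.40, 0.75, 0.33, 0.60]  # hypothetical probe from a live camera
    print(best_match(camera_measurements, database))  # matches "license_photo_1041"

Real systems use far richer feature vectors and far larger databases, but the comparison step is conceptually this simple.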
China, so far, is the world leader in using facial-recognition technology as a surveillance tool. Under its “Sharp Eyes” program, China’s goal is to recognize all Chinese citizens within seconds of their face appearing on a camera.12 To that end, China has scattered cameras across the country.13 Its comprehensive face database has already led to one Chinese city capturing 375 suspects and 39 fugitives since the program’s inception.14
While a similar program in the United States could lead to an increase in captured criminals, many would voice concern about the privacy implications. It’s well-settled Fourth Amendment doctrine that when an individual goes out in public, that individual loses (or at least has a severely diminished) expectation of privacy.15
But now in our post-Carpenter world, the legality of a Sharp Eyes-like program for criminal-justice purposes would be questionable at best.16 The legality of such a program for foreign-intelligence purposes, however, might fare better, particularly given the procedural hurdles17 and deferential standards of review18 courts lean on when it comes to national-security issues.
b. Behavior Prediction
In the tech world, facial recognition has become a known commodity. Behavior prediction, on the other hand, is the latest trend. In addition to recognizing who you are, AI-equipped cameras will be intelligent enough to predict your behavior. This technology already exists, and it’s only getting better.
One research team, for example, claims it has created a “gaydar”—that is, a machine that can determine an individual’s sexual orientation. Some say that’s impossible, but the machine has already demonstrated an ability to classify sexual orientation using algorithms based on facial features and expressions, with a reported accuracy rate as high as 91%.19
Another company, Faception in Tel Aviv, created a program that purports to determine whether someone is a criminal—just by looking at a face.
Skeptics may be thinking: “Okay, well that’s not that impressive. The camera can just run photos of someone against a criminal database.”
Well, it doesn’t. Based on the premise that facial features reveal personality traits (a notion called “physiognomy”), the program simply reads a face and assigns it a probability of criminal intent. In one demo, the program claimed 90% accuracy.20
c. Changes in the Law
There’s no need to rehash the privacy implications here; this article beats that horse enough. Instead, the law could noticeably change on two other fronts: evidence and warrants.
Before swinging open the courthouse doors, the police first need to catch a suspect. One way the police catch suspects is through arrest warrants, which are based on probable cause. If an AI-equipped camera identifies someone as a likely criminal, will that be enough to meet the probable-cause standard? If so, and assuming the technology assigns a percentage of criminality to an individual, how much will satisfy probable cause? 90%? 70%? 50%? This also raises the question whether it’s even ethical to arrest someone before he or she commits a crime.
What about the role of this technology in the courtroom as evidence? Would it be too prejudicial under Rule 403 of the Federal Rules of Evidence to show a jury that AI software determined that a defendant is likely a criminal? What if, instead, prosecutors used the technology during trial as a buttress for their arguments?
A closing argument, for example, could sound like this: “Based on all the eye-witness testimony, along with the determination that the defendant, considering his facial features, has an 80% likelihood of having committed the charged crime, you should find the defendant guilty.”
These types of arguments could be commonplace in the future. And how would these arguments square with the Constitution’s Confrontation Clause? The Confrontation Clause in the Sixth Amendment states that a defendant has the right to confront all witnesses against him.21
Under current interpretations of the Confrontation Clause, a defendant has the right to confront the lab analysts who test forensic evidence such as DNA.22 Does this mean defendants will also have the right to confront the software developers who helped create the AI-equipped cameras? Will cross-examinations about the accuracy of complex algorithms go right over jurors’ heads? These are all tough questions, and they’ll be no small task for courts to answer.
IV. AI Machines to Replace Lawyers, Judges, and Juries
Advances in technology inevitably bring distress about being replaced by a smart, autonomous machine. Those with jobs that require simple, repetitive tasks have long had cause for concern. But now, with an ever-strengthening AI presence, should those with jobs that require complex problem-solving skills (e.g., doctors and lawyers) be concerned too? Maybe. AI could eventually replace lawyers, judges, and juries. And, well, everyone.
a. AI v. Lawyers
In San Francisco, IBM recently unveiled a project still in development called “Project Debater.” Project Debater pits artificially intelligent machines against human debaters. And the machines are getting pretty good at it.
Here’s what’s going on: The machine is given a topic, it listens to a human’s argument, and then it pieces together a rebuttal based on a search of arguments posted on the web. “Project Debater doesn’t try to build an argument based on an understanding of the subject in the question. Instead, it simply constructs one by combining elements of previous arguments, along with relevant points of information from Wikipedia.”23
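IBM has not published the system’s source code, but the retrieve-and-stitch approach the quote describes can be sketched in a few hypothetical lines of Python: score stored argument snippets by how many words they share with the topic, then splice the top scorers together. The snippets below are invented for illustration.

    # Illustrative sketch only (not IBM's actual system): a toy "debater" that
    # assembles a rebuttal from previously written snippets, ranked by crude word overlap.
    snippets = [
        "Studies suggest remote video testimony reduces costs for courts and litigants",
        "Critics respond that video testimony weakens the jury's credibility assessments",
        "Precedent favors in-person confrontation of witnesses in criminal trials",
    ]

    def overlap(topic, snippet):
        # Count how many distinct topic words also appear in the snippet.
        return len(set(topic.lower().split()) & set(snippet.lower().split()))

    def build_rebuttal(topic, corpus, top_n=2):
        # Rank the stored snippets by overlap with the topic and stitch the best ones together.
        ranked = sorted(corpus, key=lambda s: overlap(topic, s), reverse=True)
        return ". ".join(ranked[:top_n]) + "."

    print(build_rebuttal("video testimony of witnesses in criminal trials", snippets))

Nothing in that sketch understands the topic; it simply recombines what others have already written, which is precisely the weakness discussed next.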
Some believe that robots’ inability to understand words or the topic itself is a crucial weakness. But this weakness shouldn’t assuage lawyers’ concerns. While some lawyers pride themselves on creative arguments, aren’t many legal arguments just syllogisms grounded in precedent? And aren’t most legal briefs—which make arguments—posted on legal databases like Westlaw and LexisNexis? Robolawyers may soon outpace their human counterparts in the courtroom.
b. AI v. Judges
One of the main gripes about judges is that they don’t apply the law fairly. After all, they, like everyone else, have human fallibility and prejudices. For this reason, there is now substantial research—and progress—in creating “robojudges.” Robojudges, some argue, will finally eliminate inequality in the law’s interpretation: “they could be programmed to all be identical and to treat everyone equally, transparently applying the law in a truly unbiased fashion.”24
And bias isn’t the only thing robojudges could eliminate. Because robojudges will have virtually unlimited memory and learning capacity, they, unlike human judges, will know every relevant legal principle applicable to any particular case.25 This makes for a much fairer and more efficient legal process. As a result, state prosecutors may decrease their involvement in the widely criticized plea-bargaining process, thus letting many defendants finally have their day in court.
While robojudges will supposedly optimize due process, they also have inherent flaws—namely, bugs and hackability. The potential for bugs or glitches may make defendants uneasy about whether they got a fair shake in the courthouse. Those disgruntled defendants could then pepper appeals courts with algorithmic challenges, stumping even the most well-respected jurists who lack technical training and education.
Despite these concerns, AI has already seeped into courtrooms. Judges across the country are now using AI in bail hearings to determine flight risk and risk to public safety, among other factors. For example, “judges in some cities now receive algorithmically generated scores that rate a defendant’s risk of skipping trial or committing a violent crime if released.”26
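The inner workings of these tools are typically proprietary and vary by jurisdiction, but conceptually a risk score can be as simple as a weighted sum of facts about the defendant, squashed into a probability. The following Python sketch is hypothetical; the features and weights are invented and do not reflect any actual pretrial risk-assessment tool.

    # Illustrative sketch only: turning a defendant's record into a single risk score.
    # The features and weights are invented, not those of any real tool.
    import math

    weights = {"prior_arrests": 0.35, "prior_failures_to_appear": 0.80, "age_under_25": 0.45}
    bias = -1.5

    def risk_score(defendant):
        # Weighted sum of the defendant's features, squashed to a 0-to-1 probability.
        total = bias + sum(weights[f] * defendant.get(f, 0) for f in weights)
        return 1 / (1 + math.exp(-total))

    defendant = {"prior_arrests": 2, "prior_failures_to_appear": 1, "age_under_25": 1}
    print(round(risk_score(defendant), 2))  # roughly 0.61 with these invented numbers

A score like that is what lands on the judge’s bench before a bail decision is made.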
Is it only a matter of time before artificially intelligent robots start presiding over full merits trials?
c. AI v. Juries
Why stop at lawyers and judges? AI could replace juries, too. One might object and say, “Fact-finding is a peculiar human process; robots couldn’t assess eyewitness credibility or weigh evidence.”
Sure, a reasonable objection. But it overlooks the fact that machines already purport to determine sexual orientation and criminality with mind-boggling accuracy. That same technology could, theoretically, make credibility assessments based on a witness’s tone of voice, appearance, and other physically examinable traits that separate liars from truth-tellers.27 Besides, the trial process already maps remarkably well onto AI: evidence becomes data, and jury instructions become protocol.
d. Dude, where’s my car? Why lawyers, judges, and juries shouldn’t worry about losing their jobs
Despite the above ominous premonitions, lawyers, judges, and juries shouldn’t worry too much about AI-robot replacements. That’s because one of the steepest hurdles for robots to clear is natural language. Language is supremely complex, especially for machines. This is particularly evident in online translators, which often produce problematic translations.
For example, in United States v. Cruz–Zamora, a Kansas district court had to address whether an officer, through consent, could search a Spanish-speaking defendant’s car when the officer’s use of Google Translate resulted in a miscommunication. The officer typed into the translator, “Can I search your car?” which resulted in the phrase, “¿Puedo buscar el auto?” The Spanish-speaking defendant answered “Yes,” because the literal translation was, “Can I find your car?”28
As you may have guessed, the case ended up in court because the officer found piles of drugs in the defendant’s car. The defendant challenged the search under the Fourth Amendment, arguing that he didn’t actually consent because he thought he was answering a different, and seemingly obvious, question. The district court agreed, concluding that the government didn’t meet its burden in showing that the defendant had freely consented to the search.29
Cruz–Zamora demonstrates that although machines can process language to a considerable degree, they still fail to understand the words themselves. Because of this conundrum, it will be a long time before machines begin to seriously dabble in the legal profession’s most important currency—language. And when, or if, that time comes, machines will likely have taken everyone’s jobs anyway.
V. Conclusion
Artificial intelligence brings with it an array of both challenges and solutions for the law. To make the most of its promising solutions, we must take its challenges in stride. This means acknowledging and improving AI’s bugs and inherent weaknesses, while also accounting for its potential to undermine our civil liberties. Only then can we have a balanced discussion about intermingling our centuries-old legal system with the latest robotic neural networks.
___________________________________
1 See generally Max Tegmark, Life 3.0: Being Human In the Age of Artificial Intelligence 49–82 (2018); see also Tom Harris, How Robots Work: Robots and Artificial Intelligence, HOW STUFF WORKS, https://science.howstuffworks.com/robot6.htm.
2 Bobby Chesney & Danielle Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 CAL. L. REV. (forthcoming 2019), available at https://ssrn.com/abstract=3213954.
3 See, e.g., Good Morning America, Jordan Peele uses AI, President Obama in fake news PSA, YOUTUBE (Apr. 18, 2018), https://www.youtube.com/watch?v=bE1KWpoX9Hk.
4 Chesney & Citron, supra note 2, at 52–56.
5 Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring).
6 138 S.Ct. 2206, 2217 (2018).
7 Id. at 2220.
8 See generally Orin S. Kerr, Judge Kavanaugh on the Fourth Amendment, SCOTUSBLOG (July 20, 2018), http://www.scotusblog.com/2018/07/judge-kavanaugh-on-the-fourth-amendment/ (“In tough Fourth Amendment cases that divide the Supreme Court, a Justice Kavanaugh would likely be on the government’s side.”).
9 Chesney & Citron, supra note 2, at 34–36.
10 See generally Cain v. Hearst Corp., 878 S.W.2d 577 (Tex. 1994) (detailing the substance and history of false light).
11 See Jews for Jesus, Inc. v. Rapp, 997 So.2d 1098 (Fla. 2008) (“Because we conclude that false light is largely duplicative of existing torts, but without the attendant protections of the First Amendment, we decline to recognize the tort . . . .”); Howell v. New York Post, 612 N.E.2d 699 (N.Y. 1993) (reasoning that because the state had already enacted a statutory right to privacy, there was no need to recognize the common-law right to privacy); Cain, 878 S.W.2d at 579 (declining to recognize false light because it “duplicates other rights of recovery” and because it increases the “tension that already exists between free speech constitutional guarantees and tort law.”).
12 See generally Simon Denyer, In China, facial recognition is sharp end of a drive for total surveillance, WASH. POST (Jan. 7, 2018), https://www.washingtonpost.com/news/world/wp/2018/01/07/feature/in-china-facial-recognition-is-sharp-end-of-a-drive-for-total-surveillance/?utm_term=.0b9cac9ecab4.
13 See Oiwan Lam, With ‘Sharp Eyes,’ Smart Phones and TV Sets Are Watching Chinese Citizens, ADVOX (Apr. 3, 2018), https://advox.globalvoices.org/2018/04/03/with-sharp-eyes-smart-phones-and-tv-sets-are-watching-chinese-citizens/ (“[T]here are more than 360,000 surveillance cameras installed in Linyi City, out of 2.93 million surveillance cameras in all Shandong province.”).
14 Tara Francis Chan, One Chinese city is using facial-recognition that can help police detect and arrest criminals in as little as 2 minutes, BUSINESS INSIDER (Mar. 19, 2018), https://www.businessinsider.com/china-guiyang-using-facial-recognition-to-arrest-criminals-2018-3.
15 See United States v. Knotts, 460 U.S. 276, 281 (1983) (“A person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”); but see Carpenter, 138 S.Ct. at 2219 (noting that individuals have a “reduced expectation of privacy in information they share with others.”) (emphasis added).
16 Cf. Carpenter, 138 S.Ct. at 2214 (“[A] central aim of the Framers was ‘to place obstacles in the way of a too permeating police surveillance’”) (quoting United States v. Di Re, 332 U.S. 581, 595 (1948)).
17 See, e.g., Clapper v. Amnesty Int’l USA, 568 U.S. 398 (2013) (holding that the plaintiffs lacked standing to challenge § 702 of FISA); ACLU v. NSA, 493 F.3d 644 (6th Cir. 2007) (holding that the plaintiffs lacked standing to challenge the Terrorist Surveillance Program due to the State Secrets Privilege).
18 Cf. United States v. United States District Court, 407 U.S. 297, 321–22 (1972) (noting that its Fourth Amendment analysis did not apply to activities of “foreign powers or their agents.”); Trump v. Hawaii, 138 S.Ct. 2392, 2401 (2018) (explaining that courts traditionally accord deference to the executive when it comes to decisions of national security).
19 Sam Levin, New AI can guess whether you’re gay or straight from a photograph, THE GUARDIAN (Sept. 7, 2017), https://www.theguardian.com/technology/2017/sep/07/new-artificial-intelligence-can-tell-whether-youre-gay-or-straight-from-a-photograph.
20 Gus Lubin, Facial-profiling could be dangerously inaccurate and biased, experts warn, BUSINESS INSIDER (Oct. 12, 2016), https://www.businessinsider.com/does-faception-work-2016-10.
21 U.S. CONST. amend. VI (“In all criminal prosecutions, the accused shall enjoy the right . . . to be confronted with the witnesses against him . . . .”).
22 See Melendez-Diaz v. Massachusetts, 557 U.S. 305 (2009) (holding that because certificates of forensic analysis functioned as affidavits, which are categorically considered testimonial statements, the defendant had the right to confront and cross-examine the analysts who prepared them).
23 Will Knight, This AI program could beat you in an argument—but it doesn’t know what it’s saying, MIT TECH REV. (June 19, 2018), https://www.technologyreview.com/s/611487/this-ai-program-could-beat-you-in-an-argumentbut-it-doesnt-know-what-its-saying/.
24 Tegmark, supra note 1, at 105.
25 Id. at 106.
26 Sam Corbett-Davies, Even Imperfect Algorithms Can Improve the Criminal Justice System, N.Y. TIMES (Dec. 20, 2017), https://www.nytimes.com/2017/12/20/upshot/algorithms-bail-criminal-justice-system.html.
27 Cf. Jeff Parsons, Artificial intelligence can tell a criminal based on facial recognition alone, claims creepy Minority Report study, MIRROR (Nov. 22, 2016), https://www.mirror.co.uk/tech/artificial-intelligence-can-tell-criminal-9309765 (discussing how AI can determine who may be a criminal by judging facial features).
28 No. 17-40100-CM, 2018 WL 2684108, at *1 (D. Kan. June 5, 2018).
29 Id. at *3.