Automating Surgery: The Law of Autonomous Surgical Robots
by David Britton*
Autonomous surgical robots will soon operate on patients. Robots will use sensors to measure patients’ physiology and pathologies, use that information to plan how to complete the necessary surgical actions, and execute those plans by physically moving surgical tools–all without direct human control. These emerging surgical robots pose new questions for the legal and regulatory regimes surrounding medical devices and the practice of medicine.
This paper builds on a growing body of legal scholarship about Robot Law to examine the legally-disruptive potential of robots in surgery. The United States Food and Drug Administration (FDA) will evaluate new robots before they reach market, most likely through its rigorous premarket approval process, which is accompanied by a preemption of state tort law. The FDA has experience dealing with uncertainty in inherently unpredictable new technologies, and should eventually be able to find a way to evaluate surgical robots. In time, FDA regulation will clash with state licensure of medical professionals as people begin to regard robots less as tools and more as anthropomorphized social actors who themselves practice medicine. Because of autonomous robots’ novel capabilities, surgical robots may push federal regulation into more direct conflict with the states’ traditional role in regulating the practice of medicine.
* JD/MS dual-degree candidate, Duke University School of Law & Pratt School of Engineering. Student researcher, Brain Tool Laboratory, Humans and Autonomy Lab, & Science, Law, and Policy Lab. Thanks to Professors Arti Rai and Barak Richman for the course that inspired this project and for their helpful comments along the way, to Professor Buz Waitzkin for his thoughtful feedback, and to Dr. Patrick Codd, the Brain Tool Lab, and my fellow lab members for introducing me to surgical robots and supporting my interest in robot law.
Surgical robotics is a growing, $3 billion per year market. The biggest player in that market is Intuitive Surgical, Inc., which estimates that its da Vinci robotic surgery platform was used in about 600,000 procedures worldwide in 2014–more than triple the number from just five years earlier. The da Vinci platform is used by surgeons in minimally-invasive surgery to control surgical tools with greater precision than would be possible without robotic assistance, and represents the current paradigm in surgical robotics.
The da Vinci system allows robotic tools inside the patient to be controlled by a surgeon at a nearby computer console. Viewing a 3D video feed of the surgical site from an endoscopic camera, the surgeon manipulates handheld controllers as he or she would move real tools. The controller movements are processed by the computer system and translated into the physical movement of the patient-end tools to recreate the surgeon’s motions, often with adjustments to the scale of the movements or filters for eliminating human hand jitters. This setup is known to roboticists as a master-slave system, wherein the surgeon directly maneuvers the ‘master’ controller while the ‘slave’ tool mimics those motions.
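For the technically inclined, the motion scaling and jitter filtering described above can be sketched in a few lines of Python. This is a toy illustration only; the function name, scaling ratio, and smoothing filter are my own assumptions, and real robotically-assisted devices use far more sophisticated signal processing.

```python
# Toy sketch of master-slave motion processing: scale down the
# surgeon's hand motions and smooth high-frequency jitter with an
# exponential moving average. All parameters are illustrative.

def process_master_motion(positions, scale=0.2, smoothing=0.5):
    """Map master controller positions (mm) to slave tool positions.

    scale:     5:1 motion scaling (a 5 mm hand move -> 1 mm tool move)
    smoothing: exponential-moving-average weight; higher values
               suppress more jitter at the cost of added lag
    """
    out, prev = [], None
    for p in positions:
        scaled = p * scale
        prev = scaled if prev is None else (
            smoothing * prev + (1 - smoothing) * scaled)
        out.append(prev)
    return out
```

In this sketch, a steady 10 mm hand movement maps to a steady 2 mm tool movement, while a sudden jerk is damped rather than transmitted directly to the patient-end tool.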
When using a master-slave system for surgery, the surgeon remains in complete control of the movement of the tools and directly carries out each task necessary to complete the procedure. Thus, instead of saving costs or leading to shorter surgeries, robotic surgery often costs more and may take longer than traditional alternatives. In addition to this lack of added efficiency, many studies have found that robotic assistance does not improve patient outcomes compared to non-robotic surgical techniques for a given procedure. Although no large-scale clinical trials comparing robotically-assisted surgery to alternative methods have been conducted, there may be hundreds of comparative studies that fail to find an objective patient-outcome advantage for master-slave robot-assisted surgery: Intuitive Surgical’s 2014 annual report states that 400 comparative studies of robot-assisted surgeries and alternative methods were published in 2014, yet cites only one example in which da Vinci was clearly superior. In essence, healthcare professionals are beginning to realize that existing surgical robots are not as helpful as once imagined.
Some roboticists have noticed these shortfalls as well, and turned their attention to the key aspect of robotics that is revolutionizing other industries: automation and autonomy. Manufacturing robotics led the latest industrial revolution with precise repetitiveness. Autopilot and other computerized control features on planes made air travel by far the safest mode of transportation through unwavering information monitoring and quick reaction times. Self-driving cars are attempting to bring this level of safety to our roadways. Amazon uses warehouse robots to fill orders efficiently and accurately, with a system autonomous to the point that no human needs to know where a given item is in the warehouse. Many of these machines carry out tasks autonomously—that is, they take in information from the environment around them, use that information to plan future actions, and carry out those actions to achieve a given goal, all without direct human intervention. In simpler terms, robots sense, think, and act.
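The sense-think-act cycle can be made concrete with a short sketch. Everything below is a hypothetical stand-in (a toy “tumor volume” and a trivial planner), meant only to show the loop structure, not any real robot’s control code.

```python
# Minimal sketch of the sense-think-act loop that defines autonomy.
# The environment model and planner are toy stand-ins.

def sense(environment):
    """Sense: read a measurement from the environment."""
    return environment["tumor_volume"]

def think(measurement, goal):
    """Think: plan the next action toward the goal."""
    return "ablate" if measurement > goal else "stop"

def act(environment, action):
    """Act: execute the plan, changing the physical world."""
    if action == "ablate":
        environment["tumor_volume"] -= 1

def run(environment, goal=0):
    """Repeat sense -> think -> act until the goal is reached,
    with no human intervention inside the loop."""
    steps = 0
    while True:
        action = think(sense(environment), goal)
        if action == "stop":
            return steps
        act(environment, action)
        steps += 1
```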
Despite early success in other fields, autonomous robots have not yet reached the operating room. In fact, the United States Food and Drug Administration (FDA) carefully refers to current robotic systems as “robotically-assisted surgical devices (RASD).” An FDA discussion paper explains that RASD “are not considered to be surgical robots,” because a “robot” by definition moves within its environment to perform tasks with some autonomy. Under this definition, for example, da Vinci is not a robot: a human surgeon does the system’s thinking and directs its action, so the system is not autonomous. Accordingly, the FDA states that “there are no surgical robots on the market.”
But these surgical robots are coming: several research groups have begun taking a stab at automating surgical subtasks with various levels of autonomy. In industry, Google and Johnson & Johnson recently teamed up to form Verb Surgical, a somewhat mysterious research group aimed at creating “the future of surgery” by teaming Google’s artificial intelligence and machine learning expertise with J&J’s medical device experience. Meanwhile, academic researchers have worked on automating bone drilling for high-precision ear surgeries, the removal of dead scar tissue, suturing within a surgical site by a laparoscopic robot like da Vinci, and needle navigation for lung biopsies.
To give a concrete example, the Brain Tool Laboratory here at Duke is developing a robot to autonomously remove brain tumors. In contrast to a master-slave robotic system, which relies on a surgeon’s visual determinations and direct controller manipulations, this system will provide an example of a truly autonomous surgical robot. First, with the help of the surgeon, medical imaging (e.g., computed tomography, magnetic resonance imaging, or ultrasound) is used to align the robot to the patient and delineate the unwanted tissue. Robotic control algorithms then plan a path for a surgical laser to be aimed and fired at the tumor to safely ablate the cancerous tissue. The robot then executes the cutting path. Importantly, the control system receives real-time feedback from the medical imaging sensors, and uses that information to repeatedly update the tool path and monitor the robot’s progress in removing the tumor. In other words, the robot’s movements are not preplanned, but rather are adjusted automatically during the course of surgery to account for changes detected by the imaging sensors. This “closed loop” feedback will allow the system to adjust for shifting brain tissue and unpredictable laser-tissue ablation efficiency. The surgeon is relieved of direct manipulation of tumor removal tools and given a supervisory role over the robot and the surgery. Overall, the device senses the environment, computes a course of action, and deploys surgical action without direct human intervention, making it a “surgical robot” within the FDA’s definition.
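The value of “closed loop” design can be seen in a toy comparison between preplanned and feedback-driven control. The numbers and function names below are hypothetical; a real system replans from medical imaging rather than from a known efficiency list.

```python
# Toy comparison of open-loop (preplanned) vs. closed-loop (feedback)
# laser ablation when per-pulse efficiency is unpredictable.
# All quantities are hypothetical illustrations.

def ablate_open_loop(tumor_mm3, pulse_mm3, efficiencies):
    """Fire a fixed, preplanned number of pulses assuming each pulse
    removes its nominal volume; no feedback corrects the plan."""
    planned = int(tumor_mm3 / pulse_mm3)       # plan fixed in advance
    for eff in efficiencies[:planned]:
        tumor_mm3 -= pulse_mm3 * eff
    return tumor_mm3

def ablate_closed_loop(tumor_mm3, pulse_mm3, efficiencies, tol=0.5):
    """Fire pulses until (simulated) imaging feedback shows the tumor
    is within tolerance; each pulse's true effect is measured."""
    pulses = 0
    for eff in efficiencies:
        if tumor_mm3 <= tol:                   # feedback: done
            break
        tumor_mm3 -= pulse_mm3 * eff           # true effect of pulse
        pulses += 1
    return tumor_mm3, pulses
```

If each pulse achieves only half its nominal effect, the open-loop plan leaves half the tumor behind, while the closed-loop controller simply fires more pulses until imaging confirms the target volume is gone.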
The line between a non-autonomous RASD and an autonomous “robot” will not always be so clear. For example, researchers at Vanderbilt and the University of North Carolina are developing a tentacle-like, curved needle robot designed to safely reach remote sections of the lung for tumor biopsies. The ‘tentacle’ in the system is formed by extending and rotating curved, concentric tube segments to navigate the tip of the needle to a target location along a curved path. Researchers demonstrated a system which gives a surgeon full control over the location of the tip of the needle, while leaving the complicated dynamics of the curved tube configuration to computer control. Position sensors along the tube help the control software ensure that the curved tubes avoid touching sensitive nerves and blood vessels, while the physician is in control of the surgical action at the point of interest. In practice, the system may appear to use traditional da Vinci-style master-slave control of the tip of the needle, but the computer algorithms are making and executing safety-critical control decisions about the rest of the device dynamics. Such borderline systems call into question the helpfulness of searching for a bright-line definition of “robot” that makes sense in the context of surgery.
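The division of labor in such a shared-control system can be sketched abstractly: the surgeon specifies only the tip target, and software selects among feasible tube configurations while enforcing obstacle constraints. The data structures below are hypothetical simplifications of the real curved-tube kinematics.

```python
# Sketch of shared control: the surgeon commands the needle tip, while
# software picks a tube configuration that reaches the tip target
# without its path touching an obstacle (nerve, vessel). The discrete
# "configurations" stand in for continuous concentric-tube kinematics.

def choose_configuration(tip_target, candidates, obstacles):
    """Return the name of the first candidate configuration whose tip
    matches the target and whose path avoids all obstacles, else None.

    candidates: iterable of (name, tip_position, path_points)
    obstacles:  set of positions the tube path must not touch
    """
    for name, tip, path in candidates:
        if tip == tip_target and not any(pt in obstacles for pt in path):
            return name
    return None
```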
Table 1: Levels of Automation

| Automation Level | Automation Description |
| --- | --- |
| 1 | The computer offers no assistance: human must take all decisions and actions. |
| 2 | The computer offers a complete set of decision/action alternatives, or |
| 3 | narrows the selection down to a few, or |
| 4 | suggests one alternative, and |
| 5 | executes that suggestion if the human approves, or |
| 6 | allows the human a restricted time to veto before automatic execution, or |
| 7 | executes automatically, then necessarily informs the human, or |
| 8 | informs the human only if asked, or |
| 9 | informs the human only if it, the computer, decides to. |
| 10 | The computer decides everything and acts autonomously, ignoring the human. |
In fact, designers of surgical robots—guided by policymakers, regulators, and other stakeholders—will need to make more nuanced decisions about the remaining human role in these systems. Experts in human-robot interaction have laid out “levels of automation,” shown in Table 1, which describe the spectrum between human-only decision-making and full robot autonomy. At each step above level 1, the computer takes on more and more of the burden of decision-making, leaving less for the human. The levels of automation are not intended to make a normative argument about proper robot design, but instead describe how supervisory control of robots can be designed. The levels of automation emphasize that issues surrounding new surgical robots are not limited to hardware or software, but instead closely involve the evolving role of the humans who collaborate with robots.
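Table 1’s spectrum can even be expressed as executable logic; the sketch below shows how a designer might gate a robot’s execution of its own suggestion on the chosen automation level. The level semantics follow Table 1, but the decision function itself is my simplified illustration.

```python
# Sketch: gating execution on the Table 1 levels of automation.
# Level semantics follow Table 1; the logic is a simplification
# (e.g., level 6's veto window is collapsed into "no veto received").

def may_execute(level, human_approved=False):
    """Return True if the computer may carry out its suggested action."""
    if level <= 4:
        return False           # levels 1-4: the human takes the action
    if level == 5:
        return human_approved  # level 5: execute only with approval
    return True                # levels 6-10: computer executes itself
```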
As robots move higher and higher up the levels of automation, legal scholars have begun to realize that robots pose a disruptive threat to existing legal doctrines and regulatory regimes, recognizing how law and policy will shape, and be shaped by, technological innovation. To fit into this broader dialogue surrounding robots and the law, this paper follows a framework developed by Professor Ryan Calo in an early work in the nascent field of Robot Law, which identified aspects of robotics that will be disruptive to current standards in American law.
The three key legally-disruptive traits of robotics identified by Professor Calo therefore guide this discussion of medical device law. First, robots transform digital information into physical changes in the real world. This “embodiment” of software redirects the focus of medical device safety regulation towards computer code and dynamic movements, adding new questions about device safety and effectiveness that may lead the FDA to conduct a more thorough review of the new wave of surgical robots. Second, autonomous robots may behave in ways neither dictated nor expected by their programmers or operators. This “emergence” creates evidentiary challenges to proving safety and effectiveness, but the FDA is well equipped to deal with these challenges. One facet of FDA evaluation, human factors testing, will be critical for ensuring robot safety while pushing the FDA to investigate and attempt to limit the users of a device. Third, people tend to project human social traits onto robots, blurring the line between person and instrument. This “social valence” of robots implicates the limits of federal jurisdiction in regulating the practice of medicine when actions previously carried out by human doctors or nurses licensed by state medical associations are taken over by regulated robots.
Relying on Calo’s level of abstraction to gain a better perspective on the legally-interesting characteristics of robots, this paper discusses how federal regulation of medical devices and the underlying issues of federalism will be impacted by increasingly autonomous surgical robots. This approach leads to the conclusion that most challenges presented by robots are not entirely new to the FDA, suggesting that the agency is capable of regulating automated surgical robots without outside policy interventions at the federal level, like the creation of a Federal Robotics Commission—something once recommended by Calo. Meanwhile, robots may disrupt existing regimes by pulling federal device evaluation and regulation into more direct contact with the practice of medicine itself, infringing on legal territory previously reserved to the Several States.
Embodiment: FDA Regulatory Options for Software-Controlled Surgical Robots
Robots take sensor inputs, stored data, and computational processing algorithms and turn them into physical action. In the context of medical devices, this embodiment of the cyber-world means that control software becomes an integral part of device safety and effectiveness. The following section discusses the FDA’s existing regulatory pathways and an important preemption clause that limits the effect of state law on some devices. Examining which pathway will be applied, I conclude that rigorous premarket approval will likely be preferred for surgical robots, and that the extra costs that process imposes on device companies are offset by the preemption of tort law it provides.
a. Background on FDA Regulation and Preemption
Medical device regulation attempts to both protect the public health and to advance it through innovation. The balancing act between ensuring safety and facilitating the development of new products is reflected in the FDA’s mission statement:
FDA is responsible for protecting the public health by assuring the safety, efficacy and security of human and veterinary drugs, biological products, medical devices, our nation’s food supply, cosmetics, and products that emit radiation. FDA is also responsible for advancing the public health by helping to speed innovations that make medicines more effective, safer, and more affordable and by helping the public get the accurate, science-based information they need to use medicines and foods to maintain and improve their health.
In the context of surgical robots, the FDA’s regulatory programs should be tailored to ensure the safety and effectiveness of robots in treating patients without placing overwhelming regulatory obstacles in the way of device developers.
Trying to strike this balance, the FDA currently has two main regulatory pathways for bringing a new medical device to market: premarket approval (PMA) and 510(k) clearance. PMA is the more stringent of the two, and is applied to devices that are “represented to be for a use in supporting or sustaining human life” or that present a “potential unreasonable risk of illness or injury.” PMA requires the FDA to determine that sufficient, valid scientific evidence assures that the device is safe and effective for its intended use. Thus, a PMA applicant generally must provide results from clinical investigations involving human subjects showing safety and effectiveness data, adverse reactions and complications, patient complaints, device failures, and other relevant scientific information. The application is often reviewed by an advisory committee made up of outside experts.
The FDA expends significant resources reviewing a PMA application, estimating in 2005 that reviewing one PMA application costs the agency an average of $870,000. One survey of medical device companies found that it took an average of 54 months to reach approval from their first communication with the FDA regarding an innovation. The same survey found that the average total costs for a medical device company from the time of product conception to approval was $94 million. Fifty-two new devices received PMA approval in 2015.
The 510(k) pathway is more popular, with 3,006 clearances in 2015. 510(k) applies to moderately risky devices, and clears a device for marketing if it is “substantially equivalent” to a “predicate” device already on the market. A predicate device is a device that was available on the market before 1976, or any device cleared since then via 510(k). The FDA will clear a device as substantially equivalent to an earlier device if:
(1) The device has the same intended use as the predicate device; and
(2) The device:
(i) Has the same technological characteristics as the predicate device; or
(ii) (A) Has different technological characteristics, such as a significant change in the materials, design, energy source, or other features of the device from those of the predicate device;
(B) The data submitted establishes that the device is substantially equivalent to the predicate device and contains information, including clinical data if deemed necessary by the Commissioner, that demonstrates that the device is as safe and as effective as a legally marketed device; and
(C) Does not raise different questions of safety and effectiveness than the predicate device.
A 510(k) applicant must therefore submit information about the device’s design, characteristics, and relationship to a predicate device, and any data backing up those claims. In contrast to PMA, human-subject clinical trials for safety and effectiveness are typically not required. However, the FDA can respond to a 510(k) application by requesting additional information it deems relevant, which may lead to frustration over the unpredictability of the clearance process.
A 510(k) application is significantly cheaper for the FDA to review, at an estimated average cost of $18,200 per application. A company’s total costs from product concept to clearance is around $31 million on average. Although FDA hopes to reach a final decision on each application within three months, U.S. companies reported an average time of 10 months from first submission of an application to clearance. This faster timeline and the lower evidentiary requirements make 510(k) appealing to device companies over PMA.
A newer, third pathway, known as de novo 510(k) review, attempts to fill the gap between 510(k) and PMA. De novo review applies to innovative devices which fail the substantial equivalency test outlined above yet are not high enough risk to warrant full PMA inspection. The applicant must present data and information demonstrating that controls similar to those applied to 510(k) devices are sufficient to ensure the safety and effectiveness of the device. Once de novo review clears a device, that device becomes a predicate device just like any other 510(k) cleared device and opens up the door for similar devices to be cleared through the regular 510(k) process. De novo review has not been widely used thus far.
FDA also has the ability to monitor devices after they are put on the market. Many PMA approvals require postmarket surveillance studies to gather further safety and efficacy data. Postmarket study may also be required by 510(k) clearance. FDA regulations also mandate reporting of device-related adverse events by device manufacturers and health care facilities, and allow reporting of such events by patients. Lastly, FDA may issue recall orders for marketed devices which are found to pose health hazards.
Outside of the FDA, tort liability offers another avenue for post-market device regulation. Lawsuits following injuries allegedly caused by defective products are typically governed by state law in the relevant jurisdiction. However, reacting to the patchwork of state-level device regulations that arose in the wake of deaths caused by an IUD, Congress included an express preemption provision in the Medical Device Amendments of 1976:
[N]o State or political subdivision of a State may establish or continue in effect with respect to a device intended for human use any requirement–
(1) which is different from, or in addition to, any requirement applicable under this Act [21 USCS §§ 301 et seq.] to the device, and
(2) which relates to the safety or effectiveness of the device or to any other matter included in a requirement applicable to the device under this Act [21 USCS §§ 301 et seq.].
Although the U.S. Supreme Court previously found that 510(k) clearance did not invoke this preemption clause, Justice Scalia’s majority opinion in Riegel v. Medtronic, Inc., 552 U.S. 312 (2008) held that PMA did create requirements for a device which overrule state laws under the explicit preemption clause. Further, the Court held that state common law tort claims are among the laws preempted. As a result, Medtronic could not be held liable after a Medtronic catheter ruptured in Riegel’s right coronary artery. For autonomous robots, this means software errors—embodied by a robot physically injuring a patient—will not lead to tort liability if the device and its software went through the rigorous PMA review process, at least so long as the manufacturer complies with applicable federal regulations.
Thus, a choice between 510(k) and PMA changes not only the regulatory process and applicable federal law, but also decides how state law will apply to the robot. Because no realistic process exists for a third party challenge to a 510(k) or de novo clearance, FDA has significant discretion in deciding what to do with a new technology. Additionally, device companies will certainly shape this territory through the paths they choose to pursue first based on their competitive regulatory strategies. The following section discusses which pathways could apply to surgical robots and considers what regulatory strategy may be best for robotics companies.
b. Regulatory Pathway Decisions for Surgical Robots
The first RASD, da Vinci, was cleared via the 510(k) pathway as being substantially equivalent to the non-robotic laparoscopic tools and holders it aimed to replace. FDA reviewers apparently deemed that the leap from hands-on mechanical control of tools to master-slave computer-mediated control did not raise questions about safety and effectiveness different from those asked of existing devices. Following this precedent, RASD have been cleared via the 510(k) process for the last 15 years.
Recall, however, that the FDA maintains that RASD “are technically not robots, since they are guided by direct user control.” At least as of July 2015, FDA’s acting Deputy Director of the Division of Surgical Devices believed that “to date, FDA has not seen any . . . surgical devices that have autonomous features in them.” Although perhaps repeating this language merely to clarify the state of the art, FDA’s definition of RASD may be signaling how autonomous systems will be regulated.
In particular, FDA’s statement that RASD are not robots implies that any device the FDA recognizes as meeting its definition of an autonomous robot will fail the 510(k) substantial equivalency test. First, because no existing surgical systems are “robots,” no available predicate devices have autonomous features. Second, by implying that FDA knows to look for autonomy in a robotic device, the definition indicates that autonomous capability would be a “different technological characteristic” than anything present in a predicate device under 21 CFR § 807.100(b)(2)(ii)(A). The device must therefore fulfill the two additional requirements found in (ii)(B) and (C), which require both the submission of data to establish safety and effectiveness equivalent to the predicate device and a finding that the “device does not raise different questions of safety and effectiveness than the predicate device.”
Although an applicant could plausibly satisfy (B) with convincing-enough testing, a fully autonomous surgical robot certainly raises a different question of safety and effectiveness than a master-slave RASD. In particular, errors in actively completing the surgical task targeted by the tool—which could previously be blamed on surgeon misuse or mistake—now fall under the immediate scrutiny of device evaluators. Because of this new question, cautious regulators would likely not allow 510(k) clearance for an autonomous surgical robot. For example, the Duke tumor resection robot described above should not receive 510(k) clearance, but instead should be evaluated through PMA, because the closed-loop feedback, tool path planning, and automatic laser steering and firing are new features that ask significant new questions of device safety and effectiveness. These features may pose enough risk that de novo reclassification will also be unavailable at first.
However, a case could be made for the curved tube robot example—or other systems lower on the levels of automation in Table 1—to be cleared through 510(k) or at least through de novo review. Because the surgeon remains in direct control of the tool at the point of interest, the regulator need not ask whether the robot knows how to effectively complete the surgical tasks. Instead, questions about the autonomous capabilities of the tube obstacle avoidance part of the system may be deemed similar enough to questions about the reliability of existing surgical robotic motors, actuators, arms, and end effectors to make 510(k) clearance plausible. Although intraoperative obstacle avoidance is fundamentally different than those issues from an engineering perspective, the analytical jump is no larger than the leap made to clear the first RASD under 510(k). If such a device is seen as posing about the same level of risk as current RASD, de novo clearance may be available. Federal regulators under pressure to keep regulatory costs low and confronted by these borderline systems might be willing to let more and more automation slide into devices through a series of de novo and 510(k) applications.
In terms of what will be preferred by robotics companies, 510(k) clearance is certainly cheaper and may at first appear to be the only way to ensure economic feasibility given the limited size of the surgical robotics market in today’s health care reimbursement climate. De novo review also can save regulatory compliance costs, but could be problematic from the competitive standpoint that it exposes a new kind of technology to copycats who can then use the first device as a predicate device for future 510(k) applications. In other words, the first company that convinces the FDA to approve an autonomous robot through de novo rather than PMA will lower regulatory hurdles—and thus economic barriers to entry—for all other players in the competitive market who can now use that robot as a predicate device for 510(k) submissions. A PMA approved device, still considered high risk, does not create a new predicate device and therefore sustains a high regulatory barrier to competitor entry.
In addition, PMA’s preemption of state-law tort claims against the device manufacturer is particularly advantageous for autonomous robots. A tortious device-related medical injury generally falls into one of two broad categories: a device mechanism fails and injures the patient, or a doctor misuses the tool in a way that injures the patient. Typically, device companies only need to worry about the first category, focused on physical design failures and manufacturing defects. However, when autonomy is introduced into a device, it encroaches on the second category: the software may misuse the physical tools in a way that injures the patient. For device companies considering autonomy, this is the essence of the “embodiment” problem: the code they write may physically injure real people and open the companies up to tort liability for actions formerly carried out only by healthcare professionals.
One might question whether medical malpractice tort reforms, like caps on noneconomic damages, intended to protect physicians and hospitals would now apply to autonomous surgical robots. Regardless, because robots create new ways for a product to hurt someone, developers will likely find that it makes good economic sense to pay extra up front for ex ante PMA review rather than be caught dealing with uncertain, potentially massive tort damages ex post. The estimated $60 million difference in cost to developers from concept to market between 510(k) and PMA approval could be eaten up quickly by a handful of tort cases and the accompanying bad publicity. Adding the consideration that robust clinical data aids marketing, a strategic company may be wise to choose PMA despite its higher price tag and longer timeline.
Patients injured by PMA-approved robots will not be huge fans of this system. Thus far, most RASD-related injuries could be blamed on surgeons. However, autonomous robots might injure a patient even though the human medical staff did everything right. Despite a human urge to blame the closest human operator of automated technology for catastrophic system failures, robot malfunction may be the only legal cause of the patient’s injury in many cases. If the robot went through PMA, Congress’s choice of ex ante regulation means the individual patient may go uncompensated. State courts and legislatures trying to be fair to injured plaintiffs are not likely to be happy with that result. If these injuries become widespread, patients and state courts may hunt for ways around the Medical Device Amendments’ preemption clause, or pressure Congress to modify it.
In sum, 510(k) clearance is the faster, cheaper path to market for medical devices and is the route exploited by current robotically-assisted surgical devices. Some statements by the FDA and its officers suggest that 510(k) clearance will not be available for true surgical “robots” with autonomous capabilities, but the real-world evolution of such clearance will be shaped by the capabilities of early technologies and the competitive regulatory strategies of device companies. Because PMA maintains more initial barriers for competitors and preempts state tort claims, PMA may actually be preferable to 510(k) or de novo review for device companies. PMA thereby offers incentives to develop novel robots, but does so at the cost of quick innovation and robust competition and to the detriment of individual plaintiffs injured by robots. These costs may be justified if the FDA is capable of testing and evaluating robots to ensure a high level of safety and effectiveness, fulfilling the “protect the patients” half of its mission statement. The following section argues that the FDA is capable of understanding and evaluating robots’ novel technical traits like emergent behavior, and shows how investigation of crucial engineering design principles will lead the FDA even deeper into interference with regulation of the practice of medicine.
Emergence: Understanding and Testing Robots in the Face of Uncertainty
Autonomous robots, which make their own decisions based on sensor inputs, stored data, and deterministic algorithms or probabilistic machine learning, may take actions not understood by their users and sometimes not predicted by their designers. Figuring out how to successfully test these “emergent” behaviors would contribute immensely to a world that will soon be teeming with safety-critical autonomous robots. Fortunately, the FDA may be the best equipped of any federal agency to develop and execute sufficient testing programs for new autonomous technologies. At the very least, the FDA is capable of understanding these systems significantly better than state trial courts. The following section attempts to explain why “emergence” makes a system difficult to test and points out the advantages the FDA has in addressing the problem.
Computers and robots are best at highly structured tasks. A simple calculator, for instance, is excellent at executing the axiomatic rules of arithmetic. Computer programs solving structured problems rely on a limited set of inputs and determine outputs based on deterministic algorithms. Manufacturing robots take advantage of this structure by repeating precisely the same motion to fill assembly lines with identical products.
However, most human problems are not so structured: we often deal with uncertainty. In surgery, for example, patients vary in height, weight, physical fitness, and gender, and the presentation of a pathology or injury may change significantly from case to case in terms of anatomical structure. A manufacturing robot that repeats exactly the same motion cannot account for this kind of diversity. Instead, humans cope with the uncertainty of each new circumstance by following experience-based intuition to judge how to proceed.
Robots and computer systems can be designed to deal with this sort of uncertainty in several ways. The first is to redesign the task so that the automated portion is fully structured. For example, the task of mail-ordering a book, perhaps done in the past through a long-hand letter, was replaced by the Amazon “buy-with-one-click” button. Computers can process these standardized button clicks far more easily than they could interpret handwritten letters. For surgical robotics, this example urges roboticists to reimagine the delivery of surgical action rather than attempt to replicate current human surgical practice with a robot.
Second, the scope of issues dealt with by the system can be bounded. As one example, consider an airport self-check-in kiosk. A self-check-in kiosk operates automatically through structured steps when it finds no problems with any of your information, but stops helping and directs you to a human agent when an issue outside its ability arises. By knowing its limits, the kiosk avoids dealing with unusual problems. For surgical robotics applications, the FDA can accomplish this sort of limitation through definition of the device’s “indication”—the disease the device is approved to treat. By approving a robot only for use on a certain pathology, for use on a limited group of patients, or perhaps even for a limited set of presentations of a given disease, some uncertainty in the robot’s operation can be mitigated. Any limitation on the indication for use, however, reduces the pool of potential patients for a device company marketing to hospitals. Additionally, the FDA cannot keep a physician from using a device off-label.
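The kiosk’s “know your limits” pattern can be sketched in a few lines of code. This is purely an illustrative sketch of the design principle, not any real device’s logic; the task names and escalation rules are invented for the example.

```python
# Illustrative sketch of a bounded-scope automated system: it handles only
# the structured cases it was designed (and approved) for, and escalates
# everything else to a human. All names and rules here are hypothetical.

APPROVED_TASKS = {"routine_check_in"}  # the structured tasks the system may handle

def handle_request(task, details):
    """Return an automated result for in-scope tasks; defer otherwise."""
    if task not in APPROVED_TASKS:
        return "escalate_to_human"      # outside the approved scope
    if details.get("flagged_issue"):
        return "escalate_to_human"      # in scope, but an anomaly was detected
    return "boarding_pass_issued"       # fully structured path: automate it

print(handle_request("routine_check_in", {"flagged_issue": False}))  # boarding_pass_issued
print(handle_request("oversized_pet_in_cabin", {}))                  # escalate_to_human
```

An FDA indication for use operates analogously: it draws the boundary of the “approved tasks” set, outside of which the system (or its regulator) expects a human to take over.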
Lastly, the system can be designed to guess a best solution when faced with uncertainty. Guessing—more accurately described as choosing the most likely answer from a group of probability distributions—sounds undesirable, but will be necessary when the other two options are not available. For instance, a self-driving car company cannot afford to redesign all road infrastructure to simplify driving, nor will it be safe to hand off control to humans in all unusual situations. Thus, the self-driving car will need to deal with the uncertainty inherent in the human task of driving and stemming from the limitations of its sensors. Is that object detected by the sensors a person or a tree? How likely is it that the pedestrian will cross the road? Who will move first at a four-way-stop? To deal with such questions, autonomous decision making begins to rely on choosing the most likely solution based on probability distributions instead of finding deterministic solutions.
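The “choose the most likely answer” strategy can be made concrete with a toy Bayesian example. All of the probabilities below are invented for illustration; real perception systems estimate such distributions from sensor models and training data.

```python
# Toy example of probabilistic decision making: given a noisy sensor
# reading, pick the hypothesis with the highest (unnormalized) posterior,
# i.e., prior belief times the likelihood of the observation.
# All numbers are invented for illustration.

def most_likely(priors, likelihoods):
    """Return the hypothesis maximizing prior * likelihood (Bayes' rule)."""
    return max(priors, key=lambda h: priors[h] * likelihoods[h])

# Prior beliefs about a roadside object, before the sensor reading arrives.
priors = {"pedestrian": 0.3, "tree": 0.7}
# Probability of the observed return ("tall, narrow, moving") under each hypothesis.
likelihoods = {"pedestrian": 0.8, "tree": 0.1}

print(most_likely(priors, likelihoods))  # pedestrian: 0.3*0.8 = 0.24 beats 0.7*0.1 = 0.07
```

The decision is a guess in exactly the sense described above: even the best choice carries a quantifiable chance of being wrong.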
Rather than trying to explicitly write out all of the rules needed to navigate situations like this, software engineers use a technique known as “machine learning.” In essence, the robot is ‘trained’ to do something. Perhaps the best example of machine learning for lawyers to understand is e-discovery document review, which allows a computer program to assist attorneys in sorting through gigabytes of digital document files during discovery. To operate this software, an attorney starts by working through a training set of documents, indicating which are relevant to the case and which are not. Based on the attorney’s labeling, the computer ‘learns’ what words, patterns of words, and other traits are present in the ‘relevant’ set of documents. The program then sorts the rest of the documents based on these learned standards. These systems are much more effective than simple term searching. Importantly, because neither the user nor the original software designer knows exactly what the computer has decided to look for in the documents, neither person will be able to explain exactly why the system returned a particular document as relevant. In some cases, the system may flag a document for reasons no one would have predicted. For more complicated algorithms and problems, unpredictability increases and the robot’s thinking becomes a black box. The main takeaway of this explanation is the phenomenon Ryan Calo calls “emergence”: a robot trained to deal with uncertainty may exhibit behavior not foreseen by the designer.
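The e-discovery workflow described above can be sketched with a miniature learned classifier. This toy version merely scores documents by word frequencies learned from labeled examples; real review systems are far more sophisticated, and the training documents below are invented for illustration.

```python
from collections import Counter

# Miniature 'e-discovery' classifier: learn which words appear in documents
# an attorney labeled relevant, then score new documents against those
# learned word counts. The training set is invented for illustration.

def train(labeled_docs):
    """Count word frequencies separately for relevant and irrelevant docs."""
    counts = {"relevant": Counter(), "irrelevant": Counter()}
    for text, label in labeled_docs:
        counts[label].update(text.lower().split())
    return counts

def is_relevant(counts, text):
    """Score a new document: does its vocabulary look more 'relevant'?"""
    words = text.lower().split()
    rel = sum(counts["relevant"][w] for w in words)    # missing words count as 0
    irr = sum(counts["irrelevant"][w] for w in words)
    return rel > irr

training = [
    ("merger agreement draft attached", "relevant"),
    ("board discussed the merger terms", "relevant"),
    ("lunch order for friday", "irrelevant"),
    ("parking garage closed tuesday", "irrelevant"),
]
model = train(training)
print(is_relevant(model, "revised merger agreement"))  # True
print(is_relevant(model, "friday lunch menu"))         # False
```

Note that the learned word counts, not any rule an engineer wrote, determine the outcome; with richer features and larger training sets, explaining any single classification becomes correspondingly harder.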
Unpredictability is problematic for a regulatory system which seeks to ensure safety. When a robot is acting based on probabilistic guesses or sensors with error rates, a chance of mistake is always present and that mistake may not be foreseeable to designers or users. At present, the proper way to test autonomous systems which operate in environments with high uncertainty is an open question of extraordinary importance. Real-world testing would require an unreasonable number of test trials to reach acceptable levels of certainty. To return to the self-driving car example, a tested car could drive hundreds of millions of miles along every street in the country and still fail to test obscure combinations of weather, traffic, pedestrians, construction workers, and road conditions that the car cannot safely handle. Computer models of a system can more quickly test across all random combinations of modelled variables, but any results are limited by the fact that a model is inherently a simplified version of the real world system.
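The testing burden described above can be quantified with a standard statistical rule of thumb. Under the “rule of three,” observing zero failures in n independent trials bounds the per-trial failure rate below roughly 3/n at 95% confidence, so demonstrating very small failure rates requires enormous numbers of trials. The specific numbers below are illustrative.

```python
import math

# 'Rule of three': to demonstrate a per-trial failure rate below p with
# 95% confidence via failure-free trials, one needs roughly 3/p trials.
# Derived from (1 - p)^n = 0.05  =>  n = ln(0.05) / ln(1 - p).

def trials_needed(p, confidence=0.95):
    """Failure-free trials needed to bound the failure rate below p."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# Treating each driven mile as a trial, bounding the fatal-fault rate
# below one per 100 million miles (roughly the human driving fatality
# rate) requires on the order of 300 million failure-free test miles:
print(trials_needed(1e-8))
```

And this bound assumes independent, representative trials; rare combinations of weather, traffic, and road conditions make real-world driving miles far from a uniform sample, pushing the practical requirement higher still.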
Unlike other regulatory bodies confronting emergent robot behavior (e.g., the National Highway Traffic Safety Administration), FDA has experience determining safety in the face of unpredictability. In pharmaceutical evaluation, FDA’s highest-profile job, the true effect of a drug in the human body is always uncertain. Even after millions of dollars in pre-clinical research, only about one in ten drugs that start Phase I safety clinical trials makes it through to approval. The drugs that are approved often have side effects that cannot be fully mitigated or predicted, and are more effective for some patients than others for unknown reasons. Despite all the unknowns surrounding pharmacological action in diverse people’s bodies, FDA manages to certify drugs for market.
Drugs are a significantly different technology than stochastic robots, and the gold standard of clinical drug trials may not be feasible for robots, nor would it capture all of the relevant risks. As mentioned earlier, proper methodologies for testing autonomous systems in any context remain open research questions. Despite the differences between drugs and robots, the comparison at least makes clear that the unpredictability of robot control and “emergence” are no scarier than what the FDA regularly analyzes.
Even in the device space, FDA has experience with cyber-physical systems that nearly fit its definition of “robot.” A pacemaker, for example, has sensors that read electrical signals from a patient’s heart, a computer that deterministically processes those signals and looks for irregularities, and the ability to send electric pulses when necessary to correct arrhythmias. The only piece missing from the FDA’s definition of “robot” is the ability to move in physical space. Pacemakers go through PMA, and FDA has apparently been successful at certifying these deterministic systems.
As a specific example of automated device regulation, FDA recently approved Johnson & Johnson’s Sedasys sedation system, which automates anesthesiology. The device monitors a patient’s breathing, heart rate, and blood oxygen levels; calculates an appropriate dosage of a sedation drug; and administers the drug through an intravenous line. FDA approved the device via PMA, limiting the device’s indication for use to colonoscopies and endoscopies in healthy patients where an anesthesia professional is “immediately available for assistance or consultation.” Even for that limited indication, Sedasys is poised to capture significant value from the one-billion-dollar-per-year market for colonoscopy-related anesthesia services. While FDA thus has recent experience reviewing and approving an autonomous medical device that may replace a medical specialty, Sedasys has not yet been used widely enough in practice to know whether FDA’s testing was really effective.
Beyond past experience, FDA is committed to staying ahead of the curve on technical development. FDA’s Center for Devices and Radiological Health has an Office of Science and Engineering Laboratories with several hundred employees who work on regulatory science, and some of these researchers are beginning to study how to evaluate autonomous systems. The Office of Device Evaluation has also been increasing its software expertise, as exhibited in a recent guidance document about cybersecurity. In general, FDA review is interdisciplinary and draws on expertise from across engineering and statistical specialties, and will now include computer scientists and roboticists. Like everyone else trying to test autonomous systems, FDA does not yet know how to evaluate stochastic robots and should be actively working to figure it out. But, as put by one expert on government bureaucracies, “the FDA has the resources to keep its people up to date on technology, hire people with new skills, and, when needed, bring in outside expertise through advisory panels.” FDA is thus relatively well situated to eventually handle new challenges posed by robotics.
One field of expertise that will be critical for ensuring safety in autonomous systems is human factors engineering, which focuses on the ways humans interact with technology. Human factors engineering—and its subfield, human-robot interaction—applies knowledge of human sensory, cognitive, and physiological capabilities and limitations to guide product design with the goal of making products safe and easy to use. For the FDA—which does not have the authority to directly dictate physician practice or punish misuse of a device—human factors principles can be used to reach across that line by requiring a device to be designed and sold in a way that minimizes the risk of operator errors. In simpler terms, a medical device is neither dangerous nor effective until someone tries to use it, so FDA has an interest in studying how the human uses the tool.
Recently, FDA released a guidance document reemphasizing the importance of human factors testing for medical devices. This document outlines requirements for usability testing, which assesses “user interactions with a device user interface to identify use errors that would or could result in serious harm to the patient or user.” A related proposed guidance offers a list of technology types that will require human factors testing and includes surgical robotics.
With increasing automation, careful analysis of human-robot interaction principles becomes even more critical to system safety. The highest profile failures of automated systems—from the Three Mile Island partial meltdown to the 2009 Air France flight 447 crash—cannot be fairly characterized as only a user failure or only an automation failure, but instead resulted from a breakdown in the interaction between humans and automated systems. In response, experts in human-robot interaction have identified several key problems that must be addressed when designing or regulating safety critical systems. First, “mode confusion” refers to situations where the human user does not understand what the system is doing or why it is doing it. Caused by system complexity or insufficient information communication—and especially problematic for “emergent” system behaviors—mode confusion can lead humans into dangerous decisions made with poor situational awareness. Second, automation may reduce a person’s mental workload, leading to boredom and an immediate degradation in performance, while the skills needed to carry out the task without the robot erode over time. Third, an appropriate level of trust in the robot is hard to instill, given human over-reliance on generally successful automation and human annoyance with frequent false alarms.
To create a safe automated system, all of these factors must be considered when designing or testing a surgical robot. Testing for these factors requires observation of real systems in use by real users while monitoring workload, performance, error rates, and other human behavior to locate the potential for mistakes. These tests necessarily involve observing users in their natural environment, which for the FDA means medical professionals in hospitals and clinics. FDA’s update to its human factors engineering guidance document, and the related proposed guidance requiring human factors submissions for robotic surgery devices, indicate that the agency has begun to orient itself to these problems.
In practice, FDA’s approval letter for the automated anesthesiology system Sedasys required a post-market study of how users respond to system alarms, to see whether real users would listen to what the system was trying to tell them, and an evaluation of whether the availability of a professional anesthesiologist for emergency intervention is necessary. Through these tests and associated limitations on device use, FDA is using human factors engineering tests and principles to influence physician and hospital practice as much as possible without directly regulating the practice of medicine.
In sum, FDA is charged with evaluating device safety and effectiveness, and is certainly capable of doing so even for robots with emergent behavior. Through human factors testing, which will only become more critical with the introduction of autonomy, FDA can examine and somewhat constrain the actions of healthcare professionals in an effort to maximize device safety. FDA is generally regarded as lacking the authority to regulate “the practice of medicine.” Human factors tests and related policies blur the line between device regulation and physician practice, particularly as robots demand more thorough user testing and training. Therefore, in order to sufficiently test and effectively monitor the use of surgical robots, FDA will necessarily expand its influence over the practice of surgery.
Moving forward, as described in the next section, the robot itself might be a social actor practicing medicine. Regulation of such robots leads the federal government undeniably into territory previously reserved to the States.
Social Valence: Surgical Robots as Anthropomorphized Members of the Surgical Team.
After robots get past the FDA’s ex ante regulation by demonstrating safety, effectiveness, and usability, the next question becomes how the new technology fits into the social structure of the operating room. Human-robot interaction research has found that robots evoke different human responses than traditional tools. For example, a Roomba autonomous vacuum cleaner robot “has no social skills whatsoever, but just the fact that it moves around on its own prompts people to name it, talk to it, and feel bad for it when it gets stuck under the couch.” More dramatically, soldiers using robotic units to deal with improvised explosive devices in Iraq and Afghanistan began to name their robotic tools, award them medals, and even hold funerals for them. Empirical robotics research backs up this anecdotal notion that humans tend to anthropomorphize robots and attribute social value to them. In other words, as a robot begins to move in the physical world apparently under its own volition, people come to value the robot less like a toaster and more like a pet. Embracing this human psychological response, Ryan Calo’s last disruptive trait of robotics is “social valence,” the idea that people tend to ascribe anthropomorphic social traits to autonomous robots. For the law, social valence raises questions about allowing robots to hold legal rights and responsibilities.
In the surgical setting, this human-robot interaction research implies that autonomous surgical robots will feel less like another tool and more like a member of the surgical team. An operating room is already a dynamic team environment—an attending surgeon oversees several nurses and residents while working with an anesthesiologist to coordinate complicated preparatory, sterilization, and surgical tasks. For the surgeon, deploying a surgical robot to complete certain tasks will likely feel more like delegating a task to a resident than using a scalpel, especially if the robot is designed with social behaviors like voice recognition or facial expressions. If the robot acquires anthropomorphic social value similar to that of its human coworkers, people might feel like the robots should be treated more like the humans under the law. Like the human members of the surgical team, robots might be seen as social actors engaged in the practice of medicine.
Then, FDA review, regulation, and control of a robot’s medical practice would directly infringe on each state’s traditional control over the actors it allows to practice medicine in its jurisdiction. When the federal government encroaches on state power in this way, the broadest legal question becomes whether the Constitution allows—that is, whether it authorizes and does not prohibit—the federal government to directly regulate the practice of medicine.
First, the federal government is constitutionally authorized to regulate the practice of medicine, especially by robots. Under the Commerce Clause, regulations related to surgical robots or other medical devices are authorized as being directed at items traded in interstate commerce. Additionally, because health care costs are a significant portion of the nation’s economy, services provided by medical professionals are commercial in nature, and managed care networks and hospital chains are increasingly multi-state operations, Congress is authorized to regulate the practice of medicine because it has a substantial relation to interstate commerce. The spending power could also be leveraged, given that the federal government spends the majority of the money in the healthcare sector.
Second, nothing prohibits the federal government from regulating the practice of medicine. In an earlier era of federalism jurisprudence, the Supreme Court wrote that “[o]bviously, direct control of medical practice in the states is beyond the power of the federal government.” The Court, however, upheld the federal controlled substances statute at issue in that case, and the very next year upheld a Prohibition-era medicinal liquor prescribing law over the objections of four dissenters who argued that states held “the exclusive power . . . of controlling medical practice.” Forecasting a doctrine more akin to what is accepted today, Justice Brandeis wrote for the majority in the latter case that “[w]hen the United States exerts any of the powers conferred upon it by the Constitution, no valid objection can be based upon the fact that such exercise may be attended by some or all of the incidents which attend the exercise by a State of its police power.” Since then, “the full Court has discarded the notion that the Tenth Amendment allocates exclusive authority over certain domains to the states,” opening the door for the federal government to regulate medical practice. Robots may very well push the FDA through that door. Thus, the regulation of the practice of medicine, especially by robots, is within the power of the federal government.
A state’s response to robots practicing medicine may be to attempt to license them similar to physicians or pharmacists. Under current law, a state licensing requirement for a PMA robot would be preempted by the Medical Device Act because it adds an additional requirement to those imposed by federal law with respect to the device itself. In fact, Justice Scalia’s account of the history of the MDA in Riegel suggests state regulation of this type was the target of the preemption provision, and robot licensure would fit cleanly into the language of the preemption clause. State licensure would not be preempted for a 510(k) device, creating another reason for device developers to pursue PMA.
A related question is whether the preemption provision would apply to state robot-related rules that do not act directly on the robots. For instance, state medical boards, led by physicians afraid of losing their jobs to robots, might try to restrict who is allowed to operate a surgical robot. A state could decide, for example, that a nurse practitioner violates scope-of-practice laws if he or she uses a surgical robot on a patient without a physician present. When FDA approvals include a requirement on the personnel needed to operate a device—like the requirement that an anesthesiology professional be available in the Sedasys indication—such scope-of-practice laws may be at odds with federal regulation.
Currently, the FDA regulation on the preemption clause claims the clause does “not preempt State or local permits, licensing . . . or other requirements relating to the approval or sanction of the practice of medicine or . . . related professions that administer, dispense, or sell devices.” However, the majority opinion in Riegel was not impressed by FDA’s interpretation of the preemption clause, stating that the agency’s interpretation “can add nothing to our analysis but confusion.” Following the Supreme Court’s indifference to FDA’s interpretation, a court might find that state scope of practice laws about robots—even when not directly regulating the robots—relate to the safety or effectiveness of the medical devices, are different from federal requirements applicable to the devices, and are therefore preempted.
With robots also coming to other phases of health care—for example, elder care robots are of particular interest to many researchers—these issues will not be confined to the surgical arena. Across the healthcare sector, regardless of how particular futuristic cases play out, autonomous robots will take on social value and disrupt the existing federalist framework for regulating the practice of medicine.
Autonomous surgical robots are coming. Robots will use sensors to measure patients’ physiology and pathologies, use that information to plan how to complete the necessary surgical operations, and execute those plans by physically moving surgical instruments, all without direct, immediate human control. Device manufacturers are likely to submit these robots for FDA approval via the PMA pathway which, although slower and costlier than its alternatives, provides the competitive barrier to entry of a more thorough safety review and the advantage of preemption of state tort and licensure laws. FDA has experience evaluating unpredictable new technologies, and is well-equipped to evaluate this incoming wave of surgical robot applications. In particular, human factors testing will help FDA assure that robots are designed to minimize mistakes on the part of their human supervisors. As FDA-approved surgical robots assume social status within operating room staffs, federal and state law are likely to clash over control of the practice of medicine.
This paper discussed a specific application for robots. Alongside surgical robots, other robots and cyberphysical systems will emerge for other healthcare applications as well as in other industries. The hope is that this discussion contributes to a broader discourse on robots and law in several ways. First, it demonstrates the value of discussing the legal issues in some particularity, with reference to particular industries, regulatory regimes, engineering principles, and examples of at least a few real robots. Second, it explores how at least one federal agency will be capable of evaluating new robots, albeit slowly and expensively. Third, it illustrates how questions in robot law will often involve new interactions between state and federal law.
Studying surgical robot law is a deeply interdisciplinary endeavor. This paper’s goal is to distill some related concepts into language that is understandable across disciplines. Guiding and stimulating early discussions between patients, physicians, nurses, healthcare administrators, insurance companies, regulators, lawyers, device companies, and roboticists—or at least the academics who study those things—could have a lasting, positive impact on the safe and effective development of incredible new technologies.
Medical Robotics and Computer-Assisted Surgery: The Global Market, BCC Research (June 2014) (“The global market for medical robotics and computer-assisted surgical equipment was worth nearly $2.7 billion in 2013. The market is projected to approach $3.3 billion in 2014 and $4.6 billion by 2019.”).
 Intuitive Surgical, Annual Report 2014, http://phx.corporate-ir.net/External.File?item=UGFyZW50SUQ9Mjc0MjUxfENoaWxkSUQ9LTF8VHlwZT0z&t=1 (last accessed Feb. 23, 2016).
 Gabriel I Barbash & Sherry A. Glied, New Technology and Health Care Costs — The Case of Robot-Assisted Surgery, New England Journal of Medicine 363 (8): 701–4. (2010); Shawn Tsuda et al., SAGES TAVAC Safety and Effectiveness Analysis: Da Vinci® Surgical System (Intuitive Surgical, Sunnyvale, CA), Surgical Endoscopy 29 (10): 2873–84 (2015).
 Id.; Wright et al., “Robotically assisted v. laparoscopic hysterectomy among women with benign gynecologic disease.” JAMA 309(7):689-698 (“To date, robotically assisted hysterectomy has not been shown to be more effective than laparoscopy” despite being “substantially more expensive than any other modality of hysterectomy.”) (2013); Huang et al., “Systematic review and meta-analysis of robotic versus laparoscopic distal pancreatectomy for benign and malignant pancreatic lesions.” Surgical Endoscopy 2016 online http://link.springer.com/article/10.1007%2Fs00464-015-4723-7 . (finding no difference in rate of complications or outcomes); Wright et al., “Comparative Effectiveness of Robotic Versus Laparoscopic Hysterectomy for Endometrial Cancer.” J. Clinical Oncology vol. 30 no. 8 783-791 (March 10, 2012) (“there were no significant differences in the rates of intraoperative complications, . . . surgical site complications, . . . medical complications . . . , or prolonged hospitalization.” But “robotic hysterectomy was significantly more costly.”)
 Intuitive Surgical, supra note 2.
 Chris Isidore, What’s the safest way to travel (May 13 2015) http://money.cnn.com/2015/05/13/news/economy/train-plane-car-deaths/ .
 See, e.g., Robert Montenegro, Google’s Self-Driving Cars Are Ridiculously Safe http://bigthink.com/ideafeed/googles-self-driving-car-is-ridiculously-safe.
 Ryan Calo, Robots in American Law, We Robot 2016, 13. Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2737598.
 Robot law scholars like to quibble about the definition of “robot.” See Froomkin, Introduction, Robot Law (Calo et al., eds. 2016).
 FDA, Discussion Paper: Robotically-Assisted Surgical Devices (2015), available at http://www.fda.gov/downloads/MedicalDevices/NewsEvents/WorkshopsConferences/UCM454811.pdf.
 Id. (FDA adopted the International Organization for Standardization (ISO) definition of “robot” from ISO 8373:2012(en), where robot is defined as an “actuated mechanism programmable in two or more axes with a degree of autonomy, moving within its environment, to perform intended tasks.” For consistency, I try to follow this definition when I use the term “robot” within this article. “Robotic” is used herein to refer to electro-mechanical systems without autonomy.)
 About Us, VerbSurgical.com, http://www.verbsurgical.com/about-us/ (last visited May 2, 2016).
 Majdani et al., A Robot-Guided Minimally Invasive Approach for Cochlear Implant Surgery: Preliminary Results of a Temporal Bone Study, Int’l J. Computer Assisted Radiology and Surgery 4 (5): 475–86. (2009).
 Kehoe et al., Autonomous Multilateral Debridement with the Raven Surgical Robot, IEEE Int’l Conf. on Robotics & Automation (2014) Available at http://cal-mr.berkeley.edu/papers/Kehoe-icra-2014-final.pdf .
 Leonard, et al., Smart Tissue Anastomosis Robot (STAR): A Vision-Guided Robotics System for Laparoscopic Suturing, IEEE Transactions on Bio-Medical Engineering 61 (4): 1305–17. (2014).
 Torres et al., A Motion Planning Approach to Automatic Obstacle Avoidance during Concentric Tube Robot Teleoperation, http://robotics.cs.unc.edu/publications/Torres2015_ICRA.pdf (Last accessed May 2, 2016).
 The author of this paper is a member of the research group developing this robot. No publications on the robot are available yet.
 Researchers have yet to determine what role the surgeon should have in monitoring the system. See infra note 24 and accompanying table.
 Active cannula, Vanderbilt Institute in Surgery and Engineering, https://www4.vanderbilt.edu/vise/viseprojects/active-cannula/.
 Torres, supra note 22.
 Raja Parasuraman, Thomas B. Sheridan, & Christopher D. Wickens, A Model for Types and Levels of Human Interaction with Automation, IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, vol. 30, no. 3 (May 2000); Mary Cummings, Man versus Machine or Man + Machine, IEEE Intelligent Systems (2014) (available at http://hal.pratt.duke.edu/sites/hal.pratt.duke.edu/files/u10/IS-29-05-Expert%20Opinion%5B1%5D_0.pdf).
 Parasuraman, supra note 24.
 See, e.g., Robot Law (Calo, Froomkin, Kerr, eds. 2016); See also We Robot Conferences (2012-2016) at robots.law.miami/2016.
 Ryan Calo, Robotics and the Lessons of Cyberlaw, Cal. L.R. Vol. 103:513-564 (2015).
 Id. at 532.
 Ryan Calo once made the case for the creation of a new Federal Robotics Commission, which would offer technical support, guidance, and resources to other agencies encountering new robot-related problems. Ryan Calo, The Case for a Federal Robotics Commission, Brookings (2014) (available at http://www.brookings.edu/research/reports2/2014/09/case-for-federal-robotics-commission).
 See, e.g., James Flaherty, Jr., Defending Substantial Equivalence: An Argument for the Continuing Validity of the 510(k) Premarket Notification Process, 63 Food Drug L.J. 901.
 FDA Mission Statement, http://www.fda.gov/downloads/aboutfda/reportsmanualsforms/reports/budgetreports/ucm298331.pdf . (last accessed March 22, 2016).
As stated by William Maisel, FDA’s acting director of the Office of Device Evaluation, at a public workshop on Robotic Assisted Surgical Devices (July 27, 2015), “[T]he first prong of our vision is that patients in the U.S. have access to high quality, safe and effective medical devices of public health importance first in the world. …if we set our evidentiary bars to high, then a lot of really great ideas will never make it. And so, we have to appropriately balance the availability of these technologies, getting these technologies to market and also make sure that they remain safe and effective. …[W]e also need to think about what is the cost of the development of the technology… [I]f studies, the cost of developing a technology is too high, then many of those technologies will never make it to patients. And so, striking the right balance important.” (transcript available at http://www.fda.gov/MedicalDevices/NewsEvents/WorkshopsConferences/ucm435255.htm).
 21 U.S.C. § 360c(a)(1)(C)
 21 C.F.R. 814
 21 C.F.R. 814.20(6)(ii)
 Congressional Research Service, FDA Regulation of Medical Devices, 12–13 (2012).
Gov’t Accountability Off., Shortcomings in FDA’s Premarket Review, Postmarket Surveillance, and Inspections of Device Manufacturing Establishments: Testimony Before the Subcommittee on Health, Committee on Energy and Commerce, House of Representatives, 5 (2009), http://www.gao.gov/new.items/d09370t.pdf.
 Josh Makower, Aabed Meer & Lyn Denend, FDA Impact on U.S. Medical Technology Innovation: A Survey of Over 200 Medical Technology Companies 23 (Nov. 2010) (available at http://advamed.org/res.download/30).
 Id. at 28 (notably, not all of those costs are directly attributable to regulatory compliance activities).
 FDA, Devices Approved in 2015, http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/DeviceApprovalsandClearances/PMAApprovals/ucm439065.htm.
 FDA, Devices Cleared in 2015, http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/DeviceApprovalsandClearances/510kClearances/ucm432160.htm.
 21 C.F.R. § 807.100(b).
 Congressional Research Service, FDA Regulation of Medical Devices, 9 (2012).
 21 C.F.R. § 807.100(a)(3).
 Makower, supra note 41, at 26 (a Stanford-based survey of 200 U.S. medical device companies found that over half ranked FDA regulatory performance as “mostly unpredictable” or “very unpredictable,” compared to less than 5% of respondents ranking European Union device regulation in the same category).
 Gov’t Accountability Off., supra note 40, at 5.
 Makower, supra note 41, at 7.
 Id. at 26.
 FDA, De Novo Classification Process (Evaluation of Automatic Class III Designation): Draft Guidance for Industry and FDA Staff (Aug. 14, 2014).
 New Section 513(f)(2) – Evaluation of Automatic Class III Designation, Guidance for Industry and CDRH Staff, http://www.fda.gov/RegulatoryInformation/Guidances/ucm080195.htm (last accessed May 2, 2015).
 Only 10 de novo device reclassification decisions were made in 2015. Device Classifications Under Section 513(f)(2) (De Novo), database at http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfPMN/denovo.cfm (last accessed May 3, 2016).
 21 U.S.C. § 360c(a)(1)(B).
 Congressional Research Service, supra note 46, at 15–16.
 21 C.F.R. pt. 810.
 Riegel v. Medtronic, Inc., 552 U.S. 312, 315–16 (2008).
 21 U.S.C. § 360k.
 Medtronic, Inc. v. Lohr, 518 U.S. 470 (1996).
 Riegel, 552 U.S. at 322–23.
 Id. at 323–26.
 Id. at 320.
 See Hughes v. Boston Scientific Corp., 631 F.3d 762 (5th Cir. 2011) (holding that plaintiff’s state law failure-to-warn claim was not preempted where it was based on Boston Scientific’s failure to report serious injuries or malfunctions related to the device as required by FDA regulations); see also Bausch v. Stryker Corp., 630 F.3d 546 (7th Cir. 2010) (preemption protection “does not apply where the patient can prove that she was hurt by the manufacturer’s violation of federal law.”).
See, e.g., Ivy Sports Medicine v. Burwell, 767 F.3d 81 (D.C. Cir. 2014) (holding that revocation of a 510(k) clearance requires notice-and-comment rulemaking to reclassify the device as a higher-risk Class III device).
 Maisel, supra note 35, at 27–28.
 There is no public record of the application or the FDA’s reasoning behind clearing the da Vinci. That is, the FDA database shows no “summary” for the first da Vinci clearance, unlike for many other devices. See http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm?ID=K002489.
 Maisel, supra note 35, at 28.
 Maisel, supra note 35, at 27.
 21 C.F.R. § 807.100(b); see supra note 45 and accompanying text.
 Michael Drues, Secrets of the De Novo Pathway, Part 2: Is De Novo Right For Your Device?, Med Device Online (2014), http://www.meddeviceonline.com/doc/secrets-of-the-de-novo-pathway-part-is-de-novo-right-for-your-device-0001 (last accessed May 2, 2016).
 Makower, supra note 41; see supra note 51 and accompanying text.
 See, e.g., Taylor v. Intuitive Surgical, Inc., 188 Wn. App. 776 (Wash. Ct. App. 2015), petition for review granted (Wash. Feb. 10, 2016).
 M.C. Elish, Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction, We Robot 2016, http://robots.law.miami.edu/2016/wp-content/uploads/2015/07/ELISH_WEROBOT_cautionary-tales_03212016.pdf.
 Riegel, supra note 60.
 See, e.g., Dana Remus & Frank Levy, Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law (2016), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2701092.
 Mary Cummings, Man versus Machine or Man + Machine, IEEE Intelligent Systems (2014), http://hal.pratt.duke.edu/sites/hal.pratt.duke.edu/files/u10/IS-29-05-Expert%20Opinion%5B1%5D_0.pdf.
 Remus & Levy, supra note 83, at 10-12.
 Id. at 11.
 Emphasizing the importance of the indication choice, the scope of indications for RASD was the topic of an FDA public workshop in July 2015. http://www.fda.gov/MedicalDevices/NewsEvents/WorkshopsConferences/ucm435255.htm.
 21 U.S.C. § 396.
 Remus & Levy, supra note 83, at 12.
 See Harry Surden & Mary-Anne Williams, Self-Driving Cars, Predictability, and Law, We Robot 2016, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2747491.
 Remus & Levy, supra note 83, at 16–18.
 Calo, supra note 27.
 See Harry Surden & Mary-Anne Williams, supra note 96.
 See, e.g., Miles S. Thompson, Evaluating Intelligent Systems with Performance Uncertainty in Large Test Spaces, Proceedings of the 10th Performance Metrics for Intelligent Systems Workshop (2010), http://dl.acm.org/citation.cfm?id=2377602.
 Nidhi Kalra & Susan M. Paddock, Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?, RAND Corporation (2016), http://www.rand.org/pubs/research_reports/RR1478.html (finding that “fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their safety in terms of fatalities and injuries”).
Patrick J. Hayes, The Frame Problem and Related Problems in Artificial Intelligence, Stanford Univ. Dept. of Computer Science (1971).
 Bill Berkrot, Success rates for experimental drugs falls: study, Reuters (2011), http://www.reuters.com/article/us-pharmaceuticals-success-idUSTRE71D2U920110214.
 Although beyond the scope of this project, one could imagine designing a testing protocol for robots based on the staged approval process for drugs, building from lab testing, simulations, and animal models into multiphase safety and effectiveness human-subject studies. Differences would of course arise: for example, it makes no sense to replicate Phase I safety trials by testing a brain tumor removal robot on a healthy patient by having it carve out part of his brain. Failure modes are also different: a robot might have problems with power outages or earthquakes or other events that might not occur at all—or could not ethically be inflicted—during a clinical trial.
 FDA, SEDASYS Computer-Assisted Personalized Sedation System, http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/DeviceApprovalsandClearances/Recently-ApprovedDevices/ucm353950.htm (2015) (recall J&J’s involvement in Verb Surgical, supra note 14 and accompanying text).
 Todd C. Frankel, New machine could one day replace anesthesiologists, Wash. Post (May 11, 2015), http://www.washingtonpost.com/business/economy/new-machine-could-one-day-replace-anesthesiologists/2015/05/11/92e8a42c-f424-11e4-b2f3-af5479e6bbdd_story.html.
 SEDASYS, supra note 111. Use of the automated system without an anesthesiologist available would merely be an off-label use, which the FDA has no power to restrict. 21 U.S.C. § 396.
 Frankel, supra note 112.
 Aaron Mannes, Institutional Options for Robot Governance, We Robot 2016, at 16, http://robots.law.miami.edu/2016/wp-content/uploads/2015/07/Mannes_RobotGovernanceFinal.pdf.
 Id.; see, e.g., FDA, Postmarket Management of Cybersecurity in Medical Devices: Draft Guidance for Industry and Food and Drug Administration Staff (2016), http://www.fda.gov/downloads/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm482022.pdf.
 Mannes, supra note 116, at 16.
 See, e.g., Testimony of Mary (Missy) Cummings, Hands Off: The Future of Self-Driving Cars, Hearing Before the Senate Committee on Commerce, Science, and Transportation (March 15, 2016).
 Human Factors: The Journal of the Human Factors and Ergonomics Society, http://www.hfes.org/publications/ProductDetail.aspx?ProductId=1 (“Papers published in Human Factors leverage fundamental knowledge of human capabilities and limitations – and the basic understanding of cognitive, physical, behavioral, physiological, social, developmental, affective, and motivational aspects of human performance – to yield design principles; enhance training, selection, and communication; and ultimately improve human-system interfaces and sociotechnical systems that lead to safer and more effective outcomes.”).
 21 U.S.C. § 396 (“Nothing in this chapter shall be construed to limit or interfere with the authority of a healthcare practitioner to prescribe or administer any legally marketed device to a patient for any condition or disease within a legitimate health care practitioner-patient relationship.”)
 FDA, Applying Human Factors and Usability Engineering to Medical Devices: Guidance for Industry and Food and Drug Administration Staff (Feb. 6, 2016), http://www.fda.gov/downloads/MedicalDevices/…/UCM259760.pdf.
 Id. at 3 (Def. 3.7).
 FDA, List of Highest Priority Devices for Human Factors Review: Draft Guidance for Industry and Food and Drug Administration Staff (Feb. 3, 2016), http://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/UCM484097.pdf.
 See, e.g., M.C. Elish, supra note 81.
 See, e.g., Butler et al., A Formal Methods Approach to the Analysis of Mode Confusion, Proceedings of the 17th Digital Avionics Systems Conference.
 E.g., Reuters & Andy Eckardt, Controller in Deadly German Train Crash Was Playing Game on Phone, NBC News (Apr. 12, 2016), http://www.nbcnews.com/news/world/controller-deadly-german-train-crash-was-playing-game-phone-prosecutors-n555121 (the train controller was playing a cellphone game at the time of the crash; journalists and the train company said this was not a “technical problem,” but human factors engineers would argue that an automated system design that leaves the human so bored that he plays a game instead of monitoring the train is itself an engineering design flaw).
 See, e.g., Raja Parasuraman & Christopher D. Wickens, Humans: Still Vital After All These Years of Automation, Human Factors 512–20 (June 2008), http://peres.rihmlab.org/Classes/PSYC6419seminar/ParasuramanWickens08.pdf.
 Christy Foreman, Office Director, Approval Letter: SEDASYS Computer-Assisted Personalized Sedation System (May 3, 2013), http://www.accessdata.fda.gov/cdrh_docs/pdf8/p080009a.pdf.
 The scope of this paper leaves out discussion of the “payer” side of the health care system: that is, will Medicare/Medicaid and private insurers agree to pay for robot surgery?
 Kate Darling, Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects, in Robot Law 217 (Calo et al. eds., 2016).
 Nidhi Subbaraman, Soldiers <3 robots: Military bots get awards, nicknames … funerals, NBC News (2013), http://www.nbcnews.com/technology/soldiers-3-robots-military-bots-get-awards-nicknames-funerals-4B11215746. Further, a rumor in the HRI and robot law community tells of soldiers trying to sacrifice themselves to save robots, but a reliable source for that story cannot be located.
 See, e.g., Darling, supra note 132; Kate Darling, “Who’s Johnny?” Anthropomorphic framing in human-robot interaction, integration, and policy, We Robot 2015 http://www.werobot2015.org/wp-content/uploads/2015/04/Darling_Whos_Johnny_WeRobot_2015.pdf.
 Darling, supra note 132.
 Calo, supra note 27.
 Personal observations at Duke University Hospital (Apr. 7, 2014).
 Darling, supra note 132, at 218.
 U.S. Const. art. I, § 8, cl. 3; see, e.g., United States v. Lopez, 514 U.S. 549, 558 (1995) (“Congress is empowered to regulate and protect the instrumentalities of interstate commerce, or persons or things in interstate commerce, even though the threat may come only from intrastate activities.”). Robots will be sold, tested, and potentially even operated across state lines.
 See, e.g., Lopez, 514 U.S. at 558–59 (“Congress’ commerce authority includes the power to regulate those activities having a substantial relation to interstate commerce.”).
 U.S. Const. art. I, § 8, cl. 1; Lars Noah, Ambivalent Commitments to Federalism in Controlling the Practice of Medicine, 53 U. Kan. L. Rev. 149, 169 (2004).
 Linder v. United States, 268 U.S. 5, 22–23 (1925).
 Lambert v. Yellowley, 272 U.S. 581, 604 (1926) (Sutherland, J., dissenting).
 Id. at 596.
 Noah, supra note 143, at 161.
 21 U.S.C. § 360k(a)(1); see Riegel at 906.
 Riegel, 552 U.S. at 315–16.
 Lohr, supra note 62.
 State medical boards have attempted to limit the practice of Advanced Practice Registered Nurses in other contexts, and a medical board rule barring nurses from using robots might be an antitrust violation. Federal Trade Commission, Policy Perspectives: Competition and the Regulation of Advanced Practice Nurses (March 2014).
 SEDASYS, supra note 111; Foreman, supra note 130.
 21 C.F.R. § 808.1(d)(3).
 Riegel, 552 U.S. at 330.
 Broekens et al., Assistive social robots in elderly care: a review, Gerontechnology (2009) (available at http://gerontechnology.info/index.php/journal/article/view/gt.2009.08.02.002.00/997).