The intersection of technology and healthcare has reached an unprecedented point in history. The traditional medical model, where the ability to accurately diagnose and treat patients is gained from thousands of hours of hands-on medical training, is being challenged. The challenge comes from artificial intelligence (AI) software that can use deep learning to meet or exceed the accuracy and dependability of medical decisions made by human counterparts. These AI systems function as an “economy of minds,” using the collective experiences of human physicians and healthcare providers to create massive databases of knowledge that can be trained to perform specific undertakings.[1] AI has already infiltrated fields such as radiology, optometry, and even some simple surgeries. But what happens when the technology’s learning outgrows its initial programming and the technology makes a mistake? Who becomes the tortfeasor? The doctor relying on the AI? The hospital paying for the AI? The company responsible for the initial programming? Or is no one liable?

This paper will examine the ethical considerations facing a healthcare provider using AI and explore the use of AI systems in medical decision making. I will divide my argument into five parts. Part I will define AI by translating a very complex, technical topic into understandable language. Part II will break AI down into the three primary classifications of AI systems, which will be important to delineate as these types of systems are discussed. Part III will explore the current and potential uses of AI in healthcare by giving concrete examples of AI applications where mass amounts of data are available. Part IV will identify significant concerns with the ethics of AI as we know it today, focusing specifically on the reliability concerns of a healthcare provider using an AI system for diagnosis or treatment planning; we will also look at how current law intersects with innovative AI. Finally, in Part V, I make a recommendation regarding the use of AI in healthcare and how the benefits of such tools can be maximized while limiting risk.

1.     Artificial Intelligence Defined

Artificial Intelligence (AI) is one of the most misunderstood and difficult-to-explain concepts in our culture today. When the public hears the term AI, they often imagine machines becoming self-aware and systematically disposing of the entire human race, as depicted in the movies.[2] However, AI as we know it today “is a combination of neuro-linguistic processing with a knowledge base and data storage where interaction data matches with analytic data.”[3] In layman’s terms, AI is a computer program trained to recognize a specific outcome after being exposed to adequate amounts of variable data, such that a statistical rate of accuracy can be determined when it analyzes a new data point.[4] The ability of these systems to use “probabilistic representations” and “statistical learning methods” has opened the door to AI-influenced products in “machine learning, statistics, control theory, neuroscience, and other fields.”[5] The goal of these systems is to perform in such a way that, “if observed in human activity,” the general public would label the performance “intelligent.”[6] Depending on the task, this can be easily accomplished or impossible given current computing power.[7] However, scientists and engineers have been finding more and more applications for these outcome-driven systems in our current society.[8]
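To make this layman’s definition concrete, below is a minimal sketch in Python (using the scikit-learn library; the data is synthetic and purely illustrative, not drawn from any system discussed in this paper) of a program being “trained” on labeled examples and then scored for its statistical rate of accuracy on data points it has never seen:

```python
# A minimal sketch of an "outcome-driven" system: the model is exposed to
# labeled examples, then its statistical rate of accuracy is measured on
# new data points it has never seen. All data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Generate synthetic "variable data": 1,000 examples, 20 features each,
# labeled with the outcome the program should learn to recognize.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out a quarter of the data so accuracy is measured on unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # "exposure" to adequate amounts of labeled data

# The statistical rate of accuracy when analyzing new data points.
print(f"Accuracy on unseen data: {accuracy_score(y_test, model.predict(X_test)):.2%}")
```

At a very high level, the systems discussed throughout this paper are far more elaborate versions of this same train-then-measure pattern.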

Products such as advanced analytics and diagnostic analysis are now being utilized in a variety of industries and marketplaces, such as law and medicine. The global research and advisory firm Gartner called advances in AI “the most disruptive class of technologies over the next [ten] years due to radical computational power, near-endless amounts of data, and unprecedented advances in deep neural networks.”[9] Gartner and many others feel that we are still in the infancy of AI.[10] This infancy, or newness, is apparent in how recently facial recognition became a standard feature on many cell phones, cars gained the ability to self-park and run limited autopilot, and big data analytics became the new normal instead of a rarity.[11]

2.     Classifying AI

With an informed understanding of how AI functions, it is possible to further classify AI systems into three distinct categories: Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Superintelligence.[12] These classifications also help delineate how the current state of science and computing power limits our AI capabilities.[13]

Artificial Narrow Intelligence (ANI) is the most common type of AI and is essentially a task-specific program interface.[14] When most people speak of existing AI or reference a current product as using AI, ANI is typically what they are describing. Products such as advanced research analytics, diagnostic analysis, and statistical probabilities are all examples of ANI.[15] Because it refers to the limited parameters of such a system, ANI is also referred to as “Weak AI.”[16] The term “weak” here means basic or limited compared to the theoretical potential of more advanced systems. An interesting example of this weakness is China’s “crime-fighting facial recognition software,” which recently gave the famous executive Dong Mingzhu a ticket for jaywalking when a bus with her face on the side of it sped through an intersection.[17]

Artificial General Intelligence (AGI) differs from ANI in that it refers to the programming either mirroring or “exceeding human intelligence.”[18] AGI can learn multiple tasks and transfer its learning between them.[19] In other words, the AI “learns how to learn.”[20] Examples of AGI would be IBM’s Watson supercomputer and self-driving cars as we know them today.[21] However, it is within this category of AI that the current state of technology puts a hard stop on development.[22] Replicating human behavior convincingly enough that the user perceives genuine intellectual understanding is simply beyond the comprehension of today’s science.[23] The brain, and how our consciousness works, is undoubtedly beyond our current capacity to explain.[24] Science is currently unable to replicate even the most basic of human abilities, and it is debated whether “mapping the human brain will ever be feasible.”[25] The most capable AGI to date is the Impala algorithm, which can learn up to thirty different tasks of varying complexity.[26] While an impressive innovation, the Impala algorithm is basic in comparison to the millions of separate tasks performed every day by the human mind. This fact is not lost on the cutting-edge developers of AI. These scientists and researchers are diligently working to expand the borders of possibility, as evidenced by the new and increasingly advanced systems coming online regularly.

Artificial Superintelligence (ASI) is the final and most advanced level of AI. This level of AI would be self-aware and would far exceed human capability in both quantity and quality of performance. This phenomenon is often referred to as the “singularity,” the point in time at which technology will bring “unfathomable changes” to human existence as we know it.[27] It has been hypothesized that after the singularity, computations that previously would have taken years would take only seconds, and that life as we know it would never be the same.[28] This phase of AI is met with “radical uncertainty” and is entirely theoretical, but the potential is fascinating, and top minds feel we may see the singularity in five to thirty years.[29] The possibility of an algorithm that can synthesize enough data to help eradicate poverty, homelessness, or disease could change the world forever.[30] There is, however, a valid concern about the ethical implications of a self-aware computer that exceeds human understanding. Think tanks, like the Partnership on AI, are attempting to bring those policy issues to the forefront of the discussion as we continue to innovate and create.[31] By regulating the industry as it is being designed, these groups hope to inject ethical considerations early in an attempt to limit any negative impact on society in light of the potential of this type of AI.[32]

3.     Applications of AI in Healthcare

Few industries showcase the importance of quick and accurate decision making as healthcare does. ANI and AGI systems aimed at medical decision making are taught with a variety of healthcare data, with the goal of speeding up many tasks.[33] Moreover, AI can significantly reduce or eliminate human error when critical decisions must be made in seconds instead of minutes or hours.[34] These systems assist healthcare providers in their decision making daily in the form of emergency dispatch calls, virtual nurses, robotic assistance in laparoscopic surgery, and symptomology research. The AI systems “mine medical records, design treatment plans [and] create drugs [] faster than any current actor on the healthcare palette including any medical professional.”[35]

AI has been successful in assisting healthcare providers by instantaneously analyzing the enormous number of possible diagnoses for a patient to determine a “more accurate diagnosis” in many disciplines.[36] These diagnoses have evolved into more than another data point on a graph; they are a complete qualitative and quantitative analysis designed to replicate or exceed the decision made by a physician in a specific task or diagnostic reading.[37] Therefore, in areas where the AI’s diagnosis is exceptionally accurate, its findings have been given significant weight and a heightened status. For example, IBM’s supercomputer, Watson, has been trained to analyze and diagnose certain types of cancer, like leukemia, by synthesizing an individual’s blood and bone marrow testing against a dataset of positive cancer diagnoses in multiple stages of the disease.[38] These AI systems are so well trained that, in some diagnoses, their findings are “significantly more accurate” than traditional cancer staging systems.[39]
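To illustrate the staging approach described above (and only to illustrate it: Watson’s actual pipeline is proprietary, and every feature, label, and number below is fabricated), a classifier can be trained on lab-panel measurements labeled with disease stage and then asked to stage a new patient’s results:

```python
# Toy illustration of training against staged cases. The "lab panel"
# features and stage labels are fabricated; this shows only the general
# pattern, not IBM Watson's actual, proprietary implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical training set: 600 patients, 8 blood/marrow measurements each,
# labeled with one of four stages (0 = no disease, 1-3 = stage I-III).
X = rng.normal(size=(600, 8))
stage = rng.integers(0, 4, size=600)
X += stage[:, None] * 0.75  # make the synthetic features track the stage

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("Cross-validated staging accuracy:",
      cross_val_score(clf, X, stage, cv=5).mean())

# A new patient's panel is scored against everything the model has learned.
clf.fit(X, stage)
new_patient = rng.normal(size=(1, 8)) + 2 * 0.75  # resembles a stage-2 case
print("Predicted stage:", clf.predict(new_patient)[0])
```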

The ability of hospitals and physicians to provide their patients with better health outcomes by leveraging these new technologies has led to a rapid expansion of available tools and software based on ANI and AGI in healthcare. These systems can analyze data specific to a hospital or hospital system, or drill down into data from individual clinics or health departments.[40] The abundance of data collected over the years can now be easily referenced and compiled into usable models and statistical calculations that not only give healthcare, as an industry, better probabilistic diagnostic data, but also provide insight into healthcare trends and successful initiatives both globally and in local communities.[41] The specificity of ANI has led to immediate applications in normal-versus-abnormal screening, which has led to further testing and more accurate diagnosis in breast, colorectal, and optometric cancers and conditions.[42]

In addition to diagnostic analysis, ANI systems are also helping healthcare providers and hospitals be more efficient in their choice of orders and testing when a patient presents for treatment.[43] Doctor AI, a cutting-edge AGI system, uses data mined from the hospital’s electronic health record (EHR) to perform a differential diagnosis, a comparison between two or more similar diagnoses, with significantly higher accuracy than baseline human expectations.[44] In sum, this system can predict a physician’s diagnosis and anticipate order sets faster and with greater efficiency than a healthcare provider making a preliminary diagnosis and then manually uploading a pre-planned group of orders commonly associated with their initial findings.[45] As a result, nurses and healthcare providers can give the patient the specific, necessary tests faster and reduce the time needed to treat the individual patient’s condition.[46] This saves the patient time, expense, and suffering while receiving their healthcare.
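Doctor AI’s published design is a recurrent neural network that reads a patient’s sequence of visits from the EHR and predicts the codes likely to appear at the next visit.[44] The sketch below (in Python with the PyTorch library) shows that general pattern in heavily simplified form; the vocabulary size, dimensions, and data are invented for illustration and do not reflect the actual system:

```python
# Heavily simplified sketch of the Doctor AI pattern: a recurrent network
# reads a patient's sequence of visits (each a multi-hot set of diagnosis
# codes) and predicts the codes likely to appear at the next visit.
# All dimensions and data here are invented for illustration.
import torch
import torch.nn as nn

NUM_CODES = 100  # hypothetical size of the diagnosis-code vocabulary

class NextVisitPredictor(nn.Module):
    def __init__(self, num_codes=NUM_CODES, hidden=64):
        super().__init__()
        self.gru = nn.GRU(num_codes, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_codes)

    def forward(self, visits):        # visits: (patients, visits, codes)
        states, _ = self.gru(visits)  # one hidden state per visit
        return self.out(states)       # logits for the next visit's codes

model = NextVisitPredictor()
loss_fn = nn.BCEWithLogitsLoss()      # multi-label: many codes per visit
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic "EHR": 32 patients, 5 visits each, multi-hot diagnosis codes.
visits = (torch.rand(32, 5, NUM_CODES) < 0.05).float()

for step in range(100):
    logits = model(visits[:, :-1])         # read visits 1 through 4
    loss = loss_fn(logits, visits[:, 1:])  # predict visits 2 through 5
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("Final training loss:", loss.item())
```

A multi-label loss is used because a single visit can carry many diagnosis codes at once.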

4.     The Ethical Considerations of AI

The potential upside of using AI technology to improve patient outcomes is staggering. These systems can provide the healthcare provider with information in seconds that traditionally would have taken days to analyze, required multiple appointments, and delayed care to the current patient as well as everyone else trying to be seen.[47] This technology has the “potential to redesign healthcare completely.”[48] However, fundamental legal and ethical questions about how the AI will be used must be answered with regard to patient privacy, disclosure of the AI’s role in healthcare decisions, and reliance on the diagnostic reports. These questions have already become front of mind for many in bioethics, and the rapid creation and implementation of AI in the healthcare setting has only compounded their importance. Those in bioethics are, in turn, looking to the law and the courts to guide them in their decision making. After all, time has shown that while advances in technology affect the law, the law, in turn, affects the innovation of technology.[49]

A.  Informed Consent of Healthcare Decisions

For even the simplest AI system to function, an enormous amount of data is needed to train the neural network on what is or is not a correct outcome.[50] In patient care, interdisciplinary planning, and diagnosis of specific disease processes, the logical source of this data would be actual patient data from that particular clinic, specialty care unit, or hospital system. The issue of informed consent for the use of thousands of patient records suddenly becomes of paramount concern. Informed consent is defined as having “full knowledge of the risks and concerns.”[51] While it is true that, going forward, informed consent could be obtained at new-patient intake, the issue is the use of previous patient data stored in a proprietary system or an EHR. The use of this data is highly regulated under the Health Insurance Portability and Accountability Act (HIPAA).[52] HIPAA protects this sensitive data from being distributed or misused by individuals who do not have a medical need to know about that specific patient’s care.[53] Hospitals would essentially be opening patient charts to these AI systems to use the data as they see fit. Without the patient’s informed consent to allow the data to be mined, the hospital and the AI system have violated HIPAA.[54] Thus, without the informed consent of the patients in the data pool, sensitive patient information has been unlawfully used.[55]

B.  Disclosure of the Use of AI

In the same vein, the patient bill of rights, adopted in the Patient Protection and Affordable Care Act, requires that the patient be fully informed of the decisions being made regarding their healthcare.[56] Being fully informed includes knowing the available healthcare options relevant to their current healthcare needs.[57] If the healthcare provider could use AI to verify their diagnosis, similar to a second opinion, the patient, in theory, should have disclosure of these options.[58] Conversely, this may also include situations where the healthcare provider chose to use their own judgment over the diagnosis provided by the AI. In turn, if the healthcare provider consulted the AI to make their diagnosis or plan of care, the patient should be informed of this use, regardless of the level of accuracy.[59] These patients should then have the same opportunity and ability to choose treatment paths as a patient faced with the choice between laparoscopic and traditional surgery, for example.[60] Denying the patient the ability to understand the AI being used in their care would violate the patient bill of rights by reducing their ability to choose their care.[61]

However, healthcare literacy now also requires healthcare professionals to explain AI, one of the most complicated phenomena in emerging technology. Physicians and healthcare providers are trained to explain procedures to patients at a third-grade level to increase the likelihood of their understanding. The question remains whether advanced AI can adequately be explained at such a level. If not, is informed consent truly obtained? Unfortunately, this concern is compounded by the broader struggle to explain healthcare as a whole adequately, a topic that requires much more research beyond the scope of this paper.

C.  Reliance on the AI by Healthcare Professionals

Perhaps most important to the discussion of AI in healthcare is the question of what happens when the AI is wrong. The idea of the entire course of treatment being swift and efficient is noble, but what if the AI makes a mistake and the patient is subjected to unnecessary tests that delay the care they actually need? This is to say nothing of AI systems used to diagnose cancer, where an incorrect diagnosis could prove fatal or cost millions of dollars in unnecessary treatment. A correct diagnosis now includes the additional step of the physician or healthcare provider essentially deciding whether to trust the AI diagnosis, or whether to use the AI at all.

Many of these issues have not been substantially raised because of the rapid design-to-implementation cycle that has occurred as these systems come to market.[62] Fundamentally, computer scientists and programmers are typically not lawyers, not healthcare providers, and not bound by a strict ethical code.[63] However, the ethical issues and practical limitations of even the best AI systems are blatantly obvious to their creators from the outset:

One limitation of Doctor AI is that, in medical practice, incorrect predictions can sometimes be more important than correct predictions as they can degrade patient health. Also, although Doctor AI has shown that it can mimic physicians’ average behavior, it would be more useful to learn to perform better than average. We set as our future work to address these issues so that Doctor AI can provide practical help to physicians in the future.[64]

Doctor AI is arguably one of the most advanced medical AGI systems in existence, and it has been shown to be 80% as effective as a physician or healthcare provider with the ability and training to diagnose and create data sets.[65] At only 80% effectiveness, the understood limitations of even the best AI become apparent. It is entirely reasonable to conclude that a healthcare provider who substituted an AI system’s reasoning for their own, be it from time pressure, staffing issues, or pressure from a hospital that hopes to replace human staffers with this technology, would be knowingly risking the patient’s positive health outcome.

5.     Proposed Solutions

The question then becomes: can we rely on the AI to make healthcare decisions at all? The answer is no, or at least, as of the time of this writing, not yet. The current state of AI has put healthcare providers in a lose/lose situation. To explain, I will break my argument into two main parts: first, the lack of computational power to regulate the decision-making algorithm, and second, the difficulty of defining the tortfeasor when errors are made.

A.  Inability to Regulate the Algorithm

AI functions by raw data being introduced to the system and passed through a neural network that, based on how the program is “taught,” produces an outcome that a human would determine to be intelligent.[66] For ANI systems, the success, or accuracy, of the system is based on the quality of the data and the quality of the teaching.[67] For advanced AGI systems, the issues are further complicated by the system’s ability to learn on its own, extending beyond the programmed data and teaching provided by the creator or administrator of the system.[68] Once the code makes adaptations beyond the initial learning, the algorithm can be affected by a host of external influences that may cause discrepancies or inaccuracies entirely “out of the hands” of the developers.[69] Simply put, once the AGI surpasses its training, we are unable to regulate the algorithm or pinpoint the learning that caused the AI to respond the way it does. Current computing power cannot audit or catalog individual AGI decisions because there is simply too much data to synthesize to determine how a single decision is made.[70] Even a snapshot of the decisions made by the system at a given time, like a black box in an airplane, could be impossible to interpret effectively outside the entire system.[71]
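The problem can be seen even in a toy experiment (Python with scikit-learn; the data and the “live stream” are entirely synthetic and hypothetical): a model that keeps learning after deployment drifts away from the behavior its developers validated, with no change to the code itself:

```python
# Toy illustration of a system learning beyond its initial training: a model
# updated incrementally on a post-deployment data stream drifts away from
# the behavior validated at release. All data here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

# Developer-curated data used for the initial training and validation.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, y_train, X_val, y_val = X[:1500], y[:1500], X[1500:], y[1500:]

model = SGDClassifier(random_state=0)
model.partial_fit(X_train, y_train, classes=np.array([0, 1]))
print("Accuracy as validated:", accuracy_score(y_val, model.predict(X_val)))

# After release, the system keeps learning from external input the
# developers never saw, here a stream with systematically bad labels.
rng = np.random.default_rng(0)
for _ in range(50):
    X_stream = rng.normal(size=(20, 10))
    y_stream = 1 - model.predict(X_stream)  # corrupted outside feedback
    model.partial_fit(X_stream, y_stream)

print("Accuracy after drift:", accuracy_score(y_val, model.predict(X_val)))
```

Nothing in the final model records which streamed examples caused the degradation; reconstructing that chain of influence is precisely the auditing problem described above.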

One of the best examples of this is Microsoft’s AI chatbot, Tay.[72] Tay was an AGI system that attempted to interact with users on Twitter at a level indistinguishable from other human users.[73] It analyzed traffic on Twitter and learned how users of the platform act and what their typical responses to questioning might be.[74] Unfortunately, the project had to be scrapped within sixteen hours of its implementation because Tay’s output became “racist, inflammatory, and political,” to the point of posting “Hitler was right” and “9/11 was an inside job.”[75] The outside influences that occurred beyond the initial data set led to arguably corrupted outputs.[76]

Another example is the Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS.[77] COMPAS was an AGI system that became prone to identifying African American defendants as almost twice as likely to re-offend, and it offered tainted recommendations concerning those defendants.[78] The system used probability models associated with zip codes, social media activity, and income levels to determine the likely future behavior of a subset of the population, without taking into account societal and social factors like first-time offenses or the underlying circumstances surrounding the crime.[79]

In 2018, the House of Representatives warned about the promises and dangers of artificial intelligence, recommending “allowing misuse-related considerations to influence research priorities and norms” and engaging with “relevant actors when harmful applications are foreseeable.”[80] These concerns are not unfounded. In a separate but related use of this information, there is growing concern that AI systems that obtain EHR data could be abused or altered by individuals or companies.[81] These modified AI systems could bypass the doctor-patient privilege and provide potential employers or government agencies with privileged information about individuals that they would otherwise not have access to.[82]

With that being said, we are again faced with the question of whether we can rely on the AI at all. The same ability of the system to make a determination could also be its detriment. Healthcare providers, lawyers, judges, and a host of other professionals would essentially be placing their licenses on the line by wholesale adopting the decision provided by the AI.

B.  AI and the Law

Under the current state of the law, when mistakes are made in healthcare, the remedy most often lies in tort law. Tort law provides financial compensation for individuals who are “harmed by the negligent conduct of others.”[83] The “responsible party” would be the one “responsible for causing the injury.”[84] The problem lies in determining the responsible party where the AI is concerned.[85] As mentioned previously, current computing power is unable to decipher the chain of learning and decisions made by the AI after the system surpasses its initial learning.[86] This inability would likely leave an injured party unable to prove causation, and it is repugnant to the notions of fairness and responsibility under the law.[87]

Problems in determining the responsible party where the AI is concerned have led to creative legal solutions to eliminate this liability loophole.[88] Suggestions have been made that we assign such an AI system “legal personhood.”[89] The assignment of legal personhood would allow liability to attach to the AI itself when it harms someone.[90] However, an algorithm cannot be deterred by the threat of tort liability or criminal prosecution the way a human can.[91] In turn, assigning personhood to the AI system would merely create a target for blame with little or no means of compelling the algorithm to perform differently, and the developer of the AI would be able to distance themselves from the decisions of the system because it is individual and separate from the “person.”[92] This is not to mention that assigning blame to an algorithm outside of an agency relationship would in no way guarantee recovery.

Further, there have been proposals that the law be expanded to encompass the use of AI with a “group responsibility” function, since the use of the system would be in concert with the developer, the healthcare provider, and the hospital or clinic, for example.[93] However, considering that once the AGI system surpasses its initial learning its decisions are essentially its own, it would be unreasonable to hold the developer liable.[94] The decisions made by the system are no longer based solely on the initial data set, and the AI is outside the developer’s control.[95] A decision to hold liable everyone with a stake in the AI system’s existence would thwart AI development in an instant.[96]

By the same token, the hospital or physician that supplied the data essentially did only that. In patient care, every individual is an endless combination of individual variables. The data set is, however, one of the most critical pieces of a successful AI.[97] If the data set was tainted or corrupt, it is foreseeable that the data set provider could have individual liability. But if the issue occurred beyond the scope of the initial data, it would be unreasonable to hold this individual or entity liable for the machine’s subsequent learning. Just as with the developers, the system has surpassed the initial data set learning, and outside resources are influencing the system.[98]

In this scenario, the healthcare provider relying on the AI’s diagnosis should be assigned blame as the tortfeasor. The AI system is a tool to be used only in combination with the healthcare provider’s expertise.[99] The healthcare provider who substitutes the AI system’s decision for his or her own experience and judgment should be held responsible for the outcome. The “last clear chance” doctrine gives the healthcare provider the final opportunity to catch the mistake and mitigate the damage on the journey to positive patient outcomes.[100] In addition, the American Medical Association (AMA) requires that the attending healthcare provider who has accepted care of a patient is responsible for that patient until handoff to another provider occurs.[101] This responsibility includes the patient care decisions, treatment plans, and care plans associated with the individual patient during their time under the care of that provider.[102] It also includes the decision to rely or not to rely on AI in their decision making.

Therefore, in the current state of technology, using AI to diagnose patients is a lose/lose situation for healthcare providers. If the AI is wrong, they are liable in a tort action; if the AI is correct but the physician did not use the diagnosis, the healthcare provider denied the patient proper care and is liable as well.[103]

Proponents of the use of AI would argue that there is no difference between this and a human physician or healthcare provider looking at the same data and making a similar mistake. This is true, of course. However, the physician is not able to see all of the decisions the AI made to arrive at its proposed diagnosis, yet the physician must make the judgment call as to the validity of the result. Until science is able to analyze the algorithm of the AI, the system cannot be trusted to its fullest capacity. The burden of perfection is once again placed on physicians and healthcare providers, whose duty is to care for their patients with the highest level of ethics and technological understanding.

Furthermore, assigning blame to the healthcare provider will limit the AI to a tool used to double-check analysis, as the healthcare provider will be unwilling to stake their license on the system entirely. It will be up to healthcare providers and hospital staff to resist pressure from hospital administration to replace the skilled team with automation. That resistance will be easier said than done in the current healthcare climate, where there is downward pressure to bill more patients and at the same time reduce cost. However, by further defining the role AI plays in healthcare, developers and entrepreneurs can focus on specific tools for specific needs at a higher rate and expand the offerings of ANI as AGI takes shape. This change in conceptual thinking will initially limit the expanse of attempts at AI in healthcare, but it will ensure patients get the best outcomes until the technology can be seen as a helping hand and not a staff replacement.

C.  The AI Attorney’s Role

What form AI takes moving forward is unknown, but it has the potential to change how we interact with the world from now on.[104] Rule 1.1, comment 8, of the Model Rules of Professional Conduct, a version of which most states have adopted, states:

To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.[105]

Attorneys have a duty to understand the technology they use in their practice.[106] They have a responsibility to protect their clients’ interests and advise them of the risks and benefits of their actions; this includes hospitals and the companies developing AI.[107] These attorneys will play an integral role in explaining “prosocial” and ethical considerations when advising AI developers as the technology grows and expands within regulated markets like healthcare.[108]

The actions taken by these attorneys will also influence legislatures’ ability to propose and enact new laws that further govern autonomous creations like AGI on their path toward an ASI system.[109] Legislators will need attorneys who can counsel them both on the law’s needs and on the potential of what AI is and could be. The legislature will also need educated lawyers who can explain how small intricacies in the law can further clarify the definition of AI and how these systems fit into those new laws. Until then, attorneys should advise their healthcare clients that a symbiosis of experts and analytics can aid in increased health outcomes, but that sole reliance on the AI should be avoided in order to reduce liability.[110]

Conclusion

Artificial Intelligence is a phenomenon that has the potential to alter how we interact with data, learn, and make decisions. In healthcare, these decisions will give humanity access to data synthesis that could lead to the next technology revolution. However, in designing AI as we know it today, it could be said that “scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”[111] For healthcare specifically, the inability to determine the accuracy of current AI could have potentially disastrous effects on patients and patient care if there is too much reliance on the wrong system. Current AI systems have undergone only limited testing, but their cost-saving potential is pushing them into hospitals and clinics at an alarming rate. Healthcare providers must be conscious of their liability and the limitations of current AI, just as they are with staffing considerations, internal policies, or the changing healthcare climate in general. However, using AI systems to double-check the healthcare provider’s years of experience, or using the AI’s determination as a starting point for research, could in and of itself increase the speed and accuracy of healthcare delivery without total reliance on the system. This would allow healthcare providers to mitigate their risk while giving their patients the highest levels of care available.

At the same time, the courts will be faced with tough decisions in determining the liability surrounding these AI systems. Litigators who can describe the inner workings of AI and its effect on society will be invaluable. It will be up to these well-informed attorneys to understand the needs of their clients and to challenge the developers of AI in such a way as to inject the necessary controls and auditing capabilities into new AI systems coming onto the market. Moreover, it will be up to these AI attorneys to seek remedies for those who have been harmed, using their knowledge of AI and of the specific AI system to press the courts to assign blame. These individuals will have a tremendous impact on the way this technology shapes future society. This is an enormous blessing and an enormous burden in the same breath.

In conclusion, AI is here to stay. It should be embraced for its vast potential upside but kept at arm’s length as it evolves into the powerfully reliable technological resource it is meant to be. The AI of today is only a glimpse of what is to come. In fact, leading minds believe the singularity is a mere five to thirty years away from the time I am writing this.[112] If that is the case, healthcare, in all likelihood, will be AI’s crown jewel or its guillotine. When an individual’s life hangs in the balance, there is no greater risk to the individual, and the financial ramifications associated with these decisions are enormous. The developers of advanced AI must understand these concerns and hold themselves to a higher standard in their design. They must consider prosocial elements and implement them broadly where possible as AI evolves. And the law must adapt to the changes that widescale use of AI presents. We must support those carrying the burden of this technology, like healthcare providers, and give them the guidance they deserve.

 

Ryan Dobbs, 3L

Ryan is the founding president of TALIS and editor-in-chief of the TALIS blog. Contact Ryan at RyanDobbs@ou.edu, at RyanDobbs.Lawyer, or on Twitter @RyanDobbs

References

[1] Dr. Ben Goertzel, The Joe Rogan Experience (Dec. 6, 2018) (downloaded using iTunes).

[2] John Niman, A Brief Overview of Artificial Intelligence Application and Policy, Nev. Law. 8 (2018).

[3] Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter, Artificial Intelligence 3 (Dec. 31, 2015), https://www.aaai.org/ojs/index.php/AImagazine/issue/view/212.

[4] Id.

[5] Id.

[6] David T. Laton, Manhattan_project.exe: A Nuclear Option for the Digital Age, 25 Cath. U. J.L. & Tech. 94 (2016).

[7] Goertzel, supra note 1.

[8] Research Priorities, supra note 3, at 3.

[9] Alex Woodie, How AI Fares in Gartner’s Latest Hype Cycle, Datanami (Oct. 3, 2018, 6:46 PM), https://www.datanami.com/2017/08/29/AI-fares-gartners-latest-hype-cycle/.

[10] Id.

[11] Id.

[12] Laton, supra note 6, at 94.

[13] Id.

[14] Ryan Dowell, Fundamental Protections for Non-Biological Intelligences or: How We Learn to Stop Worrying and Love Our Robot Brethren, 19 Minn. J.L. Sci. & Tech. 305, 308 (2018).

[15] Research Priorities, supra note 3, at 3.

[16] Dowell, supra note 14, at 308.

[17] Tang Ziyi, AI Mistakes Bus-Side Ad for Famous CEO, Charges Her with Jaywalking, CX Live (Nov. 22, 2018), https://www.caixinglobal.com/2018-11-22/ai-mistakes-bus-side-ad-for-famous-ceo-charges-her-with-jaywalkingdo-101350772.html.

[18] Niman, supra note 2, at 8.

[19] Aaron Krumins, Artificial Intelligence is Here, and Impala is its Name, Extreme Tech (Aug. 21, 2018, 1:01 PM), https://www.extremetech.com/extreme/275768-artificial-general-intelligence-is-here-and-impala-is-its-name.

[20] Id.

[21] Bridget Watson, A Mind of Its Own-Direct Infringement by Users of Artificial Intelligence Systems, 58 IDEA: J. Franklin Pierce for Intell. Prop. 65, 73 (2017).

[22] Krumins, supra note 19.

[23] Laton, supra note 6, at 94.

[24] Niman, supra note 2, at 8.

[25] Id.

[26] Krumins, supra note 19.

[27] Goertzel, supra note 1.

[28] Id.

[29] Id.

[30] Id.

[31] Partnership on AI, https://www.partnershiponAI.org/ (last visited Nov. 18, 2018).

[32] Jordan Bigda, The Legal Profession: From Humans to Robots, 18 J. High Tech. L. 396, 398–99 (2018).

[33] Fei Jiang et al., Artificial Intelligence in Healthcare: Past, Present and Future, 2 Stroke and Vascular Neurology (Nov. 21, 2018, 5:47 PM), https://svn.bmj.com/content/2/4/230.

[34] Id.

[35] 10 Ways Technology Is Changing Healthcare, The Medical Futurist, https://medicalfuturist.com/ten-ways-technology-changing-healthcare (last visited Nov. 21, 2018).

[36] Watson, supra note 21, at 73.

[37] Id.

[38] Id.

[39] Harry Burke et al., Artificial Neural Networks Improve the Accuracy of Cancer Survival Prediction, Cancer (Nov. 21, 2018, 5:59 PM), https://onlinelibrary.wiley.com/doi/full/10.1002/%28SICI%291097-0142%2819970215%2979%3A4%3C857%3A%3AAID-CNCR24%3E3.0.CO%3B2-Y.

[40] Dowell, supra note 14, at 308.

[41] Burke, supra note 39.

[42] Id.

[43] Edward Choi et al., Doctor AI: Predicting Clinical Events via Recurrent Neural Networks, 56 JMLR 1 (2016), http://proceedings.mlr.press/v56/Choi16.pdf.

[44] Id.

[45] Id.

[46] Id.

[47] Research Priorities, supra note 3, at 3.

[48] 10 Ways, supra note 35.

[49] Aryeh Friedman, Law and the Innovative Process: Preliminary Reflections, 1986 Colum. Bus. L. Rev. 1 (1986).

[50] Research Priorities, supra note 3, at 3.

[51] Black’s Law Dictionary (10th ed. 2014).

[52] Health Insurance Portability and Accountability Act of 1996 (HIPAA), Pub. L. 104–191, § 221, 110 Stat. 1936, 2009 (1996).

[53] Id.

[54] Id.

[55] Id.

[56] Patient Protection and Affordable Care Act, Pub. L. 111–148, § 2717, 124 Stat. 119 (2010).

[57] § 221, 110 Stat. at 2009.

[58] Id.

[59] Id.

[60] Id.

[61] § 2717, 124 Stat. at 119.

[62] Bigda, supra note 32, at 399.

[63] Id.

[64] Choi, supra note 43.

[65] Id.

[66] Laton, supra note 6.

[67] Id.

[68] Weston Kowert, The Foreseeability of Human-Artificial Intelligence Interactions, 96 Tex. L. Rev. 181, 184 (2017).

[69] Kowert, supra note 68, at 184.

[70] Krumins, supra note 19.

[71] Id.

[72] Tay, Microsoft’s AI Chatbot, Gets a Crash Course in Racism from Twitter, The Guardian (Mar. 24, 2016), https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-AI-chatbot-gets-a-crash-course-in-racism-from-twitter.

[73] Id.

[74] Id.

[75] Id.

[76] Id.

[77] Rise of the Racist Robots – How AI is Learning all our Worst Impulses, The Guardian (Aug. 8, 2017), https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-AI-is-learning-all-our-worst-impulses.

[78] Id.

[79] Id.

[80] 2018 CQDPRPT 0144 (2018) (House panel delving into promises, dangers of artificial intelligence).

[81] Goertzel, supra note 1.

[82] Id.

[83] Kowert, supra note 68, at 184.

[84] Id.

[85] Id.

[86] Id.

[87] Mark Chinen, The Co-Evolution of Autonomous Machines and Legal Responsibility, 20 Va. J.L. & Tech. 338 (2016).

[88] Kowert, supra note 68, at 184.

[89] Chinen, supra note 87, at 338.

[90] Id.

[91] Chinen, supra note 87, at 338.

[92] Id.

[93] Id.

[94] Id.

[95] Niman, supra note 2, at 8.

[96] Chinen, supra note 87, at 338.

[97] Jiang, supra note 33.

[98] Kowert, supra note 68, at 184.

[99] Dowell, supra note 14, at 308.

[100] Restatement (Second) of Torts § 3 (2000).

[101] Katherine Blondon et al., Physician Handoffs: Opportunities and Limitations for Supportive Technologies, AMIA Ann’l Symp. Proc. (Nov. 5, 2015), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4765668/.

[102] Id.

[103] Kowert, supra note 68, at 184.

[104] Roy D. Simon, Artificial Intelligence, Real Ethics, N.Y. St. B.J., Mar./Apr. 2018, at 34, 37.

[105] Model Rules of Prof’l Conduct r. 1.1, cmt. 8 (Am. Bar Ass’n 2013).

[106] Model Rules of Prof’l Conduct r. 1.1, cmt. 8 (Am. Bar Ass’n 2013).

[107] Model Rules of Prof’l Conduct r. 2.1 (Am. Bar Ass’n 2013).

[108] Chinen, supra note 87, at 338.

[109] Laton, supra note 6, at 94.

[110] Dowell, supra note 14, at 308.

[111] See Jurassic Park (Universal Pictures 1993).

[112] Goertzel, supra note 1.