Professionalism/Medicine, AI, and Professional Discretion

Introduction to Medical Ethics

The role of medical professionals can be interpreted in different ways, from rendering service to humanity and promoting the art of healing[1] to restoring good health by identifying, diagnosing, and treating clinical illnesses.[2] Medical professionals must abide by a system of rules and principles that guides clinical medicine and scientific research, known as medical ethics. Professional ethicists recommend integrating the following four principles[3] into decision making within the medical field:

  1. Autonomy: Patients have the right to determine their own healthcare.
  2. Justice: The benefits and burdens of care must be distributed fairly across society.
  3. Beneficence: Act for the good of the patient.
  4. Nonmaleficence: Ensure that the patient is not harmed.

These principles aid professionals in weighing the relevant values, facts, and logic to determine the optimal course of action.

Artificial Intelligence in Medicine

Background

Artificial intelligence (A.I.) encompasses approaches, including machine learning, in which complex programs and algorithms perform human cognitive functions such as perception, reasoning, and learning.[4] As computing power and data quantity have increased over the past decade, A.I.-based systems have been used more frequently to improve and develop many fields and industries, such as agriculture, government, finance, and healthcare. In healthcare, A.I.-based systems improve the accuracy and efficiency of diagnosis and treatment across many specialties. For example, computer-aided diagnosis (CAD) is routinely used to detect abnormalities in medical images; radiologists currently use the output of CAD systems as a "second opinion" in breast cancer diagnosis.[5] IDx is an A.I. diagnostics company that developed IDx-DR, a CAD system that analyzes images of the retina for signs of diabetic retinopathy. IDx-DR is the first autonomous A.I.-based system approved by the FDA to provide a diagnostic decision in any field of medicine.[6]

A.I.-based systems can also apply clinical data science to analyze data and assist physicians. Clinical decision support systems (CDSSs) draw on the rapidly increasing quantity and quality of patient data, such as electronic health records, patient surveys, and health information exchanges, to develop medical recommendations. For computerized CDSSs to be effective, thoughtful design, implementation, and critical evaluation are all necessary.[7]
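At their simplest, many CDSSs encode clinical guidelines as explicit rules checked against patient data. The following is a minimal sketch of that idea, not any real product's logic; the field names and data structure are hypothetical, and the thresholds, while drawn from commonly cited guidelines, are illustrative only.

```python
# Minimal sketch of a rule-based clinical decision support check.
# Field names are hypothetical; thresholds are illustrative, not
# validated clinical logic.

def cdss_alerts(patient: dict) -> list:
    """Return advisory alerts for a clinician to review."""
    alerts = []
    # Flag elevated systolic blood pressure for follow-up.
    if patient.get("systolic_bp", 0) >= 140:
        alerts.append("Elevated systolic BP: consider hypertension workup.")
    # Flag elevated HbA1c for diabetes confirmation.
    if patient.get("hba1c", 0.0) >= 6.5:
        alerts.append("HbA1c >= 6.5%: repeat test to confirm diabetes.")
    # Simple drug-interaction check against a lookup table.
    interactions = {frozenset(["warfarin", "aspirin"]): "increased bleeding risk"}
    meds = set(patient.get("medications", []))
    for pair, warning in interactions.items():
        if pair <= meds:
            alerts.append("Interaction " + "/".join(sorted(pair)) + ": " + warning)
    return alerts

print(cdss_alerts({"systolic_bp": 150, "medications": ["warfarin", "aspirin"]}))
```

Even in this toy form, the output is framed as advisory alerts for a clinician to review rather than as decisions.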

Incorporation of A.I.-based systems into clinical settings has been slow, as many remain concerned about such systems replacing physicians. The American Medical Association (AMA) Journal of Ethics supports A.I.-based systems as complementary tools that aid, rather than replace, clinicians, while drawing attention to several ethical risks these systems raise, such as threats to confidentiality, privacy, informed consent, and patient autonomy.[8] A case study in the AMA Journal of Ethics investigated the use of CDSSs in clinical settings by analyzing Watson, an advanced question-answering computer system developed by IBM, and outlined the benefits, risks, and precautions of using such tools in clinical practice. CDSSs like Watson can detect patterns that human physicians might otherwise overlook. However, these systems may also generate recommendations that fall outside current clinical standards of care, increasing uncertainty in a doctor's final diagnosis. Liability is a further concern: technological innovations increase the opportunity for diagnostic error, which can lead to detrimental outcomes for patient health and place additional liability on doctors making professional recommendations. As the amount and complexity of patient data increase, so does the need for automated, intelligent systems like Watson.[9] Nevertheless, thorough analysis is necessary to determine the limitations and risks of any A.I.-based system before it is used in healthcare settings.

Modes of Failure

As A.I. becomes more ubiquitous in healthcare, identifying A.I. failure modes is necessary to avoid ethical issues in the field. For instance, A.I. can fail if the goals of the model are mis-specified: a model programmed to maximize profit instead of to save lives may introduce bias with respect to race or patient preferences. Incorrect environmental assumptions, such as an algorithm trained primarily on extreme cases, can lead to models that fail to identify a patient as normal or healthy.[10] Any of these failure modes can lead to ethical challenges. Models may be hard to interpret, making it difficult for doctors to explain their findings to their patients. Models may overuse and overshare information, or rely on features such as income, race, or sex that reflect societal injustice more than genuine predictive information.
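The first failure mode above can be made concrete with a toy simulation: if a screening model ranks patients by spending (a proxy objective) rather than by illness itself, any group that incurs lower costs at equal sickness is systematically under-flagged. The sketch below uses purely synthetic data and illustrative numbers and models no real system.

```python
# Toy simulation of a mis-specified objective: patients are ranked by
# cost (a proxy) instead of illness (the true target). All data are
# synthetic and all numbers illustrative.
import random

random.seed(0)

def simulate_patient(low_access: bool) -> dict:
    illness = random.gauss(5, 2)  # latent health need
    # Patients with poorer access to care generate lower costs at the
    # same level of illness.
    cost = max(0.0, illness * (0.6 if low_access else 1.0) + random.gauss(0, 0.5))
    return {"illness": illness, "cost": cost, "low_access": low_access}

patients = [simulate_patient(low_access=(i % 2 == 0)) for i in range(10000)]

# "Model": rank by cost and flag the top 10% for extra-care programs.
patients.sort(key=lambda p: p["cost"], reverse=True)
flagged = patients[: len(patients) // 10]

share = sum(p["low_access"] for p in flagged) / len(flagged)
print("Low-access share of flagged patients: {:.1%}".format(share))
# Half the population is low-access, yet far fewer than half of the
# flagged patients are, even though illness is identically distributed.
```

The model never sees group membership, yet the choice of objective alone produces a disparity, which is the pattern at issue in the Optum case below.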

Related Case: Optum Algorithms

Optum, a company in Eden Prairie, Minnesota, developed one of the many commercial algorithms health systems rely on to identify the complex health needs of patients of all ages and ethnicities. Such algorithms are designed to increase efficiency in health care, reduce costs and, in Optum's case, “increase revenue cycle accuracy.”[11] Optum developed an algorithm to estimate risk scores for patients, representing their urgency for treatment. The algorithm was used by eight of the top U.S. health insurance companies, two major hospitals, and the Society of Actuaries, effectively influencing the care and treatment of millions of patients in the U.S.

In October 2019, an article titled “Dissecting racial bias in an algorithm used to manage the health of populations” exposed that Optum had developed a racially biased risk classification algorithm.[12] The study, led by Ziad Obermeyer, found that at any given risk classification estimate, Black patients were considerably sicker than White patients. In particular, Black patients with the same number of chronic conditions as White patients were predicted to be at lower risk at every risk score percentile produced by the algorithm. Further, at every number of chronic illnesses, hospitals recorded a lower total medical expenditure for Black patients than for White patients. The researchers found that remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7% to 46.5%.[12]

The authors highlighted that although ancestry was not a variable used in training the predictive algorithm, healthcare cost was. Because healthcare cost is strongly collinear with race, it was identified as the source of the bias. Optum repeated Obermeyer's analysis and was able to validate the algorithm's bias. Obermeyer offered to work with Optum to rectify the injustice by finding variables other than health cost that predict risk; their combined efforts reduced predictive bias by 84%.[13] In an interview following the publication, Obermeyer underscored that bias in healthcare risk classification is “not a problem with one algorithm, or one company,” but a complication “with how our entire system approaches this problem.”[13] In a parting note, Obermeyer stressed that A.I. must supplement human decision-making in health care, not be relied upon alone, if the field is to avoid these types of ethical issues in the future.
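The study's core check, comparing how sick each group actually is at each level of the algorithm's risk score, can be reproduced as a simple calibration audit. A minimal sketch of such an audit follows, assuming hypothetical field names (risk_score, chronic_conditions, group); it is an illustration of the technique, not the study's actual code.

```python
# Sketch of a calibration audit across groups: within each risk-score
# decile, compare average chronic-condition counts between groups. If one
# group is consistently sicker at the same score, the score understates
# that group's need. Field names and data are hypothetical.
from statistics import mean

def audit_by_decile(records):
    records = sorted(records, key=lambda r: r["risk_score"])
    n = len(records)
    for d in range(10):
        decile = records[d * n // 10 : (d + 1) * n // 10]
        by_group = {}
        for r in decile:
            by_group.setdefault(r["group"], []).append(r["chronic_conditions"])
        summary = {g: round(mean(v), 2) for g, v in by_group.items()}
        print("Decile", d + 1, "- mean chronic conditions by group:", summary)
```

Applied to the data in the study, an audit of this kind shows Black patients carrying more chronic conditions than White patients at every score level, the signature of a biased proxy label.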

Physician's Professional Discretion

Background

Physician discretion is a prerequisite for the confidential relationship between doctor and patient, otherwise known as the doctor-patient relationship (DPR). Professional discretion regulations are put in place to enhance the success of medical treatment and to protect the patient's privacy and personal rights.[14] Coping with the diversity of legal regulations can be very difficult for physicians, as numerous ethical regulations are put in place both by federal law and by healthcare administrations.

Professional Discretion Regulations and Challenges

The following are some of the areas in which medical professional discretion is regulated.

Relatives

Physicians cannot legally make therapeutic decisions for their own relatives. The reasoning is self-explanatory: there is a worry that bias will be involved when it comes to life-saving measures.[14]

Fellow Doctors

Medical discretion must also be observed among fellow doctors. This is unproblematic as long as the patient remains informed in cases of joint or further treatment. Problematic cases include consulting additional physicians without the patient's knowledge, as well as joint practices and practice networks.[14]

Media

As far as the media are concerned, there is no case whatsoever that would justify a patient being presented in the media without his or her consent.[14]

Mental Illness

The Doctrine of Professional Discretion outlines medical professional discretion regarding care for patients with mental illness. It is a principle under which a physician can exercise judgment as to whether to show patients who are being treated for mental or emotional conditions their records. Disclosure depends on whether, in the physician's judgment, such patients would be harmed by viewing the records.[15]

HIPAA

The Health Insurance Portability and Accountability Act (HIPAA) includes many stipulations regarding patient privacy. With regard to professional discretion, the Privacy Rule and the doctrine of informed consent are critical. Informed consent is a process for getting permission before conducting a healthcare intervention on a person, or for disclosing personal information.[16]

Patient Advocates

Patient advocates, whether family members, friends, or professional advocates, become heavily involved in ethically controversial cases. Advocacy places two different tensions on the DPR. First, there may be “conflict between what can reasonably be an expected duty of health care practitioners, and what might be beyond reasonable expectations”. Second, it can be difficult for any trained professional to manage “distinguishing between what is actual representation of patients’ wishes, and what is an assertion of what the advocate believes to be in the best interests of the patient”.[17]

New Patients

Physicians do not have unlimited discretion to refuse to accept a person as a new patient. Physicians cannot refuse to accept a person for ethnic, racial, sexual-orientation, or religious reasons.[18]

Life Support and Palliative Care

Advancing technology allows life to be sustained for longer, resulting in a rise in the quantity and complexity of ethically controversial cases that doctors face regarding the removal of life support devices. In the United States, the withholding and withdrawal of life support can be legally justified primarily by the principles of informed consent and informed refusal.

Clinicians' Rights

Conscientious Objection

Conscientious objection is the doctor's ability, often enshrined within the law, to opt out of providing or offering certain kinds of intervention on grounds of conscience. Objecting physicians must ensure that patients know about the decision and that they are able to receive the care they are entitled to from another professional in a timely manner. If the delivery of medical services to patients is compromised on conscience grounds, the responsible physician can be punished, potentially by removal of their license to practice.[19]

Ethics Committee Consultation

Most healthcare systems have a resource that should be consulted in ethically ambiguous situations: physicians can call for help from a member of the ethics committee in order to review the correct legal procedures. An ethics committee is a consultation service for any patient, family member, or employee who believes there is an ethically challenging issue that is not being properly addressed and who would like further help in resolving it. It is a multidisciplinary team composed of physicians, nurses, social workers, administrators, chaplains, and other employees.[20]

References

  1. Markose, A., Krishnan, R., & Ramesh, M. (2016). Medical ethics. Journal of Pharmacy & Bioallied Sciences, 8(Suppl 1), S1–S4. https://doi.org/10.4103/0975-7406.191934
  2. Sendín, R. (2010, December 13). Definition of The Medical Professional. CGCOM | Consejo General de Colegios Oficiales de Médicos. https://www.cgcom.es/print/2821
  3. What Is Medical Ethics, and Why Is It Important? (n.d.). Medscape. Retrieved May 6, 2020, from http://www.medscape.com/courses/section/898060
  4. Ahuja, A. S. (2019). The impact of artificial intelligence in medicine on the future role of the physician. PeerJ, 7. https://doi.org/10.7717/peerj.7702
  5. Doi, K. (2007). Computer-Aided Diagnosis in Medical Imaging: Historical Review, Current Status and Future Potential. Computerized Medical Imaging and Graphics : The Official Journal of the Computerized Medical Imaging Society, 31(4–5), 198–211. https://doi.org/10.1016/j.compmedimag.2007.02.002
  6. Abràmoff, M. D., Lavin, P. T., Birch, M., Shah, N., & Folk, J. C. (2018). Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. Npj Digital Medicine, 1(1). https://doi.org/10.1038/s41746-018-0040-6
  7. Wasylewicz, A. T. M., & Scheepers-Hoeks, A. M. J. W. (2019). Clinical Decision Support Systems. In P. Kubben, M. Dumontier, & A. Dekker (Eds.), Fundamentals of Clinical Data Science. Springer. http://www.ncbi.nlm.nih.gov/books/NBK543516/
  8. Rigby, M. J. (2019). Ethical Dimensions of Using Artificial Intelligence in Health Care. AMA Journal of Ethics, 21(2), 121–124. https://doi.org/10.1001/amajethics.2019.121.
  9. Luxton, D. D. (2019). Should Watson Be Consulted for a Second Opinion? AMA Journal of Ethics, 21(2), 131–137. https://doi.org/10.1001/amajethics.2019.131.
  10. Korinek, A., & Balwit, A. (2020). AI Failure Modes and the AI Control Problem.
  11. Healthcare Solutions & Services for Businesses. (2020). Retrieved May 3, 2020, from https://www.optum.com/en.html
  12. Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
  13. Simonite, T. (2019). A Health Care Algorithm Offered Less Care to Black Patients. Wired. Retrieved May 3, 2020, from https://www.wired.com/story/how-algorithm-favored-whites-over-blacks-health-care/
  14. Richter-Reichhelm, M. (1999, December). Medical discretion towards relatives, colleagues and the media from the physician's point of view. https://www.ncbi.nlm.nih.gov/pubmed/10683889
  15. DeVore, A. (2015). The electronic health record for the physician's office: with SimChart for the medical office. St. Louis, MO: Elsevier.
  16. Compliancy Group. (2019, October 11). HIPAA and the Law of Informed Consent. Retrieved from https://compliancy-group.com/hipaa-and-the-law-of-informed-consent/
  17. Schwartz, L. (2002, February 1) Is there an advocate in the house? The role of health care professionals in patient advocacy, Journal of Medical Ethics, 28(1), 37-40.
  18. McKoy, J. M. (2006, May 1). Obligation To Provide Services: A Physician-Public Defender Comparison. Retrieved from https://journalofethics.ama-assn.org/article/obligation-provide-services-physician-public-defender-comparison/2006-05
  19. Savulescu, J. (2006, February 4). Conscientious objection in medicine. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1360408/
  20. Ethics Consultation. (n.d.). Retrieved from https://www.urmc.rochester.edu/highland/patients-visitors/hospital-services/ethics-consultation.aspx