Regulate the robots

Madison Ling, Campus Life Editor

There has been much debate about what role artificial intelligence (AI) should play in health care. Some say its role should be unlimited and that these algorithm-driven machines should be allowed to hold human lives in their hands. Others argue that it should not be used at all, questioning how a robotic contraption could ever match the efficiency and compassion of a human physician. What hasn’t been fully considered is a middle path: this technology should be used, but in a regulated manner, because of the ethical and intellectual dilemmas it raises.
According to the American Medical Association’s (AMA) Journal of Ethics, “Some of the most exigent concerns include patient privacy and confidentiality.” This refers to the Health Insurance Portability and Accountability Act of 1996, which was enacted to protect patients’ personal information.
Physicians are expected to uphold this policy, and violations can result in severe repercussions. The use of AI raises the question of what information it should be privy to and who is liable if and when a violation occurs. In fact, the National Institutes of Health (NIH) reports that nearly half of doctors in America think the technology will not work properly or meet expectations, leading to potentially fatal errors, both medical and ethical.
The next concern associated with this technology is its intellectual capability. This is not only a matter of the extent of its knowledge, but of its potential for bias when diagnosing common medical problems across races and genders. In fact, a Gallup survey conducted through Northeastern University found that “69 percent of millennials worry that the emergence of new technology will exacerbate inequality.” The survey emphasized the gap between rich and poor, but the concern applies to other differences as well.
A 2016 study conducted in Germany provided a real-life example: researchers invited experts to compare their clinical expertise to a computer model they called a “neural network.” The model was programmed to distinguish benign moles from malignant melanoma, a type of skin cancer. Although the technology was found to have a higher accuracy rate than the professionals participating in the study, the researchers noted that more than 95 percent of the images used in the trial came from Caucasian patients. No further testing has been done to measure its accuracy across varying populations.
It should also be mentioned that artificial intelligence is not regulated or tested for bias in its trials. This is especially troubling given that, in medicine, minority populations are known to have higher rates of misdiagnosis and mortality. In fact, the National Academy of Medicine reported in 2017 that Alaska Natives had a 60 percent higher infant mortality rate than any other group.
When truly considered, artificial intelligence is still in its clinical trials, much like any other medical treatment. If that is the case, why isn’t it being held to the same standard by the Food and Drug Administration (FDA) to prevent harm and bias to those it aims to help?
Although artificial intelligence has great potential to assist both physicians and patients, the technology should still undergo trials to test its limits and be regulated to uphold the most important value of medicine: do no harm.