University of Toronto Magazine
Illustration by Yarek Waszul

Six Questions We Need to Ask about Using Artificial Intelligence in Health Care

A U of T research team is examining ethical issues raised by the new technology

Prof. Jennifer Gibson, director of U of T’s Joint Centre for Bioethics, is leading a new research project, “Ethics and AI for Health,” to study questions of privacy, responsibility and safety around artificial intelligence. “So often, technologies outpace our ability to address ethical questions,” she says. Some of the important issues Gibson’s project will examine:

Who will benefit from AI?
“If you’re building something that can help patients live better lives, it’s very difficult to prevent someone from using that tool to maximize profit – potentially at the expense of those patients,” says Quaid Morris, a U of T professor in molecular genetics.

A company could develop an AI tool that is very effective at, say, tailoring cancer treatment to individual patients – and then limit its availability to wealthy patients who can pay a lot for it.

Health-care systems vary around the world, which means new AI tools may be applied very differently from place to place. Some jurisdictions might charge for the tool or limit access to certain groups. A public health-care provider might focus machine-learning algorithms on early diagnoses and preventive medicine to lower health-care costs, whereas a corporation might develop customized tools that serve only those who can pay.

Who will protect patients’ privacy?
When working with medical data, Vector Institute researchers follow strict laws and guidelines that protect individuals’ privacy.

But as AI moves from research to application, it could become increasingly difficult to keep genetic and clinical data anonymous. People have grown used to giving up private information to companies such as Facebook, Google, Amazon and Netflix in return for more personalized recommendations. They may well be willing to disclose medical information in return for better care. Without a patient’s consent, this information could end up in the hands of insurers or employers, or in the public realm.
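To make the re-identification risk concrete, here is a toy Python sketch. Every name, postal code and diagnosis in it is invented for illustration; the point is simply that records stripped of names can still be matched, through shared fields such as postal code, birth year and sex, against a public list that does contain names.

    # Toy illustration with invented data: "anonymized" medical records that
    # retain quasi-identifiers can be re-identified by joining them against a
    # public source (e.g., a voter roll) that links those fields to names.

    anonymized_records = [
        {"postal": "M5S", "birth_year": 1984, "sex": "F", "diagnosis": "BRCA1 carrier"},
        {"postal": "M4C", "birth_year": 1990, "sex": "M", "diagnosis": "type 2 diabetes"},
    ]

    public_directory = [
        {"name": "A. Chen",  "postal": "M5S", "birth_year": 1984, "sex": "F"},
        {"name": "B. Singh", "postal": "M4C", "birth_year": 1990, "sex": "M"},
    ]

    QUASI_IDENTIFIERS = ("postal", "birth_year", "sex")

    for record in anonymized_records:
        matches = [p for p in public_directory
                   if all(p[k] == record[k] for k in QUASI_IDENTIFIERS)]
        if len(matches) == 1:  # a unique match re-identifies the "anonymous" patient
            print(matches[0]["name"], "->", record["diagnosis"])

Attacks of this kind have been demonstrated on supposedly de-identified health records, which is why privacy researchers treat quasi-identifiers with the same care as names.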

What will happen when the machine is wrong?
No machine will be perfect. There will always be a risk of a wrong diagnosis. And even the best possible data-driven recommendation might still end with a patient not surviving their illness. Who should be held responsible: the medical team, the algorithm designers or the machine itself?

How will we avoid machine bias?
Algorithms carry as much risk of bias as any human. Search for “CEO” in an image search engine, for example, and your computer will return mainly pictures of white men. Algorithms tend to amplify the built-in sexism and racism of training data.

In the health sphere, algorithmic bias could mean the machine recommends the wrong treatment for groups that have historically been underrepresented in health research.
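A toy Python sketch, using entirely synthetic numbers, shows how this amplification can happen: when one group makes up only five per cent of the training data, the decision rule the model learns fits the majority group and misclassifies the minority far more often.

    # Toy illustration with synthetic data (no real medical system): a model
    # trained mostly on group A learns a cutoff suited to A, and its error
    # rate on the underrepresented group B is far higher.
    import random

    random.seed(0)

    def sample(group, n):
        # Hypothetical biomarker: the appropriate treatment cutoff differs by
        # group, but the model never sees group membership.
        cutoff = 50 if group == "A" else 60
        return [(x, x > cutoff) for x in (random.gauss(cutoff, 10) for _ in range(n))]

    # Group B is only 5 per cent of the training data.
    train = sample("A", 950) + sample("B", 50)

    def error(data, t):
        # Fraction of records the rule "treat if x > t" gets wrong.
        return sum((x > t) != y for x, y in data) / len(data)

    # "Train" a one-parameter model: choose the threshold that minimizes
    # error on the pooled, majority-dominated training set.
    threshold = min(range(30, 80), key=lambda t: error(train, t))

    for group in ("A", "B"):
        print(group, round(error(sample(group, 2000), threshold), 3))

Run as written, the learned threshold lands near group A’s cutoff, so almost all of the misclassifications fall on group B – the model is “accurate” overall while failing the very group the training data under-sampled.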

What will the impact be on doctors and other health-care workers?
Doctors might find themselves freed from repetitive tasks and able to spend more time with patients. Some technicians might find that computers have taken over their work. Other frontline workers – nurses, paramedics – may see their roles change in unexpected ways.

What will we do about unforeseen consequences?
In the near future, researchers expect machine-learning algorithms to empower doctors and patients to make better decisions; the algorithms won’t make decisions themselves. But beyond such limited predictions, nobody really knows how far and how fast artificial intelligence will develop or how it will change society. Gibson believes we should be preparing for big changes, not incremental ones.

“We ought to think of this more as a disruptive, revolutionary technology and not find ourselves surprised five years down the road if we are too passive about it,” she says. “It’s not about raising the alarm just for the sake of raising the alarm. It’s about moving forward with intention.”
