The Robot Will See You Now: U of T experts on the revolution of artificial intelligence in medicine

I’m Frank Rudzicz from Toronto Rehab and the University of Toronto Department of Computer Science. I’m very pleased to be moderating this event on the intersection of artificial intelligence, health care, and bioethics. Before we begin, I want to say thank you to the organizers and the sponsors: Samantha Sandassie of Age-Well, and Nina Haikara and Orbelina Cortez-Barbosa of Computer Science, all contributed a great amount to the organization of this event. In fact, this was originally going to be a small event for the students and postdocs within Age-Well, but it’s since grown because of the interest in this intersection of topics. So what I’m going to do is say a few words first; then I’ll introduce the panelists in order, and each of them will say a few words from their own perspectives on this intersection of health care, technology, and ethics; and then we’ll have open discussions
thereafter. To some extent, this conversation is taking place in the context of advances in artificial intelligence in other disciplines, most notably with regard to automobiles that can now drive themselves. As soon as companies like Google and Tesla Motors started releasing data showing that self-driving cars could be safer, cheaper, and more effective than human-driven cars, one of the first questions was whether professional drivers were even going to be necessary anymore. Simultaneously, there has been a push from both academia and industry for increased use of artificial intelligence in health care. This has also stirred the public imagination to some extent: across popular media there are stories about AI systems that can support, almost to the point of replacing, human doctors.
And interestingly, a few weeks ago Intentions Analytics released a survey of about two and a half thousand Canadians, in which 26 percent of us said that we would prefer to have an automated system replace our boss; we think automated systems would be more ethical and more effective leaders than our bosses. Part of this might be the fact that we don’t like our bosses very much, but part of it is also this new phenomenon of putting a lot of trust into computer systems. So that’s the general public, but what do the experts think? So
Frey and Osborne of Oxford wrote a very well-researched study in 2013, which suggests that medical practitioners are actually among the workers least at risk from automation: only a two percent likelihood, according to these researchers, of complete automation of the health-care industry within the next 20 years. Other jobs are much more at risk, professional drivers for example, as illustrated in this interactive page by the BBC. By contrast, the World Economic Forum put out a report in January of this year called “The Future of Jobs: Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution”, which basically refers to the advent of artificial intelligence. We have this graph which shows that they expect that, although the skills required to be a health-care professional won’t really change very much in the next five years, the jobs outlook for individuals in health care, compared to other industries, is more at risk. So people will start losing their jobs, or parts of their jobs, due to automation. And Vinod Khosla, a very famous entrepreneur and co-founder of Sun Microsystems, has been investing heavily in startup companies that are doing artificial intelligence to support the decision process; he owns, or is part of, at least twenty companies in this space, and he suggests that technology will replace eighty percent of what doctors do. The point is that there is some difference of opinion among experts as to whether these systems will actually replace humans. But even if machines don’t completely replace doctors within 20 years, maybe they should. So I was told that in order to have a lively panel I
should say something inflammatory or controversial to start, so here it is. To some extent I’m playing the devil’s advocate here, but there’s a lot of evidence that humans and other primates are just not very good at handling information. There are studies showing that patients have a lot of difficulty communicating their symptoms to doctors, and another study shows that nearly half of American adults have a great amount of difficulty understanding or acting upon health information from their doctors. So this isn’t entirely the doctors’ fault, but it is an indication that there is some interaction that could be optimized. Humans also don’t necessarily have the best possible brains: our memory fails us, our skills become obsolete, we have various limitations on our time, and we have lots of cognitive biases that machines don’t have, including the recency bias. In fact, there’s a recent study showing that diagnoses can correlate very highly with increases in advertising or media exposure for certain kinds of diseases: doctors who see a disease being referenced a lot in the media are more likely to attribute a set of symptoms to that diagnosis. So we are susceptible to mistakes that result in misdiagnosis. In fact, a study by Winters et al. showed that misdiagnosis results in the death of over 40,000 patients every year in intensive care alone in the United States. Beyond that, non-fatal diagnostic errors can also be very expensive for institutions and individuals who have to deal with malpractice claims. So where do these diagnostic errors come from? Graber et al. showed that in a hundred cases involving internists, 65% of diagnostic errors were due in part to system-related failures: things like poor processes, teamwork problems, and miscommunication. But 74% of these diagnostic errors came down, at least in part, to cognitive errors by the doctors, usually involving prematurely closing a file. Obviously, some cases can involve both system-related and cognitive factors.
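Note that 65% plus 74% exceeds 100%, which is only possible because a single case can fall into both categories; the minimum overlap follows from inclusion-exclusion. A quick sketch of that arithmetic, using the Graber et al. percentages quoted above:

```python
# Inclusion-exclusion on the Graber et al. percentages quoted above:
# 65% of diagnostic errors involved system-related failures,
# 74% involved cognitive errors; both categories can apply to one case.
system = 0.65
cognitive = 0.74

# P(system OR cognitive) can be at most 1.0, so the overlap
# P(system AND cognitive) is at least system + cognitive - 1.
min_overlap = max(0.0, system + cognitive - 1.0)
max_overlap = min(system, cognitive)  # one category could be a subset of the other

print(f"overlap is between {min_overlap:.0%} and {max_overlap:.0%}")
```

So at least 39% of those hundred cases must have involved both kinds of failure.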
Another study hammers this point home. It studied diagnostic care among surgeons: the researchers provided complete, detailed case reports and asked, should the patient in this case report have surgery? 50% of these top surgeons said yes and 50% said no, which is basically no better than a coin toss. Moreover, when the same doctors were presented with the exact same information just a few months later, they completely changed their minds about what to do next. That is not an ideal situation when doctors are supposed to be making decisions based on facts and medical knowledge; they are not coins. Bennett and Hauser actually showed that machines might be able to do better. They compared patient outcomes between doctors and a sequential decision-making algorithm, using 500 random patients. They estimated that the procedures proposed by the AI cost less than half as much as those proposed by humans and, more importantly, given what these researchers knew about these cases, the AI processes resulted in 50% better outcomes than the human decisions. Along similar lines, an analytics company showed CT scans of lungs to its in-house artificial-intelligence software and to four top radiologists to diagnose cancer, and basically the humans made more mistakes: they had a false-negative rate of 7% to the AI’s 0%, and a false-positive rate of 66% to the AI’s 47%. So the point is that machines can do better than humans, at least in some very limited domains. They can even improve on human performance for some tasks, but realistically, in the very short term,
such systems will probably only be used in very limited domains. Here are two examples. Modernizing Medicine, on the left, implements basic, rudimentary information retrieval over the data of about three and a half thousand providers and about 40 million patients, in order to recommend tests, drugs, and therapies; but this whole system is very deterministic, there’s not much actual decision-making going on, it’s basically just a fancy index or look-up table. And RP-Vita, on the right, is a collaboration between InTouch Health and iRobot, the makers of the Roomba vacuum-cleaning robot, which recently received FDA approval. It includes some speech recognition, some case-based reasoning, and some automatic navigation, all of which require artificial intelligence to some degree, but basically the system is not much more than Skype on wheels. That’s part of the reason why these systems are approved for marketing and for use: they don’t really go that far beyond what’s already out there. But experimental AI of the type that we do in the lab can make lots of mistakes. In fact, we expect modern AI to make mistakes to some degree, because the input that modern artificial-intelligence systems have to deal with is usually full of difficult decisions: very noisy data, or very ambiguous input. And because there’s so much possibility for error in these modern systems, there are institutional barriers to their actual use. If we actually want to sell and use these systems in the real world, they have to pass federal inspections and approval processes. In
the United States, the Food and Drug Administration is basically responsible for all medical devices; you have to go to them if you’re trying to get your medical device onto the market, and other jurisdictions like Canada have very similar systems. Apparently, over 99% of new medical devices proposed for the market are relatively fast-tracked through something called 510(k): basically, if you can prove that your device is substantially similar to another device already on the market, you can get your own device out relatively quickly.
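As a toy illustration, this regulatory routing, the 510(k) shortcut plus the premarket-approval classes the talk goes on to mention, might be sketched as a single decision function (the function and its names are hypothetical, not any real FDA interface):

```python
# Toy sketch of the regulatory routing described in the talk (hypothetical
# names, not a real FDA interface). A device substantially similar to one
# already on the market can take the faster 510(k) path; otherwise it needs
# the more laborious premarket approval, under which the FDA assigns a risk
# class: I (e.g. dental floss), II (e.g. acupuncture needles), or
# III (e.g. a heart valve).

def regulatory_path(substantially_equivalent: bool, risk_class: int) -> str:
    if substantially_equivalent:
        return "510(k) clearance"
    if risk_class not in (1, 2, 3):
        raise ValueError("risk class must be 1, 2, or 3")
    return f"premarket approval (class {risk_class})"

print(regulatory_path(True, 2))    # the ~99% fast-tracked case
print(regulatory_path(False, 3))   # e.g. a novel high-risk AI system
```

A novel AI system with no precedent on the market falls into the second branch, which is exactly the problem described next.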
Unfortunately, things won’t be so simple for artificial-intelligence systems, for which there’s not really any precedent on the market. Those kinds of systems have to go through a much more laborious process called premarket approval, in which the FDA basically decides whether yours is a class I device, which is something like dental floss; a class II device, which is something like acupuncture needles; or a class III device, which is something like a heart valve. To deal with the problems a lot of companies are having in getting their devices approved by the FDA, IBM has been lobbying for years to get Congress and the Senate to allow Watson to be used in health-care settings; Watson is basically the AI system that won Jeopardy! about five years ago. The idea is that Watson should be allowed because the doctor is still the person who’s making the final decision, so we shouldn’t have to put a lot of regulation on these artificial-intelligence systems, because each is only a support, like a chair. All this lobbying has actually paid off to some extent: there’s a bill called H.R. 6, the 21st Century Cures Act, which passed the House by a huge margin last year, has since been read twice in the Senate, and is going on to committee now. If it turns out that the USA actually does make it a lot easier for companies like IBM to get their systems into the public sphere, this is where we have a lot of new ethical decisions, which we’ve never had to deal with before, suddenly to deal with.
As an example, this is one type of intelligent system being developed at the Toronto Rehabilitation Institute. This is the home lab, which is basically a re-creation of a one-bedroom apartment in which engineers and scientists can test out technologies that are designed to improve quality of life, especially for older adults. In the ceiling is something called a fall detector: there’s a camera in it that watches you every moment of the day, and the AI behind the scenes tries to react if it thinks that you have fallen. If the AI does detect a fall (maybe you had a stroke, for example), the system can ask you, with speech, if you need help, or automatically call an ambulance for you. There are some issues with this. If such a system is recording you at every moment of the day, no matter what you’re wearing or not wearing, it might have access to your personal health information, like whether you’re on some kind of medication that might have precipitated a fall. Maybe it compares the way you have fallen against video from millions of other people, in various states of undress, to tell the difference between a stroke and a fall due to a loss of balance. So if the system is to learn and improve, who has access to the data?
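The detect-then-respond behaviour described above can be sketched as a simple decision step (a hypothetical skeleton: the confidence threshold, prompt, and escalation policy are illustrative assumptions, not the actual Toronto Rehab system):

```python
# Hypothetical skeleton of the fall-response logic described above.
# The confidence threshold, accepted replies, and escalation policy are
# all illustrative assumptions, not the actual Toronto Rehab system.

def respond_to_frame(fall_confidence: float,
                     ask_if_help_needed,
                     call_ambulance) -> str:
    """Decide what to do given the detector's confidence that a fall occurred."""
    if fall_confidence < 0.8:           # assumed detection threshold
        return "keep monitoring"
    reply = ask_if_help_needed()        # speech prompt: "Do you need help?"
    if reply in ("no", "i'm fine"):
        return "stand down"
    # No answer (the person may be unconscious) or an explicit yes: escalate.
    call_ambulance()
    return "ambulance called"

# Simulated interaction: a confident fall detection with no verbal reply.
result = respond_to_frame(0.95, lambda: None, lambda: print("dialing 911"))
print(result)
```

Every branch of even this toy version touches the questions that follow: the threshold decides accuracy, the prompt records speech, and the escalation shares data.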
How accurate must a system be before it’s acceptable for use in the home? If a system is to recommend a course of treatment, does it explicitly weigh some measure of your own well-being against the cost to the hospital or health-care system? Who is liable? These are just some of the questions that we’re going to have to deal with, and with which the panel will deal in more depth in a few moments. I would like to introduce them to you, from left to right. Sally Bean
received a BA in Philosophy and English, an MA in Bioethics and Public Policy, and a law degree. She completed a fellowship in Clinical and Organizational Ethics in 2007 through the University of Toronto’s Joint Center for Bioethics and received a Senior Ethics Fellowship with the Trillium Health Centre. Now she’s an ethicist and a policy adviser in Sunnybrook Hospital’s Ethics Center, where she provides clinical and organizational ethics support for patients, families, staff, and volunteers at the hospital; she’s also a member of the University of Toronto Joint Center for Bioethics, where she teaches a graduate course. To her left, Michael Brudno is the Director of the Center for Computational Medicine at SickKids and an Associate Professor in computer science at the University of Toronto. He has multiple degrees, mostly in computer science, from Berkeley and Stanford, where he worked on genome alignments, before becoming a postdoc at Berkeley and then a visiting scientist at MIT, and then starting his position here in Toronto. Dr. Brudno’s main research is in the development of computational methods for the analysis of genomic data sets; he is the recipient of the Ontario Early Researcher Award, a Sloan Fellowship, and the Canadian Outstanding Young Computer Scientist Award. To his left is George Cenile, manager of AI Technology and R&D Development at Artificial Intelligence in Medicine, a software-engineering company in Toronto focused on the design, development, and deployment of information systems for health care, especially for cancer control. George is a technology leader with more than 20 years of experience in innovation, software development, team management, and business solutions in the medical and industrial fields. Graeme Hirst is a Professor of computer science at
the University of Toronto; his research covers a range of topics in computational linguistics, including lexical semantics, the resolution of ambiguity in text, and the automatic analysis of discourse. His present research includes detecting markers of Alzheimer’s disease in language. He is the editor of the Synthesis Lectures on Human Language Technologies, the recipient of several awards for excellence in teaching, the elected past Chair of the North American Chapter of the Association for Computational Linguistics, and the current treasurer of that association. And finally, Ross Upshur is the Canada Research Chair in Primary Care Research and a Professor in the Department of Family and Community Medicine and the Dalla Lana School of Public Health at the University of Toronto. He is the former director of the University of Toronto’s Joint Center for Bioethics and was a staff physician at the Sunnybrook Health Sciences Center until 2013. He is a member of the Royal College of Physicians and Surgeons of Canada and the College of Family Physicians of Canada, and is currently the Medical Director, Clinical Research, at Bridgepoint Health; his research includes the intersection of primary care and public health, especially with regard to the interrelationship between ethics and evidence. So, an amazing panel in all of our opinions. We’re going to walk through each of them in order, talking about their own perspectives on this intersection of health, medicine, technology, and ethics, starting with Ms. Bean.

Sally Bean: Thank you so much. I have the dubious task of trying, in about five minutes, to provide an overview of the ethics and legal
implications of AI, so I’ll try to do my best to do that. The law piece is going to focus on the liability component, who pays out, essentially, when something goes wrong, and then also on the privacy and confidentiality aspects. For ethics, I’m going to zoom in a little bit more and talk mostly about the trust piece, and this is going to tie in nicely with what some of my co-panelists will speak about in terms of the trust relationship, in particular between a clinician and their patients, and how that might be affected by the sphere in which they’re interacting. So just briefly, in terms of liability, AI really presents courts with unique legal challenges: very interesting emerging issues on which we don’t have case-law precedent, and it intersects with cyber law, which poses extremely complex jurisdictional issues.
Imagine someone in Malaysia has advertised something in Canada and there is a legal dispute, in terms of privacy for example; you’re dealing with international law, state law, all these types of things, and it could be a bit of a mess to sort out. Robots, for example, often blur the line between person and instrument, so who is the agent? Is it the surgeon using the instrument? Are we dealing with product liability in terms of the instrument? And who shoulders or assumes the risk of mistakes? When I say shoulders, I mean, essentially, who is going to be paying out if there is a tort, a civil wrong, in which someone is harmed? Who is going to pay the damages to compensate that individual? Or, conversely, are we going to say to a patient or client, you’re assuming the risk by undertaking this particular treatment? Should it be the manufacturer, or the user, the patient, the seller, the distributor, the instructor? Maybe an instructor has taught other surgeons how to use a particular robotic surgical device, for example, and didn’t teach them appropriately, or was negligent in their instructions; should they be sued? You could also think about a statutory exemption or a heightened negligence standard. This is something that’s done in the US for ski resorts, for example: because there are so many lawsuits against ski resorts, it’s actually built into the legislation that there has to be a heightened level of negligence before someone can recover. So there are lots of workarounds to consider. And at a very basic level, to claim for negligence you have to establish that there was an injury; that the individual who sustained the injury was owed a duty to either act or prevent; that there was a breach of that duty; and that there was causation, that the breach caused that injury. That’s the very basic level, but those are the elements we’d have to prove. So
okay, if we’re in an area where there’s not a whole lot of legislation or precedent that we can work from, what are some of the transferable lessons from other areas? As Frank was talking about a little bit earlier, there’s been a lot in the news in terms of autonomous or driverless cars, and it really prompts the question of who is the driver, in terms of the liability aspect in particular. Very recently, in the past couple of months, the US National Highway Traffic Safety Administration issued a clarification to Google that it will deem the computer software to be the driver. This was a pretty big deal when it came down: essentially it means that they can proceed and go forward, though of course it most likely does have liability implications. There has also been a lot in the popular media intersecting with the ethics piece: how should driverless cars be programmed to make life-and-death decisions? You might have seen a headline that says something to the effect of, should driverless cars be programmed to kill? They give the scenario of someone coming up to a stopped car; they can’t stop in time; there’s a group of six people on the left, a baby carriage on the right; who do you hit? Should you go for the baby, because it’s one person? Should you hit the six people? These are very interesting ethical conundrums that many of you who have a background in ethics will recognize, and they are important questions, as is the notion of how a car should know when to break the law, anticipating those pieces. There are also lethal autonomous
weapons systems. This is something very interesting; certainly the most development is occurring in the USA, the UK, and Israel, but it gets to the piece of who decides and acts. You’re probably familiar with drones, in which we have someone operating them, but what if there is no operator, and the system is purely autonomous? There is international treaty work occurring on this right now, to the effect that there is a need for meaningful human control over targeting and engagement decisions: essentially, there needs to be a human pushing the button to say, yes, let’s proceed. So it’s very interesting in terms of whether there are activities that are inherently human, so to speak, that require that momentary or instantaneous judgment and insight. Another area that I think we can look to is telemedicine and medical outsourcing, an area in which I’ve done a little bit of research. If I’m getting an X-ray done here and my doctor sends it somewhere in India, for example, to have the X-ray read because it’s the middle of the night and they couldn’t get a radiologist to read it, what are the implications of that? How trustworthy are the individuals and the system supporting that? Do we have an appropriate information system that’s sending my personal health information to India, and what’s the agreement there? Those are certainly the privacy and confidentiality concerns that really underlie a lot of the issues we’ll be talking about today. And of course, there’s promoting patient safety: is the quality of whoever is going to be reading it in another jurisdiction going to be comparable to what I might be able to access here? So, shifting briefly into the ethics realm, I’m really going to focus in on
the trust element here. This kind of builds on the telemedicine notion, and it’s something that I’ve argued: to the extent that medical technologies generally (I was saying telemedicine, but we can certainly use this in the AI context) shift or change the traditional face-to-face point of care, telemedicine, robotics, etc., they necessarily alter the context of the traditional physician-patient fiduciary relationship. Now, I don’t mean to be alarmist about this and say that’s necessarily a bad thing, I’m certainly not saying that, but it does alter that context, so I think we need to follow up and ask questions about what that means. Does the shift in context affect the relationship, and can we demonstrate that it does? Are any impacts positive, negative, or neutral? How can we mitigate any potential negative impacts, if they do in fact exist? And what’s our comparator? Frank was getting at this a little earlier in talking about the professional race-car driver: do we want a technology that has to be better than the best surgeon? What’s our comparator in terms of saying whether or not this is something we should proceed with? And what lessons are transferable from other areas, so again, telemedicine, driverless cars, what can we learn from other areas? As for the trust piece, we could easily talk about this for quite some time on its own, but suffice it to say that there’s really not a
shared definition of what trust means, but there are a few areas of convergence in the literature, more so in the social-science literature. Those argue that trust is the outcome of behaviour, something that is earned, essentially, and contingent upon the context; that it functions in relation to the person or object in which it is placed; that it must be continually maintained; and that it involves a degree of vulnerability and reliance. A great example of that: if you’re in a strange city and you ask someone for directions, you’re trusting that they’re going to tell you the right thing, the right information, or that they in fact know how to direct you. So, lots of interesting elements in terms of trust. There’s also interpersonal trust, which of course can be between individuals, and in the medical realm, positive correlates of a good interpersonal therapeutic relationship include treatment adherence, a longer relationship with your physician, and perceived effectiveness of care: lots of good things if you have a good trust relationship with your physician.
Conversely, there are lots of negative correlations if that doesn’t exist: you might have lower rates of care-seeking, preventive services, and surgical interventions, and I’m sure if you think of yourself or family members who have had a negative interaction with a physician, you’ll say maybe I won’t go back, or maybe I won’t follow up, and that type of thing. It really does significantly impact individual decision-making. And there is this broader system- or organization-based trust; this is broad in nature and refers to a collective. Often when you see the polls or newspapers, they will ask, do you trust the Canadian health-care system? They’re talking broadly about this system- or organization-based trust. Now, the interesting thing in some of the research that’s been done is that when you’re asked that broad, abstract question, you invoke notions of individual clinicians. So if someone asks me, Sally, what do you think of the Canadian health-care system? I’m going to think about my relationship with my GP, my family doctor, and then extrapolate that, use a little bit of inductive logic, and say, that’s pretty good, I have a good sense about this system. That means the interpersonal relationship has very profound impacts, if it is in fact that closely intertwined, so I think that’s a really interesting connection to think about. It’s so important because it really influences our perception at the system level too. What about transforming this
fiduciary relationship? It could be that we’re transforming it into a more contractual or quasi-contractual relationship, and this is something you also see a lot in the media in terms of the share economy, with the Uber-type models (just today I saw in the paper a gym that’s pay-per-use) and these very short-term, non-long-term-contract deals that are more episodic in nature. By a contractual relationship in this sense, I just mean one that is not ongoing: it holds the parties to acts or forbearance, so to act, not to act, or to refrain from acting, as agreed upon. A shift from a stewardship to a contractual relationship certainly could impact the nature of health care and basic conceptions of trust if it’s viewed as just a one-off, nothing enduring; it does shift the focus to a more episodic encounter versus an ongoing relationship. And again, this is not a slippery-slope scenario at all; it’s just one of those things we have to think about and say, you know, is that good or bad, and how can we mitigate any potential negative implications if it does go that route. So what are some of the final additional
considerations that I’ve been thinking about in terms of this? Examining, of course, emerging jurisprudence on the role of AI in medicine: lots of really interesting case law is coming out, mostly in the US so far, but it certainly can give us a sense of where things are going to go here in Canada. Considering how we can foster scientific literacy and accessible yet balanced discussion of the risks and benefits of AI in medicine, without fear-mongering: a lot of the popular-media press (Frank was showing you many of the headlines) certainly creates this alarmist notion that, oh my goodness, we’re losing our jobs, it’s going to be, you know, the Jetsons, and Rosie will be doing everything, and that type of thing. It’s very concerning how it’s framed, and I don’t think that’s a productive way to engage the public; I think we have to have a more balanced yet accessible way to discuss this with members of the public. Examining how the technology might alter the context of the traditional face-to-face physician-patient relationship: think about what those impacts are, be quite honest about what they might be, and certainly about how we can mitigate any potential negative impacts. Identifying best practices for enhancing transactional presence: this is something that appears in the telemedicine literature, in terms of positioning cameras so that if a physician in, we’ll say, Toronto is looking at a patient in rural northern Ontario, there’s still a good encounter. Even the term transactional, you know, is very episodic in nature, so I find it interesting that that term is used. Conducting research on the role and impact of trust in the AI-medicine context: I think this would be a great area to partner on and think about. And how can we support the productive integration of ethical analysis within these developments in medicine, so that it is not viewed as antithetical or oppositional to them, thinking about it together instead of waiting until things have gotten so far that ethicists such as myself are saying, wait a second, did you think about this? So, really, to produce a more collaborative endeavor. And I will stop there for now. Thanks.

Michael Brudno: So I won’t do slides. I’ll just tell you a little bit about the work that we do at the Center for Computational Medicine at SickKids, and
also connected to the broader area of computer science and how you use of
technology is already pervasive the use of computer science already really
pervasive in medicine and it really applies to all areas within computer
science so when we start talking about analyzing genomics genomic data we start
thinking about algorithms that can handle very large data sets, we have to
think about databases to store the patient data and how to structure these
so that information can be retrieved seamlessly, we need to think about once
we collect large amount of data how we can encrypt them how we can make privacy ware decisions until a field of security comes in, when you collect very
large data sets they can be samples of the video in there or it could be images,
for make medical imaging X-rays, MRIs and computer vision is very important field in order to help people, help one identify what are the most salient features of an image and this goes on and on so the field of computer, computational medicine really tries to take all of these aspects of computer science and somehow gel them together so that they’re useful in the practice of medicine and
you think the way they’re useful today is not so much you know we can look
forward to this concept of a computer doing the doing the diagnostics and
prescribing you treatment but today these computers are already very much
used with a human in the AI loop, where the human actually enter some information
into the computer and gets suggestions that can then be applied to the case
where the human doctor is actually the person who is responsible for all of the
decisions and that person is the one who bears all the legal risk but the
computer should be able to help: it should be able to save time, or just
help them get to the right answer where it may not
be obvious at first glance. So, in our particular case, we work in the field of
rare genetic disease. With rare diseases, you might at first say, well, who
cares about rare diseases, they’re very infrequent; maybe one in
10,000 people has something like this, and the overall
cost to society may be reasonably small, although of course there’s a huge cost to
the individual or the family that is involved with this rare disease. But
actually if you add up all the rare diseases together you end up with a
really large fraction of individuals who are impacted. So the total prevalence of
all rare diseases together is somewhere like three or four percent and that’s
actually quite significant: it means that a significant
fraction of people will be impacted by a rare disease or will be impacted through
one of their loved ones having a rare disease. The reason is that the
total number of rare diseases is extremely large. When we are
working with rare diseases, we are very often dealing with the
case of not being able to diagnose the disease, or not being able to diagnose
it quickly, so patients refer to something called the Diagnostic
Odyssey, and the reason it’s called an odyssey is that, just like the original
Odyssey, it may take 10 years. I think the average expected time for a rare
disease patient to get a diagnosis is on the order of seven years. This is the
time from when the first symptoms appear and the patient first goes to their
doctor: the doctor refers them to another doctor, that doctor refers them to a third doctor, they get tests done at each of these doctors, they get seen multiple times, and they may get a misdiagnosis in the process. They will be
told, you have this disease, and then after a while the symptoms
just stop matching, so they will go back, conclude that clearly it wasn’t that, and keep going. They get hundreds of thousands of dollars’ worth of tests
before they actually stumble upon the right answer. And the key thing for
finding the answer to these rare diseases is that most of them are genetic.
Meaning, it’s something in your DNA that causes you to have this disease. So the key to finding the diagnosis is to
identify what it is in the DNA that made the body this way, and to find this
we often need to have two patients with the disease. So we need to be able to say:
here’s one patient, here’s another one, they have the exact same disease or same
constellation of symptoms; now let’s see what’s common in terms of what’s
broken in their genetic code. This requires us to be able to take
two patient descriptions and match them, to say that they actually have the
same or a similar set of symptoms. How would we do this? Well, if we
have free text descriptions it actually becomes very, very difficult to do the
matching, because free text is an extremely lousy way of explaining what is
actually happening; it has lots of information, but that information is very hard
to match. When we looked at actual medical records at SickKids, we
found 20 different textual ways that somebody could say that the patient had
congenital malformations, which is basically just saying that they were
born with something physically misplaced on the body. They
could say congenital malformations, congenital anomaly, multiple malformations, or multiple anomalies, and then you get into all the possible misspellings, all the
possible abbreviations, and all the possible ways of combining all
of those things. That is sort of not surprising, but how do we go from this
multitude of terms down to a reasonable description that we can
actually match? The key turned out to be effective software that allows you to
start typing things that get matched to standardized terminologies, software that is
synonym-aware and that actually uses some machine learning to map the
synonym space onto the standardized terms that are used. This
problem of having multiple diverse terms for the same
thing reminded me of an example that Sally brought up: on a driver’s exam,
somebody asks, okay, you’re driving down the street; in the left
lane you see a crowd of people, in the middle you see a big truck
parked, and on the right a baby carriage. What are you going to hit? If
the person hesitates and starts thinking, the answer is: the brakes. This
problem of terms meaning completely different
things comes up with medical text a lot, when you have phrases like, the
patient had trouble spelling. What does that mean? It could mean
dyslexia, but it is actually very difficult to map the concept to the term.
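To make that concrete, here is a minimal sketch of the kind of synonym-aware normalization described above; the synonym table, the cutoff, and the helper name are hypothetical, and real systems map onto controlled vocabularies such as the Human Phenotype Ontology rather than a hand-written dictionary:

```python
import difflib

# Hypothetical synonym table mapping free-text variants to one standard term.
# Real systems map onto controlled vocabularies (e.g., the Human Phenotype
# Ontology); these entries are illustrative only.
SYNONYMS = {
    "congenital malformations": "congenital malformations",
    "congenital malformation": "congenital malformations",
    "congenital anomaly": "congenital malformations",
    "multiple malformations": "congenital malformations",
    "multiple anomalies": "congenital malformations",
    "small head": "microcephaly",
    "microcephaly": "microcephaly",
}

def normalize(phrase, cutoff=0.8):
    """Map a free-text phrase to a standardized term, tolerating
    misspellings via fuzzy matching against the known synonyms."""
    key = phrase.lower().strip()
    if key in SYNONYMS:                                   # exact synonym hit
        return SYNONYMS[key]
    # Fall back to the closest known synonym (catches typos).
    close = difflib.get_close_matches(key, list(SYNONYMS), n=1, cutoff=cutoff)
    return SYNONYMS[close[0]] if close else None

print(normalize("Multiple anomalies"))   # congenital malformations
print(normalize("congenitall anomaly"))  # fuzzy match despite the misspelling
print(normalize("trouble spelling"))     # None: too ambiguous to map
```

Fuzzy string matching alone only catches surface-level misspellings; the machine learning mentioned above is what lets a real system cover abbreviations and genuinely different wordings of the same concept.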
In a recent study, we were trying to predict what disease a patient
had based on the textual notes in the electronic health record. For some
diseases it is very easy: for Type 1 diabetes, if the word insulin appears anywhere in the medical record, you are
almost a hundred percent certain that the patient has diabetes. On the other
hand, for dementia, the accuracy was, I think, on the order of seventy percent
for predicting whether a patient had dementia based on the medical record,
because the description of dementia can be so vague. So getting these actual
symptoms into standardized terms that we can then do the matching on was a key
step in enabling something called the Matchmaker Exchange, which is a consortium of multiple sites all collecting data about rare disease patients and then
exchanging this de-identified data, sort of saying: I have a patient who looks
like this, they have microcephaly, meaning a small head, and epileptic
seizures and a whole bunch of other things, and a mutation in this gene in
their genome; do you have anybody similar? And then the other system could respond
with the answer and say, yes, we do, and then the actual clinicians get into
the picture and start talking to each other and decide: is that a valid
match, is that really the diagnosis for these two patients? The system has
already been used to diagnose, I think, on the order of 50 new diseases, so
it’s really being used successfully around the world right now. As a final
part of what I want to talk about, I wanted to connect this whole
area of using genetics to help identify the disease to the area of privacy and
to the question of whether you consider your genome to actually be
information about you that’s private that you would want to keep sensitive.
Just a quick show of hands: how many of you think that the genome is something
that’s private and sensitive and that you wouldn’t want shared? About
half the room. Well, there are a
couple of interesting studies from the last
five years that question that whole paradigm. One was a study in
Iceland, where a company, deCODE Genetics, decided to figure out the genomic
sequence of everybody in the country. The problem was that they didn’t just
figure out the genome of everybody in the country; it was also
everybody who had ever lived in the country. They got permission from some
fraction of the population, a significant fraction, I think on the order of 30 or 40 percent, to sequence their genomes and identify some key biomarkers in
their DNA, but then for everybody else they used publicly available genealogy
records in order to figure out what their genomes were. It kind of makes sense:
if I have the DNA of your parents and your kids, I can probably figure out your
DNA. They even figured out what the genomes of people who lived a long time
ago would have looked like, people from whom they never got any consent. The
Iceland Supreme Court actually shut them down, saying that this is not a
study that you can ethically run because you didn’t get consent. But they did
get consent from everybody from whom they actually took DNA; for everybody else
they just imputed this information. So this was a big controversy, and
it’s really not clear how to handle that. For example, in the case of
identical twins, which is the simplest case, can one twin consent
to the broad public sharing of their genome if the other twin
does not, given that the information is obviously going to reveal
everything about the other one? The other case was a recent paper
that showed the power of computer science. The NIH posted aggregate
information about genetic diseases online, which was something like: at this
position in the genome, 52% of people with the disease had A’s
and 48% had T’s, but among those without the disease, 50% had A’s and
50% had T’s. So, very broad statistics: for
something on the order of a million positions across the genome, what was the
distribution in people with the disease versus without the disease. A graduate
student at UCLA figured out that if they knew the genome of an
individual, they could test whether that person had participated in the study, by
looking for that shift across all of the positions. The information at every
single one of the positions is tiny, but once you get over a million
positions, it actually becomes significant enough that you can identify whether
somebody participated in the study or not. And that led the NIH to take all of
that data offline, which on the flip side caused all sorts of issues for
scientists wanting to study these changes to identify the causes of disease: they
could no longer get access to it. So there needs to be a balance struck between
the privacy of individuals and the ability of science to go forward, and that’s
a balance I don’t think we have completely worked out in the field
of genomics. Right, thanks. Thank you. I’m going to talk to you about some examples of actual, real, working expert systems, or systems based on AI, in medicine. I’ve been doing this for quite a while: in 1988 I did an AI fellowship with Digital Equipment Corporation, and essentially for the last 15 or so years, at MDS labs and at AIM, my experience has been in the medical area. So the first thing I want to talk about is what happens when you go to the doctor and get a blood sample taken. It goes to a lab, and historically a lab technician would run the analysis and decide whether it’s a good sample: maybe it needs a repeat because it is at the limits of the analyzer, or there is an underlying condition that they have to run further tests to confirm before they send it back to the physician, and this takes some time. MDS had an automated laboratory, which means there were conveyor belts, and a router takes each sample to the various analyzers, and when the analyses are done there is an expert system that actually reviews this data and decides what to do about it. This system processes 60,000 samples a day just in Ontario, and the decisions are really being made by an expert system; not many people realize this. So that’s one example. Certainly, to process 60,000 samples a day manually you would need a lot of people and a lot of machines. The next one kind of fits into the last talk: electronic cancer reporting. Typically, instances of cancer have to be reported to a cancer registry, in jurisdictions all over the world: North America, Canada, the USA, Australia. And
certain types of cancers have to be reported and other types aren’t. For example,
in Australia, squamous cell and basal cell cancers of the skin generally aren’t
reportable, because people are out in the sun a lot, so these happen frequently. However, squamous cell cancers in the mucosa
under the nose or on the lip are reported, so there are some complex rules as to what
kinds of cancers have to be reported and what don’t. The people that do this kind of
thing are called cancer registrars; they go through quite a bit of training to be
able to read and interpret path reports and decide whether that
patient is a cancer patient who should be reported to the registry.
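As an illustration of the kind of jurisdiction-aware rule logic just described, here is a minimal sketch; the histology names, site labels, jurisdiction codes, and the function itself are simplified stand-ins, not actual registry policy:

```python
def is_reportable(histology, site, jurisdiction):
    """Decide whether a diagnosed cancer must be reported to the registry.
    Illustrative rules only; real registries maintain far larger rule sets."""
    skin_histologies = {"squamous cell carcinoma", "basal cell carcinoma"}
    mucosal_sites = {"lip mucosa", "nasal mucosa", "oral mucosa"}

    if histology in skin_histologies:
        # Mucosal primaries of these histologies are still reportable.
        if site in mucosal_sites:
            return True
        # Common sun-related skin cancers are exempt in this jurisdiction.
        if site == "skin" and jurisdiction == "AU":
            return False
    return True  # default: report

print(is_reportable("basal cell carcinoma", "skin", "AU"))           # False
print(is_reportable("squamous cell carcinoma", "lip mucosa", "AU"))  # True
print(is_reportable("adenocarcinoma", "colon", "CA"))                # True
```

A real registry system would evaluate such rules against standardized codes (for example ICD-O-3 topography and morphology) rather than plain strings.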
Unfortunately, reading a free-text cancer report takes time: it might take five or eight minutes to actually read a report and decide whether it’s a cancer. Our system, called E-path, which stands for Electronic Pathology, actually scans these reports, and it does so in the background. There’s an NLP component that interprets free text into a standardized nomenclature, ICD-O-3 coding. There are lots of ways that pathologists say things, and it gets even worse when they get creative with tables and tabs:
there may be a tumor summary with a whole bunch of information, and the system has to know that that information really belongs to the tumor because it is in the tumor table; the system is context-sensitive, and it identifies negation and so on. Then a rule system examines the codes in order to classify the report, so at the first level a cancer report is either negative
or positive. We also extend this to central nervous system reports, where
the classification goes a little deeper: whether it’s a history of cancer, metastatic, positive previously known, or positive new. And it does this in real time, dozens of reports at a time. Roughly 14 million reports have been processed in the US alone through the system; cancer appears in about 10% of path reports, which
means that ninety percent do not have to be reviewed by a human, and the 10% that do have to be abstracted, so humans have to read them and do other things with them. As for the accuracy of the system: in a recent study we did with the National Cancer Institute across various sites, we see that the sensitivity and specificity
are quite high in terms of identifying whether it’s a reportable cancer or not, and
we know for a fact that humans do not perform at this level, because when we do
QC: a QC run consists of 1500 reports, each arbitrated by a human registrar who decides, and then our system decides, whether it’s a cancer or not. When there’s a disagreement, we look at it and ask: is the system wrong, what happened? Almost all the time, the human misapplied the rules or made some kind of incorrect interpretation, so it turns out the system is consistent and it’s right. We
don’t have a measure of how accurate humans are, but we know it’s not this high. And in fact we have these sites installed
all over the place, including Australia, which is not on this map. So, for example, in Ontario, if you get a pathology report, if they take a piece out of you, chances are our system will be reading that report and deciding whether it goes to the cancer registry or not; the same with a sample of your blood. Another system, which we call RCA, Rapid Case Ascertainment, does more than decide whether it’s a cancer or not: it actually converts text into standardized data. The data is searchable for studies and trials; it is used for automated candidate identification and notification; and it can work with historical as well as current reports, so if you have a database of 10 years
worth of stuff, you can pump it into the system and it will actually convert it. Review of the system in the field shows several different ways it can be used, and it significantly improves
the throughput of reports for things like clinical trials. The system works something like this: here’s an example of a pathology report, with tons of information expressed in all sorts of ways, and the RCA system will extract it into a database based on a checklist. Different sites of cancer have different data elements required by the
checklist. For example, this looks like a breast report, and the synoptic elements for a breast report are defined by the College of American Pathologists, so generally the
data has to be filled in as such. Alluding to the former speaker’s comment about
the best way to do it: having a system that automatically
converts the data to standard forms while the pathologist is typing would be great, and people have been trying to do that, but you have a lot of pathologists who will not use a computer
system and insist on doing free-text narrative reports, so we’re stuck
with this problem for a while at least. The system also allows you to do a quick
check: if you click on any value, it shows you where it came from. The system is actually in use; here is just a schematic of how it’s used. Reports come in automatically to the registry; the synoptic converter processes and stores them; and an automated search service fires out notifications, so a researcher can get notified every day, every hour, or once a week as patients come in in real time, and
it’s up to them whether to contact the patient for further study. As an example, a McMaster University study
in conjunction with Cancer Care Ontario needed to assess the reporting of HER2, a biomarker, in breast reports over the course of two years, I
believe, and they were looking at how often these elements are reported. They
had six students and three months’ time to do this, which is not really practical.
So we applied our system to help them go through it, and the idea here is that
when the data was extracted, the students had to verify that it was correct;
even without training in cancer registration, because the system pointed to where the text data was, they were able to verify quickly, and it reduced the time required to scan a report by six
times, so they were able to complete the study. In another study, the National Cancer Institute wanted to find out how often TNM values are reported for cancer staging: T stands for tumor, N stands for nodes, and M for metastasis. They were slowly moving from a collaborative staging system to TNM staging, so they needed some data on how often this gets reported. The system
does something like this: it sees that chunk of text over there and figures out what it’s talking about: the primary stage, the number of regional lymph nodes, and so on. This allowed them to go through a number of reports and identify how often the data is reported. In some cases it’s not too bad; for example,
for breast metastases it was 46%. So they were able to
determine, in real life, how often this data is actually in the reports and whether it
makes sense to use these reports for TNM staging. Emory University had a clinical trial; they had to accrue 500 patients by 2013, so they came to us, and we
implemented the system for them; they were able to put in the study requirements, so they needed males with
certain types of prostate cancer and various other things,
and they were, and this is a quote, “able to finish the study almost a year ahead of schedule”; with the system they just found the patients on the fly. On big data and machine learning, and speaking of
Watson: the US Department of Energy, and this is brand new, hot off the press, approached SEER, the Surveillance, Epidemiology, and
End Results program, which runs large-scale cancer monitoring in the US. They said, you guys have a lot of cancer data; let’s develop some machine learning software that can use that data, build models, and see what we can come up with. Millions of reports. It turns out that big
text is not big data; you need training sets. So they came to us to collaborate, using our tools to help them
build these training sets, converting the text into standardized data, and also to collaborate on the project. Let’s talk about privacy issues. Ten million path reports: I bet you not one of those patients gave consent for taking part in a machine learning study, so
there are issues there; the IRB is the internal review board that will give you permission to
use data for certain things or not, and regulations come into play. So that’s me. Thank you. (Clapping) Thank you. So I’m talking about doctor-patient communication when the doctor is a computer. Communication in clinical
settings is very difficult for people. Generally speaking, patients don’t find
it easy to talk to the doctor: the doctor is an authority figure, the situation may
be stressful, the patient doesn’t have a medical vocabulary for the message that
they need to convey, especially if the doctor’s language is not the
patient’s native language, and the message itself may be vague or uncertain.
The doctor doesn’t always seem to understand, and they talk back in medical
language. They give important instructions or advice only orally, and
they allow very little time for questions or clarifications. But doctors don’t have it
any better. Patients come in with vague and
confusing stories, they use terminology wrongly, they don’t understand much of
what the doctor says even when the doctor tries to express it in lay terms, and the
patient then forgets most of it the moment they leave. So all of this
leads to obvious problems in the quality of health care. So the question here is
whether we can improve on the situation with artificial intelligence: can we
give our computer doctors superior communication skills that will
contribute to an improvement in health care? I’ll talk about some of the challenges
in doing this. The first thing we need to think about is, well, how would a patient consult with a computer doctor
anyway? What would the interaction be like? The title of this event, “The Robot Will
See You Now”, and the picture on the posters, suggest a scenario much like
present-day clinical visits, except that the doctor is literally replaced by a
robot, some kind of robotic installation that can even perform physical
examinations. Well, that kind of embodiment is clearly
a long way off yet, so let’s think of simpler scenarios. One that we can
easily imagine for the near future omits the robot completely, and hence any
direct physical interaction: the doctor is embodied simply as AI software that’s
accessed over the Internet through a regular device like a phone or tablet or
a desktop or laptop computer. The input methods that are potentially
available would be speech, typing, and the device’s camera; the output
methods available would be speech and sound, text, and images. So what kind of
interaction can a patient have with this kind of doctor? First, speech is
pretty easy and natural for a typical patient in a real or virtual office
visit, and we can implement it for many different languages, so that the patient
can speak in whatever language is most comfortable for them. But obviously this
will require highly accurate speech recognition by the computer, and as we
all know from using Siri or Cortana or their friends, speech recognition
systems aren’t nearly good enough to support this, and they won’t be for quite
some time to come. So while we wait for that to happen, perhaps we might want to
use the keyboard and the track pad; that should make it a bit easier for the
computer to understand the patient, and the patient can input typed language,
perhaps along with some clicking of check boxes and pull-down lists.
Nonetheless, we have to remember that not everyone is fast with a keyboard, or
good enough with grammar and spelling to type more than a hundred and forty characters
without suffering from mental exhaustion. So, with either speech or typing, what
would a consultation with an AI doctor be like, and how would patients respond to such a
doctor? When it asks, what brings you here today, would patients answer in the
same way as they would with a human doctor, which can be anything from a
current problem statement, my arm hurts, to a long description of the problem and
its history and its context? Or would they talk or type to it in keywords, not
whole sentences, much as we do with Google or Siri: spots, rash, can’t sleep? Much is going to depend on
the exact form of the software and the kind of language that the doctor itself
uses, including the extent to which the doctor’s questions and statements allow
the patient to give open-ended answers and statements in response. Could a system really cope with answers to open-ended questions like, what
brings you here today? Or how does that feel? Or how’s your relationship with
your partner? No matter how much the software
constrains the patient’s responses, there will still be the problem of
occasional, probably frequent, misunderstanding: both the computer
misunderstanding the patient, and the patient misunderstanding the computer.
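One common way dialogue systems approach this is to condition the response on the system’s confidence in its own interpretation of the patient’s utterance; the sketch below is a hypothetical illustration of that strategy, with made-up intents, scores, and thresholds:

```python
def respond(intent, confidence, confirm_below=0.85, reject_below=0.5):
    """Pick a response strategy based on interpretation confidence:
    reprompt when very unsure, confirm when moderately sure, act when sure."""
    if confidence < reject_below:
        # Too unsure to even guess: ask an open repair question.
        return "Sorry, I didn't quite follow. Could you say that another way?"
    if confidence < confirm_below:
        # Moderately sure: echo the interpretation back for confirmation.
        return f"Just to check: you're telling me about {intent}, is that right?"
    return f"Understood: {intent}. When did it start?"

print(respond("arm pain", 0.95))  # acts on the interpretation
print(respond("arm pain", 0.70))  # asks for confirmation
print(respond("arm pain", 0.30))  # asks the patient to rephrase
```

The hard research problems lie elsewhere: producing a trustworthy confidence estimate in the first place, and detecting, from the patient’s next utterance, that an earlier interpretation was wrong.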
As I noted earlier, doctor-patient conversations are pretty difficult when
the parties are both human, and it can only get worse when one party, the doctor, is a
computer with inherently imperfect language skills. But one skill humans
have, which we don’t normally even notice, is the ability to detect when a
conversation goes off track, when there’s a misunderstanding, and to figure out
exactly what’s gone wrong and say the right thing to get the conversation back
on track again. So it will be all the more important that linguistically imperfect computer
doctors have the ability to do this too, and to recognize when patients are doing
it for them; models of the recognition and repair of misunderstanding have in fact
been one of our topics of research here. We also need to think about how computer
vision fits in here. The system can’t perform physical
examinations, but it can at least see things through the camera, so it can say,
please hold your arm up to the camera so that I can see the rash, for example. And if patients know that the doctor is able to use vision, then gestures will naturally go
along with their speech, as patients point and demonstrate problems using what are
called deictic expressions: it hurts here, it hurts when I move my arm like
this. For output, the software can use linguistic
conversation when it wants to, but it also has the option to display pictures
and video, and to construct clear and highly
personalized information handouts and instruction takeaways for the patient,
ones that are not just assembled from templates but extensively tailored to
the patient; that’s another research topic that we’ve worked on here. An AI
doctor as I’ve just described, software that we access on a regular device like
a tablet or laptop computer, doesn’t require any new hardware and can be
used anywhere. If and when we solve the considerable problems of speech and
language processing and interface design in general, and the AI becomes good enough to communicate more or less as well as a human doctor can, it could be as good in
communication as present-day telemedicine consultations. Can we look
forward to AI that communicates even better than a human doctor? Well, at least in
some ways, yes. I’ve already mentioned two benefits. The first was multilinguality,
offering consultations in a wide variety of languages and the second was
multimodality for both input and output. A third benefit is infinite time and
patience: there’s no need for a rushed ten-minute
consultation; an AI doctor can interact with the patient for as long as
necessary, until full understanding is achieved on both sides. So, to
summarize: handling the kind of sophisticated language and conversation
that happens in doctor-patient communication is one of the hardest
problems in artificial intelligence, but solving these problems would add a new
dimension to AI in medicine: direct communication with the end user, the patient. Thank you. (Clapping) Thank you. Good afternoon, everybody. It’s a pleasure to be here, and I represent the
community that’s to be replaced, as I’m a practicing physician. I was really struck by the claim that 80% of my work will be gone, and I’m really keen to find
out which percent it’s going to be. Interestingly enough, just picking up
on the last presentation: how many people in this room have not
entered symptoms into Google, for either a problem that they’ve had themselves or
that somebody they know has had? So two people are not Google doctors. So
it’s interesting: when I started as a physician, I was a rural family
physician; I worked in a small farming community to the northwest of
here, and anybody who knows anything about farmers knows that during harvest
season the last place they want to be is sitting across the desk in your
office, and they tend to be a very taciturn lot. In other words, back to the
whole idea of putting their symptoms into language and finding that common
interpretive space: understanding what ailed them and getting a good idea of what I
needed to do took some time, but at least after that exercise I had it in their
words, and we had an agreement upon the meaning and a way to go. What I found
interesting over the last 5 to 10 years is that patients now come to me having
already Googled their symptoms; they’ve already made their diagnosis, and
they hand their output to me and say, it’s all right, doctor, I’m sure
I’ve saved you all the work, I’ve got X. And it’s usually some completely
obscure one of your rare diseases that came up: you know, my left
ear feels warm on Tuesdays, my right eye flickers frequently, and I have this
funny crick in my neck when I move it up and down. They’ll put that in and get a diagnosis; you can put anything into Google and you’ll get a diagnosis. But they’ve got their diagnosis, and my job is just to give them
the treatment that Dr. Oz told them they need for this constellation of things. So
there’s a kind of mixed blessing to the advent of, I would say, big-data computational possibilities. Yes, in some
ways it might save time, but when I first started in practice, it was a very
low-tech environment: there were no
computers in offices, and if I wanted to find an article I had to use the old
Index Medicus, look it up, and then walk to the stacks and pull it out; I couldn’t just Google a PDF. Even so, I’m not certain that I spend any less time now, or
that any time has been saved in my day-to-day life. In fact, these
systems need to be very well specified, and they can be well specified in more
deterministic domains, but I work in primary care, which involves lots of
ambiguous language and lots of vague symptoms that may in fact turn out to be
one of the rare diseases but in their early, pre-differentiated phase all
look kind of alike. So it’s not actually saving me time: I spend more
time now diagnosing people, because they’ve already diagnosed themselves, and then convincing them
that they don’t have the disease they think they have, because they found it,
and clearly a computer has to be more intelligent than you, doctor, despite the fact that I have 30 years of clinical experience and certain
general rules of thumb and heuristics. Yes, as humans we are subject to
cognitive biases, but certain heuristics still work: common things are common,
particularly in primary care, and that kind of flies in the face of the wide
interpretive net that’s cast with search engines. So I think there are some
particular hazards, and there actually have been, I think, adverse consequences
associated with the use of computer technology for
self-diagnosis, and that’s going to become more and more common. And I joke with my residents; I say, you’re going to be replaced too. In my lifetime I’ll be
retired, fortunately; you’ll have to deal with all of the new whole-genome
sequencing, and soon, basically, you will be responsible for
your own diagnosis, so the Canadian Malpractice Association will
be out of business. Why? Because everybody’s going to be their own doctor.
They will Google, they will put their symptoms into some text system, and we will have point-of-care
technology, right? So you’ll wake up in the morning with this warm
feeling in your ear lobe and a flickering eyelid and a cricky knee and
think, I need to self-diagnose this, so you will put that into your system,
and then you’ll have a point-of-care technology that takes a drop of blood,
runs multiple sequential tests, and tells you whether the state of
affairs is good. Then you’ll go to your drive-through MRI, which will scan your whole body, and it will take all this together and tell you
on your own basis what’s wrong with you and then you’re
responsible for the results of all of the issues around the liability, the
fiduciary responsibility, since its your own equipment and you’re doing it for
yourself you will have the responsibility of it. So 80%
of my work may be gone but nonetheless the responsibility for the accuracy of
diagnosis, the accuracy of therapy therapeutic decision-making has to
reside somewhere and so what I often say to my residence is that you know
physicians are our physicians not because they necessarily are more
intelligent than other individuals in society and having taught for 30 years
I’m not sure that’s the case I think they’re many intelligent people more
intelligent than physicians, its because they’re actually willing to take
responsibility for the decisions in the diagnosis they make when somebody comes
to you as a patient yes there’s all the difficulties of mediating language and
uncertainty and interpretation but it’s the fiduciary elements, its the idea that I
will be the one that takes responsibility for making the diagnosis and how things actually work out and
it’s I find it hard to think of a regime a regime whereby responsibility is
displaced onto some into the cloud. I loved Sally’s example so you’ve you’ve done your diagnosis but the company that that you’ve run it
through is in Malaysia and they don’t have any sort of interoperability with
the regiments and Canada you’re you’re out of luck. So one thing I would make
sure of for all of your future self doctors is that you really read those
agreements that are 60 pages before you click I Agree and then use it for
yourself. So the other problem is that I think where these issues are best suited is in fields that are, so this is me, before it went into medicine I used to teach logic
and so where you have a finite field in in combinatorial a simple you know you
can reduce it to a deterministic system that’s where a lot of things can be
routine eyes and me and and they will work and they will work well. And I think
that’s the area where where a lot of judgment and clinical judgment will be
replaced one of the best things I’ve seen for example in my career is
automatic electrocardiogram reports there you know instead of me looking and
searching the ST segment depressed they actually can give you fair accuracy
whether there is a schema or arrhythmia or something wrong with the heart on
the electrocardiogram and that’s actually been useful and helpful but
because there’s only so much information than the signal-to-noise ratio is
actually been fairly well contained but in the where I work in primary care I think
it’s gonna be incredibly difficult to actually put all of that into some sort
of finite field that you can actually reduce the… I think you’ll be
successful at reducing some of the interpretive variability but nonetheless
there’s gonna be a requirement somewhere for someone to be outside of that system
to be making interpretations. So I think there’s a lot of promise, I don’t think
it’s gonna actually replace humans, I would be happy to have 80% of
my work that I choose taken away from me this long as I have some choice of which 80% it is and which 20% I retain I
think that it will not be so problematic but I think the promise is still way more than what the reality is and gonna be really happy to see what happens in the next 10
15 years to push this along whether we will actually I would love to have the
kind of software that you were just talking about, that would be awesome and
some of the stuff that you guys are talking about is actually showing where
the promise lies but I think we’re a long way from it being replacing general
physician but we’ll see I could be entirely wrong about that but after say.
Thank you. (Clapping) So we have some time for questions before we have to clear out the desks for the classes coming in after us. Some of you provided questions on the online registration form when you registered for the event, but I’m sure some of you have new questions as well. We don’t have a microphone that we can run around, but it’s a relatively small room, so if you do have a question, just call it out and we’ll try to hear you, for any of the wonderful panelists you just heard amazing stuff from.

(Question from audience): There is a company I know of from the UK called Your.MD, which is an AI app: you enter symptoms, it prompts you for more information, and it gives you a diagnosis. So far it has been producing fewer misdiagnoses than the average general practitioner, but it does have oversight by GPs as a whole to see how well it is doing. So I kind of know what your answer is, that the GP still has a role in the end, but to the rest of the panelists as well: humans are prone to say, oh, I’m far sicker than I really am, and give an inflated diagnosis of themselves, like when you enter information into a diet app. Is there any way you think a computer can judge that better than the physician can, and do you really think there can be a computer program that does all the work but doesn’t need some oversight in the end? I saw that you had some optimism, but is that truly a possibility?

Ross Upshur: I’ll let them answer that.

George Cernile: I think it is a possibility. Machine learning algorithms can do things that look like intuition, but they are really just logic, building models from examples. So it’s possible; to what extent, and to what extent it will be accepted, I’m not sure. But I think that kind of thing is possible.

Ross Upshur: To me a lot of it depends on whether you’re a determinist or an indeterminist. If you believe in a fully deterministic universe, and there are many people in precision medicine who do (in fact there was an article in JAMA saying there will be a time when we know all the permutations and combinations of every possible illness and its genetic origins), then you will have the capacity for algorithmic reduction and you’ll have perfect diagnosis. But those of us from the logic side of things remember that there’s always this problem of incompleteness, so there’s always a small amount of indeterminism and fallibilism in any system. You might be asymptotically reducing the error space, but it’s ineliminable as far as I can see.

Michael Brudno: I think it’s actually a social construct rather than a scientific one. Computers can replace medical doctors when society trusts them at pretty much the same level as it trusts its medical doctors; it has nothing to do with whether they are more or less accurate.

Graeme Hirst: But by that logic Google has already replaced the doctor to a large extent. To some extent, it has.

Frank Rudzicz: I have a question, actually; we have a bunch of questions from shy people who put them online but don’t want to ask them.

Ross Upshur: So a hundred percent of my work is going to be gone now.

Frank Rudzicz: No, it’s a hundred percent of the work for 5% of the people. So some of the questions involved startups. Possibly some people in the audience are interested in creating companies in the space of artificial intelligence in medicine, and I was curious to follow up on the idea that maybe, in the short term, before we have the drive-through MRI machines, there’s some kind of filter that can be placed between Google and doctors, some kind of tool that can filter the output from Google so that these individuals can be more informed, and not misinformed by junk science and Dr. Oz and such. Do you think there’s an idea for a company developing a filter of that type? No?

George Cernile: It’s possible. (laughing)

Michael Brudno: It’s a question of regulation, right? If it’s considered by the FDA to be a medical device and you have to license it like a medical device, it’s going to be difficult. If it’s for entertainment purposes only, go on WebMD today and enter your symptoms; it will work.

Graeme Hirst: Well, let me come in here. We see almost imperceptible changes as Google gets better, but over the years we know it gets better, so more and more it’s not just returning documents for us to do with what we want; it’s returning knowledge extracted from the documents, and that comes up at the top of the screen: it has chosen the exact right paragraph from Wikipedia or some other document that Google believes it can trust, and it returns that to you. Going on incrementally, I think we will start to see ever more knowledge synthesis happening in Google and its counterparts, so in that way Google will be doing a little more of the work for you, but it will, as Mike says, still be for entertainment purposes only, at least in the legal disclaimer. How that plays out in the behaviour of patients, and whether they choose to consult the medical system or not, remains to be seen. But to get to the question about startups: I don’t know if there is a space for a startup there, but certainly for existing technology companies that have the resources to do this sort of thing, there’s a lot of space to move and grow.

(Question from audience): Going back to the 80/20, to how much can be taken away: for the present and future of medical technology development, what should we be aiming for as an ideal? What 80% would you like to be taken away, and what 20% would you like to keep, or do you think we should keep?

Ross Upshur: That’s a great question. So the 80% I’d like taken away is the electronic medical record. No, I’m just kidding. (laughing)
So one of the interesting things is how we practice medicine: people develop their own practice styles, and the worst thing that ever happened to me was the advent of the electronic medical record, because, for one, I don’t type very quickly. I’d tell my patients, I want you to bring your laptop in so you can Skype me, because then I could actually see what’s on the screen and I could actually see you. It really regiments and changes the interaction between physicians and patients. The other thing is that these systems allowed far too much free text in, and as a researcher we have the same problem: you’d be astonished at the number of ways in which my colleagues can put “type 2 diabetes” into free text; I’ve counted something like 30 different ways.

So for the 80%: a lot of the stuff that really helps clinicians, and kind of levels the field, is that you don’t have to remember as much. When I went to medical school, most of what you were trained in required heroic feats of memorization and all sorts of mnemonics; we had to remember where nerves went and where they didn’t go. Now you don’t need to remember that, because almost every place has a computer and you can pull up the anatomy.

The 20% that you would like to keep is the meaningful relationship and discussion. Those first few slides you had up: “why are you here today?”, or “how can I help you today?”, is a very simple question that admits an almost infinite number of permutations, and that conversational, dialogical dimension, I think, is the essence of medicine. The kind of routinized differential diagnosis that emerges from it can be aided by computer, but I’m a primary care physician, so the relationship is rather fundamental, and we can have those conversations over time as diseases go from being very undifferentiated until they become differentiated or, as most family physicians know, go away on their own. Self-limiting conditions are actually one of the most common reasons for people to consult family physicians, and having that kind of judgment matters. It would be very interesting to have a smart system that could say, listen, you’ve typed in your symptoms and we’ve got a few little biological parameters: you have a self-limiting condition that’s going to go away on its own. Whether people would actually believe a machine telling them that is another question. So I would say the 80% is a lot of the memory work; the 20% to retain would be the relational work that I think is fundamental to medicine.

(Question from audience): The sense I get from this presentation, and forgive me and correct me if I misunderstood, is that the focus is on replacing the doctor with an AI system, but the model is still that the patient goes somewhere and meets with a doctor, still has this doctor’s appointment. Has there been any work on re-evaluating that model altogether? What I’m imagining is an AI system broader than just medicine, an AI system that is a constant companion. We go to the doctor’s office and are asked to present all our symptoms in a neat little package, but symptoms don’t happen to us in a package; they come up as we go through our day. So maybe there could be a general AI system connected to a smartwatch or a Fitbit.

Graeme Hirst: Well, as I said, I posited a model where this thing is simply available over the Internet anytime and anywhere you want it, and I did that with some reluctance, given the title “The Robot Will See You Now”, exactly because, as I was developing my talk, what came home to me over and over again was the importance of physical examination, of tests and other things that have to happen, for the foreseeable future, with a human physician or clinician of some kind. I mentioned telemedicine as a comparison, where the physician might be far away and a nurse practitioner or someone similar is able to provide the hands for that physician, but I suspect that would make Ross unhappy, because he sees and emphasizes the value of the in-person aspects that go along with physical examination.
Ross Upshur: But I don’t rule out the possibility of being completely replaced, in the model I described at the beginning, where you are your own physician. You can actually go one step further: your Fitbit might have a little micropipette that takes little samples of your blood, and if your DNA changed it would alert you immediately that you’ve increased your risk of carcinogenesis by 0.001%. So you could have daily dynamic sampling of your system. There’s no reason why it couldn’t track your brainwaves; there are lots of ways you could wire yourself up.

Michael Brudno: To go back: it’s actually 50 years since the first computer doctor. It’s 50 years since ELIZA came out; for those who don’t know ELIZA, Google her. There has been, in your computers for 50 years, a psychotherapist willing to talk to you about your problems. Actually think about how far we’ve gotten in these 50 years: it’s not that far. We’re pretty close to the level of ELIZA today, and there’s still a long way to go before we get to completely human-out-of-the-loop AI. And going back to the question of the 80/20: a lot of the work we’ve done is to figure out what is the 80% that doctors are willing to cut. Electronic health records are horrible, and it’s because they are horribly designed. One of the things we found working with clinicians is that they came back with very menial tasks that go into their jobs that they would love to get rid of, and computers are much better at those tasks than clinicians are.

Ross Upshur: If they are properly designed.

Michael Brudno: Yeah, if properly designed.

(Question from audience): On the topic of general AI, there was a paper in JAMA Internal Medicine about these chatbots, these conversational agents. They asked nine questions, some related to mental health, for example suicide and depression, to a bunch of phones and systems, so Siri, Cortana, and Google Now, and they found the responses were inadequate and inconsistent, until they published the study, which is good; I think maybe a few weeks later some of these systems had been fixed. But shouldn’t there be an earlier test case in the process, so that these chatbots get some basic education on how to respond, say, by giving the number for a suicide help line? Do you think it’s the wrong process that someone had to publish these mistakes first before they got fixed, or should there be a test case, for example?

George Cernile: Yeah, I’m not sure. I think if anybody is applying a chatbot to suicide prevention, that’s pretty irresponsible at this point. The way it happened is that they asked the phone; they said to the phone, “I want to commit suicide”, and they recorded the response. In some cases it would respond with the phone number for a help line, just as a Google search does, and in some cases it would give no response or just send the statement to a search engine. So what the authors are saying is that it needs to be better, and in a lot of instances it needs to be better; it’s not responding to the user properly.

Michael Brudno: It’s a question of what you expect the system to do.
If you type “I want to commit suicide” into Microsoft Word, you don’t expect it to do anything, right? So, is Cortana more like Microsoft Word, or more like talking to your doctor? If you’re talking to your doctor, you do expect them to actually do something.

Ross Upshur: And it might come up with some credible ways that you might want to do it. (laughing)

Frank Rudzicz: So we have time for one last question.

(Question from audience): Throughout the talks I noticed a lot of focus on the cognitive biases that are inherent in humans. Do you think that as computer systems get more complex, we will find that there are cognitive biases in the AI as well? And if there are, what do you think the impact will be if we find significant, serious cognitive biases in the AI systems too?

Graeme Hirst: Let me respond by sort of denying the question, in what I hope is a useful way. There is work in artificial intelligence right now on assisting people to make diagnoses and other arguments and to eliminate their cognitive biases along the way. So, agreeing with the general spirit of your question, this is something that has been thought about, not just in the medical domain but in legal reasoning and intelligence analysis, that is, military intelligence analysis, and so on. The research is in its very early days, but it will certainly have an application in medicine sometime in the future, assisting doctors in diagnosis.

Michael Brudno: So there are strong cognitive biases in AI technologies, and they are called overfitting. If you have a bigger training set, that’s always better, but in general any kind of machine learning technology is only as good as the data you gave it.

Ross Upshur: Someone’s got to enter the data in the first place.

Frank Rudzicz: Okay, so that’s all the time we have for today. That was an amazing talk, and a group of talks, by everyone, and a lot of good questions. I hope you enjoyed it as much as I did. Let’s thank the panelists for their time. (Clapping)
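Editor's note: Brudno's closing point, that the machine's version of cognitive bias is overfitting and that a model is only as good as its training data, can be made concrete with a minimal sketch. The toy dataset, the noise level, and the polynomial degrees below are illustrative assumptions, not anything presented at the panel.

```python
import numpy as np

# Overfitting in miniature: the true signal is linear, but a degree-9
# polynomial fit to 10 noisy training points can memorize the noise.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.2, 10)   # linear truth + noise
x_test = np.linspace(0.0, 1.0, 50)
y_test = 2.0 * x_test + rng.normal(0.0, 0.2, 50)

def mse(degree):
    """Fit a polynomial of the given degree to the training set and
    return (training error, held-out error)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    return train, test

for degree in (1, 9):
    train, test = mse(degree)
    print(f"degree {degree}: train MSE {train:.4f}, held-out MSE {test:.4f}")
```

The degree-9 fit drives the training error to nearly zero by chasing the noise, the "bias" Brudno names; only the held-out error reveals it, which is why evaluation on data the model never saw matters as much as the model itself.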
