Allen School Distinguished Lecture: Ayanna Howard (Georgia Institute of Technology)

MAYA CAKMAK: Good afternoon, everyone. Welcome to the last
distinguished lecture of the year. It is my great pleasure
to welcome Ayanna Howard from Georgia Tech. [APPLAUSE] Whoo, yeah. So some of you might
know I actually got my PhD at Georgia Tech,
and Ayanna was actually on my thesis committee. She’s been a great
mentor and a role model, especially an inspiration for
pursuing impactful problems, societal problems. So Ayanna’s talk today is
going to actually weave her story into the talks. I don’t want to give it all
away with the introduction, but here are some highlights. So she is the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive
Computing at Georgia Tech. This is the department I was in. She was the chair
while I was there. Her work spans AI, assistive
technology and robotics, with the common thread
of societal impact, making the world a better place. She’s the founder
and CTO of a startup, Zyrobotics, which develops STEM
educational toys for children with diverse abilities. And in case you don’t
know, she’s very famous. Her face pops up on
my Facebook like once a month, to my delight. She's been in Time magazine, Forbes, USA Today, and Business Insider, among others. Now for a fun fact– this is why I'm the host– Ayanna is actually a
Zumba fitness pro– she's a certified Zumba instructor. So I don't think we're
getting dance moves today, but you can definitely see the
Zumba in her positive attitude and energy. AYANNA HOWARD: [LAUGHS] MAYA CAKMAK: So with that,
let's welcome Ayanna. AYANNA HOWARD: Awesome. Thank you. [APPLAUSE] So one of the advantages you have as you get older is that you can start
talking about your journey and talking about all
the mistakes you’ve made, and you’re OK with it. And so I’m going to talk
to you through the journey. And through it, I
usually have these places where I make statements
that I believe in, because I think, as an
engineer, computer scientist, we tend to only respect
other engineers and computer scientists. And so I also love
this, as a platform, to basically promote
certain things that I think are important
for our community to look at. And so I’ve been
doing robotics– and back then, it was
artificial intelligence, but it wasn’t like
the AI of today– all the way since I was a
first-year summer intern at NASA JPL in 1990. So I actually
started as a NASA– and of course, it
was an internship. But then one of the things
about NASA and government is you get really
valuable experience if you’re halfway smart. And so what happens
is from very early on, I had these opportunities
to do some amazing things in this world, which
positioned me quite a bit. And so what does that mean? So it means that some of the
opportunities and the journey that I will talk about has to
do with opportunity and people just saying, oh, I think you can
do it, and that opens the door, and being open to
then take risks because of those opportunities. So by the time I
finished my PhD– so every summer, I
would come back to JPL. I would work on some cool stuff. Things like surgical robotics– what was that? Here was a surgical robot, and what can you do with it? I'm like, I have no idea– and two
rovers and things like that. And in 1997, we successfully
landed our first roving mission on Mars, Sojourner. And of course, it was July
4th, which was very special. Now, one of the
things about it was that it was what’s called
a science experiment. So if you remember
science experiment like when you were young
and did the volcano, you take all the different parts
and you try to see what happens and you get a grade. So we’ve been doing
robotics, and by then, I had been working
on my masters, PhD. So I was actually a
research scientist 1 level because I went to school while
being employed at JPL, which is really silly and stupid,
now that I look back, but that’s what I did. And so we’ve been doing
robotics in the lab. And what happened
was NASA had decided to do a special project which
was called Rovers on Mars, because the scientists
had thought maybe we can do something with this. And what a science experiment
with NASA basically means is we will give you no money. We’ll give you money
for the launch vehicle, and you guys figure out what
you can do with the rest. So that’s really what a
science experiment is. And what happened was it
was highly successful. And so if you think
about looking back, you probably don’t even remember
the news of the first rover on Mars. Do you even remember
the headlines? Some of us who are like
total geeks, but most of us probably don’t realize it. Because of course– AUDIENCE: [INAUDIBLE] AYANNA HOWARD: Well yeah,
that’s probably true. Well, no, this was ’97. Like everyone– [LAUGHTER] Well, OK. Yeah, you got a point there. You got a point there. So there was not a
whole lot of headlines because it was thought
that it may not work. It's a science experiment, one of these high-risk things. You didn't get a lot
of budget, but it was like let’s see if it works. And it was one of these highly,
highly successful missions. And why was this? Because it landed on Mars. This rover, this beautiful rover
touched the surface of Mars, and it moved. That’s like, oh,
well, it’s moving. It’s a rover. Of course it’s going to move. But it was the first time. And the scientists, this
was an enabling technology. So they could see images
that moved and changed on the surface of Mars, and they
could provide some information. So they said, there
was a rock that’s like two centimeters away. There’s a rock there. Can we go closer? I just want to image that. And so you would do these. You would have human
expertise in that regard. AI was human AI. So the humans, the
mission planners, would figure out what the
rover commands would be, they'd upload it, and then the rover would execute them. And they'd look at the next images and say, oh, it's 0.5 centimeters away. OK, let's do the mission planning again. But that was it.
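
(To make that loop concrete, here is a minimal, hypothetical sketch of the plan-upload-observe cycle being described. Every name and number below is invented for illustration; this is not mission software.)

```python
# A minimal, hypothetical sketch of the ground-in-the-loop cycle described
# above: humans plan each short traverse from the downlinked images, the
# plan is uplinked, and the rover executes it ("AI was human AI").
from dataclasses import dataclass

@dataclass
class Command:
    heading_deg: float    # which way to drive
    distance_cm: float    # how far to drive before stopping to image again

def human_plan(range_to_target_cm: float) -> Command:
    # Stand-in for the mission planners: drive halfway to the rock of
    # interest, then stop and wait for the next batch of images.
    return Command(heading_deg=0.0, distance_cm=range_to_target_cm / 2)

range_to_rock_cm = 200.0
for sol in range(1, 5):
    cmd = human_plan(range_to_rock_cm)   # humans decide from the images
    range_to_rock_cm -= cmd.distance_cm  # rover executes the uplinked plan
    print(f"sol {sol}: drove {cmd.distance_cm:.1f} cm, "
          f"{range_to_rock_cm:.1f} cm remaining")
```

And it was exciting because it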
was the first time scientists could think about really
exploring the science value of another planet. And imagine, then, if this
is a science experiment and you have no
money, what could you possibly do if we actually
made this a real mission? That was the start
of Mars exploration, the whole program,
in terms of we are sending rovers and landers,
and it was intersperse. It was basically an
every two-year cycle. Like you guys did
it with no money. Now better, faster,
cheaper, and you got to do, hopefully,
all three, but maybe two. So that was the whole model. So here I am. I’ve been working,
doing all this research. We had to figure this out. ’99– remember,
I started in ’90. ’99, I got my PhD, been working
for the same supervisor. And my supervisor’s
like, oh, you have a PhD? I want you now to lead the
team, because you know, all that stuff you’ve
been doing in school, you should know
what we need to do. And I want you to lead and
think about the next generation of rovers, in terms
of navigation. What does this look like? How does it work? I’ve been practicing in school
with AI, neural networks, and human robot interaction
in a play format, because it was like
education in school, but those are the things. So here I was. I’m a 27-year-old
and I’m required to think about the next
generation of rovers. And of course, I
didn’t know what to do. Like what am I supposed to do? We had this great mission,
and I didn’t know. My mission at the time, ’99,
which means my first mission, would have been either MER ’03
or the time we knew we were going to do another
mission after that. I had no idea. So this is the
whole world of, what do you do when you have
no idea what to do? How do you expand? So I thought about what do
you do if you have no idea? What do you do when
I can’t go to Mars, I don’t know anyone else
who’s gone to Mars– and I did call people. No one had. So what do you do? What you do is you
ask the experts. Like who cares about this? The scientists cared. So the scientists had, in their
idea and their thought process, what this looks like. And so I started
putting together teams of engineers, computer
scientists, and scientists who we sat together to
try to figure this out. And learning the language
of what that was, like they would say stuff
and I was like, I don’t know. And we would say stuff and
they were like, I don’t know. And so figured out,
well, the best way to figure out how to
translate is immersion. So taking scientists– and
so the desert is an area near JPL– we put them out in the
desert, we had them navigate. So imagine, you’re on Mars. What would you do? And so we would
actually monitor. We would retrofit. We take cameras. We’d have rovers. They would navigate. And we looked at what
they would do, i.e. they would look at the terrain
and they would make a left. And you’re like, I have no idea. Like why did they make a
left at that moment in time? And so we’re writing. We’re looking at the images. And then they would see a
rock and they pick up a rock. And they’d look at it. I’m like, there’s
a lot of rocks. Why that rock? What could possibly be
interesting about that rock out of all the other rocks? And they would look at it,
and maybe they’d lick it. And I was like, OK, I don’t
think my rovers could lick, but OK. What does that mean? What does that look like? And then afterwards,
we would start talking. Like, OK, what did you do? You saw this terrain
and you made a left. And they would say,
well, when I was there, I saw this feature
that was way over. And it looked like it
was a small mountain, and I wasn’t really sure, and
the other one was just flat. And so even though it
was a small feature, I said, well, maybe that’s
something that’s interesting, and so that’s why. And then what about the rocks? He was like, well, I
picked up that rock because it had a slight change. It was a little bit discolored
from the other rocks. And so usually, when
you have a rock that’s different than those around,
it means something important. It really does. And so I picked up that rock
because it was different, and therefore, it may have some
more information that I can use. And I licked it, because my
thought was, well, maybe there was some type of composition. Maybe it had been soaked in
some aspect of water, maybe some minerals. And so that’s why I licked
it, because I wanted to see was it basically salty. And I looked at this. I was like, OK, so one is
the terrain is important. So we need to make
sure that when we design this next
generation of rovers, that when we look
at the features, it’s not just about the
rocks in front of us. It’s also about the features
and the topography that’s way out in the distance that
maybe I don’t even care about. And it’s just slight
differences that I need to be concerned about. And in terms of the rock,
what does that mean? It means that when
I map the terrain, that the diversity
of the differences is just as important as the
density of all the rocks, and making sure
that our rover, when we look at interesting
things, like interest as part of this one element is
different than all the others, so let me look and
explore and come up with some analogy or
some kind of composition. The other thing I learned
is that scientists who have a different
language and engineers can actually learn how
to communicate and work together in the
same type of team, in the same type of environment. And so that was
really the lesson, which meant that I could do this
and practice it and figure it out. And so this was where I
first became a really true human robot interaction person. Of course, then,
I will tell you, classically trained
as an engineer, my HRI was like I had a
classical feedback loop, for any engineers out there. And the human was
like the anomaly, so that was a
disturbance vector, is how I thought about it. But I was like, OK,
need the scientists. They’re going to
mess up my equation, but I had to model
around their knowledge. So that was how I
thought about humans, but I was still a human
robot interaction person.
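
(For the engineers: a minimal sketch, with invented numbers, of what that framing looks like– a classical feedback loop with the human entering as a disturbance term. An illustration only, not her actual model.)

```python
# Hypothetical sketch: a proportional controller tracking a setpoint,
# with the human modeled as an additive disturbance on the plant–
# the "disturbance vector" framing described above.
import random

setpoint = 1.0   # desired system state
state = 0.0      # current system state
kp = 0.5         # proportional gain

random.seed(0)
for step in range(8):
    error = setpoint - state
    control = kp * error                     # classical feedback term
    disturbance = random.uniform(-0.2, 0.2)  # the human "messing up the equation"
    state += control + disturbance           # plant update with disturbance
    print(f"step {step}: error={error:+.3f}, state={state:.3f}")
```

Then I left NASA, long story. Shuttle accident. Basically research got frozen. So I came to Georgia Tech,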
and what does this mean? I’m at Georgia Tech. All I knew was field robotics. I work with scientists. What could I possibly do? What place could I possibly go? Maybe the moon. The moon was hot at the time. Mars was still hot. I actually asked some of
my NASA program managers, I’d ask if they’d
give Georgia Tech a large, multi-billion-dollar fund to send rovers to Mars. And of course, they
didn’t even like laugh because it was ridiculous. And so what happened
was I had to figure out, I do field robotics. I’m at Georgia Tech,
which is a university. What can I possibly do? And I looked at the Earth
and I met some scientists, and I would go to these
science conferences because I’d been doing
that, and I still went. And I ran into a climatologist,
and we got to chatting. I was like, yeah, I’m
looking for a problem. I know rovers. I’m designing rovers in
a lab and I have no home. And he’s like, well,
maybe we can chat. I go to an area
called Antarctica about every other year. And what I'm trying to
do is I collect data, because I’m trying
to understand what’s going on with our ice sheets. Because ice, large
bodies, are melting. And they melt. They
go into the oceans. And so he had all these
complicated models, and he needed data from the
interior of these places. And so he would literally go. Like every other year,
he would bring in troops. He’d get some funding. And he’d go and collect science. And he was like, yeah,
here’s my science data. And of course, it was not data. It was like points,
but not data. We communicate and I was like, I
think we can do something here. I know rovers. We send them to places
where scientists want to go but we can’t send scientists. Maybe we can do something here. And so this was a whole
era of taking the knowledge from a different place,
a different environment, from NASA, and retrofitting
it and trying to think about, what is it that
we can do that had some of the same
characteristics to elsewhere? Because that’s what
I was going to do and I just didn’t know, and
again, finding that problem. And so I designed an
entire suite of them– called them the snow motes– of these rovers
that can navigate on earth in hazardous
environments, i.e. the equivalent of Mars hazardous
environments, but of course, on snow, in terms of
collecting data and extracting. So easy. Done. Snow motes. We design, we deploy. It’s awesome. So here’s the first lesson. I knew rovers. I knew Mars. I’ve never been,
but I knew Mars. I knew deserts. I grew up in California. JPL is in a nice, warm spot. I’m at Georgia Tech. Georgia Tech is in the south. For those of you who know,
south is pretty warm. Snow, yeah. All right, so how am
I going to do this? So again, I was like,
I know how to do this. And so we talked
to the scientists. I was like, what do you do? He’s like, oh, when I go to
Antarctica, what I do is I go and we suit up. And I go and I walk around
and I put sensors down and we collect data. I was like, oh, OK. We’re going to come
up with a walker. That makes perfect sense. Because what do
you do scientists? Because we couldn’t
take the scientists and put them in the
field, so we had to do this verbal
kind of understanding. And so we designed
a whole suite of– these were called spider motes,
which had all the sensors. It had data, it could extract,
and it could walk around. And so it was awesome. In our first suite, we
had designed six of them. And we had color-coded,
like orange and red, so that we could
actually extract, in terms of the
visual interface, so we can identify each one. And then we had to test it. And remember, I’m in Georgia. So we literally had
to pack up, and we drove for about six
hours to find some snow. We found this place. It was actually a snow
slope, like people ski, and we took them out
and deployed the rovers. And they started walking. It was so gor–
we had the camera. ‘Cause you always, when
robots actually work, you cut on the video. Like that’s rule number 1. Rule number 1, no,
1 is batteries. 2, video. Go off. And they started walking. They were navigating. They were communicating. They were trying to
do this configuration. And as they walked– you guys have snow
here like once a year, so you find this out after the fact. But when you have legs
and snow, something happens, which is that your
legs start sinking in snow. And our rovers,
our spider motes, were really beautiful because
they made beautiful snow angels in the terrain. I think they may have
gotten one foot, maybe. So total disaster. And this was like
a real field trial. So we have three days of
trials and here was day 1. So we’re like, OK,
we’re packing up. This is not going to work. And so we went up and so
we said, OK, what else? And we talked to the scientist. We’re like, it didn’t work. What’s wrong? He’s like, oh, you
forgot the snow shoes. It’s like, oh, we sure did. OK, so we are going
to design snowshoes. So this is the next
rendition of the rover. We designed snow shoes. And because we had
gone out to the field, we had taken this
video, we noticed that there was a difference
between ice and snow. So not only did we
design the snow shoes, but also, underneath,
we had wheels so that it could
actually navigate. So the whole concept was
it could walk on snow, but then when there
was ice, it could have this controlled
acceleration on ice, so it deployed. And it was really fabulous. This is where I totally
delved into understanding CAD. I hired a mechanical
engineering grad student. Understanding, OK, how
do you design this? What are the CAD schematics? I mean, beautiful experiment. But we only designed one,
because it was actually fairly complicated. And the scientist came out. I was like, we’re
not going out yet. We’re going to have
him come and give us some input about what
we can improve before we built all of them and went out. And he came down, he flew
in, and he looked at it. And he said, it’s really cute. [LAUGHTER] And then he goes, is
that as fast as it goes? [LAUGHTER] And I didn’t get upset
and I didn’t get mad, but I did say speed
was not a requirement. And I looked through
all our notes. Like there was distance. There was how long it
had to be in the field. We thought about battery. There was nothing about
how fast it should go. And of course, we should
have known that, but it never came up and we didn’t ask. And so one of the
things about when you’re immersed
with a target group and you’re addressing
their problem, sometimes they don’t tell
you the real problem. Sometimes they don’t tell
you the real requirements. And sometimes you don’t
even know to ask that. You don’t know. And so now, whenever we talk
with anyone, we ask questions. Is this going to
be seen by aliens? I’m just asking
because we may have to have some alien
robot interaction technology in there. So we do that now. And then we did it. So we went back. And we’re like, OK,
walkers– it's walking. Snowshoes, obviously. We couldn't make
this any faster. It was just too complex in
terms of the control logic. So what should we do? And we are trying to figure out. And the scientist
wasn’t very helpful. We were actually strategizing. He was like, I don’t know. This is what I do. I go out. And we were strategizing. And so we did what every
good scientist does. We cut on National Geographic
and we watched YouTube. And guess what happens? Of course. These scientists and
researchers and people, when they go out into these ice
places, they have snowmobiles. Who thought about that? And one of the things
is when we saw it, and we saw these things
that we kept doing, the very first set of
videos, we kept saying, people are walking. People are walking. We kept saying that. And eventually, we said,
OK, let’s look back. Let’s not look at what
we know, let’s look at what we don’t know. And those same videos that had
the people walking actually had snowmobiles,
because they would take the snowmobiles to places
and then they would walk. And we kept focusing
on the walking, like what are we doing wrong? This is the answer, even
though the answer was actually in front of us. Because our lens on the problem
was so focused on the problem we had already thought we
solved versus the problem that actually was there. So we designed the entire
suite of snow motes, which was awesome. And so we took them out in
the field, different glaciers, grabbing science data. Back then, even, our
snow motes would blog. And so we would have
schools in Boston. They could log in
and communicate. OK, it was fake communication. So they would log in and the
robot would say, it’s cold. Of course it’s cold. But it was fascinating
to the students, because they felt like they
had this real connection. And we showed data– a live stream, well, a kind-of-delayed live stream. And so it was
fascinating and amazing. And one of the things about
working with this demographic and working with any
target demographic, when you bring in technology
and you solve their problem, is that you get a reputation
of providing value. And so what it meant was that
when we got back home and continued working with the scientists, we started to get calls from other scientists. I didn't have to
look for the problem. People would come. It’s like, we know
you guys are cheap, because you’re a university,
but you’re interested in science and you understand scientists,
and so we would really like to work. And so we had everything
from continuing on working with an astrobiologist. In fact, she's now
at Georgia Tech designing rovers that
could go underneath. So basically,
underwater vehicles that could go underneath the surface
of the glaciers in Antarctica and look and navigate and find. She wants to go to Jupiter,
in terms of one of its moons. And so we were able to do that. In fact, in that
one, my students had to spend two
months in Antarctica, which I didn’t actually– I had to not sign
up for that trip. And then we also
worked with aquanauts, in terms of analog
environments for the astronauts,
in terms of training. And if I’m designing a rover
for the next generation of moon, what does that look like? How do we practice
those things, in terms of handoff and information
and communication? And so we were designing rovers. And it was awesome
because people would come and it was like, yeah, we can
do that, and assign a student. And the technology was
basically the same, it was just the application
that was different. And so I was like,
bam, there it is. I got a rover for you. I got a rover for you, a rover
for you, a rover for you. And it was awesome
and it was amazing, but something happened. So when you’re a researcher,
you like to do research. That’s what drives you. And when you get
good at something, it no longer becomes research. It becomes something
for someone else. It becomes a tool, but it
doesn't remain research. And so even though we
were doing pretty well, like funding was not an issue,
it was actually sort of boring. And remember, I
had grad students. So trying to come up with
a theoretical thesis based on a tool became a lot harder. And so I had to
think and step back. It’s like, I’ve got to
figure out something else. I’ve got to do something
else that’s different. And I didn’t know what it was. So just so you know
a secret, when you’ve been doing something for
many, many, many years and you had to figure
out what to do, there’s a chapter at
the end of every thesis, whether it’s a masters
or a PhD, and it’s called Future Work Section. So the Future Work
Section is magical. So I remember, and I do
this with my students, when I was required to put
my Future Work Section in, the requirement was
it has to be something that you can’t do in six months,
because I will make you do it. [LAUGHTER] And so you have to think
outside of the world. Like, OK, what could
possibly be done outside of? And so that was
what my advisor did, is like you have to think
beyond what’s possible, because if it’s
possible, that’s going to be six months on your time. And so I said, let me look
and see what’s going on. And one of the nice things is–
this is actually a picture from my thesis– and we didn’t have fancy stuff– from 1999. And I went back and I was like,
this neural networks world, they’re making some
really nice things. And this deep learning,
this is something here. And I’m looking
and I’m like, OK, my sample size is
a little small, but we can maybe
do a linkage there. And there was this
manipulation, deformable object, computer vision, stereo. Like all these things
were becoming a reality. And I looked at my future work
and I was like, oh my gosh, I was a genius. The things that
were happening now were now possible based
on my Future Work Section. And so I decided I’m
going to do a reboot. I’m going to still do
the field robotics, because funding was
there, but start exploring those things in my chapter. And so what does that look like? So throughout my entire
career from NASA days, I’d always done outreach. And some of the madness
behind that is NASA’s funding is driven by people
who pay taxes. And so one of the
things at NASA is that they would take
scientists and researchers and they would actually
train us for public speaking. And why is that? Because if you’re
out there and you’re doing outreach and talking
about what you’re doing at NASA, you are basically building
up this affinity of, oh, NASA’s important. I don’t know what they’re
doing, but they’re doing some cool stuff
because it sounds cool. And so they would train us. And so we would go out
in the field and do that. And we each picked a focus, and so mine was students. So I worked a lot with K through
12 students and some college students. And that was what
my training was for. How do we get them
engaged so that when they do become taxpayers,
they will remember that NASA was a friend of theirs. So that was the thing. So when I came to Tech, that
was still part of my DNA, because I had been
doing it for years. And so when I came, I was still
doing these robotics camps and these outreach camps
on Saturdays in the summer. And one year, I had a young lady
who had a visual impairment. And the technology,
basically, I thought it was awesome technology. It just didn’t work. And so I had a grad student
sit and kind of walk her through it, and basically
kind of went through it. And I didn’t know
what was wrong. I was like, something
is not right here. She’s really smart. She’s got it, but this thing. And I didn’t have the words
for assistive technology. I didn’t have the words
for accessibility. I just knew that this was a
smart kid and my interface that we had designed, this
entire curriculum that I’d been using with kids all
along, was not working. And so I developed an
entire suite, a curriculum, around robot
programming and camps because of that experience. And so when I looked at
the back of my thesis and I looked at what was
available that was out there and I looked at things
that I liked and thought was important, what I came
upon was this whole concept of, how do we increase
the accessibility of technologies, through
robotics, for everyone? So how do I improve
my social impact? The other thing is my
thesis, target demographic, was in health care robotics,
which, again, I didn’t even have a word for it then. And so what I do now
and where I translate it and where I transform is my lab. And we still do a
little bit of field robotics, but much smaller now. But now we focus quite a lot
on developing technologies, robotic technologies, and using
AI to provide some adaptation, and we focus on children
with diverse abilities. Now remember, field robotics,
like literally field robotics, the next day, I’m like, I
want to do something different and I want to do this. I don’t know anything
about health care, and my thesis was
like this imaginary, we're going to do health care robotics in a hospital. And we had, like, read something, and that was that. I didn't know much
about health care. I didn’t really know
much about pediatrics. And I actually started working
with older adults with stroke, and quickly, within
six months, realized that was not what
I wanted to do. So pediatrics was the next
thing I was going to try, and actually, I found
my passion zone in that. And I didn't know what I could do with robotics in that space. So there were so many unknowns.
same lessons that I had learned with scientists and
field robotics was immersion. So just a little side note, so
this is one of my platforms. In fact, Richard probably even
knows these better than me. So worldwide,
there’s a diagnosed– and when I say this, loosely,
150 million children worldwide diagnosed with a disability. We actually think it’s larger,
because places like the US, we’ll disclose. There are still some countries
that parents don’t disclose. So we actually think that
it’s probably larger. There’s probably an
entire demographic of kids that are not being
counted in certain countries. And if you think about,
just in general– and so this is for you, who
maybe don’t care about kids, don’t care about children,
because maybe you don’t have any, even though
we were all children once. For older adults, what
we’re starting to see is also a rise in the
number of disabilities found in older adults. And one of the reasons is
because we’re living longer and we expect a higher quality
of life as we’re living longer. And so this is a thing. So if you are a faculty
member and you are 60, 40 years ago you were
expected to retire. Now 60 is young. Maybe because I’m
getting close to that, but 60 seems awfully young. And so now, it’s like well, 80,
yeah, maybe you should retire. But then it’s going to be 100. And so the thing is
we’re getting older, we’re living longer, but our
bodies are still 80, 90, 100. Our bodies don’t just
magically grow robotic arms, and so we can’t,
all of a sudden, lift 200 pounds
because we’re older. Our brain connections. We have an 80-year-old brain. So these are things
that are physical that just age with time. And so some of the
disabilities, some of it’s based on cognitive limitations. Some of it’s based on
visual limitations. I know I can’t see
in the dark anymore. For those of you who commiserate
with me, after a certain time, you can’t see in the dark. Yeah, you guys
are looking at me. It will happen to every
single one of you. [LAUGHTER] And so this is the thing. And so this is the
target demographic that– I like the pediatrics,
but it’s something that I think we
should all work on. 1, because I tell
my students this. Get it right now
because you will be users of this in 50 years. So that’s some motivation. But I think what it is is if
you’re working on the problem now, it means that when we
are 40 or 50 years from now, it won’t be a problem. It will just be natural. It will be part
of our ecosystem, where it won’t be
like, oh, we’re designing assistive technology. It’s a technology
and it just happens to be useful by everyone. And that’s the mindset
that a lot of us are trying to incorporate in
the technologies we design. So that’s my platform. So I want to do this area. I know nothing about
pediatric health care. I know nothing
about this domain. What do I do? So just like with
the scientists, where we dump them in the
middle of the desert and monitor and map, we do the same thing. So starting off for the first,
literally, year and a half, it was we’re going
into the clinic. We’re going into the classroom. We’re retrofitting. We’re putting sensors
on kids, on parents, on clinicians in the room to
just see what it even means. What are the pain points? What are the things? And one of the things, just
like with the scientists, they’ll do something. And you’ll be like,
that makes no sense. And then you’ll ask, and they’ll
be like, oh, because of x. And they’ll do something. You’re like, that
makes no sense. And you’ll ask, and they’ll
be like this is because. And what we found in this
space, a lot of the why did you do that was because
this is the best we have. That was really surprising. With the scientists, there was a very specific reason, but a lot, in this case, and that includes pediatrics, it was, well, we do it
this way because this is the best we have. This is the best
solution we have. And yes, it’s low
cost, but nothing else works for this child, because
this child is different than the next child is
different than the next child. And so this was one of these
things that, as an engineer, you coming in and there
was so many things that we could work on. So many things, it’s like, I can
get a student to work on that. It’ll only take
them a year and we can have a solution for that. And I can get a
student to work on that and it would only take
them maybe two years and have a solution. And there were so many things that, 1, I realized that this is a demographic that we, as scientists,
don’t focus on at all. And there are so
many opportunities to actually have and make
a difference in the space that it doesn’t take
a five-year PhD to do. And so when I did
these observations, it was the first time that I
realized that whatever we did, as long as we had our best
intentions and did it right, with the target demographic as co-developers, we were going to change something and we were going to change
people’s lives because of that. And so this is what we do. So designing virtual
reality games. This is where we got into this
whole aspect of HCI, Human Computer Interaction,
and what does it mean to engage students
in terms of games, engage children in terms of
the virtual reality aspect? And what does that mean? And there’s a reason why video
games and the gaming industry is like this billion
dollar industry. There is a science to it. And I don’t know if any
of you are gamers– it's like, oh, it's a fun game. But there's actually
a science to how do you engage someone for
hours and hours and hours so that they even
forget to eat and sleep. There is a madness to it, and
we wanted to capitalize on that, but use it for the social
good, to do therapy for kids. We also did it with
respect to the robots, because one of the
things about games is that if you are
interacting with the game, even if it’s a therapy
game, how do you know that you’re doing it right? How do you get feedback
on how to do it better? And so we had to use
the robotic agents to come in and actually adapt
and provide them information and provide them feedback
based on what they were doing. And so maybe it was, as you’re
interacting with this game, we need to increase maybe the
protocol is range of motion. So we want you to increase
your shoulder range of motion. And so the robot can interact
and look and say, OK, we know that we want to
increase by 20 degrees. So what that means is you
need to reach a little higher, or you need to reach a little
faster, taking what we know, in terms of the math, but
decomposing it into language that a kid can understand.
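
(As an illustration of that decomposition, here is a minimal sketch; the thresholds and phrasing are invented, not the lab's actual protocol.)

```python
# A minimal sketch, with invented thresholds and phrasing, of turning
# a numeric range-of-motion target into feedback a child can act on.
def kid_feedback(measured_deg: float, target_deg: float,
                 measured_speed: float, target_speed: float) -> str:
    if target_deg - measured_deg > 5.0:
        return "Try to reach a little higher!"
    if target_speed - measured_speed > 0.1:
        return "Nice reach! Can you go a little faster?"
    return "Awesome job, you did it!"

# e.g. the protocol asks for 20 more degrees of shoulder range of motion
print(kid_feedback(measured_deg=85.0, target_deg=105.0,
                   measured_speed=0.4, target_speed=0.5))
```

And so that was an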
entire process of, how do you take things that are
fairly complicated, in terms of direct analysis, and
compose it into a language where a kid will be
like, oh, this is fun. This is my playmate. This is a game. So we can do that. And when you put
it all together, we can engage kids in
all of these aspects. And yes, I did learn
about emotions. So I remember when
I first started in human robot interaction,
this whole world of emotion recognition. I personally was
like, I have no idea why I would want an AI
agent to have emotions. That makes no sense. I mean, I love Data,
but I did not like him after he had the emotion chip. I was like, that’s not right. But what we found,
and this is actually through our own
studies, is that when you have an agent that emotes
in a way that corresponds to the emotional
state of the child, it actually enhances compliance
and it enhances engagement. And so it’s not that
robots should emote, it’s that robots need to engage
with the user in a way that’s appropriate in order to
achieve the protocol. And so understanding not
only the child’s emotion, but also understanding
how the robot should emote to then evoke
the correct behavior, the correct correspondence,
became an entire science.
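
(A hypothetical sketch of the kind of mapping being described– detected child affect to an appropriate robot response. The labels and lines are invented for illustration.)

```python
# Hypothetical emote-to-engage policy: the robot's expressive response
# is selected from the child's detected affect to keep the child on task.
RESPONSE_POLICY = {
    "happy":      ("mirror_smile", "Keep going, you're doing great!"),
    "frustrated": ("calm_nod",     "This one is tricky. Let's try it together."),
    "bored":      ("excited_wave", "Want to see what the next level does?"),
}

def respond(detected_affect: str) -> tuple[str, str]:
    # Fall back to a neutral prompt when the classifier is unsure.
    return RESPONSE_POLICY.get(detected_affect, ("neutral_idle", "Ready to play?"))

gesture, speech = respond("frustrated")
print(gesture, "|", speech)
```

And so we delved a lot into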
recognition of facial motions. And for kids, it’s
slightly different. Well, it’s very much
different than adults. And then children
with special needs, depending on the demographic,
it’s even more different. And so trying to understand
these nuances, what does happy actually mean versus
angry versus frustrated? And when we put
that all together, we can actually
have these sessions where we would put the
robot, in terms of– so this is one of our clinics. This is a child with autism. So children with autism,
children with cerebral palsy, children with
motor disabilities. We work with the whole gamut,
with the underlying concept that we’re trying to
improve motor skills, 'cause that's what we're looking at. And so we can have a
robot emote accordingly, respond accordingly. And one of the things that
we do, what we’re modeling is human-human interaction. We are not developing
fundamental things in natural language processing. We’re not developing fundamental
things or technologies in computer vision. We don’t do testing
in the lab with kids. We do testing with
adults, which is students. And then we go in the field
from day zero with kids and kids with special needs. And so the technology has to
work irrespective of the child, irrespective of the environment. And so if you think about
human-human interaction, again, this is a lesson learned. Because we develop and
design and we would be like, it’s a perfect system. And then we take it in the
field and it wouldn’t work. We had some very bad ones
at the very beginning. As an example, if I’m talking
to you and we’re talking, we’re talking, we’re talking,
and then you say something. And if you stop,
that usually means that I’m expecting you
to respond to something. It’s like human nature. Conversation is
a back and forth. You talk, you pause. That means that you want
me to say something. And so we use these cues. Remember, we collected
all this data, and so we saw how
people converse. We saw that when I look
at you, that typically means I want something from you. And so if a child is
working or doing something and the robot looks
at them, well, we notice that the child will
look at the robot back. And so we can use that
information to do something. We could also use information
such as computer vision. How do I do face recognition
and emotion recognition? What do I do? So where is the face located? We have kids of different
heights, different locations. Well, let’s think about this. Our kids are at a table. Sometimes they’re
in a wheelchair, but they’re at a table. The tablet or the device
is located somewhere. The robot is located somewhere. Pretty much very easy math. You know, I don't have to look down. The face is probably not down. I don't have to look way up,
because the child’s face is not there. And if I just look sort of in
your direction, guess what? You will think I’m
looking at you. It’s baffling how it’s
like, oh my gosh, yeah. Because that’s what we expect. If I expect you to be looking
at me and talking to me and you’re looking in
that direction, of course you’re looking at
me, even though I might be looking at the person. Maybe I’m thinking about what
I’m going to do for dinner. It doesn’t really matter. It’s your perception, and
so we capitalize on that.
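
(A minimal sketch of that "very easy math," with made-up geometry; the point is only that one fixed, roughly-right gaze direction suffices.)

```python
# Hypothetical fixed-geometry gaze trick: the child sits at a known table,
# so a single precomputed pan/tilt toward the expected head position
# reads as eye contact. No face tracking needed.
import math

ROBOT_HEAD = (0.0, 0.0, 0.45)            # robot head (x, y, z), meters
EXPECTED_CHILD_HEAD = (0.6, 0.3, 0.55)   # seated child's head, roughly known

def gaze_angles(src, dst):
    dx, dy, dz = (d - s for s, d in zip(src, dst))
    pan = math.degrees(math.atan2(dy, dx))                   # left/right
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # up/down
    return pan, tilt

pan, tilt = gaze_angles(ROBOT_HEAD, EXPECTED_CHILD_HEAD)
print(f"pan {pan:.0f} deg, tilt {tilt:.0f} deg")
```

And so we had a bunch of success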
in that, which is amazing. So I’m going to talk
about one other thing, and then I’m going
to close for questions. So we know, with this aspect
of emotion engagement, we know with this
aspect of understanding human-human interaction
and capitalizing on that and bringing the robot, we also
know that this enhances trust. And we do a bunch of things
because this is actually a problem. So my group does a bunch of
things in looking at overtrust. And so I want to tell
you about one story, just because it’s funny. We very rarely do adults. This is the one adults
say that we did. Because people kept
saying, well, it’s kids. Of course kids are vulnerable. Of course they’re
easy to manipulate. And we’re like, no,
I don’t believe that. I know human-human interaction. Adults are easy to
manipulate as well. And so I wanted to prove
that, mainly because all the naysayers, I was
like, we’re going to prove that I
understand my science, even though I’m an engineer. And so this was an experiment,
very reactive experiment. This is emergency evacuation. It's acquired– everyone knows what to do. If the lights go off, what do you do? You get up, you go to the exit. That's just kind of
everyone knows what to do. Irrespective of
where you grow up, you have this acquired behavior. So what we wanted
to do is see what would happen if we had a robot. And in this case, we
didn’t put emotion. Well, we didn’t make
the robot emotional, but I’ll explain
what I meant by that. A robot would come
lead you to a room. You go into the room. There was this big poster that
said, please close the door. Read this survey
about navigation and answer these
questions for the study. Because, again, people knew
that they were in a study, and so we tried to bias them into thinking the room inside
was the study. So they go in, they
close door, they start reading, da, da, da, da. And as they’re reading,
answering questions, what we did is we filled
the building up with smoke. [LAUGHTER] And then we set off the alarms. And so this is what they saw. So the alarms go off. And everything was videotaped. They get up, they walk. And they open the door and
this is what they would see. And so the reactions
after this, the reactions were quite interesting. We have all of the video
of what would happen. Some people would walk,
some people would run. But what we did is,
intentionally, we put a robot to show you to a
place where you didn’t come in. There’s an exit sign, and
it was near the exit sign. And most people, just like you. I mean, some of you
probably are like, oh, I didn’t know
this was an exit sign. What we actually
found was that there was such a focus on the robot
that the robot was the truth. And the robot literally was
not near the front door, was not near the exit. And we just wanted to see
what kind of influence would the robot have. And what we saw was
actually quite interesting, and again, it validated
this aspect of overtrust, that it’s not just about kids. It’s about this relationship. If you, as a
researcher, understand human-human
interaction and you’re designing technology
that is based on that, it means that people,
whether they’re kids or whether they’re adults,
will be a lot more compliant to this, which is
a problem, which is why we work on overtrust and ethical AI now, because this is actually a problem. I want to do good. My robots are good. They're social creatures, with social impact. And so thinking about these ethical issues is important. And so we would have
individuals in, like, a dark room. People would do things
like push furniture to go where the robot
said and pointed to. We would have individuals
stay by the robot. Again, remember, there’s
smoke, fire alarms. They would stay by
the robot waiting. And we’d ask them and
we’d say, oh, well, why did you stay by the robot? And it was like, oh,
well, I interpreted that meant stay in place. And you’re thinking, OK, yes. I get that, but there is smoke. And so we actually pushed
it, pushed it, pushed it, and we finally broke it. And when we broke it was when– when they were
coming in, the robot would basically stop,
just stop and turn around. And one of the researchers would
come and say, scripted, sorry, the robot broke again. Please follow me to the room. So they would go to the room. And then when they
came out, it had to be an area where they
physically had to do something, and that’s when it broke. Basically, we literally
kept pushing and pushing and pushing the risk. And we basically
got to the point where the risk was so great
that people went, oh, wait. This is really serious. So some of the reasons, we think, is that evacuation, smoke– what we've done is we've now
raised the emotional aspect. And so even though the robot
itself wasn’t emotional, we put you in an
emotional state, which means that when you
think about your logic and you think about
your thinking process, it’s here’s a robot. Of course it’s a robot. It’s going to be safe. It has this big emergency
evacuation robot sign, and so there's this sense of, here is my savior, because it just seems like it's a safe vehicle. Because it says emergency
evacuation, so of course, you are going to be the savior. And so this is a problem, and
we’re actually poking on it. The other thing is that
there’s a lot of bias. In our own work, what
we found is early on– we do emotion recognition– we found out kids are
different, in terms of emotions. We had started on all
this adult emotion. Kids are different. And one of the biggest
studies we had found was a gender difference. And so in one of our
studies, we collected data. Afterwards there was a study
with children with autism. And what we found was that the results were different for the girls in
our study than the boys. We were like, that makes no sense. And I mean, we had everything. We had this equal number
of girls and boys. We did the study. And it wasn’t
statistically significant, but it was one of these
trends that it was different. And so we looked at the
data to try to figure out what was going on. So in the world of autism,
there is a higher incidence of boys than girls. When we were collecting
data, all this data, in terms of observations,
to create our algorithms, our focus, our lens, was
on clinician and child. I’m an engineer, I’m just
happy that you have me there. We never noticed, because it
was just the collecting data. It was just the data phase. We never noticed that all of
the kids happened to be boys. It just never occurred to us. But then when you do
the real study, when you do the real
study, then you’re very concerned because this is
what you’re going to publish. So you think about,
OK, I need to make sure that I have gender diversity. I need to make sure of
that so that when I report, I can report that this works
for the different demographics and ages. But the data that was used
to feed the algorithms, we hadn't even thought about. It didn't even occur to us. And so think about this: how is it that we couldn't figure
out– and my team is diverse. Like we should’ve
figured this out, that it’s because of the lens. And wherever you’re
going to go, it’s your lens that is going to
bias your data, whatever that lens is. I grew up in Atlanta. When I look at my
data, I’m probably going to be very concerned
about rural Georgia, but I probably won’t be as
concerned about other places because it won’t be part of my– like there’s a Midwest. I probably won’t
even think about that because it’s not part of my DNA. And so we’re doing
a lot of things, and we’ve done a
bunch of findings, that all of the commercial
applications that are out there also have
some of this bias with kids. Again, we look at kids because
that’s our target demographic. I mean, it makes sense. So if you think about
all the applications that are out there with respect
to publicly available APIs, they're scraping the web, which I have a whole other issue with. Who is posting kids' pictures? Their parents. Parents don't post
kids who are crying. They post happy kids,
like kids that are happy and having fun in playgrounds. And so if you have an algorithm
that’s scraping the web and you have people who
are then coding emotions, you’re going to have a
large majority of happy kids that are smiling. You’re not going to
have the kids that are angry or upset or
frustrated, because parents are like, oh, no, no. We’re not taking
a picture of that. That’s the reality. And so we’re doing
a bunch of this and we’re actually coming
up with methodologies to fix it, as well, because
with our algorithms, they are deployed
in the real world. They are being used. And so we need to make sure we
get it right, irrespective of how we train it. And so there is hope. So with that, I am going to end. I think it’s our responsibility
to think about accessibility, to think about this aspect
of bias in our applications. And we are at a
crossroads where, as engineers and
computer scientists, we are the cool kids. Like we weren’t
always the cool kids. Now we are. And we can basically mess it up
just as much as we can save it. So I really do think it
is our responsibility to make sure that
we make it a better place than we found it. And so that's it. Thank you. [APPLAUSE] MAYA CAKMAK:
Questions for Ayanna? AYANNA HOWARD: So dance is
actually a universal emotion, which is actually amazing. I can get any adult
and kid to smile when you have a
robot that dances. It’s actually an amazing thing. MAYA CAKMAK: Is that Zumba? [LAUGHTER] So I’ll kick off the questions. So actually, Richard
can help me out on this. So there is evidence
that academic work that has more direct
impact on society actually attracts more
underrepresented groups. I think the studies
we often cite in some of our
accessibility projects are particularly with women,
but have you also found this? You mentioned you
have a diverse group. AYANNA HOWARD: Yeah, I do,
and it’s actually amazing. And I think it’s
because of what you do– so we all do great
engineering or great computer science. But when you can see someone
else who is basically– like your mom, your dad is
always going to love you, but when the stranger also
loves you, that means something. And so in this regard, it’s
like your advisor is always going to love your
technology and your thesis, but when someone who is
not connected actually says that that’s worth
it, it actually, honestly, gives students like,
oh, this is why I’m here and this is what I want to do. And I think that’s
one of the reasons. AUDIENCE: Thanks for the talk. I thought it was
really inspiring to see all this really cool work you’ve
been able to do as a professor. It’s motivating to me to
look at that career path. I’m kind of curious
how you’ve balanced doing some of the
research work versus doing things that were required for
practical applications of it. You talked a little bit
about this earlier with, oh, we were doing some
field robotics stuff and what parts that were
research, et cetera. So could you talk about
that, how you’ve done that? AYANNA HOWARD: Yeah. So I’ve had to balance for
the sake of my students. I will say that I made a
mistake with my first student, like figuring out, what is theory? I remember my first thesis, when my student was asked, is this really engineering? And I thought, OK,
we need to fix this. And so now my students, they
take a little bit longer because they have to
have a human subject. That’s a requirement. But we also carve
it so that they get some fundamental
theories out of it, and the theories are
basically for us, as academics, because
at the end of the day, most of the algorithms don’t
work in the real world. But we do that balance, and
I am intentional about that because of my students. AUDIENCE: I had a question. So you talked at the end
about this ethical dimension and trust between these
kids and these robots. I mean, on the one
hand, trust is something that you’d probably want to
cultivate to a certain extent, or there’s this
emotional connection that you want to cultivate
with the kid with this robot. But you also talked
about there’s a lot of pretty clear dangers
of this kind of trust. How do you think
about doing either– not even just doing
ethical research, but designing ethical robots
to help people in this space? AYANNA HOWARD: Yeah,
so it’s a balance, because we want to ensure
compliance and engagement, but we also want to
minimize overreliance. And so some of the
things that we look at is we’ll see if
it gets published. We’re able to now
model when you get into that aspect of overtrust. And our thought
is, at that point– like explainable AI. When you get into that
point, we’re now looking at, how do you break
trust, but break it so that it’s only temporary? So the thing is we want you
to just stop, think, breathe, and then continue, and
doing that balance. And so it’s still an
open-ended problem. We have a couple of ideas
that we’ve been deploying that seem interesting. We’ve had people yell at
the robot when we’ve broke– [LAUGHTER] Some funny videos. I haven’t talked about it
here, but yell at the robot when trust has been broken. And how you break the
trust has them yelling or it has them being
like, oh, that’s OK, I’ll still work with you. It’s a delicate balance, though. AUDIENCE: Thank you
for the great talk. So one of the things
that it seemed like you talked
about a few times was these sort of unknown
unknowns in your research. So like in the example
of a snow robot, then also in the
bias at the end. And it seems hard to build those
out because they’re unknown, but particularly as we’re doing
more data stuff in robotics, it seems like data bias and
that maybe the biases we don’t understand are going
to become more and more of a threat for bad systems. How are you dealing
with that, or how do you get better at that? AYANNA HOWARD: Yes. So in terms of physical
robots versus what we do on the data collection side. So physical robots,
we go out in the field before we’ve even finished. Literally, not
even before alpha. So pre-alpha, we
actually go in the field to see how individuals
interact with the technology, and that is where we discover a lot
of the unknown unknowns. And so that’s deployment,
and sometimes it’s brutal and you’re sad because your
stuff doesn’t work at all. And then the other is in
some of our methodologies, we’re actually trying
to get our systems to– right now, we’re able to have
them self-identify that there’s issues. We haven’t yet figured
out what those issues are. So we can have our
systems say, there are some gaps in your data. And again, these
are just with kids. In one of the
charts, we’ve come up with all of these differences,
disability, economic, all the things that are
linked to survey census, where our system can identify
if there is a gap in anything. What we haven’t figured out
yet is, how do we fix it? But we can at
least identify when our systems aren’t quite right. AUDIENCE: My name’s
Leah Perlmutter. I’m a student of Maya’s. And I wanted to ask, at
one point in your talk, you mentioned designing
for a child audience. And one of the
challenges you mentioned was that at some points, you
had better access to adults for testing the technology. That really resonates
with my current research, so I was wondering if you
had some ideas for how to make that work well and
what the pitfalls might be. AYANNA HOWARD: So
I have a phase. We do adults. We do children. We do children with special needs– that is how we do the phases. And my philosophy is, depending
on where we are, alpha or beta, if it doesn’t work
with adults, it’s not going to work with kids. If it doesn’t work
with kids, it’s not going to work with
children with special needs. And that allows us to
at least figure out the unknown unknowns,
at least in some regard. Not all of them, because every
time we learn something new. But our sample sizes are
small, and so it only generalizes to the individuals
that are in our sample set. So we do things like we hold
constant certain things. Our age demographic is
we only look at age– our primary age demographic
is like six to 10. Even though we’ll
have 12-year-olds, we’re like, oh, that
would be lovely, but that data is not going
to help us to generalize. So we make some
choices like that. We don’t have an education
department, for example. Maybe then we’d
have access to kids. Maybe that’d be better. But that’s our process. AUDIENCE: Can you hear me now? AYANNA HOWARD: Yes. AUDIENCE: Yeah, that
was a great talk. So in my space that
I work in, disability is really a complex subject,
especially with children. And in the typical case,
not always, but the parents want to, in quotes,
“normalize” their child, bring them back to a
state like a normal child. But when these children
grow up and become adults, they move away from that. They really want to be
accepted as who they are. So how do you
navigate that space with the parents
and the children? AYANNA HOWARD: Yeah. So we focus on
functional improvement, which is one of the reasons
we focus on motor limitations. So our platforms, their
focus is on getting kids, as they’re moving,
to be independent. So it means that if I have a
child, some of the protocols are I have a child where
the clinician wants them to be able to
improve midline crossing. And that’s just so that
they can button up. And so those are the
things where they're very, very specific– this is what the protocol is,
and it’s all based on function. But we do get some
of that from parents. We’ve had to deal with
saying no as researchers, which sometimes that’s hard. Because it’s hard to find
individuals to participate, but we’ve had to say no
on a couple of occasions when we think it's
detrimental to the child because there’s a
mismatch as well. Yeah. MAYA CAKMAK: OK, let’s
thank Ayanna again. [APPLAUSE]
