MIT Quest for Intelligence Launch: The Future of Intelligence Engineering

Good morning. My name is Daniela Rus. I’m a roboticist and
I’m privileged to be the director of CSAIL
where our mission is to invent the
future of computing and make the world
better through computing. Now, you can’t understand
artificial intelligence at MIT without knowing a little bit
about where we came from. So picture this, it is 1956,
and the young Marvin Minsky decides to gather his
friends and then off they go to the woods
of New Hampshire. And when they emerge from
the woods they announce to the world that they have
invented a new field of study, artificial intelligence,
the science and engineering of understanding how
machines exhibit human-level intelligence in how they move, how they speak, how they see, how they play games, and even how they learn. Now, the founders of AI believed that machines could one day exhibit human-level intelligence, and we at MIT have been pursuing this
objective ever since. And with the MIT Intelligence Quest initiative, we're taking our efforts
to a whole new level. We continue to dream about
creating increasingly more autonomous and more capable
machines that are smart and obedient, and that support us
with cognitive and physical tasks. Now, to make this possible, we
need to solve the challenge of how to make
intelligent machines. Specifically, machines are
made of a body and a brain, and we need the tools to make
the body and the understanding of how to create the brain. Now the body may be
inspired by nature, for example, a humanoid
robot, but it may also be tailored to best perform a particular task. So for example, we could
engineer the Roomba robots to clean your
floors, or the body could also be optimized for
computation, for example in the case of a GPU machine. The brain is the software,
the implemented algorithms that give the machine the
intended capabilities, for example, to walk in the
case of the humanoid robot, to sweep the floors in
the case of the Roomba, or to crunch numbers
in the case of a GPU. The function of the machine,
the function of its brain, may be inspired by human cognition, or it may be developed as an engineered computation engine that is optimized for the task. Right now, since we know
very little about the brain, the vast majority
of the algorithms are driven by
mathematics and physics. As we learn more
about the human brain, we will be able to develop more
nature-inspired algorithms. And within the Core of the MIT Intelligence Quest, we are excited to advance both the engineered algorithms and the nature-inspired algorithms for the intelligent machines of the future. In the process,
we hope to create new hypotheses that
will help our colleagues in cognitive neuroscience
develop new experiments. Now, as a field,
AI is very broad. AI researchers have explored
many different strategies and problems over the 60-plus years of the field's existence, and these are some of the sub-disciplines that have emerged. As you can see, the field has expanded greatly, but it still seems
far from developing a computational account
of true intelligence. However, the
solutions are becoming part of what we
would say is a toolkit that enables us to create increasingly capable machines. I think of these themes as the brain: the collection of fundamental implemented algorithms that support us with cognitive intelligence, by developing AI at rest, and with physical intelligence, by developing AI in motion. And now I would like to give you
just a couple of examples that show you where we are in this
quest and where we hope to go. So progress is happening in
three fields simultaneously. With artificial
intelligence we hope to give machines intelligence. We hope to enable
machines to see, hear, and communicate like humans. With robots we put
computation in motion and give machines
the ability to move. And with machine learning we
aim to learn from, and make predictions using, data. So let me give you just a
couple of examples of AI, machine learning, and robotics at work, synergistically and all together. I will point to a
number of connections between the brain
and physical machines and there are so
many more examples than I can give you today. So this is the MIT
CSAIL autonomous car. Its body is a
Toyota car extended with sensors like laser
scanners and cameras for perceiving the world and
also controlled actuators like actuated steering,
brakes, and acceleration for moving in the world. Now, the video shows how this
car learned from single images to steer like a human. And as you can see, this
car is driving very smoothly on a road it has
never seen before. In fact, I would say that the car drives way better than my first drive, which I still remember.
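To make the idea of learning to steer from single images concrete, here is a minimal sketch of an end-to-end imitation-learning setup in PyTorch: a small convolutional network maps one camera frame to a steering command and is trained to match a human driver's recorded steering. The architecture, image size, and training details are illustrative assumptions, not the CSAIL team's actual system.

```python
# Minimal sketch: learn to steer from single camera frames by imitating
# a human driver. Architecture and data shapes are assumptions for the
# example, not the actual CSAIL model.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Maps one RGB camera frame to a single steering command."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),        # one steering angle per frame
        )

    def forward(self, image):
        return self.head(self.features(image))

def train_step(model, optimizer, images, human_angles):
    """One imitation-learning step: match the human driver's steering."""
    optimizer.zero_grad()
    predicted = model(images)
    loss = nn.functional.mse_loss(predicted, human_angles)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in data; a real system would use logged
# camera frames paired with recorded human steering commands.
model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.randn(8, 3, 120, 160)   # batch of camera images
angles = torch.randn(8, 1)             # human steering labels
train_step(model, optimizer, frames, angles)
```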
Now, the same sensor and actuator package, and the same suite of algorithms
can be ported to a wheelchair to turn that wheelchair
into an autonomous device, as you can see here
in the MIT CSAIL autonomous wheelchair. And the same technology
for autonomous driving can be mapped onto wearable devices to help provide safe navigation and better situational awareness for blind and visually
impaired users. Now, a wearable laser belt
and camera combination can map the world locally
and then describe it to the blind user, for example,
describing a fabulous window display on a Braille buckle, or signaling the presence of a moving obstacle, or a friend, or even something as small as a cat.
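As an illustration of the "sense locally, then describe briefly" idea, here is a minimal sketch that turns a handful of nearby detections into a short message suitable for a small Braille or haptic display. The detection format, priority rules, and wording are assumptions made for the example, not the actual device interface.

```python
# Minimal sketch: convert local detections from a wearable's sensors into
# a short description for a Braille or haptic display. All names and rules
# here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "person", "cat", "window display"
    distance_m: float   # how far away it is
    moving: bool        # whether it is approaching the user

def describe_surroundings(detections, max_items=2):
    """Pick the most urgent nearby things and phrase them briefly,
    since a small Braille display can show only a few words at a time."""
    urgent_first = sorted(
        detections,
        key=lambda d: (not d.moving, d.distance_m),  # moving and close first
    )
    phrases = []
    for d in urgent_first[:max_items]:
        motion = "approaching" if d.moving else "ahead"
        phrases.append(f"{d.label} {motion}, {d.distance_m:.0f} m")
    return "; ".join(phrases)

# Example usage with hand-written detections standing in for the
# laser-and-camera perception pipeline.
nearby = [
    Detection("window display", 3.0, False),
    Detection("cat", 1.0, True),
]
print(describe_surroundings(nearby))  # "cat approaching, 1 m; window display ahead, 3 m"
```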
So we can use this technology to replace the walking stick and provide blind and visually impaired people with a richer experience of the world around them. But at the same time,
these advances, we hope, are also inspiring cognitive neuroscientists to advance the
understanding of blindness, and even identify
solutions to reverse it. While advances in robotics
are providing feedback to cognitive neuroscientists,
advances in neuroscience are also providing
feedback to roboticists, and here's an example. So recently, cognitive neuroscientists introduced a sensor called the EEG cap. EEG caps have 48 sensors that
monitor your brain activity. And in a recent
work, my group was able to show that
with such an EEG cap we can detect one particular
signal that all of you make when you notice
that something is wrong. It is called the error potential; it is localized, it can be detected in about 100 milliseconds, and it can then be used
to control a robot. For example, in
this video you see the use of the sensor to
correct the robot’s mistakes. Here the robot is asked to
sort paint cans into a bin labeled paint, and wire spools into a bin labeled wire. The human observes, and when the robot randomly chooses a direction that is correct, nothing happens; but when the robot randomly chooses a direction that is incorrect, the human's brain produces the error potential, which can be detected in real time and transmitted to the robot so it can correct its mistake.
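To make the closed loop concrete, here is a minimal sketch of the detect-and-correct idea: after the robot commits to one of the two bins, a short EEG window is examined for an error potential, and a detection flips the binary choice. The detector shown is a simple stand-in threshold on the averaged signal; the real system uses a classifier trained on labeled error-potential recordings, and all names and numbers here are illustrative assumptions.

```python
# Minimal sketch of the closed-loop correction idea: the robot commits to
# one of two bins, an EEG window right after the choice is checked for an
# error potential, and a detection flips the choice. The detector is a
# stand-in threshold, not the trained classifier used in the real system.
import numpy as np

ERROR_WINDOW_S = 0.5     # seconds of EEG examined after the robot's choice
SAMPLE_RATE_HZ = 256     # assumed EEG sampling rate
N_CHANNELS = 48          # sensors on the cap

def detect_error_potential(eeg_window, threshold=4.0):
    """Return True if the window looks like an error potential.

    Stand-in detector: flags a large deflection in the channel-averaged
    signal relative to the window's noise level."""
    mean_signal = eeg_window.mean(axis=0)   # average over channels
    peak = np.abs(mean_signal).max()
    noise = mean_signal.std() + 1e-9
    return peak / noise > threshold

def sort_object(robot_choice, read_eeg_window):
    """Let the robot act, then flip its binary choice if the human's
    brain signals that the choice was wrong."""
    eeg = read_eeg_window(ERROR_WINDOW_S)   # shape: (channels, samples)
    if detect_error_potential(eeg):
        return "wire" if robot_choice == "paint" else "paint"
    return robot_choice

# Example usage with synthetic EEG in place of a live acquisition stream.
def fake_eeg(duration_s):
    n = int(duration_s * SAMPLE_RATE_HZ)
    return np.random.randn(N_CHANNELS, n)

final_bin = sort_object("paint", fake_eeg)
print("robot places the object in the", final_bin, "bin")
```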
So today, we can only correct simple tasks like these binary
choices that robots make, but imagine a future where
the same suite of capabilities can be used to control
the body of a robot just like we control
our own bodies. That would be an amazing
future, because we would have a very intuitive and
direct interface to a machine. It will be a future where
machines and people will work together, collaborating, with machines supporting us with cognitive tasks and physical tasks. Autonomous driving will
impact our world profoundly. It is not a matter of if,
it is a matter of when. And when we get the support
of machines for mobility, we will ensure that our parents and grandparents have a higher quality of life in retirement, and all of us will be able to go
anywhere, any time. Machines supporting us will
enable a profound change in our lives just
like smartphones have enabled a profound
change in our lives through computation. Thank you.