Google and NXP advance artificial intelligence with the Edge TPU

BILLY RUTLEDGE: Hi, everyone. I'm Billy Rutledge from the AIY team at Google. We're here today at CES in the NXP Pavilion to introduce our new product, the Edge TPU Dev Board, which features our Edge TPU chip combined with the NXP i.MX 8 SoC as a kit for developers experimenting with artificial intelligence for the first time. The board itself is actually two pieces. There's the baseboard here, which has all the connectors most developers would use to prototype a new product idea. And then the SoM, the system-on-module, includes the CPU, GPU, and TPU chip, as well as the memory, Wi-Fi, and Bluetooth, and it snaps into the baseboard using high-density connectors. That lets you experiment with the actual hardware in a development setting, and then buy just the SoM for your production line when you're ready to take your smart speaker, smart dishwasher, or smart TV to a scalable production plan. So today we're showcasing a few different demos of how you might experiment with this type of technology, hoping to inspire people to explore using AI on the edge.

PETER MALKIN: Hi, my name is Peter Malkin. I work for Google, and I'm a software tech lead for AIY projects. Today we're showing you a demo of face detection that runs on the Edge TPU. The key point about the Edge TPU is privacy and security: from now on, your pixels do not need to travel to a data center, and you do not need to contribute your data to any company. You can run all your machine learning inference locally on the chipset. In this case in particular, we've trained a network that can recognize human faces, and it's running locally, on-device, on a small embedded system that runs Linux.

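To give a concrete picture of what "running inference locally" looks like in code, here is a minimal sketch of on-device detection using the PyCoral library. The model file name, input image, and threshold below are illustrative assumptions, not the code behind this particular demo.

```python
# Minimal sketch of on-device detection with the Edge TPU via PyCoral.
# The model path, input image, and threshold are illustrative assumptions.
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

# A detection model compiled for the Edge TPU (hypothetical file name).
MODEL_PATH = 'face_detection_edgetpu.tflite'

interpreter = make_interpreter(MODEL_PATH)
interpreter.allocate_tensors()

# In the demo the frame would come from the camera; a file stands in here.
image = Image.open('frame.jpg')
_, scale = common.set_resized_input(
    interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))

# Inference runs entirely on the board; no data leaves the device.
interpreter.invoke()

for face in detect.get_objects(interpreter, score_threshold=0.5, image_scale=scale):
    print('face at', face.bbox, 'score', face.score)
```
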
JUNE TATE-GANS: Hi, my name is June Tate-Gans. I'm one of the software engineers working on AIY projects. One of our demos here at CES is a teachable machine, where we use local inference to train a model directly on the device with no network connectivity. We call this our teachable machine demo, and it's right here. Essentially, it has a camera pointing up at the sky. Now, the first thing I have to do is train it: I teach it what the background is, so it can differentiate between the background and the objects I'm about to show it. So I press one of these buttons to tell it what it's looking at, and now it knows what the background is. I can then train it on an object. In this particular case, I'm going to use this ice cream: hold the ice cream over the camera, press the button, and it can now differentiate between background and ice cream. And you know it's machine learning and doing inference, because I can show it a different color and get the same result. This can be extended to other objects as well, such as this hot dog. So hot dog, ice cream, hot dog again. And the same thing with a donut. So donut, hot dog, and ice cream.

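One way a demo like this can learn new objects with no network connection is to run a fixed embedding model on the Edge TPU and classify each frame by nearest neighbor against the embeddings captured whenever a button is pressed. The sketch below illustrates that idea in plain NumPy; the class names, vector size, and the stand-in embeddings are assumptions, not the demo's actual implementation.

```python
# Sketch of the teachable-machine idea: store labeled embeddings on the fly
# and classify new frames by nearest neighbor. No network connection needed.
# The random vectors stand in for a headless embedding model running on the
# Edge TPU; names and sizes here are illustrative assumptions.
import numpy as np

class TeachableClassifier:
    def __init__(self):
        self.embeddings = []   # stored example embeddings
        self.labels = []       # label assigned when a button was pressed

    def learn(self, embedding, label):
        """Called when a button is pressed: remember this example."""
        self.embeddings.append(np.asarray(embedding, dtype=np.float32))
        self.labels.append(label)

    def classify(self, embedding):
        """Return the label of the closest stored example (cosine similarity)."""
        if not self.embeddings:
            return None
        query = np.asarray(embedding, dtype=np.float32)
        stored = np.stack(self.embeddings)
        sims = stored @ query / (
            np.linalg.norm(stored, axis=1) * np.linalg.norm(query) + 1e-9)
        return self.labels[int(np.argmax(sims))]

# Usage mirroring the demo: teach "background", then "ice cream", then "hot dog".
clf = TeachableClassifier()
clf.learn(np.random.rand(1024), 'background')   # button 1
clf.learn(np.random.rand(1024), 'ice cream')    # button 2
clf.learn(np.random.rand(1024), 'hot dog')      # button 3
print(clf.classify(np.random.rand(1024)))       # nearest stored label
```
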
LEONID LOBACHEV: Hello. My name is Leonid. I work at Google on AIY projects, and I will talk about our [INAUDIBLE] demo here. You may notice that under each of the bigger demos we have a small display with a depth camera, and it shows a time. That's an important characteristic, because it tracks how much time people spend looking at the bigger demos. You can also notice that we display bounding boxes around people's faces. So here is my face, and there is a green box showing that I'm looking toward the demo stand. If I turn away, like right now, you can probably see a red box. I'm not sure myself, but it should have been red. And this all runs entirely on the development board. You can notice that, behind the display, we have the same boards as the [INAUDIBLE] demos here, so there is no internet or cloud connection required. It's a good application because we ourselves are interested in how many people are looking at the other, bigger demos. You can see that, in the middle, we have something like four and a half hours right now, and at the corner stand there's slightly more than three hours. It's quite explainable, but still an interesting statistic.

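For a sense of the bookkeeping behind a timer like this, here is a rough sketch that accumulates viewing time only while someone is facing the display (the green-box case). The detect_faces and is_facing_display helpers are hypothetical placeholders for the on-device face and head-pose models, not real API calls.

```python
# Sketch of the attention-timer bookkeeping: accumulate how long at least one
# person is facing the display. detect_faces() and is_facing_display() are
# hypothetical placeholders for the on-device face and head-pose models.
import time

def detect_faces(frame):
    """Placeholder: would run the on-device face model and return face boxes."""
    return []

def is_facing_display(face):
    """Placeholder: would check head orientation using the depth camera."""
    return False

def run_attention_timer(get_frame, report_every=5.0):
    looking_seconds = 0.0
    last = time.monotonic()
    last_report = last
    while True:
        frame = get_frame()
        now = time.monotonic()
        dt, last = now - last, now
        # Count this interval only if someone is actually facing the demo
        # (the green-box case); turned-away faces (red box) don't count.
        if any(is_facing_display(face) for face in detect_faces(frame)):
            looking_seconds += dt
        if now - last_report >= report_every:
            print(f'total attention time: {looking_seconds / 3600:.2f} h')
            last_report = now
```
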
BILLY RUTLEDGE: So we're just beginning to scratch the surface of what's possible with artificial intelligence today. We're excited to offer the Google Edge TPU Dev Kit for the world to experiment with on-device AI, specifically to explore its capabilities: high performance on-device, security from having all the data on the board itself, and privacy from being able to process everything locally on the machine. We think it will open up a world of opportunities for new product development, and we're excited to see what you might build with it next.

[MUSIC PLAYING]