Artificial Intelligence: The Common Sense Problem

Toward the end of the time I was at MIT, there was a graduate student named Charniak who was trying to make a program that would have common sense. It was supposed to have at least the intelligence of a four-year-old child, and to see whether it had the intelligence of a four-year-old child, it was supposed to be able to answer questions of the sort that a four-year-old child could answer after hearing a simple story. Now, this is how the problem was presented and how it emerged. They were going to use Minsky's idea of frames, and Minsky had suggested that there was something there. For instance, the birthday party frame tells you that on birthdays, children have parties, and they bring presents, and so forth. We know that as a list of things, and you could give the computer a birthday party frame. This is what [inaudible] does for millions of frames.

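To make the idea concrete, here is a minimal sketch of a Minsky-style frame as a data structure. The slot names and default fillers are illustrative assumptions of mine, not Charniak's actual representation:

```python
# A minimal sketch of a Minsky-style frame: a named cluster of slots
# with default fillers. Slot names are illustrative, not Charniak's.
BIRTHDAY_PARTY = {
    "event": "birthday party",
    "guests_bring": "presents",
    "activities": ["games", "cake", "candles"],
}

def answer(slot, story_facts, frame):
    """Answer a question by checking the story first, then letting
    the frame's default filler stand in for what was never said."""
    if slot in story_facts:
        return story_facts[slot]
    return frame.get(slot, "don't know")

# The story never says why the children are going over;
# the frame's defaults fill that in.
story = {"occasion": "Jack's birthday", "destination": "Jack's"}
print(answer("event", story, BIRTHDAY_PARTY))         # -> birthday party
print(answer("guests_bring", story, BIRTHDAY_PARTY))  # -> presents
```
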
So the idea was that you tell the computer the following story: It was Jack's birthday. Mary and Jane were going to Jack's. "Let's take him a kite," said Mary. "No," said Jane, "he's already got one. He'll make you take it back." Now, what's interesting about that is that if you ask a child why they were going to Jack's, the child will know it's to a birthday party, though that's never said in there. It just says it was his birthday and they were going to Jack's. The frame will fill that in. And if you ask the child why they were bringing him a kite, the child will know it's a birthday present. Although the story doesn't say anything about birthday presents, the frame will fill it in.

But then comes the tough one. The story ends by saying: "No, he's already got one" (he's already got a kite) "he'll make you take it back." Now, if you ask the question, what does the "it" refer to, does it refer to the kite he's already got, or to the new kite that he'll have to take back? The grammar was set up so you couldn't just look back and find out. It says: he's already got one, that is, the old kite; he'll make you take it back. Grammatically, he ought to make them take the old kite back. But every child, presumably, even by the time they're four, knows that if you've got two things like that, if you've got one and you get another one, you have to take the new one back. You can't take the old one back to the store.

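A toy sketch makes the gap vivid: a purely grammatical heuristic (take the most recently mentioned candidate) picks the old kite, and getting the child's answer requires an extra piece of world knowledge bolted on by hand. The heuristic and the rule below are my own illustration, not Charniak's code:

```python
# Candidate antecedents for "it" in "He'll make you take it back,"
# in order of mention in the story.
candidates = ["the new kite", "the kite he already has"]

def resolve_by_recency(candidates):
    """A purely grammatical heuristic: pick the most recently
    mentioned candidate. This chooses the OLD kite -- wrong."""
    return candidates[-1]

# The fact a four-year-old knows, hand-coded as an explicit rule
# (an illustrative stand-in, not an actual rule from the thesis).
DUPLICATE_GIFT_GOES_BACK = "the new kite"

def resolve_with_world_knowledge(candidates):
    return next(c for c in candidates if c == DUPLICATE_GIFT_GOES_BACK)

print(resolve_by_recency(candidates))            # grammar's answer: the old kite
print(resolve_with_world_knowledge(candidates))  # the child's answer: the new kite
```
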
Now, the question is: where is that bit of knowledge put away? It's not in the birthday party frame. Well, there are two things, according to the Charniak thesis. One is the rule that if you've already got one thing, you don't want another one just like it. It's hard to know where to put a rule like that. Secondly, you have to take the new one back; you can't take the old one back. That was what upset them, because where, in the organization of knowledge, are you going to put that? In the department store frame? In the birthday present frame? In the middle-sized dry goods frame? It's just a piece of knowledge we know. And there are, presumably, millions of pieces of knowledge like that which we know, which wouldn't be organizable into frames, and we wouldn't know what to do with them. And if you put them on just one great big list, then the relevance problem appears again, even worse. And they never solved it. They don't know how to organize knowledge so that knowledge like that is going to be in the right place.

But I thought, when I heard it, that's just the beginning of the problem. Take the rule: if you've already got one, you don't want another just like it. That certainly might apply to kites; I'm not even sure. But it certainly doesn't apply to dollar bills, or marbles, or cookies. The rule really says: everything else being equal, if you've got one, you don't want another one just like it. But this "everything else being equal" has packed into it the whole knowledge of the human condition. You have to know, for instance, about cookies: if you've got one cookie, you might want another; but if the cookie is huge, maybe one is enough; but then again, if you're the Cookie Monster, one wouldn't be enough. And our whole background understanding of human coping, this kid, at four, already has, and has it in a way which is very hard to understand how it's organized, and which can't be captured in strict rules, but only in what are called ceteris paribus rules, which are these everything-else-being-equal rules. And then the common sense knowledge problem reappears inside the "everything else being equal."

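As a sketch of why this is a regress and not just a long list, imagine trying to write the rule as code: every exception calls for another condition, and each condition smuggles in more of the background. The items, flags, and cases below are my own illustrative stand-ins:

```python
def wants_another(item, already_have, context):
    """An attempt at 'if you've already got one, you don't want
    another just like it' with the ceteris paribus clause spelled
    out. Every branch is an exception, and every exception invites
    more exceptions: the clause never bottoms out."""
    if not already_have:
        return True
    if item in ("dollar bill", "marble"):          # more is better
        return True
    if item == "cookie":
        if context.get("cookie_is_huge"):          # one huge cookie is enough...
            return context.get("is_cookie_monster", False)  # ...unless you're the Cookie Monster
        return True
    # ...and so on, for the whole of the human condition.
    return False

print(wants_another("kite", True, {}))                          # False
print(wants_another("cookie", True, {"cookie_is_huge": True}))  # False
print(wants_another("cookie", True, {"cookie_is_huge": True,
                                     "is_cookie_monster": True}))  # True
```
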
And it was clear to me, at that point, that they're never going to make it, that common sense knowledge just isn't going to be graspable in the form of knowing a lot of facts, that it's got to be organized in some entirely different way. And now, I think, people are beginning to understand that it'll have to be organized in skill and pattern recognition ways that don't use the computer as philosophers would have used the computer, namely as having in it symbols and rules, but use the computer to simulate the brain, because the brain obviously does it. The brain learns to match millions of whole input patterns with appropriate actions.

And when we understand that, we'll understand something about intelligence. There'll still be a huge gap, by the way. I'm not absolutely optimistic even about the neural networks, because they have to take the input that they get and, how shall I put it, they aren't human, so you can't use mental terms, but let's use mental terms anyway. They have to take a given input and see it as similar to input that they've already got, in order to process it, or take it as input. They have to process it, here's the way to say it, as similar to some previous input. Because if they've learned that an input like this (I could draw something on the board of 0's and 1's) is paired with this action, and then they get an input which has a similar bunch of 0's and 1's, but not quite the same, they have to see it as similar to the appropriate previous input to pair it with the appropriate action.

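Here is a minimal sketch of that kind of learning: stored patterns of 0's and 1's, each paired with an action, with a new, not-quite-identical input handled by finding the stored pattern it is most "similar" to. The Hamming distance used below is my arbitrary stand-in for the similarity measure, which is exactly the point that follows:

```python
# A minimal pattern-to-action matcher: stored bit patterns, each
# paired with an action learned for it.
memory = [
    ((0, 0, 1, 1), "reach"),
    ((1, 1, 0, 0), "withdraw"),
]

def hamming(a, b):
    """Count the positions where two bit patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def act(new_input):
    """Pair a new input with the action of its nearest stored
    pattern. Everything hangs on the distance function chosen."""
    pattern, action = min(memory, key=lambda m: hamming(m[0], new_input))
    return action

# Similar to the first stored pattern, but not quite the same.
print(act((0, 1, 1, 1)))  # -> "reach"
```
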
And so similarity is very important in neural network learning. But the problem is, there isn't any such thing as intrinsic similarity. Everything is similar to everything else. This cup is similar to me: it's in the room, it contains liquids, it's warm, and it reflects light. You could go on and on about how the cup is similar to me. But then, the cup is very different from me, too: it's not alive, and it will break if you drop it, and so forth. So what counts as similar depends on us, and our interests, and our purposes, and, for instance, our bodies. Things are similarly big if they're big with respect to the size we are. And things could be similarly accessible or not accessible given the way we walk toward something, overcoming barriers or stopped by barriers.

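The cup example can be put in code: measured on one set of features, the cup and I come out identical; on another set, opposite. The feature lists are my own illustration; the moral is that the similarity score is fixed only after someone with interests and a body has chosen the features:

```python
def similarity(x, y):
    """Fraction of features on which two things agree. The score
    depends entirely on which features someone chose to record."""
    return sum(x[f] == y[f] for f in x) / len(x)

# One choice of features makes the cup and me come out the same...
shared = {"in the room": True, "contains liquid": True,
          "warm": True, "reflects light": True}
cup_1, me_1 = dict(shared), dict(shared)

# ...another choice makes us come out opposite.
cup_2 = {"alive": False, "breaks when dropped": True,
         "walks": False, "has purposes": False}
me_2 = {"alive": True, "breaks when dropped": False,
        "walks": True, "has purposes": True}

print(similarity(cup_1, me_1))  # 1.0 -- on these features, alike
print(similarity(cup_2, me_2))  # 0.0 -- on these features, opposite
```
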
So my point is: if a neural network is just in a box and is trying to be intelligent, it's going to run into the last of my four objections. It may be able to do relevance. It may be able to store common sense. It will be able to get skills, because skills are input-output pairs. But it won't be able to recognize the appropriate similarities. And if it doesn't see the similarities we see, then it will just count as stupid on our account.