History of Artificial Intelligence (AI)

The human brain is a remarkable thing. It has enabled us to advance mankind and understand
something of the science that underpins our existence: from the smallest atom to whole
new worlds. Imagine the possibilities if we were to improve upon human cognitive function;
that’s one of the objectives of artificial intelligence research. No brain on Earth is yet
close to understanding how we might achieve this, despite centuries of trying. It is little
wonder that the field of AI, the challenges it presents, and the rewards it might deliver
have such a hold over us.

It’s easy to think of AI as a recent innovation
intrinsically linked to the rise of computing. But the origins lie way before that, over
2000 years ago. Around 350 BC, one of the most pre-eminent blue-sky thinkers was Aristotle.
He devised syllogistic logic, the first formal deductive reasoning system. Aristotle and the
philosophers who followed were seeking a formal method to represent the way humans reason
and think about the world. Fast-forward 2000 years, and in the early 20th century Bertrand
Russell and Alfred North Whitehead published ‘Principia Mathematica’, a work that laid down
the foundations for a formal representation of mathematics. This allowed
Alan Turing to show, in 1936, that any form of mathematical reasoning could be processed
by a machine.

AI accelerated in pace during the 1950s and
60s. By 1967, Marvin Minsky was predicting that “within a generation the problem of
creating Artificial Intelligence would substantially be solved”. It seemed that there was no stopping the advancements
of AI. But the late Marvin Minsky would probably agree that, by his definition, we’re still
not there today. So what’s gone wrong?

Attempts to build systems based on the first-order logic described by those early philosophers
failed, due to a lack of computing power, limited access to large amounts of data, and an
inability to adequately describe uncertainty and ambiguity. But despite this, the research
didn’t stop.

The development of expert systems started
to achieve significant commercial interest in the 1980s. These systems relied heavily
on the thinking of those early philosophers and were centred on knowledge. Within these
systems, AI started to find applications
in fields as diverse as stock trading, oil prospecting, agriculture and medicine. However, building solutions that made use
of this technology was extremely costly. The advances that were made remained accessible
only to the largest corporations or were confined to academic research.

Furthermore, most of the commercial successes
of AI in the 1980s and 90s are systems we would now consider to be mainstream computer
science and not AI. Deep Blue, for example, would not be considered intelligent, but we
would accept that it performed a task that was previously thought
to be something specifically human. We now accept that the best chess players are computers
and no longer consider chess to be a task that’s uniquely human. As a result, chess-playing
computers no longer come under one of our definitions of AI. Indeed, John McCarthy, who coined the term AI
in 1956, called this phenomenon the AI effect. Today you wouldn’t consider your sat-nav
to be AI. As he put it, “As soon as it works, no one calls it AI anymore.”

Because of this, it’s easy to fall into the
trap of thinking about AI in terms of science fiction or as a relic of computing history
that never lived up to the hype. So how should we think about AI from here?