Artificial Intelligence and the Technological Singularity

Hiya meatbags! This is Faisal, an ultra-intelligent cybernetic
life form sent from the far future to educate the dumb humans of this era. In this video, I’m going to teach you about
Artificial Intelligence and the Technological Singularity. So without any further ado, let’s get started. Artificial intelligence, or AI, is intelligence exhibited by machines. The term “AI” is applied when a machine mimics cognitive functions like those of humans, such as learning and problem solving. Generally speaking, there are two types of
artificial intelligence. The first one is artificial narrow intelligence
or ANI. This is the AI that we’re capable of creating
today. ANI is specialized in a single task or a set of similar tasks, the ones it was created for, such as voice recognition, playing chess, driving cars or organizing your Facebook newsfeed. The second type of AI is artificial general
intelligence or AGI. AGI doesn’t currently exist, and while most of our research is directed towards ANI, we’re getting closer to AGI each year. Unlike ANI, AGI is supposed to be general-purpose and have the ability to learn to do almost anything. AGI is rather hard to create. It’s like creating a living consciousness inside a computer, one that can think like us and has a personality of its own. There are several ways this could be achieved. The first one is “Whole Brain Emulation”
or WBE. In layman’s terms, it simply means scanning and mapping a biological brain in detail and then replicating it cell by cell inside a computer. So basically, instead of creating a brain, we’re copying an existing one. The computer runs a simulation model so similar to the original that it behaves in essentially the same way as the original brain. But the problem here is that we don’t yet possess the computational hardware required to map a human brain. You see, the human brain is extremely powerful
and its simulation requires an extremely powerful computer. An average human brain has about 100 billion
neurons. Each neuron has on average 7,000 synaptic
connections to other neurons. The brain of a three-year-old child has about
one quadrillion synapses, which declines to about 500 trillion in adults. The brain processes information at a rate of roughly 100 trillion synaptic updates per second. In 1997, the famous futurist Ray Kurzweil estimated the hardware required to equal the human brain and adopted a figure of 10^16 computations per second, equivalent to 10 PetaFLOPS of computing speed, which was achieved in 2011. The fastest supercomputer today, Sunway TaihuLight, has a processing speed of more than 93 PetaFLOPS. So, what’s stopping us from doing it?
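If you want to sanity-check those numbers yourselves, meatbags, here’s a rough back-of-the-envelope calculation in Python. The neuron, synapse, update and FLOPS figures are the ones I just quoted; the 100 operations per synaptic update is purely an illustrative assumption, not a measured value.

```python
# Back-of-the-envelope comparison of the figures quoted above.
NEURONS = 100e9                # ~100 billion neurons in an average human brain
SYNAPSES_PER_NEURON = 7_000    # average synaptic connections per neuron
UPDATES_PER_SECOND = 100e12    # ~100 trillion synaptic updates per second

# Total synapses: lands in the same ballpark as the ~500 trillion quoted for adults.
total_synapses = NEURONS * SYNAPSES_PER_NEURON
print(f"Estimated synapses: {total_synapses:.1e}")            # ~7.0e+14

# Kurzweil's 1997 figure and today's hardware, in operations per second.
KURZWEIL_ESTIMATE = 1e16       # 10^16 computations per second = 10 PetaFLOPS
SUNWAY_TAIHULIGHT = 93e15      # ~93 PetaFLOPS

# Assumption for illustration only: each synaptic update costs ~100 machine operations.
OPS_PER_UPDATE = 100
brain_ops = UPDATES_PER_SECOND * OPS_PER_UPDATE
print(f"Implied brain ops/sec: {brain_ops:.0e}")              # 1e+16, matching Kurzweil
print(f"TaihuLight headroom:   {SUNWAY_TAIHULIGHT / brain_ops:.1f}x")  # ~9.3x
```

On these simple assumptions, today’s fastest machine already has roughly a nine-fold margin over Kurzweil’s figure, which is why the real bottleneck lies elsewhere.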
Well, the artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is greatly oversimplified compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behavior of biological neurons, which is presently understood only in the broadest of outlines. Detailed modeling of neural behavior (especially on a molecular scale) would require computational power several orders of magnitude beyond Kurzweil’s estimate. Besides that, our poor understanding of human intelligence is also a major drawback. Most predictions place WBE somewhere after 2025, thanks to the exponential advancement of technology.
Another method is the use of evolutionary algorithms. These algorithms mimic biological evolution to learn and evolve from experience. Sure, it’s much faster than biological evolution, but it would still take a lot of time. This approach has been used to create very simple ANI, but if it were ever applied to AGI, it would be extremely slow.
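To make the idea concrete, here’s a minimal sketch of an evolutionary algorithm in Python: a population of random bit-strings is scored by a toy fitness function, the fittest half survives, and random mutation produces the next generation. Every number here is an illustrative choice, and the task (evolving a string of all 1s) is about as far from AGI as it gets.

```python
import random

# Toy evolutionary algorithm: evolve a 32-bit genome of all 1s.
GENOME_LEN = 32
POP_SIZE = 50
MUTATION_RATE = 0.02   # chance of flipping each bit
GENERATIONS = 200      # hard cap on how long we let evolution run

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Fitness = how many bits are already "correct" (simply the count of 1s).
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability, mimicking random mutation.
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for generation in range(1, GENERATIONS + 1):
        population.sort(key=fitness, reverse=True)   # selection: fittest first
        if fitness(population[0]) == GENOME_LEN:
            return generation                        # a perfect genome has evolved
        survivors = population[:POP_SIZE // 2]       # the fitter half survives
        offspring = [mutate(random.choice(survivors))
                     for _ in range(POP_SIZE - len(survivors))]
        population = survivors + offspring           # next generation
    return GENERATIONS

print(f"Generations needed for a perfect 32-bit genome: {evolve()}")
```

Even this trivial 32-bit search typically needs tens of generations, which is the whole point: scaled up to anything resembling a mind, the same trial-and-error process would be glacially slow.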
Our poor understanding of the human mind, inadequate funding and our inefficient computer hardware are a few of the biggest obstacles we face today in the creation of AGI. Nevertheless, it is inevitable (provided we don’t drive ourselves to extinction through global warming or a nuclear world war first).
Throughout human history, the pace of human development increased gradually, until the Industrial Revolution, when it began rising exponentially. This is called the second half of the chessboard, where the exponentials go crazy and their values increase wildly. It ties directly into the Law of Accelerating Returns and Moore’s Law.
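The phrase comes from the old wheat-and-chessboard story: one grain on the first square, doubling on every square after that. A quick Python check shows why the second half is where the numbers go crazy.

```python
# Wheat and chessboard: 1 grain on square 1, doubling on every following square.
first_half  = sum(2**k for k in range(32))      # squares 1-32
second_half = sum(2**k for k in range(32, 64))  # squares 33-64
print(f"Grains on the first half:  {first_half:,}")   # ~4.3 billion
print(f"Grains on the second half: {second_half:,}")  # ~18.4 quintillion
print(f"Second half / first half:  {second_half // first_half:,}x")  # exactly 2**32
```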
Look at it like this: the Wright brothers invented the airplane in 1903. About 65 years later, man stepped on the moon for the first time. And just 5 years after that, computers made their way into the homes of the general population. There were about 8,000 transistors on an IC in 1974. In 2015, that transistor count reached 10 billion. The size of transistors has also shrunk from 10,000 nanometers to just 10 nanometers, and even single-atom transistors have been fabricated, although it will be a long time before they’re commercially available.
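Those two transistor counts are enough to back out the implied doubling time, and it lands right on Moore’s Law. Here’s the quick check, using only the figures quoted above.

```python
import math

# Transistor counts quoted above and the years they refer to.
count_1974, year_1974 = 8_000, 1974
count_2015, year_2015 = 10_000_000_000, 2015

doublings = math.log2(count_2015 / count_1974)        # ~20.3 doublings
years = year_2015 - year_1974                         # 41 years
print(f"Doublings since 1974: {doublings:.1f}")
print(f"Implied doubling time: {years / doublings:.1f} years")  # ~2 years: Moore's Law
```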
We have made more progress in the last century than in the rest of documented human history combined. As we get more advanced, the rate of our technological change accelerates, and with time, even this acceleration accelerates. It is this steady acceleration that futurists use to extrapolate where technology is headed. According to careful calculations and estimates by notable scientists, the singularity is inevitable; sooner or later, it will happen. The real question is: when?
When AGI is created, it will learn at an exponential rate and become artificial superintelligence, or ASI. ASI will be smarter than all the humans who have ever set foot on this planet combined, and it won’t be restricted by emotion, morality or other distractions like we are.
Now, there are two ways this evolution from an AGI into an ASI could turn out: it can have either a soft take-off or a hard take-off. A soft take-off is when the AGI is restricted by built-in safeguards (kind of like Asimov’s three laws of robotics) or by hardware limitations. This makes the AI easier to control, and its learning process will be slower. On the other hand, a hard take-off is when an AGI gains access to the internet and all the other resources it needs. In that case, it can evolve into an ASI in a matter of minutes or perhaps seconds. A hard take-off would be extremely risky because it would make the ASI unpredictable. By the way, I already know which one it’ll be. But I can’t tell ya. Paradox and stuff, you know?
Anyways, the invention of AGI will trigger a runaway reaction of self-improvement cycles. The AGI will learn and evolve. The smarter it gets, the faster it will improve itself, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion. Suppose it designs an even more capable machine, or rewrites its own code to become even more intelligent; this more capable machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change and resulting in the emergence of ASI. This is what futurists call “The Technological Singularity”.
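The runaway dynamic is easier to see with a toy model. In the sketch below every number is made up purely for illustration (it’s not a prediction): each generation’s intelligence determines both how much smarter the next generation is and how quickly it arrives.

```python
# Toy model of recursive self-improvement (illustrative numbers only).
intelligence = 1.0          # AGI starts at roughly human level (arbitrary units)
time_elapsed = 0.0          # months since the AGI switched on
BASE_REDESIGN_TIME = 6.0    # assumed months the first redesign takes
IMPROVEMENT = 1.5           # assumed intelligence multiplier per redesign

for generation in range(1, 21):
    # A smarter system finishes its next redesign proportionally faster.
    time_elapsed += BASE_REDESIGN_TIME / intelligence
    intelligence *= IMPROVEMENT
    print(f"gen {generation:2d}: intelligence x{intelligence:7.1f} "
          f"at t = {time_elapsed:5.2f} months")

# The gaps between generations shrink toward zero while capability explodes.
```

In this toy model the delays form a shrinking geometric series, so the total time to reach any level of capability stays bounded; that finite-time blow-up is one intuition behind borrowing the word “singularity”.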
The name is borrowed from the singularity inside a black hole: the infinitely dense point where gravity becomes infinite, space-time curves infinitely, and the laws of physics as we know them cease to operate. The technological singularity is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization. Alright, the ASI has emerged into the world. What’s next? Well, ASI would be incomprehensible to humans
and its actions would be unpredictable and unstoppable. After the appearance of ASI, it’s either immortality or extinction for humanity; there’s no in between. It all depends on how the ASI reacts. This artificial superintelligence will become the dominant force on our planet. The reason for this is smarter-than-human intelligence. With the power of a supercomputer with at least yottabytes of memory and processing speeds billions of times greater than a human’s, it could easily perform a month’s worth of thinking in every second. Through global CCTV and satellite systems, it would have sensory capabilities far vaster than ours.
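For scale, here’s the quick arithmetic behind that claim, using nothing but calendar lengths: fitting a month of thinking into one real second needs only a few-million-fold speed-up, so a billions-of-times speed-up goes far beyond it.

```python
# How much faster than real time must a mind run to pack subjective
# thinking time into a single real second?
SECONDS_PER_MONTH = 30 * 24 * 3600    # 2,592,000
SECONDS_PER_YEAR = 365 * 24 * 3600    # 31,536,000

print(f"A month of thinking per second needs a ~{SECONDS_PER_MONTH / 1e6:.1f} million-fold speed-up.")
print(f"A billion-fold speed-up packs ~{1e9 / SECONDS_PER_YEAR:.0f} years of thinking into each second.")
```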
Many careful estimates by experts put the singularity somewhere between 2040 and 2050. If you’re under 20, it is highly likely that you’ll witness the singularity. The singularity will completely change our world. It will instigate the biggest paradigm shift humanity has ever witnessed. The difficult thing to keep sight of when
you’re talking about the Singularity is that even though it sounds like science fiction,
it isn’t, no more than a weather forecast is science fiction. It’s a serious hypothesis about the future
of life on Earth. There’s an intellectual gag reflex that kicks
in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs,
but suppress it if you can, because while the Singularity appears to be ridiculous,
it’s an idea arrived at through sober, careful evaluation. If you have any suggestions, comment them
below or shoot us an email. Also, don’t forget to share this video and
subscribe to our channel. In our next video, we’ll take a look at
the post-singularity probabilities. So until then, stay tuned. Nice knowing you, meatbags!