The Game of Life — Science étonnante #49

Hello everyone.
Today, we are going to talk about the Game of Life. No, I’m not referring to a board game or a variant of Russian roulette, but to a funny little computer program imagined in the 1970s by the mathematician John Conway. For almost 50 years, this game has fascinated mathematicians and computer scientists, but also biologists and philosophers, because it shows us how a system evolving according to simplistic rules can produce incredibly rich results. And in a way, it helps us better understand how a big pile of interacting atoms can end up forming the complex, thinking beings that we are.

The Game of Life is played on a grid of squares that we can take as big as we want. We imagine that each of these squares can host a cell, like a living cell. If a square contains one, we color it black; conversely, if it is empty, we leave it white. Here is, for example, a very simple possible configuration, in which we have only six living cells.

Now, the idea of the Game of Life is to evolve this population of cells, turn by turn, according to rules that are fixed in advance and that are childishly simple. Rule #1: a cell survives to the next turn only if it is surrounded by two or three neighbors. Below two neighbors, the cell dies of isolation; above three, it dies of overpopulation. Rule #2: if an empty square is surrounded by exactly three neighbors, it becomes alive on the next turn. A cell is born there.
To see what that gives, let’s try to apply these two evolution rules to our previous example. Here, we have a cell that has only one neighbor, so it will die of isolation. Here, we have an empty square that has exactly three neighbors, so there will be a birth here, and the same there and for the three squares below. On the next turn, we then have a new situation, which is this one. And then we just have to start over to make this new situation evolve. There will be several deaths by overpopulation, and also some births. And so on: we keep running it as long as we want.

You see that applying the two rules is extremely simple. In fact, it’s called the Game of Life, but it’s not really a game, since there is no player. You have no decision to make, apart from choosing the starting situation. After that, you just apply the rules step by step, without thinking.

If we go back to our little example, we have reached the following configuration: a square made of four cells. And there, you can convince yourself that nothing happens anymore. Neither deaths nor births. The figure does not evolve anymore; it is stable.
Obviously, the fact that we arrive at this stable structure is related to the choice we made for the starting configuration. If we had taken a different configuration, things would have gone differently. To explore different scenarios, we are not going to keep doing this by hand. The rules are extremely simple, so they are very easy to program, and that will allow us to compute the evolution much faster. There are plenty of small applications we can download, but to be able to make nice illustrations, I recoded the Game of Life in Python; for those interested, I will put the link in the description.
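To give an idea of how little code the two rules require, here is a minimal sketch in Python (not the script from the description, just an illustrative version) that represents the living cells as a set of (row, column) pairs and computes one generation:

```python
from collections import Counter

def step(live):
    """Advance the Game of Life by one generation.

    `live` is a set of (row, col) tuples marking the living cells.
    """
    # Count, for every square, how many living neighbors it has.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Rule #2: exactly three neighbors -> the square is alive next turn.
    # Rule #1: a living cell also survives with exactly two neighbors.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}
```

Because the living cells are stored as a set of coordinates rather than a fixed array, the population is free to spread as far as it likes.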
When John Conway proposed the rules of the Game of Life in 1970, he and some fellow researchers started exploring lots of starting configurations to see where they would lead. Of course, if I start from a single cell, or even two, they die immediately of isolation. So let’s take three cells in a row, and here is what happens when we make them evolve: we do not get a stable configuration, as before, but an oscillating structure. It’s called “the blinker.” Starting from four cells in a row, the evolution stops, but on a new stable structure nicknamed “the hive.” And with five cells, after a few steps we get a new oscillating structure consisting of four blinkers. But in fact, we do not always get stable or oscillating structures; sometimes a structure ends up dying out completely, like this one, for example. Of course, I am not going to run through every possible starting configuration.
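With the step function sketched above, you can check these first small examples yourself, for instance that isolated cells die immediately and that three aligned cells form the period-2 blinker:

```python
# One or two cells die of isolation on the very first turn.
assert step({(0, 0)}) == set()
assert step({(0, 0), (0, 1)}) == set()

# Three cells in a row: the blinker, which oscillates with period 2.
blinker = {(0, 0), (0, 1), (0, 2)}
flipped = step(blinker)            # a vertical bar of three cells
assert step(flipped) == blinker    # back to the start after two turns
```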
What is fun, if you want to find new stable or oscillating structures, is to start from a random initial situation, launch the evolution, and see where it takes us. Among the stable structures we can find this way are, for example, “the boat,” “the bread,” and “the pond.” And among the oscillating structures, apart from the blinker, here are “the beacon” and “the toad.”

What is fascinating in the Game of Life is how simple little things can lead to complex structures. For example, take as the starting situation the hive configuration, which is stable, as I said, but just add a small cell in a corner, launch the evolution, and here is what happens: you now have four hives. If I take a staircase like this one, with just 6 cells, look: the structure develops, we wonder how far it will go, and it ends up settling into a stable structure consisting of four blocks quite far apart from each other. It took sixty-three generations to get there.

I could show you lots of examples like that, but the observation that fascinated Conway, and that showed him the full potential of his little game, is the evolution of this simple structure: five cells, that’s it. And yet, look. In fact, I’ll have to speed up the movie, and also zoom out, because it keeps going. Again. There you go. The first amazing thing is that it took 1103 generations for the evolution to stabilize, which is really huge considering that we started from five little cells. The evolution was completely chaotic, with no regular pattern, and we finally end up with a configuration without any symmetry, consisting of four blinkers here and 15 stable structures: eight blocks, four hives, a boat, a bread and a ship. But that’s not the most fascinating thing.
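By the way, this famous five-cell pattern is the one usually known as the R-pentomino. With the set-based step function sketched earlier, you can replay its long evolution yourself; the snippet below simply prints the final population, without asserting the exact census:

```python
# The five-cell pattern discussed in the video (the R-pentomino):
#   .XX
#   XX.
#   .X.
r_pentomino = {(0, 1), (0, 2), (1, 0), (1, 1), (2, 1)}

cells = r_pentomino
for _ in range(1103):          # the number of generations quoted in the video
    cells = step(cells)
print(len(cells), "living cells after 1103 generations")
```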
In the middle of the simulation, something really amazing happened; I do not know if you noticed it. I’ll show you again: look, there, around the 70th generation, this thing. It’s called a glider, and here are some others. The glider is a structure that changes shape but that, every four turns, comes back to its original configuration, moved one square diagonally. We can see others a little later in the simulation. It is a structure that can move.
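This four-turn diagonal hop is easy to check with the step function from earlier; here is a small sketch that steps a glider four times and verifies that it is the same shape, shifted one square down and one square to the right:

```python
#   .X.
#   ..X
#   XXX
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

cells = glider
for _ in range(4):
    cells = step(cells)

# Same shape, translated one square diagonally (one row down, one column right).
assert cells == {(r + 1, c + 1) for (r, c) in glider}
```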
The presence of this little glider is quite revolutionary, because it shows two things. The first is that a pattern can keep growing in space forever during its evolution: eight gliders are emitted here, and they will therefore keep going to infinity. The second thing is that these gliders let us imagine a form of communication between structures, and we will see that this opens many doors.

Ever since the Game of Life was imagined by Conway, “players” have been seeking to discover or build interesting structures, bigger and bigger, richer and richer. Finding large stable structures that are not just associations of existing ones is not so simple. The biggest so far is this one: we call it Cthulhu, and it is made up of 45 cells. Regarding oscillators, many more were found, in particular some whose oscillation period is not two, like the blinker, but a larger number. For example, here is a structure called the pulsar, oscillating with a period of 3, meaning the figure repeats every three turns. Another more complicated example: take 10 cells in a row. At first we have the feeling that the structure evolves without any particular pattern, but in fact it repeats every 15 turns: it is an oscillator of period 15, called the pentadecathlon. We now know oscillators of many different periods. Here is one of the biggest: an oscillator of period 177, meaning the configuration repeats every 177 turns.

On the glider side, other structures able to move have also been found, for example the following three, called spaceships: respectively the small, the medium and the big one. There are also very pretty structures like this one, the Canadian goose, or more symmetrical ones like this one, called 60P5H2V0. Yeah. We could imagine pushing even further, but the structure that caused a small revolution in the Game of Life is this one: it’s called “the glider gun,” and you can see why. It is a structure that oscillates while producing gliders. It was discovered by Bill Gosper and bears his name: the Gosper glider gun. This structure settles a question Conway had asked from the start: it shows that the number of living cells can grow to infinity over time, since gliders are emitted at a steady pace and never disappear. And obviously we can have fun trying to build guns for other types of moving structures. Here is an extreme example: remember the 60P5H2V0 ship? Well, we can make a gun for it. It’s just slightly bigger than the glider gun. In fact, yes, it is really, really bigger. Look at this harmonious ballet. Don’t you find it beautiful? In fact, what is beautiful is that all this fascinating complexity stems from only two completely basic rules.

Do you want more structures? This one is called a puffer: it’s a moving structure, like a ship, but one that produces waste that remains behind it. And starting from this same puffer, but adding in its wake spaceships that follow it, we can make an “ecological” version of it. Look: the ships convert the puffer’s debris into gliders. Here is another pretty awesome puffing structure, called a line puffer. It leaves a hell of a mess behind it.

Earlier we saw that the glider gun was a fairly compact structure that produces infinitely many cells, but in fact we can do even simpler than that. Here is a starting structure with only ten cells, which does not look like much, but let’s see how it evolves. Look: after a while, we notice a kind of road paved with blocks that keeps building indefinitely. With only ten cells at the start, we can show that it is the smallest possible structure with infinite growth.
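Measuring the period of an oscillator is also easy to automate with the step function above: run the evolution until a configuration repeats and report the gap. The snippet below is a small sketch of that idea, tried on the ten cells in a row mentioned above, which, per the video, settle into the period-15 pentadecathlon:

```python
def find_period(cells, max_steps=1000):
    """Run until a configuration repeats; return (generation first seen, period)."""
    seen = {frozenset(cells): 0}
    for t in range(1, max_steps + 1):
        cells = step(cells)
        key = frozenset(cells)
        if key in seen:
            return seen[key], t - seen[key]
        seen[key] = t
    return None  # no repetition found within max_steps

row_of_ten = {(0, c) for c in range(10)}
print(find_period(row_of_ten))   # expected period: 15 (the pentadecathlon)
```

Note that this only detects a cycle for patterns whose evolution stays bounded; a glider gun or a puffer, which keeps emitting new cells, would never repeat.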
We can also have fun looking for structures with infinite growth that are as fast as possible, and the record is held by this structure, called Max, and you can see why: as it evolves, it fills space with a pattern of zebra stripes that is locally stable. Well, stable, but not very robust. Look what happens if I artificially add a small disturbance when we get to the edge: it’s the apocalypse. But the most spectacular structure, for me, is this one: it’s called Breeder 1. It is made of ten large ships that leave debris and eject gliders. Look at what it produces: the gliders and the debris eventually come together to form glider guns that line up in its wake, and as time passes, the number of simultaneously emitted gliders keeps increasing. It’s beautiful, isn’t it?

Well, we’ve spent a few minutes making pretty drawings, and you might wonder what the use of all this is. If all of that leaves you cold, remember that what makes the beauty of the thing is that all this complexity is generated from just two extremely simple rules about the survival and birth of cells. And that’s what makes the Game of Life so interesting: the fact that basic rules can give rise to extremely complex structures and behaviors. Besides, nobody could have anticipated, just by looking at the two rules, that they could give birth to all that. This is the phenomenon we call “emergence”: the fact that complex structures can have macroscopic properties that cannot be found in their basic constituents. Emergent situations like these are found in physics and biology, but also in sociology or economics, when we study the behavior of groups. And it is because they teach us about these phenomena of emergence that we are so interested in studying simple systems that are easy to simulate by computer, like the Game of Life. For those who remember, I had already mentioned these issues in a previous video, about Langton’s ant.

In the case of the Game of Life, the investigation has been pushed quite far. For example, it has been shown that there are structures capable of self-replication, like what happens with living cells. And from a theoretical point of view, it has been shown that one can build a Turing machine within the Game of Life. A Turing machine is a kind of abstraction of the concept of a computer, so this means that everything that can be computed with a conventional computer can also be computed with a simulation of the Game of Life. We can design logic gates, perform operations and calculations, and in the end reproduce, within the Game of Life, any program that could have run on a computer.

And here is the craziest illustration of this idea. What you see here is a series of little gliders that move around inside a sort of square structure that creates them. When you look at it from a distance, this square filled with gliders is a bit like a pixel, a box that would be black. There is also a version without gliders that suddenly looks a lot like a white square. We can assemble several structures of this kind to form a network of giant white or black squares, and by setting up the right interactions, these giant squares can evolve according to the rules of the Game of Life. I admit this simulation was too complicated for me; I am not the one who made it. So we have a configuration of the Game of Life that can simulate a Game of Life: I hope the depth of this result fascinates you as much as it does me.

A question we can ask, having seen all this, is: why use these two rules and not others?
Well, Conway found them by trial and error, trying to have rules that are both sufficiently simple, that avoid an explosion of the number of cells, but that still make it possible to form sufficiently complex structures. There are variations of the Game of Life with different rules that also produce quite rich results, but to try to understand what is going on, one thing we can do is study an even simpler system, called elementary cellular automata.

A little vocabulary here: I did not want to bore you from the start with scholarly words, but the Game of Life belongs to the class of systems called cellular automata. This covers all systems made of simple elementary structures, the cells, that evolve in discrete time steps according to fixed, deterministic rules.
To make a cellular automaton even simpler than the Game of Life, we can go down a dimension and no longer consider a grid of cells, but simply a line. If our system is a simple line of black and white squares, we then have to specify the rules of evolution according to the colors of the cells and their neighbors. For example, here is a possible evolution rule. There are eight possible cases to consider, depending on the color of a cell and of its two neighbors. This particular rule tells us that if a cell is white and its two neighbors are white, it stays white, but that in all other cases it becomes black on the next turn, or stays black. Let’s see what happens if we simulate this example from a very simple starting configuration: just one black square. For clarity, I will represent the evolution over time by stacking the successive lines. Here is the initial situation, then the next one, where we applied the rules, then the one after, and so on. If I represent that zoomed out and sped up, this is what we get.

Now let’s make a small modification to the previous rule: an isolated black cell now dies. Here is the image of the evolution we get. And if I make another possible modification, I get yet another evolution. To completely specify an evolution rule, there are eight cases to consider, and since in each case the result can be white or black, you can convince yourself that there are only 2 to the power 8, that is 256, possible evolution rules for this kind of cellular automaton. So it is quite easy to program them all and look exhaustively at what happens for each of them, especially as some rules are just symmetric versions of one another. In the 80s, the physicist Stephen Wolfram began to study all these rules systematically and to give them a numbering so as to distinguish them. The first rule we saw is what he calls rule 254, the second is rule 250, and the third one we saw was number 50.
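These 256 rules are just as easy to program as the Game of Life. Here is a minimal sketch in Python of one evolution step, using Wolfram’s numbering (the three-cell neighborhood, read as a 3-bit number, selects one bit of the rule number), followed by a small text rendering of rule 254 started from a single black square:

```python
def eca_step(cells, rule):
    """One step of an elementary cellular automaton.

    `cells` is a list of 0 (white) / 1 (black); `rule` is the Wolfram
    rule number, from 0 to 255. Squares outside the line count as white.
    """
    padded = [0, 0] + cells + [0, 0]   # let the pattern grow sideways
    return [
        (rule >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

line = [1]                             # a single black square
for _ in range(16):
    print("".join(".X"[c] for c in line).center(35))
    line = eca_step(line, 254)         # try any number from 0 to 255 here
```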
Let’s now look at a new interesting rule: rule 126. It is again very simple: a white square whose two neighbors are white stays white, a black square surrounded by two black squares dies of overpopulation, and in all other cases the square becomes or remains black. Here is the evolution of this rule. You may recognize this figure: it’s called the Sierpiński triangle, a well-known example of a fractal structure. It’s funny: we have basic rules that are completely local, and yet they create a global structure, as if it had been planned in advance.

Let’s see another example, rule number 30, and here is its evolution. It’s weird: on one side we seem to have relatively regular structures, and on the other it seems totally random. We see larger structures from time to time, but no pattern that repeats. Stephen Wolfram was able to show that this structure is chaotic in a well-defined mathematical sense, and that it can even be used as a random number generator.

Now one last example, the one that fascinated Wolfram the most: the famous rule number 110. It differs from rule 126, the Sierpiński triangle, by only one change: here it is. If we study its evolution, we realize that on one side, on the right, it stays completely white, but on the left, funny things happen. At first glance the pattern looks regular everywhere, but when you take a closer look, you see structures that appear, spread, and disappear. There seem to be interactions between these structures, on a background that stays regular. The structures that spread behave a lot like the gliders of the Game of Life, and we can run the simulation as far as we want: this strange behavior remains. It is neither something regular nor something chaotic; it is in between. It is as if we had just created a small world in which forms of artificial life develop and interact, a kind of mini Game of Life.

In 2002, the mathematician Matthew Cook published a proof of the universality of rule 110: that is, rule 110 can be used to simulate a Turing machine, and therefore any function computable by a conventional computer, just like the Game of Life. But despite this proof, and as with the Game of Life, we cannot say we really understand why rule number 110 behaves like this and not the other ones. The emergence of complexity and universality in such a simple system remains an open question. It shows that even the most basic computer programs have not finished surprising us.