A. Introduction.

  Currently, a number of countries are working on creating a variety of robots and on virtual modeling of neural networks that functionally replicate the brain. Given the current pace of development, it can be expected that in 15 ÷ 20 years a robot's body will match the complexity of the human body, and that at the same time computer performance will allow the simulation of a neural network corresponding to the complexity of the human brain.
     It is also clear that if both problems are solved, there will be a desire to unite the body of a robot with an almost human neural network. The success of such a union would mean that a new man has been created.
    Experiments on creating ever more sophisticated robots leave no doubt that in twenty years, with proper funding, robots indistinguishable from humans will be created. Moreover, these robots will speak, will be capable of self-learning, and will perform quite challenging jobs. The realization is gradually forming that with a significant increase in the speed of computers, with improvements in software, and with the further development of algorithms, thinking robots can be created that pass the Turing test and are virtually indistinguishable in their behavior from people.
    The situation is far less obvious in the modeling of the human brain. Currently, the most productive modern supercomputers allow the simulation of a neural network whose size approximately corresponds to the brain of a rat. Given the pace of development of computer hardware, it is easy to predict that in ten to twenty years, using the same programs, computers will be able to simulate a neural network whose size corresponds to the human brain. However, it is unknown whether this virtual brain will think. It is not enough to construct a complex device; it is also necessary to know how to make it work. With the brain this is particularly difficult, since there are no conventional notions of what consciousness and thinking mean, and no formal criteria that help to distinguish a thinker from a merely good executor. The Turing criterion, which potentially allows a computer to be distinguished from a person, will not help in this situation. Too fine a line separates a genius from an ordinary person: they look exactly alike, but one can act in unusual circumstances or make a major scientific discovery, while the other is able to perform only the routine operations he was taught.
     Moreover, it turns out that our ability to act in non-standard and sometimes completely illogical ways is associated not only with the emotional overtones of our behavior; it has a fundamental cause and fundamental importance. Here one can refer to the work of Penrose, who argued on the basis of Gödel's theorem that the part of the human mind responsible for creativity cannot be reduced to any algorithm in principle. That is, the creator is more complicated than any algorithm, and so the creator cannot be created in accordance with any algorithm. One may recall the famous scholastic question: "Can Almighty God create a stone that He cannot lift?" In this situation, however, there is quite a scientific answer. The creator can create not only a new algorithm but also a new creator; in the second case, however, he has to act by non-algorithmic methods. What these methods are will become clear from the content of the article. From these considerations it follows that a mathematical model of creative thinking cannot be created, and therefore human thinking cannot be simulated by a computer. An attempt to create a robot capable of thinking thus seems utopian. Moreover, on the basis of the proposed ideas it is possible to draw a clear distinction between artificial intelligence and human intelligence. In the opinion of the authors, the main difference is that artificial intelligence is algorithmic, while human intelligence cannot be reduced to algorithms.
      So, it is obvious that a thinker cannot be created using direct programming and a computer. However, the question arises: is it possible, using a computer, to build a device in which thinking similar to the human mind can be generated under certain conditions? The only device known to people in which thinking can arise is the neural network of the human cerebral cortex.
      Therefore, attempts to simulate a virtual neural network close to the parameters of the human brain and eventually to obtain a thinker seem promising. However, even a perfect copy of the human brain does not guarantee that a mind will arise in it. Slightly modifying R. Feynman's famous statement, we can say that we will understand what the mind is only when we learn to simulate it. Therefore, the main theme of the article (and of the book of the same title) is the outline of a plan for modeling the human brain as a device and for modeling the conditions of the origin of a mind as a process.

§ B. Some Notable Features of the Structure of the Brain and of the Origin of a Mind.

     B.1. Complexity of a Device.      
      In the middle of the XXth century, one of the founders of cybernetics, John von Neumann, considered the possibility of creating self-replicating machines. Von Neumann established that there is a certain minimum level of complexity above which, given sufficiently complex programs, a machine is able to execute the full scope of operations that machines are generally capable of performing. In particular, above this threshold level of complexity a machine may acquire the capability of self-replication.
       Kolmogorov proposed to treat the complexity of any device or algorithm as the minimum amount of text or information necessary to describe the object. Following that approach to complexity, it was found that the minimum complexity of a machine capable of self-replication, given the availability of the required program in it, stands at about 10^6 bits.
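Although Kolmogorov complexity itself is not computable, an upper bound on it can be estimated with any general-purpose compressor. The following sketch (our illustration, not part of the original argument) treats the compressed length of a description as such a bound:

```python
import random
import zlib

def complexity_upper_bound_bits(data: bytes) -> int:
    """Upper bound on Kolmogorov complexity: bits in a compressed description."""
    return 8 * len(zlib.compress(data, level=9))

# A highly regular object admits a short description...
regular = b"ab" * 5000
# ...while a random-looking object of the same length does not.
random.seed(0)
irregular = bytes(random.randrange(256) for _ in range(10000))

print(complexity_upper_bound_bits(regular) < complexity_upper_bound_bits(irregular))  # True
```

Any compressor only bounds the true complexity from above; the theorem cited later in the article is precisely that the exact minimum cannot be computed.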
       Apparently, a threshold of complexity also exists for a neural network, above which it can acquire the ability to think. There are at least two facts which indicate the existence of such a threshold: the appearance of thinking in the transition from animals to humans, and the acquisition of thinking by children upon passing a certain threshold in their development. The threshold of brain development at which it reaches the level of thinking is apparently overcome not abruptly but gradually: quantitative changes that characterize the development of the brain gradually lead to a new level, the development of thinking.
       Following Kolmogorov, the complexity of the newborn brain can be identified with the number of binary characters in the instructions for its assembly. For living creatures, the genome is such an instruction. As the human genome contains approximately 10^10 bits of information, this value can be considered the approximate upper limit of the complexity of the newborn human brain. With further development the child's brain becomes much more complicated: the neural network develops; axons and dendrites grow between neurons; contacts (synapses) arise between them. Along with the growth of the network, it receives a huge amount of information that determines the effectiveness of separate network circuits and synapses. Then myelination of axons takes place. The authors estimate the complexity of the adult brain at 10^16 ÷ 10^17 bits, based on the facts that there are about 10^14 connections (synapses) in the adult brain, that they have different levels of efficiency, and that the neurons connected by the synapses are divided into several subgroups.
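One way to reproduce the authors' order-of-magnitude figure is the following back-of-the-envelope calculation; the neuron count and the number of distinguishable efficiency levels are our illustrative assumptions, not values from the text:

```python
import math

NEURONS = 10**11          # commonly cited approximate neuron count (assumption)
SYNAPSES = 10**14         # synapse count quoted in the text
EFFICIENCY_LEVELS = 100   # assumed number of distinguishable synaptic efficiencies

# Bits needed to specify one synapse: which two neurons it joins,
# plus its efficiency level.
bits_per_synapse = 2 * math.log2(NEURONS) + math.log2(EFFICIENCY_LEVELS)
total_bits = SYNAPSES * bits_per_synapse

print(f"{total_bits:.1e} bits")
```

With these assumptions the total comes out near 10^16 bits, at the lower end of the authors' 10^16 ÷ 10^17 range; different assumptions about addressing and efficiency resolution shift it within that range.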

B.2. Discontinuity of the Algorithm Complexity.  
  As shown by von Neumann, if the complexity of a machine exceeds a certain threshold, software (algorithms) can be created that will allow this machine to reproduce. At first glance it seems that the same logic can be applied to the ability of the brain to think creatively: if the complexity of the brain as a device exceeds a certain threshold, software (algorithms) can be created that allows the brain to think creatively. However, Penrose argued that this is not so: it is impossible to make a program that would allow a brain of any complexity to think creatively. Creativity is the invention of new algorithms, and so, according to Penrose and Gödel's theorems, the creator cannot be algorithmic. To come up with new, original algorithms, man himself should not be subordinated to any algorithms, and his thinking must be able to perform jumps that disrupt logical sequences. The authors suggest that the creative process is so complicated that the complexity of the corresponding algorithm simply experiences a discontinuity. This idea is illustrated qualitatively in Fig. 1, which shows the complexity of behavior algorithms for different levels of behavior according to Maslow's pyramid of needs. Thus, we are faced with a seeming contradiction: according to Penrose, an algorithm of creative thinking cannot be made, yet the human brain is able to think creatively.
 The following text provides a number of possible resolutions of this contradiction.


B.3. Body.
  The brain has to have a body which, on the one hand, feeds it with energy and, on the other hand, provides incentives for its development and opens up opportunities for experiments and control over a subordinate object. The body should be sufficiently complex to place high demands on the development of the brain; the human body has about 300 degrees of freedom. Only isolated cases are known in which a body incapable of movement from birth developed an outstanding brain. Experiments on rats and a subsequent study of the structure of their brains showed that the brain is most developed in rats living in a difficult, stimulating, developing environment. Rats that lived in a simple environment, and even rats that merely had the opportunity to observe life in a complex environment, developed poorly and had short lifespans. So for brain development it is necessary not simply to observe a complex environment but to interact with it. The brain can interact with the developing environment only through the body, and the body must have a means of communication.

 B.4. Chaos.   
One of the factors that may affect the brain's ability to think creatively is the high degree of chaos in its processes, which is characteristic of the waking brain. One of the modes in which neurons are able to work is spontaneous pulse generation by individual neurons. As a result of such generation, a self-sustaining level of chaos occurs in the neural network, onto which the signals coming from the senses are superimposed. Perhaps the whole process of thinking, with its spontaneously occurring thoughts, decisions, and memories, is just the result of the passage of random fluctuating excitations through chains of neurons and synapses on which some information is recorded. If so, then the role of spontaneous pulse generation by neurons becomes extremely important. Moreover, the possibility of spontaneous pulse generation by neurons is one of the factors that provide the brain an opportunity for non-algorithmic (creative) thinking.
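A minimal toy model (our illustration, not a biophysical simulation) makes the mechanism concrete: a leaky integrate-and-fire neuron driven only by internal noise fires spontaneously, while the same neuron without noise stays silent.

```python
import random

def spike_count(noise_amp: float, steps: int = 10000, seed: int = 1) -> int:
    """Leaky integrate-and-fire neuron: count spikes over `steps` time steps."""
    rng = random.Random(seed)
    v, threshold, leak = 0.0, 1.0, 0.99
    spikes = 0
    for _ in range(steps):
        v = v * leak + noise_amp * rng.gauss(0.0, 1.0)  # leak plus internal noise
        if v >= threshold:  # spike, then reset the membrane potential
            spikes += 1
            v = 0.0
    return spikes

print(spike_count(noise_amp=0.1) > 0)    # spontaneous activity under noise
print(spike_count(noise_amp=0.0) == 0)   # no input, no noise: silence
```

In a network of such units, the spontaneously generated spikes would provide exactly the self-sustaining background activity that the text describes.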

    B.5. The Two Parts of the Nervous System. 
Conditionally, the nervous system, and the brain in particular, consists of two parts: one is hard-coded genetically, and the other develops in the process of life. The neural network of the cerebral cortex is precisely the part of the nervous system that is formed in the process of life. The features of the structure and functioning of the neural network determine the character and abilities of the person who possesses it. While a computer is clearly divided into hardware (the device) and software (recorded algorithms), in the brain the neural network is both hardware and software. Hence the complexity of the neural network as a device and the complexity of the algorithms written in it should be commensurate. From this it follows that the discontinuity in the complexity of the algorithms may arise either due to intrinsic chaos or due to external influence.

    B.6. Senses. 
For efficient development, the brain should be supplied with an intense incoming flow of information from the senses. Estimates show that healthy human senses provide an information flow of about 10^7 bit/s. This flow of information ensures the normal development of the brain. In the absence of any sensory organ during brain development, development slows. In the absence of both eyesight and hearing, a child's education becomes a very complex teaching task.

     B.7. Upbringing.
    If a person has normally developed sensory organs and pathways from the sensory organs to the brain, the next necessary condition for the development of the brain is that the child be in a complex information environment. The most complex environment is an environment of people possessing speech, culture, science, and technology. Being in such an environment, a child's brain can potentially reach the level of complexity of that environment. Each generation perceives the achievements of the previous generation as a natural and familiar environment.

     B.8. Speed.
The process of thinking depends essentially on the speed of the processes in the various elements of the brain's neural network. The most important of these parameters are as follows:
-        excitation time (duration of a nerve impulse, or spike): 1 ÷ 3 ms;
-        duration of the refractory period (pause between pulses): 1 ÷ 5 ms;
-        period of spontaneous neuronal excitation: 1 ÷ 10 s;
-        signal propagation speed along an axon with a myelin sheath: 5 ÷ 120 m/s;
-        signal propagation speed along an axon without a myelin sheath: 0.5 ÷ 5 m/s;
-        time delay of the signal in a synapse: 1 ms;
-        minimum propagation time within the cerebral cortex: 0.01 s;
-        speed of conscious perception of information: 40 bit/s.
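As a quick consistency check on these figures (the cortical path length of roughly ten centimeters is our assumption, not a value from the text), the quoted minimum propagation time follows directly from the myelinated conduction speed:

```python
path_length_m = 0.1     # assumed characteristic path across the cortex, ~10 cm
speed_m_per_s = 10.0    # within the quoted 5 ÷ 120 m/s myelinated range
propagation_time_s = path_length_m / speed_m_per_s
print(propagation_time_s)  # 0.01 s, matching the value in the list above
```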

    B.9. Three Stages of Development.
Conditionally, the process of brain development can be divided into three stages. In the first stage there is rapid growth of the neural network, with the formation of a huge number of new dendrites and synapses; this period corresponds to the first years of a child's life. In the second period, typical for teenagers, part of the neural network dies off; apparently, this part of the network was excessive and had not been involved in any of the processes in the brain. During the third period the myelin sheath forms on the axons, and the speed of signal propagation along them grows considerably. This age (about 20 years) corresponds to a person's ability to make scientific discoveries and to set records in the sprint.

    B.10. Synaptic Instability.  
The most widespread hypothesis about the mechanisms of learning and memory is that short-term memory is due to changes in synaptic efficiency, while long-term memory is due to the synthesis of new proteins and the appearance of new synapses. The property of synapses to change their effectiveness depending on the frequency of their use is called plasticity: the more frequently a synapse is used for the passage of neural excitation, the more effectively it excites the postsynaptic neuron. The authors suggest that this property of synapses can lead to the development of an instability, which they propose to call "synaptic instability". This instability can significantly affect the operation of the brain and, as the authors suggest, under certain conditions can lead to an autism-like disorder. The instability can develop as follows. Due to plasticity, the more a synapse is used for the passage of neural excitation, the more effective it becomes; but the more efficiently it excites the postsynaptic neuron, the more often neural excitation will pass through this synapse. This marks the beginning of the buildup of the instability. In this scenario, synapses entering the zone of instability will begin to line up into chains; the chains will likely close into rings; and rings of synaptic instability will compete among themselves until one, the most effective, absorbs all the rest (see Fig. 2). In a healthy brain synaptic instability does not develop, and there are no symptoms of a disease similar to autism, so there must be some mechanism suppressing the critical development of this instability. Apparently, such a mechanism may be chaos. We can then assume that, taken together and in dynamic equilibrium, synaptic instability, as a manifestation of the ordering of neural excitation (thoughts), and chaos, limiting that order, may correspond to the features of our thinking.
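The positive-feedback loop described above can be sketched numerically. In this toy model (our illustration; the stabilizing term merely stands in for whatever mechanism, such as chaos, limits growth) a naive Hebbian update diverges, while a bounded variant settles to a finite weight:

```python
def final_weight(steps: int, stabilized: bool) -> float:
    """Toy positive-feedback loop: the weight w drives activity, activity grows w."""
    w, eta = 1.0, 0.1
    for _ in range(steps):
        rate = w                           # postsynaptic activity proportional to weight
        w += eta * rate                    # Hebbian potentiation: more use, stronger synapse
        if stabilized:
            w /= 1.0 + eta * rate / 10.0   # assumed stabilizing (chaos-like) term
    return w

print(final_weight(100, stabilized=False) > 1e3)   # runaway synaptic instability
print(final_weight(100, stabilized=True) < 20.0)   # bounded growth when limited
```

The unstabilized weight grows geometrically, which is the runaway the authors call synaptic instability; the stabilized variant converges to a fixed point, illustrating how a limiting mechanism keeps ordering and chaos in dynamic equilibrium.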

    B.11. Motivation.
The nervous system and, later, the brain developed in living beings due to the need to respond to changes in the environment and to withstand competition with other living creatures. Only those were able to survive and win who really wanted to eat, to reproduce, to communicate with their peers, and to learn. All of these "wants" were fixed in instincts through generations of selection and constitute genetic memory. Genetically predetermined motivation is implemented through emotions, which in turn are driven in the brain by the injection of various hormones. It is through instincts and emotions that nature affects us, forcing us to do things that are useful, including for the development of the brain. Emotions also influence the process of memorization: they help to identify those events that must be memorized.


 With a high degree of probability it can be stated that genetically predefined forms of behavior are associated with the genetically predetermined part of the nervous system, while forms of behavior acquired by training are related to the cortex, which develops in the learning process. This thought becomes particularly evident if we compare, at an appropriate scale, MacLean's diagram of the structure of the brain and Maslow's pyramid of human needs (see Fig. 3). According to MacLean, the human brain conventionally consists of three parts, two of which evolved from the brains of living creatures standing on lower rungs of evolutionary development than people. Thus, the lowest part of the brain (the ancient cortex) evolved from the reptilian brain (the R-complex); the next part in rank and time (the limbic system) evolved from the brain of the lower mammals (the old cortex); and only the appearance of the new cortex (the neocortex) meant that Homo sapiens had appeared.


  Visual mapping of MacLean's scheme onto Maslow's pyramid emphasizes that the ancient parts of the brain, which are formed in accordance with a predetermined genetic program, are responsible for genetic memory, instincts, and motivated actions, while higher forms of behavior are formed in the process of education and the cultivation of neural networks in the brain's highest division, the neocortex;
-        The brain developed by layering new structures on old ones. Apparently, in simulation the same functions can be obtained with simpler hardware;
-        Some functions of the brain are quite simple and algorithmic and can be modeled using a common computer equipped with appropriate programs. The level of the cortex (the level of thought) cannot be modeled by direct programming on a computer of any complexity; for that it is necessary to model a neural network;
-        The problem of modeling memory consists of two parts. The first is the simulation of genetic memory, which gives us our basic instincts and motivations. Since this memory does not depend on education but is given at birth for the whole life, it is relatively easy to simulate: it can be simulated on a PC without modeling a neural network.
      Modeling of neural memory will obviously be one of the most difficult problems in the simulation of the neural network. Such memory arises during the formation of neural networks in the course of brain growth and the child's education; the formation of new synapses and the complication of the network simultaneously mean the emergence of memory. The mechanism of memory consolidation (the transfer of information from short-term to long-term memory) during sleep or wakefulness should be worked out in the process of improving the programs that create a virtual neural network.

      B.12. The Brain is a Single Entity.
 The brain is a single entity, but a number of its parts have narrow specializations. The unity of the brain is ensured by the fact that the axons of neurons can reach a length of one meter and penetrate the whole brain volume. Long axons provide memory allocation and, when necessary, the duplication of functions throughout the cortex; short axons provide the specialized work of local brain functions. The approximate ratio of the number of local links to long links is 70:30. The destruction of one or more structures of the brain (the cerebellum, the corpus callosum) does not lead to a complete loss of bodily function; some functions are compensated by the cerebral cortex. This means that, apparently, the dependence of the level of thinking on different variants of the constructive organization of the brain is gentle, without sharp extrema. This fact is crucial for the creation of a new man: during simulation of the brain, no model will perfectly match the actual structure of the human brain, and the fact that the dependence of the level of thinking on the design of the brain is flat enough gives a chance that model errors will not be fatal for the results of the experiment.
     The division of the brain by cutting the corpus callosum shows that two personalities can be obtained from one by such an operation. This encourages the idea that the reverse transformation is possible; that is, we can assume that two personalities can be combined into one, and that this new personality will have the abilities of each of the original personalities and possibly some absolutely new ones. For people such a union is technically impossible, but in the case of the successful creation of new people such consolidation will be fundamentally possible.
   Joint consideration of the split-brain experiments on humans and of Karl Pribram's experiments on mice, in which part of the visual cortex was removed, allows us to make one more hypothesis. Separation of the human brain into two halves showed that this operation leads to the emergence of two separate parts of the same "I", but with disabilities in each of these parts. In his experiments on mice, Karl Pribram found that when part of the visual cortex was removed, the mice did not lose the ability to perform complex operations in which they had previously been trained. This allowed Pribram to hypothesize that the principles of information storage in the brain are similar to the principles of information storage in a hologram. Comparing these two series of experiments and the conclusions from them, the authors arrive at a generalizing hypothesis: it can be assumed that information about all aspects of the human personality is distributed throughout the whole cerebral cortex. Accordingly, the division (if it were possible) of the cerebral cortex into two, four, or more parts would lead to the emergence of a corresponding number of pieces of the original personality, but each of the parts would be a poorer and paler copy of the basic personality.

     B.13. Self-organization.
   The fundamental difference between human thinking and information processing in a computer is that a computer can be conventionally considered a black box, isolated from the outside world, into which the raw data for some problem and the program for its solution are loaded. If the data are sufficient and the program is correct, then in due time the computer will give the answer. In the case of the human, the brain is not disconnected from the external environment, and thinking is influenced by external and internal noise. In addition, the brain in the process of thinking is affected by the body with all its needs. That is, thinking is influenced by the background information loaded into the brain and by a huge number of internal and external noise factors and motives. On the one hand, these factors interfere; on the other hand, they can push the brain to completely non-algorithmic decisions. Recall the legend of the apple that fell on Newton's head, and it becomes clear how new algorithms can be created.
     In summary, we can conclude that the development of the human brain from the level of the newborn brain up to the level of thinking is very similar to the process of self-organization of a complex open system consisting of autonomous elements. The process of thinking is triggered by the action of input and output information flows that exceed a certain threshold level. With this approach, it becomes clear that the level of complexity of the brain and the amounts of incoming and outgoing information flow should exceed certain critical levels. The scheme of the process is shown in Fig. 4.


§ C. The First Theorem of the New Man's Pedagogy.

  As already mentioned, according to Gödel and Penrose, the creator of algorithms cannot be algorithmic, and this imposes a fundamental limitation on the possibility of modeling human thought and on the methods of a thinker's education. Along with Gödel's incompleteness theorem, modern science knows a number of ideologically similar results: Turing's theorem on the undecidability of the halting problem, and the theorem on the non-computability of Kolmogorov complexity. From the point of view of the theme of this article, the non-computability of Kolmogorov complexity seems especially important. The fact is that records of scientific information become more compact with time while comprising ever larger amounts of information. That is, in its development science closely follows the precept of the monk Occam, who advised "do not multiply entities without necessity" ("Occam's Razor").
        Thus, we can say that a scientific discovery in formulated form is a brief record of information describing a phenomenon. On the other hand, Kolmogorov complexity is determined by the size of the shortest description of an object or phenomenon. Comparing these two approximate definitions, we can say that the formula of a discovery is a Kolmogorov record. But then the problems caused by the non-computability of Kolmogorov complexity must be extended to any formula of discovery.
        That is, assuming that the formula of a discovery is a Kolmogorov record, we find new evidence for the aforementioned proposition that creative work is a fundamentally non-algorithmic process. It can be stated that this conclusion follows both from Gödel's theorem and from the concept of Kolmogorov complexity.
        In this case, it is obvious that to make scientific discoveries and to invent new algorithms in programming, business, politics, and art, a thinker is needed: a creator capable of non-algorithmic methods. Most often these methods are called intuition. How is it possible to create such a thinker? Let us formulate a fairly clear idea, which we take as theorem number one of the future new man's pedagogy.
       Theorem #1. A thinker capable of solving non-computable problems can be created only by non-algorithmic methods.
       Let us prove this by contradiction. Suppose there are algorithms that allow one to create a thinker capable of solving a non-computable problem. But then these algorithms ultimately provide a solution for non-computable tasks. That is, by assuming that such algorithms exist, we have arrived at a contradiction; consequently, there are no algorithms for the creation of such a thinker, as required. There are no algorithms for the creation of such thinkers, and yet thinkers exist. So our task of developing methods for creating a thinker is greatly simplified: it is only necessary to analyze real-world facts. Some of these facts are listed in § B; others are presented in the next section.
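The argument can be stated slightly more formally (a sketch, using the authors' informal notion of "creating by an algorithm"):

```latex
% Theorem 1, proof sketch by contradiction.
\begin{proof}
Let $P$ be a non-computable problem, and suppose there exists an
algorithm $A$ whose output is a thinker $T$ that solves $P$.
Then the two-step procedure ``run $A$ to obtain $T$, then run $T$
on a given instance of $P$'' is itself an effective (algorithmic)
procedure solving $P$, contradicting the non-computability of $P$.
Hence no such algorithm $A$ exists.
\end{proof}
```

The sketch makes explicit why the composition matters: any algorithmic recipe for producing the thinker would, combined with the thinker itself, constitute an algorithm for the non-computable problem.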

§ D. To Build or to Grow?

       It has already been mentioned that the fundamental difference between human thinking and artificial intelligence is its non-algorithmic character. A creator is more complicated than any algorithm, so his behavior cannot be reduced to any algorithms. The question then arises: how to create a creator?
       Before proceeding to a specific discussion of the structure and methods of creating a model of a thinking creature, a few words should be said about the fundamentally different approaches that can be used. If we analyze the technological methods used by people in the process of creative activity, these methods can be roughly divided into two: building and growing. What are the differences between these two approaches?
        In the case of building, there is a certain project (an algorithm) in accordance with which a certain object is created; that is, it is known in advance which parameters the object will have and which materials will be used in its creation.
        In the case of growing, the process of creation is considerably less predictable. People face this situation when the object to be created is very complicated: it is subject to internal laws of its own and is not amenable to simple algorithmic management. In such cases, human influence on the final result of the creative process consists of shaping the environment surrounding the growing object. By managing the parameters of the environment, one can exert considerable influence on the object being grown, but the final result can never be predicted in advance (see Fig. 5). If you build a house from the same project in different places, you will always get the same house; the process is completely algorithmic. If you bring up different children using the same teaching methods, a different result will be obtained each time. That is, the process of growing defies algorithmization; it is fundamentally a creative process.
        In our task, building methods should be used in modeling those parts of the body, brain, and behavior of the new man whose structure, for the average person, is genetically defined. For these constituent hardware and software parts of the new man, a specific project or algorithm can be created, so construction methods are applicable. For those parts of the brain and behavior of the new man for which it is impossible to create an algorithm, the method of growing must be applied. Therefore, in the process of creating a new man it will also be necessary to develop a pedagogy for the education of the new man.


      Interestingly, the techniques of growing are applicable not only to wildlife but also, for example, to the formation of civil society or of fundamental science.
      It is also interesting to note that the idea of the fundamentally non-algorithmic nature of our thinking leads to new insights into the relation between the sciences and the arts. Traditionally, during the development of human civilization, the primacy of the exact sciences has been recognized, as they have always defined our understanding of the laws of nature and the level of technological development. In studying the human brain and its modeling, people have for the first time encountered an object that is much more complex than anything that has hitherto been the subject of the exact sciences. At this level of study, it is clear that the algorithmic methods of the exact sciences do not always work for an object of this complexity; non-algorithmic methods are required. In short, to create a creator capable of creating exact science, a complex humanitarian environment is required, and methods at the level of art are needed.

§ E. Principles of Modeling the New Man.

       Before proceeding to the simulation of an object or phenomenon it is necessary to define it. Then, at the end of the simulation, it will be possible to judge the success of the work performed. Obviously, any definition simplifies the real object; however, at least for the initial step of modeling such a definition is needed.
       In their work the authors proceed from the following definition of thinking: "Thinking is a property of a physical object that can receive, process, store, create and spread information, and that acts in accordance with the available information and motivation, both instinctive and social." The authors proceed from the following definition of consciousness: "Consciousness is a property of a physical object capable of perceiving, processing, storing and disseminating information, and of acting in accordance with the available information and motivation, both instinctive and social."
    Thus, from the authors' point of view, the level of consciousness requires the ability to take in information from the outside world and to act adequately in accordance with existing algorithms. The level of thinking requires, in addition to the skills of the level of consciousness, the ability to generate fundamentally new information. That is, this is the level that assumes a capacity for creativity, since by creativity we mean the creation of new algorithms.
     Thus, based on these definitions, it turns out that the level of consciousness is the algorithmic level of the mind, and the level of thinking is the non-algorithmic level of the mind. The algorithmic level of the mind can be implemented using common computers and direct programming of behavior. The non-algorithmic level of the mind can only be realized through neural network modeling and its subsequent development in accordance with the concepts outlined in the first section.
        As noted earlier, in order for thinking to begin to emerge in the brain, a number of conditions must be satisfied. The most important of them can be briefly formulated as follows:
     - The brain of a creature and its network must exceed a certain threshold of complexity. The brain structure is determined partly genetically and partly developed in the process of interaction with the external environment;
     - The creature must have a developed body, equipped with sense organs, communication organs, and limbs for working and moving in space. Such a body is necessary for receiving information from the environment and for acting on the environment;
     - The being must develop in a complex social information environment. Judging by the experience of mankind, only such an environment is able to develop the growing neural network of a child's brain to the level of thinking;
     - The being must really want to live and must have many needs. The mind appeared in humans because it was needed to satisfy those needs better.
        Thus, when modeling the new man it seems appropriate to pass through the stages of development corresponding to the levels of the triune MacLean brain: first the reptilian level, then the level of the ancient mammal, and at the last stage the creation of an artificial neocortex and, accordingly, of the new man.
        Since the reptilian brain level is algorithmic in its functions, it can be modeled using a traditional computer and direct programming of behavior. Thus, by direct programming it is possible to specify the motivational programs of behavior that living beings receive as instincts. Fig. 6 shows the scheme of the model of the simplest being (the reptile model).
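As an illustration of what direct programming of motivational behavior at the reptilian level could look like, here is a minimal sketch; the state fields and action names are purely illustrative assumptions, not part of any real robot platform:

```python
# Minimal sketch of a directly programmed "reptilian level": a fixed
# priority list of instinct-like rules maps sensed state to an action.
# All names here are illustrative, not any real robot API.

def reptilian_step(state):
    """Pick an action by fixed instinct priorities (pure algorithm)."""
    if state.get("danger"):           # self-preservation dominates
        return "flee"
    if state.get("energy", 1.0) < 0.3:
        return "seek_food"
    if state.get("novelty"):          # simple curiosity drive
        return "explore"
    return "rest"

print(reptilian_step({"danger": True, "energy": 0.1}))  # flee
print(reptilian_step({"energy": 0.1}))                  # seek_food
print(reptilian_step({"novelty": True}))                # explore
```

The point of the sketch is that the whole mapping from state to action is a fixed algorithm, which is exactly why this level can live on a common computer.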


       When modeling the brain that takes the next step after the reptiles, it is necessary to repeat the path nature took, gradually complicating the brain. Copying nature, one should keep the design used in modeling the reptilian brain, while the limbic system should be modeled on a supercomputer. The scheme of the model of a creature with a simple cortex (the lower mammalian model) is shown in Fig. 7.

    The general idea of the design of the body and brain of the new man, combining methods of construction and methods of growing, is clear from Fig. 8.
    The body in its design and complexity should be as close to that of a normal human as possible, including facial muscles and the muscles controlling the eyes, as well as the maximum possible number of sensors serving as the sense organs of the new man. The brain must reproduce the idea of the triune MacLean brain. The reptilian level is modeled using a traditional computer located in the body of the new man. At this level of the brain no neural network is modeled; instead, certain primary skills available to modern robots are given through direct programming. In fact, a modern robot with a brain based on a common computer, given speech and movement skills through direct programming, is the baby of the new man. A human infant, with an undeveloped neural network, operates on the basis of instincts; a robot acts on the basis of hard-coded programs. Both kinds of action are algorithmic. Such a robot - a baby of the new man - can develop by playing with human children. At the same level the motivational programs are set. This reptilian brain level will affect the formation of the neural network of the neocortex.


       A human baby is born very small compared with an adult, and its brain is very small. Therefore, a considerable part of childhood is spent increasing the size of the body and brain and acquiring basic skills that are essentially algorithmic and could be defined genetically. In particular, many animals are able to walk immediately after birth, while it takes humans about a year to acquire this skill.
        When creating the new man there is no sense in copying the skills of the baby. At the level of the robot it is possible to skip several stages of child development; in fact, it is possible to pass through all the stages amenable to algorithmization. That is, all stages of development that can be passed through direct programming should be passed in that way. Of course, this approach introduces certain differences between the education of a child and the process of educating the new man, but on the other hand it saves time. Only the highest levels of development must be achieved through learning in the human environment and during the formation of the neural network. The following figure shows that the level of the limbic system is modeled using a supercomputer, and at this level a neural network is simulated. The neocortex level is modeled using clusters of supercomputers, and at this level a neural network is simulated as well. The specialization of the parts of the brain, the work of the hemispheres, the senses, and the connections to the body are modeled by the connections between the supercomputers belonging to the limbic system and neocortex levels. At the first stage of the experiment the required supercomputers cannot be accommodated in the body of the new man because of their large size; they will be placed in separate rooms.
        The complexity of the simulated virtual brain as a device must exceed a certain threshold value, and this problem can be solved by using a sufficiently powerful supercomputer. Modeling the behavior algorithm is a separate task. Apparently, the complexity of the behavior algorithm for the thinking functions, arranged according to the Maslow pyramid, increases constantly and continuously and experiences a discontinuity for the creative processes. Therefore it is impossible to simulate creativity on a computer using direct programming. This problem can only be solved by virtual simulation of the neural network and subsequent reproduction of all the conditions for the nucleation of thinking in that network. The following are mandatory: the impact on the brain of continuous signals from the external environment and the impact of the brain, through the body, on the environment; maintenance of a certain level of chaos in the brain due to the intrinsic activity of the neurons; and self-organization processes in the neural network under the influence of incoming and outgoing information flows and of motivational influence from the reptilian part of the brain.
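One of the listed conditions - maintaining a certain level of chaos due to the intrinsic activity of the neurons - can be illustrated with a toy sketch; the network, parameters and noise model are illustrative assumptions, not the authors' actual simulation:

```python
import random

# Toy leaky integrate-and-fire network with intrinsic noise only:
# no external input, yet the network stays spontaneously active,
# illustrating the "level of chaos" the text requires.
random.seed(0)

N, steps = 50, 1000
leak, threshold, noise = 0.95, 1.0, 0.15
v = [0.0] * N       # membrane potentials
spikes = 0
for _ in range(steps):
    for i in range(N):
        v[i] = leak * v[i] + random.gauss(0.0, noise)  # intrinsic activity
        if v[i] >= threshold:
            spikes += 1
            v[i] = 0.0  # reset after a spike

print("spontaneous spikes:", spikes)
```

With the noise term removed, the network falls permanently silent; with it, activity never dies out, which is the precondition for self-organization under incoming and outgoing information flows.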
        A group of scientists led by Dr. Theodore Berger obtained a number of important results in real simulation of brain function. They created an implantable chip connected to the rat hippocampus by thirty-two electrodes. Experiments showed that rats remember certain trained facts when the chip is enabled and "forget" them when the chip is turned off. Thus, in this experiment the rat's acting neural network learned to use an additional logical structure - the chip.
        The variant of creating the new man proposed by the authors follows a sequence very similar to Dr. Berger's experiment: the acting, programmed robot brain, based on a common computer, is to be connected with a complex virtual neural network. That is, the programmed reptilian brain level should begin to teach the growing neural network. Apparently, this variant is implemented in the development of human infants. Thus, construction methods can complement growing techniques and vice versa.
        After creating a model of the body and brain of the new man, one cannot expect him to start thinking immediately, just as one cannot expect a newborn baby to start talking immediately. Here the application of the method of growing involves a long educational process for the babies of the new man among human infants and adults. Thus, after creating the body and brain of the new man in ordinary man's own image and likeness, we need to repeat the process of the formation of neural networks in the brain in a complex informational, social and technological environment. If we move along this path, we will have a chance to gradually grow a new man from the robot.

§ F. Experiment “Big and Quick Brain”.

         The most obvious choice for the experiment on creating a new person is to reproduce the main parameters of the human body and mind and all the conditions of human upbringing. That is why this variant is offered as the first stage of the experiment.
          The second stage of the experiment is possible only if the first stage succeeds. If this success is achieved, it will be necessary in the second series of experiments to try to create a model of the mind that differs from the human mind in the number of neurons or in some other parameters of the neural network.
          All stages of development of living creatures on the Earth were accompanied by a simultaneous increase in the size and complexity of the brain and in the ratio of brain size to body size. Man, of course, stands out in this ratio. Only dolphins, elephants and whales have a bigger brain than man; however, on the one hand they are our near relatives, and on the other hand they are much bigger than us, and their brain obviously contains many neurons needed simply to control their huge body.
          The dependence of intellectual faculties on the size of the brain within the human population is a very controversial issue. Such faculties are obviously defined not just by the size of the brain, but by the peculiarities of the development of its neural network.
          However, in the case of a human being, the size of the brain can influence his or her intelligence. For example, in a number of studies S.V. Saveliev states that some outstanding human abilities are defined by the development of certain parts of the brain above the mean level. Accordingly, if a certain person has the necessary part of the brain highly developed from the start, he has a chance to become an artist, a musician or a mathematician; if there are no such hypertrophied areas, this person will have ordinary abilities no matter how much he trains. This mechanism of achieving outstanding features creates certain developmental problems for such people. The large brain structures of a genius that predetermine his talent grow at the expense of adjacent parts of his brain. As a result, the parts of the brain responsible for social behavior often suffer from deficiency, so a potentially talented child with a brain of usual size does not behave appropriately from the point of view of those around him, and his chances to develop his abilities decrease. That is why the brain of a genius has a better chance to develop if it is larger than the average of 1.4 kilograms: in this case both the parts of the brain responsible for genius and the parts that allow their owner to behave among usual people without standing out socially are sufficiently developed. According to S.V. Saveliev's data, this leads to the fact that the brain of a genius weighs, as a rule, about 1600 to 1700 g.
          Based on the above, it is obvious that in case of success in creating new people, the next stage of experiments will be an attempt to produce a creature with a brain containing several times as many neurons and synapses as the human brain. This stage is absolutely obvious, and there is no doubt that it will be completed at some point. But there are different ways to fulfill the program “A Big Brain”: one can try to produce a super creature with a big brain, or one can obtain a big brain by joining the neural networks of two or more new people.
          If, at the reptilian level, the new people who unite to create a large brain are given suitable incentive programs, it is quite possible that their behavior will not differ greatly from the sexual behavior of ordinary people. Let us explain this idea. If we ignore the emotional and physiological components of sex, it is obvious that the process reduces to joining the genetic information of two partners in order to transfer it to the next generation. To find a partner for such a process one must win a certain contest and prove that one's genetic information is better than that of one's competitors. The new people's competition for a partner may be exceptionally interesting, as everyone will really have to prove that he is needed as a source of valuable information. It is interesting to note that while people's physiological union (sex) lies at the lowest level of needs, intellectual union for the new people is likely to appear at the highest level of needs.
          If such a variant of creating a big brain and such a motivation scheme are implemented, the new people will have a very powerful stimulus for the development of their intellectual activity.
          The most obvious step in trying to improve the work of the brain is simply to make it bigger, implementing the experiment “A Big Brain”. But it is obvious that after new people are successfully created, the desire to improve the brain will not be limited to making it bigger. What other experiments might be performed?
          In striving toward such experiments we can be guided by the tendencies of brain development. In particular, observing the changes of the brain from animals to man, J.O. de La Mettrie noticed that a human being has many more convolutions of the brain than animals do. It is generally recognized that a large number of developed convolutions increases the volume of the grey matter (the volume of neurons and dendrites) while the overall size of the brain remains the same. It is then obvious that the part of the brain occupied by axons becomes smaller; and since the number of neurons most likely increases as convolutions develop, the number of axons should increase as well, while the axons themselves become shorter. So we come to the conclusion that during the gradual development from the animal brain to the human brain, the trend was an increase in the number of neurons and, apparently, synapses, together with a shortening of the axons, which apparently increased the speed of the brain. Covering the axons with a myelin sheath also improved the speed.
          From the point of view of constructing a virtual brain, we can also question whether the number of synapses per neuron is optimal. It seems possible that this number is not a universal value but depends on the performance of the neuron: if a neuron had higher performance, the number of synapses could also increase. Suppose, however, that at a fixed neuron performance the number of synapses suddenly increased tenfold; this would lead to a tenfold increase in the number of signals arriving at the neuron. As a result, every neuron would send to its axon not roughly 10 impulses per second, but the maximum possible - about 100 impulses per second. In other words, all the neurons would become saturated and the brain would stop working. This supports the idea that the response speed of neurons, the number of synapses, and their specific parameters are interrelated and defined by the structure of the corresponding elements. The same applies to the number of neurons, the length of axons, the geometric structure of the brain, and the size of the cranium.
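The saturation argument above can be checked with a back-of-the-envelope calculation. The numbers (roughly 10,000 synapses per neuron, ~10 Hz typical and ~100 Hz maximal firing rate) follow the rough figures in the text, and the linear rate model is an illustrative assumption:

```python
# Toy linear rate model, clipped at the neuron's maximal firing rate.
MAX_RATE = 100.0  # rough physiological ceiling, impulses per second

def output_rate(n_synapses, drive_per_synapse=10.0 / 10_000):
    """Output rate grows with synapse count until the neuron saturates."""
    return min(MAX_RATE, n_synapses * drive_per_synapse)

print(output_rate(10_000))   # the normal ~10 Hz regime
print(output_rate(100_000))  # tenfold more synapses: pinned at the ceiling
```

A tenfold increase in synapse count pushes the toy neuron straight to its ceiling, which is the saturation the paragraph describes.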
          In the case of the virtual brain, the volume occupied by the logical elements, the memory elements, and the connecting lines is not limited. The speed of signal propagation in the network is close to the velocity of light. The speed of each artificial logical element is specified programmatically and can noticeably exceed the speed of a neuron. All these factors significantly expand the scope for finding ways to improve the work of the virtual brain.
         Thus, if the first stage of the experiment to create and bring up a new person succeeds, we can start the second stage, concerning the creation of “a big brain” and “a quick brain”. Its basis may be an attempt to increase the number and speed of the logical elements in the system. Attempts will also be made to increase the number of synapses per logical element, increase the number of possible states of a synapse, reduce the signal transit time between logical elements, reduce the signal duration between neurons, and reduce the refractory period.
          Perhaps, the second stage of experiments will turn out to be as fascinating as the first one.         
          In conclusion, the authors would like to emphasize several points once again:
          1. The origination of thinking in any system of logical elements is possible only when the system exceeds a certain level of complexity.
          2. The algorithm of actions corresponding to a level of the Maslow pyramid becomes more complicated as the level of motivation increases and experiences a discontinuity for creative thinking. This can be realized only in a system that is open to incoming and outgoing information flows. Only in this mode can creative thinking be implemented as the non-algorithmic level of mind.
          3. The second point can be realized only in the presence of a very complex body. The body should have a variety of sensors, controllers and communication devices providing intensive input and output information flows. Such a body is capable of providing not only the reception of information from the external environment, but also interaction with the environment. Experience shows that it is precisely this interaction with the environment that is necessary for brain development.
          4. When modeling the new person it is advisable to follow the logic of the triune MacLean brain. The reptilian brain level should be modeled on the basis of a common computer without neural network modeling; at this level motivational programs are easily set up through direct programming. On the upper levels of the MacLean brain a neural network is simulated.
          5. On the upper levels of the MacLean brain, creative thinking can be obtained only through growing, not through constructive solutions. An algorithm is able to produce only another algorithm.
          We list some terms that largely reflect the main ideas of this article:     
         - Complexity of the brain;   
         - Openness;   
         - Complexity of the behavior algorithm, gap in algorithm complexity, non-algorithmic specificity;
          - Synaptic instability, chaos;   
          - The complexity of the body;   
          - Levels of motivation, big brain;   
          - Growing and building;   
          - Consciousness is an algorithmic level of mind; thinking is a non-algorithmic level of mind.        

Hypothesis. Why do we feel time during wakefulness? Why don't we feel time during sleep?

Even when we do nothing, with eyes closed and nothing interesting or disturbing happening around us, we feel that something is still happening. Our "now" is a process. We may call it the "feeling of time", and it is unlikely to have anything to do with memory, because there is nothing to remember. Someone could say that we cannot stop thinking or perception at will, but we prefer the expression "feeling of time". It looks like a property of our consciousness. Although we do not really understand what physical time is, we know that the brain is operated by action potentials and different (mainly synaptic) forms of plasticity. Most likely, we can neglect all other local mechanistic details. It is logical to assume that this kit (spikes and plastic synapses) is enough to simulate subjective time flow. How could we simulate intrinsic time flow in an abstract spontaneously active network without any mechanistic "clocks" or "pacemakers", but by using something inevitable and objective? What do we know of physical time?

The only fundamental physical law known by now that shows the arrow of time (the asymmetry between the future and the past) is the second law of thermodynamics. It is based on the concepts of irreversibility and instability. In thermodynamics, we can use the known equations to calculate the trajectories of particles and to predict their future, but the error of our calculations, no matter how precise they are, increases exponentially with each collision of the particles; this is called global instability. Global instability is not a mathematical trick but a fundamental property of all real processes (the simplest model is Sinai's billiard). For this reason even such simple events as collisions of spherical particles are probabilistic, and sooner or later we come to equilibrium distributions (the individual future is unpredictable in principle, but a chaotic equilibrium exists). Because of these errors the particles sooner or later fill up the entire accessible phase space independently of the initial conditions. Even if we could somehow redirect all particles backward we would not get to the previous microstate (literally, irreversibility) because of the probabilistic nature of collisions (sooner or later we come to the same equilibrium anyway); we cannot redirect the flow of time even by this act. And if we started again from the same former microstate, we would not get to the same next microstate for the same reason.

The simplest possible example of an irreversible process is coin-tossing (a bifurcation, a random choice between two possibilities that cannot be undone by the same algorithm in principle; the key words here are "by the same algorithm" - we can undo a toss easily with our hands, that is, using energy and information). As a metaphor, our "now" is always the moment of the choice between these two possibilities (Figure 1), as if we were frozen at the moment of the bifurcation. The past is known, the future is probabilistic. But we have neither the past nor the future - only the "now", and we cannot escape from this point. Representation of time as a scale is a mathematical abstraction. This is not a figure of speech; we do not have any deeper understanding of what time is, and probably there is nothing deeper. If we tossed many coins at once, we would get an equilibrium distribution and maximal entropy (the phase space is entirely filled). A second toss would give the same result (statistically nothing would change: a different microstate, but the same macrostate). By a macrostate here we mean some specific distribution; by a microstate, the readings of the individual coins. Now let us imagine some abstract system that irreversibly evolves toward equilibrium or attractors. Statistically (on the macroscopic level) it "feels" time on its irreversible way to the equilibrium and stops doing so at the equilibrium. Nobel laureate Ilya Prigogine called this the "internal time" of dynamical systems.
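The many-coins metaphor is easy to reproduce numerically (a sketch of ours; the coin count is arbitrary): two successive tosses give different microstates but statistically the same macrostate, with entropy per coin at its maximum of one bit.

```python
import random
from math import log2

random.seed(1)
N = 100_000

def toss():
    """One microstate: the readings of N fair coins."""
    return [random.randint(0, 1) for _ in range(N)]

def heads_fraction(coins):
    """The macrostate: the fraction of heads."""
    return sum(coins) / len(coins)

a, b = toss(), toss()          # two different microstates...
p = heads_fraction(a)
entropy = -p * log2(p) - (1 - p) * log2(1 - p)  # Shannon entropy per coin

print(heads_fraction(a), heads_fraction(b))  # ...the same macrostate, near 0.5
print(entropy)                               # near the maximum of 1 bit
```

The second toss changes every coin yet leaves the distribution statistically untouched, which is exactly the sense in which the system at equilibrium stops "feeling" time.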

Now let us recall neurons. When we look at the real spontaneous activities of cortical neurons during quiet wakefulness, it is really hard to see any difference between them. Taking into account the summation of thousands of relatively weak inputs, fluctuations of membrane potentials and fluctuations in synapses, cortical neuronal activity appears chaotic. It seems impossible in principle to detect the input that physically evoked a given postsynaptic spike. (It is not so when we deal with presynaptic bursts or giant synapses - for instance, the thalamocortical and corticothalamic so-called driver pathways, where one giant synapse evokes the postsynaptic spike and we can easily detect the cause-effect relation.) According to STDP rules it does not really matter which synapse physically evoked the postsynaptic spike; what matters is the time correlation between pre- and postsynaptic spikes, which evokes depression or potentiation. This, then, is the unitary irreversible process in the network (as a thought experiment: even if we could somehow artificially redirect synapses and spikes backward we would not get to the previous network microstate because of the probabilistic activity; and if we started again from the same previous network microstate we would not come to the same next microstate for the same reason). Due to irreversibility, sooner or later we come to some attractors. In a majority of modeling studies, under spontaneous activity some synapses get stronger and some get weaker (they make their choices). This means that now we can detect the cause (the presynapse) and the result (the postsynapse). We suppose that the appearance of some memory would also increase the probability of detecting the cause and the result. But this determinism means the end of irreversibility, the end of evolution and the end of time (our "now" is no longer the permanent moment of a bifurcation). The same is true for seizures.
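The claim that spontaneous activity alone makes some synapses stronger and some weaker can be shown with a deliberately crude pair-based STDP sketch (the spike probabilities, learning rates and the coin-flip timing rule are illustrative assumptions, not any particular published model):

```python
import random

# Pair-based STDP driven only by random, uncorrelated pre/post spikes.
# All weights start identical; spontaneous activity alone spreads them
# out - the synapses irreversibly "make their choices".
random.seed(2)

n_syn, steps = 100, 20_000
a_plus = a_minus = 0.01          # potentiation / depression step
w = [0.5] * n_syn
for _ in range(steps):
    for i in range(n_syn):
        pre = random.random() < 0.05    # presynaptic spike this step?
        post = random.random() < 0.05   # postsynaptic spike this step?
        if pre and post:
            # random relative timing decides the sign of the change
            if random.random() < 0.5:
                w[i] = min(1.0, w[i] + a_plus)
            else:
                w[i] = max(0.0, w[i] - a_minus)

mean = sum(w) / n_syn
spread = (sum((x - mean) ** 2 for x in w) / n_syn) ** 0.5
print(round(mean, 3), round(spread, 3))  # weights are no longer identical
```

The growing spread of the weight distribution, starting from perfectly uniform weights, is the signature of the irreversible drift the text describes.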

Some papers show that even on the macroscopic level the network slides toward an attractor during waking: "Fading Signatures of Critical Brain Dynamics during Sustained Wakefulness in Humans" (Meisel et al., 2013).

Probably, such or similar network dynamics have already been described within a theory of consciousness: "A functionally integrated and functionally specialized probabilistic network can sustain high ϕ. A massive Hopfield network with many embedded attractors may have high values of ϕ in certain states, but ϕ will rapidly decrease as the system relaxes into an attractor. The system is too tightly bound to sustain interesting dynamics: the elements act in concert to push the system into its attractors" (Balduzzi and Tononi, 2008).

What if sleep is not only the price we pay for plasticity (Tononi and Cirelli), but also allows the brain to restart an irreversible evolution that cannot last forever - an evolution that until now has been detected as synaptic up- and down-scaling? (We need sleep to get rid of the attractors, and it gives us time.)

In other words, it is said that we have a simulation of the real world in our minds, and it is known that this simulation is quite autonomous relative to the environment. We claim that irreversibility is the only known true way "to simulate" time within this simulation (and it comes from physics). Yes, our brains themselves are parts of the world, but our minds are managed by spikes and plastic synapses, nothing else. Spikes and plastic synapses cannot reproduce the properties of the fundamental physical particles and the "real" physical time, whatever "real" means. Our internal simulation then moves in time with the real world because spikes and plastic synapses reproduce irreversibility (our "now" as a permanent moment of bifurcation). As for the "usual" dimensions in which we can move (x, y, z), our minds have direct representatives such as place and grid cells, as well as representatives of our body. Since our brain is smaller and faster than the Universe, our time is limited by one day (we slide to the attractor rapidly - an analogy with the heat death of the Universe). Once we reach the attractor, we need a restart, that is, sleep: something deterministic in which we lose time and consciousness. Some papers claim that during sleep we have a deterministic replay of our day's experience. Cortical evoked potentials during sleep are much more deterministic than the responses during wakefulness, and slow-wave propagation over the brain is deterministic as well. So: wakefulness vs. sleep as unpredictable irreversible evolution vs. deterministic programmed reverse.

The main criticism would be that memory itself is an irreversible process and that it is memory that gives us time. Indeed, it might be, but memory is something local. Generally, irreversible processes occur constantly and everywhere in the brain, independently of the environment and of our will; only some small part of them may have anything to do with what we can be conscious of or recall later. Such an interpretation would allow us to avoid the controversy obtained by now: sleep for potentiation or sleep for depression (Tononi, Timofeev, Born). Not all synaptic changes take place because of memory. This hypothesis is based on a fundamental physical law, it is free from mechanistic assumptions, and we believe there are many indirect proofs of it.

Figure legend.

According to the second law and to Prigogine, time might be represented as the single point we usually call "now". During irreversible evolution (upper panel) there is an asymmetry between the past and the future. The probability of an event that happened in the past equals 1 (shown in green). The probability of an event that did not happen equals 0 (shown in blue). The probabilities of all possible events in the future are below 1 (shown in red) and, if the events are mutually exclusive, sum to 1. The time increment might be represented as a decrease of indeterminacy, or as the information potentially generated during the transition from the future to the past. It is easy to see that during irreversible evolution it is always above 0. Unlike irreversible evolution, during deterministic behaviour (lower panel) there is no asymmetry between the past and the future; thus, the time increment equals 0.
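The time increment described in the legend can be written compactly as the Shannon indeterminacy removed when the future becomes the past (a sketch in our notation, not the authors'; H denotes the entropy over the mutually exclusive future outcomes):

```latex
% Time increment as the information generated at the moment "now":
% the future carries indeterminacy H > 0, the past carries none.
\Delta t \;\propto\; \Delta I
  \;=\; H_{\mathrm{future}} - H_{\mathrm{past}}
  \;=\; -\sum_i p_i \log p_i \;-\; 0,
\qquad \sum_i p_i = 1 .
```

During irreversible evolution some p_i lie strictly between 0 and 1, so ΔI > 0; in deterministic behaviour one p_i equals 1 and ΔI = 0, matching the two panels of the figure.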

          The article is a summary of the authors' book, published in Russian and English.

         Sheroziya G.A. - Doctor of Science in Physics and Mathematics, graduated from MEPhI;
         Sheroziya M.G. - Ph.D., graduated from MIPT.

Physics of the brain. How time can exist in a massless abstract medium. Time as a phase transition

Let us describe our hypothesis (wakefulness as irreversible evolution) in terms of physics.

According to the Big Bang theory, when something called the singularity became unstable, matter started creating space and time. By default, in Einstein's relativity and in Big Bang theory, time and space do not exist without matter: time and space are properties of matter. According to relativity, massless particles like photons do not have intrinsic time.

On the other hand, our consciousness is a product of neuronal activity and plastic synapses, that is, of a massless informational system, and (presumably) consciousness has its own subjective time, which we lose during sleep. If this is true, the fundamental question is: how can time be generated without mass m?

In physics we understand time t as a parameter. This means that there is no function f such that we could write t = f. For the same reason, physics has no transform that would let us write t_now = t − f for the moment "now". What would physical laws look like if we could catch the moment "now"?

According to KAM theory (Krylov, Kolmogorov, Arnold, Moser and Sinai), the deflection ξ from the calculated (forecast) trajectory of motion can be bounded by the following inequality (the definition of stability):

    ξ ≤ ξ0 · e^(λt),     (1)

where ξ0 is the precision of the initial conditions, λ is the Lyapunov exponent, and t is time (now ≤ t ≤ ∞). If λ > 0 the trajectory is unstable, and it is stable if λ < 0. De facto, this is a probabilistic description. Inequality (1) defines the difference between the future and the past: the asymmetry of the moment "now", or a second-order phase transition: the past exists, while the future is probabilistic.
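The meaning of inequality (1) can be checked numerically on a standard chaotic system (our illustration, not an example from the article): for the logistic map x → r·x·(1 − x) with r = 4, the Lyapunov exponent is known to equal ln 2 > 0, so any finite precision ξ0 of the initial condition is amplified exponentially and the forecast trajectory is unstable.

```python
import math

def lyapunov_logistic(r=4.0, x0=0.2, n=100_000, burn=1_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    as the orbit average of log|f'(x)| = log|r*(1 - 2x)|."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

lam = lyapunov_logistic()
print(lam)  # close to ln 2 ~ 0.693: lambda > 0, so the error bound xi0*e^(lambda*t) grows without limit
```

A positive estimate is the quantitative sense in which the future of such a system is "probabilistic" while its past is fixed.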

From (1), for the moment "now" (right limit: t → 0+) we have:

    dξ/dt = λ · ξ0.     (2)

For the left limit we can put the derivative equal to zero. Such a definition of time does not contradict the laws of motion. Indeed, it is well known that the main laws can be obtained in the following artificial way: let us define mass m as the derivative of time with respect to some hidden variable ξ:

    m = dt/dξ.     (3)

Let us put the impulse and the force as:

    p = m · dx/dt,   F = dp/dt,

and we have Newton's laws:

    F = dp/dt = m · d²x/dt²  (for constant m).

We defined the moment "now" as the phase transition (2), and so the hidden variable ξ acquires a real meaning: it is a spatial measure of fluctuations, that is, a size.

Finally, combining (2) and (3):

    m = dt/dξ = 1/(λ · ξ0).

We obtained a link between local time, space, and mass m. As expected, physical mass is characterized by positive Lyapunov exponents. To what extent such an interpretation of mass m is correct, we do not know yet.
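Reading (2) as the right-limit growth rate dξ/dt = λ·ξ0 and (3) as m = dt/dξ (our reading of the derivation; the values of λ and ξ0 below are arbitrary and purely illustrative), the combined relation is m = 1/(λ·ξ0), which can be sanity-checked with a finite difference:

```python
import math

lam, xi0 = 0.7, 1e-3     # illustrative Lyapunov exponent and initial precision
h = 1e-8                 # small time step for the finite difference

def xi(t):
    """Fluctuation growing from xi0 at the rate set by the Lyapunov exponent."""
    return xi0 * math.exp(lam * t)

dxi_dt = (xi(h) - xi(0.0)) / h    # forward difference of xi at t = 0 ("now")
m_numeric = 1.0 / dxi_dt          # m = dt/dxi, the reciprocal slope
m_formula = 1.0 / (lam * xi0)     # the combined relation m = 1/(lambda * xi0)

print(m_numeric, m_formula)       # the two values agree closely
```

The agreement only verifies the internal consistency of this reading; it says nothing about whether identifying m with physical mass is correct, which the authors themselves leave open.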

In a massless abstract medium, fluctuations can "simulate" mass m, and for this reason our consciousness can have its own time.

To be continued.

Authors: Georgy A. Sheroziya, Maxim G. Sheroziya

Sheroziya G.A. – Doctor of Physical and Mathematical Sciences; Sheroziya M.G. – Candidate of Biological Sciences.