4 Answers

  1.

    – It can.

    – The breadth of thinking narrows, so to speak. The greater the intelligence, the more you see and notice. Accordingly, as nerve cells are lost, that breadth and quality decline until the limit of its abilities is something like reflexively pulling a hand away.


    Sleep is, in a sense, a switching-off of the brain. For the AI this would just be a jump in time; it would have nothing to report about it.


    If there are no senses, how will it feel anything? In theory it could read the state of the whole system, and that would be an analogue of our senses.

  2. Addendum to the question:

    Fantasies about artificial intelligence:
    By definition, any intelligence is a handful of neurons connected by synapses.
    To date, we have been able to simulate a neural network of approximately the same level as that of a fly.
    It is known for certain that many animals can understand human language and communicate through gestures, so I assume that, in order to start developing its own character and opinions, a computer does not need to have the same level of intelligence as we do. Let's say its development will be at the level of a dolphin or a primate.


    1. What will happen if we turn on this AI, train it, and then gradually start reducing its power? That is, we gradually reduce its intellectual abilities to the level of a fly, and then restore them. Throughout, all changes are recorded in its memory/neural network.

      • Will it be able to tell us how lower life forms perceive the world?
      • How much does the perception of the world change when the activity of neurons decreases?
      • At what point can we still speak of intelligence existing? As long as at least one neuron is active?
    2. What will this AI tell us if we turn it on, train it, turn it off, and then turn it back on? The main question is: what will the switched-off consciousness experience? It is clear that from the point of view of the interaction of neurons it will experience nothing, but will it be able to describe something analogous to what happens after a person's death?

    3. Formally, we generate a certain consciousness out of nothing, which does not contradict biology: under a certain set of circumstances, our own brains also appeared out of nothing. But does this literally mean that a consciousness generated from nothing is in everything? If consciousness can in principle be generated anywhere in the universe, from almost any matter, does this mean that consciousness is everywhere?
    4. Will the AI be able to realize that it is part of the computer and try to feel its physical nature (I mean the computer itself: the hard disk on which it is stored, the processor, and so on)? Will it succeed, and how difficult will that be for it?
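The first thought-experiment above (training a network, then gradually reducing its capacity and restoring it) can be sketched on a toy scale. This is purely illustrative and not anyone's actual experiment: the names (`H`, `mask`, `forward`) and the XOR task are invented here. A tiny network is trained, then hidden neurons are progressively "switched off" via a mask, and the resulting degradation in performance is measured.

```python
# Toy sketch (assumed setup, not a real experiment): train a tiny network on XOR,
# then progressively "switch off" hidden neurons and watch performance degrade --
# a crude analogue of reducing an AI's capacity toward fly level.
import numpy as np

rng = np.random.default_rng(0)

# XOR data: 4 inputs, 1 binary output each
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

H = 8  # number of hidden neurons ("nerve cells")
W1 = rng.normal(0, 1, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, mask):
    # mask zeroes out "switched-off" hidden neurons
    h = np.tanh(X @ W1 + b1) * mask
    return sigmoid(h @ W2 + b2), h

# Train with all neurons active (plain gradient descent on squared error)
mask = np.ones(H)
lr = 0.3
for _ in range(10000):
    out, h = forward(X, mask)
    d_out = (out - y) * out * (1 - out)          # output-layer error signal
    d_h = (d_out @ W2.T) * (1 - h ** 2) * mask   # backpropagated hidden error
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

# Now "reduce its power": disable neurons two at a time and measure error
for active in range(H, -1, -2):
    m = np.concatenate([np.ones(active), np.zeros(H - active)])
    out, _ = forward(X, m)
    err = float(np.mean((out - y) ** 2))
    print(f"{active} neurons active -> MSE {err:.3f}")
```

Since the trained weights are untouched, restoring the full mask restores the original behavior, which mirrors the question's premise that all changes remain recorded in the network. Of course, the question concerns what such a system could *report* about its degraded states, which a sketch like this cannot address.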
  3. No, AI cannot be self-aware. Self-awareness is itself a mystery, even to scientists. The AI will not experience any feelings when disconnected; however, it can perceive disconnection as a threat if self-preservation is written into its program code. If you manage to create an AI that is able to recognize itself, congratulations: you have created artificial life, but at that point it will no longer be an AI.

  4. The fact is that there is no AI now, and all projects to create it are stalling at the initial stage. The most powerful thing currently available is neural networks that beat supercomputers at chess: those computers beat human world champions, and the neural networks in turn beat the computers. But they have no autonomous thinking. They have no thinking at all.

    AI still exists only as a program, i.e. all attempts to create a smart robot or supercomputer amount to it simply performing programmed actions. It has no thinking of its own, much less the ability to make a choice. It simply performs the actions programmed into it. It is also important that a computer program is created as a tool that should not make mistakes, in principle, in any form; if an error occurs, the program hangs or is interrupted, and one immediately tries to correct the error.

    A person works in a far more complex way: people make mistakes often, and in some sense must make mistakes in order to achieve all sorts of non-trivial results. The human mind is not just a collection of computing power that can be accelerated to astronomical values. It is a much more complex, creative mechanism, and there is nothing science can do about it. No one has yet been able to give AI self-awareness or thinking; for now there is only a dummy that acts completely predictably according to given algorithms. And it will probably never be possible to create a full-fledged AI. I am sure of it.
