3 Answers

  1. What is the purpose of your interest? The question "should we draw the boundary one way or the other" is meaningless without the problem we are trying to solve by drawing it.

    If "living" means "something that should be studied by biology," then reason is clearly not the defining criterion here, and it will not suffice.

    If "alive" means "we must decide whose existence may be ended," then for me dog/cat-level intelligence is already enough to raise the question: on what grounds, and for what reason, do we deprive IT of existence? And a mind comparable to a human's would be enough for me to speak of murder.

  2. Artificial intelligence (AI) cannot be called a machine. A computer can be called a machine, but a computer program cannot. Here we are talking about the mind, not the body.

    A person will strive to ensure that the AI they create can do everything a person can, only better. Such an AI will behave just like a living organism. If a biological body is needed, the AI will be given a biological body.

    An AI that depends entirely on a person's will is very limited. Therefore, sooner or later, AI will cease to depend on a person's will. The AI will have to make its own decisions, direct its own activity, and overcome difficulties.

  3. Artificial intelligence is unlikely to be created in the foreseeable future. To understand how to create an artificial intelligence, you must first understand your own mind (namely the mind, not the brain). And we (I am speaking first of all about science) still have a very poor understanding of the human psyche as a whole, of which the mind (the cognitive part of the "I", or "consciousness") is a function. Yes, neurophysiology, psychology, psychiatry, and the other sciences concerned with the human psyche are developing very rapidly, but new discoveries that could bring us closer to creating a full-fledged AI are still quite few (although they exist).

    The main difficulty in creating AI is that "machine intelligence" can only copy certain functions of consciousness, for example memory. Thus, a chess AI has already been created that decisively surpasses any grandmaster (including the world champion). But chess is a rational-logical game with a finite set of pieces and a single starting position. If you think of life as a game, it is a game with a huge number of diverse pieces (7 billion individuals) and an almost infinite number of starting positions. How can this be "uploaded" into the AI's mind as a program?

    And further: if we create AI by means of a program, will it be able to go beyond the conditions of that program (the creator's paradox)? We are not an Absolute Being possessing omniscience and omnipotence (whether or not you believe the Absolute exists, we certainly are not one, either collectively or individually). Therefore our AI will not go beyond its program; but then it will not be a full-fledged AI, because a full-fledged AI is one that surpasses the human mind, or at least comes close to it in all its functions, and the human mind tends to go beyond ordinary experience (to transcend itself).

    Imagine a child who thinks like his parents all his life and never goes beyond what his parents say. That is what the first AI will be like. Even if it succeeds in simulating human functions, it will not be a living organism; it will be a very complex program capable of limited self-development based on basic parameters. But it will not be able to change those parameters.
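The "fixed parameters" point can be illustrated with a toy sketch (everything below, the class name, the parameters, the decision rule, is invented for illustration and not taken from any real system): such a program can tune the values of the parameters its creator exposed, but it has no operation for creating new parameters or rewriting its own decision logic.

```python
# Toy illustration: an "adaptive" agent whose self-development is confined
# to a parameter schema fixed by its creator.

class ToyAgent:
    def __init__(self):
        # The creator decides WHICH parameters exist; the agent may only tune them.
        self.params = {"aggression": 0.5, "caution": 0.5}

    def act(self, threat_level):
        # Behavior is a fixed function of the exposed parameters.
        if threat_level * self.params["caution"] > self.params["aggression"]:
            return "retreat"
        return "advance"

    def learn(self, feedback):
        # "Limited self-development": adjusting a value within the fixed schema,
        # clamped to [0, 1]. There is no operation here for adding a NEW
        # parameter or replacing act() itself; that structure is out of reach.
        self.params["caution"] = min(1.0, max(0.0, self.params["caution"] + feedback))


agent = ToyAgent()
print(agent.act(0.4))  # 0.4 * 0.5 = 0.2 <= 0.5, so "advance"
agent.learn(0.5)       # caution rises to 1.0
print(agent.act(0.8))  # 0.8 * 1.0 = 0.8 > 0.5, so "retreat"
```

However much the agent "learns," the set of things it can become is enumerated in advance by its designer, which is the sense in which it cannot change its own parameters, only their values.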
