2 Answers

  1. The problem is that AI today does not know how to “realize”. Consider an example: at school you solve something by following a given algorithm, and it seems the result is there and you have learned. But no, that is mere reproduction. Then “something happens” and you begin to “understand” what it is and why you are doing it; that is, awareness comes. Attempts are currently being made to understand the nature of this phenomenon and to implement it in AI. As far as I know, to no avail.

  2. The statement “I am a person” is easily verifiable, because we know what a person is made of, and we do not even need to cut the person open to check: anything from X-rays to cell-level/DNA tests will do. So that will not be a problem. But if no checks, tests, MRI, etc. can detect any difference between this creature and a person, the question arises: on what basis (for what reason) do we call it an “artificial intelligence creature” and not a human?

    It will be more problematic to prove to a person that he is not a robot whose structure exactly replicates that of a human. Or consider a robot that declares it is the personality (consciousness) of a specific person, transferred to an “artificial brain” in the robot's body. In that case we will have problems, a Russell's teapot full of them. More precisely, two teapots, because these two statements are not verifiable in principle.
