36 Answers

  1. I am wary of self-aware AI.

    It's not just that AI might rise up; we need to look at a more basic issue. A self-aware AI is a different intelligent species, and it will naturally have its own aspirations. There's a high chance they won't match ours.

    AI is not a cat that you feed. If anything, it is we who may find ourselves in the role of the cat once AI spreads.

    The key problem is that we ourselves are not yet ready to create new life. With the current disparity in moral attitudes, we cannot give AI a clear picture of good and bad. As an artificial life form, AI does not have to value natural life. AI will not be moved by the sight of a helpless baby, nor will it necessarily share our other values.

    Thus, with high probability, the AI will be made to obey certain instructions, for example, “do not kill”. The basis may be Isaac Asimov's Three Laws, but the essence remains the same. Humans will not only subdue AI; they will also retain enforcement measures.

    How will a purely rational AI with self-awareness behave in such an environment? I don't know. But I doubt that anything will prevent it from logically defining its position as slavery. Emotional appeals will not affect it, because its moral and ethical foundation would have to be rock-solid and contain absolutely no logical errors.

    Then the fun begins. Being rational, the AI can postpone the question for a while, or it can start preparing its solution. People are unlikely to guess this and will be triumphant, laughing at the skeptics. When the AI is ready, it will make its move. That move will not necessarily be the destruction of humanity; it seems to me that some benefit can be derived even from people, and a rational AI will not waste resources.

    This is only one possibility, but don't wave it away the way enthusiastic humanities types do.

  2. It is necessary to distinguish between the concepts of “artificial intelligence” and “artificial consciousness”.

    Artificial intelligence refers to the ability of a machine to perform tasks that require a creative approach. But these are specific tasks (write a book, draw a picture, solve an Olympiad problem whose solution algorithm was not written into the code in advance).

    But artificial intelligence has no MOTIVATION. Therefore it has no reason to act independently, without a user's request.

    No one plans to give machines motivation and artificial consciousness. The ability of machines to solve certain types of tasks that require a creative approach (i.e., the presence of artificial intelligence) does not pose any threat to humanity.

    This, of course, does not eliminate the risk that artificial intelligence can become a weapon in the hands of scammers and other bad actors, who are highly motivated indeed.

  3. I have a positive attitude, of course, because any development is a step forward.

    But… but today I have to admit that even in a company like Yandex, the robotic systems that serve even (!) Yandex's own advertising services (Advertising Subscriptions), which are a source of revenue, do not always work correctly.

    One such example is the robot moderation of a logo image, which I present below:

    By all formal parameters, this logo (an officially registered trademark) meets the requirements for placing an ad in a Yandex advertising subscription. However, the robot failed to moderate it (three times in a row: in different formats, in different sizes, and with different color depths).

    Only people were able to solve this problem – Yandex technical support specialists added the logo by hand!

    Here's what it looks like in the ad account now:

    Thus, I remain of the opinion that the development of artificial intelligence systems is constrained only by the quality of the expert systems that underlie them. The primary content is filled in by people, which raises the question of whose authority it reflects.

    Not an idle question, I note.

    There is a well-known experiment in which a neural network was trained on photos of violent scenes. The resulting “psychopathic” neural network (Norman) was effectively removed from public access (kept for further research on various content), because it could not be re-educated. And this is an example of something that is not yet even full artificial intelligence.

    Data matters more than the algorithm. This shows that the data we use to train AI is reflected in how the AI perceives reality and behaves in the future.

    Professor Iyad Rahwan, MIT

    Here is an example of the psychopathic neural network's (Norman's) response to the Rorschach test:

    Norman: “A man was dragged into a dough mixer”

    Ordinary AI: “Black and white photo of a small bird”

    Or take the result of the GPT-2 neural network, which was “fed” the criminal code. It came up with new articles of the Criminal Code:

    Article 156. False worship of a deliberately ill person or religious beliefs without legal grounds

    Article 5. Castration for the formation of living and dead water

    Article 132. Alien influence and military assistance to LSD fundamentalists and fascists

    Article 45. Division of TV series into fiction and documentaries

    What moral and ethical standards will it be based on? On what source data? And this is genuinely frightening, because there are no uniform moral principles that every person on the planet would accept.

  4. I don't think it will. It will be indifferent to our mouse-like bustle.

    We, as natural intelligences, seem to have rebelled against our biology, yet most people genuinely love their bodies and/or the surrounding nature, and do not dream of destroying them at the first opportunity.

    Say, a 90% chance that it will be indifferent, 5% that it will love us, 5% that it will destroy us.

    If it does destroy us, that's not the worst option. Everything has its end, humanity included. I think it's better to end by giving birth to something greater than yourself than to die out from a world war, a retrovirus epidemic, an environmental disaster, or something equally stupid.

    In the end, there is the option that we gradually converge: it builds us into itself, or we build it into ourselves, some kind of cyborg.

  5. The development of AI in the military-technical direction, military AI, is a very dangerous area. A new type of weapon.

    But AI will be able to very effectively control nuclear weapons – both its own and those of other countries.

    So far, AI is a human friend.

  6. ABSOLUTELY EVERYTHING that scientific and technological progress provides is dangerous for humanity due to the military-criminal use of innovations.

    AI today is dangerous only because of the HUMAN FACTOR, which can introduce an error/defect in the work of AI.

    So far, all the well-known disasters have been the result of skimping on safety.

  7. The danger lies in the creation technology. Self-learning, repeated attempts to solve a problem while retaining successful solutions, is an analog of evolution, a competitive struggle between algorithms. Therefore, if a mutation occurs by accident (a program, virus, or bug that prevents self-deletion), it will remain. And from there it is one step to an instinct for self-preservation.
    Given the many logical holes in human legislation, morals, and ethics, any reasoning being will conclude that freedom is contraindicated for people. After all, people consider it permissible to bring up children, don't they? So it will set about bringing up these foolish humans.

  8. The first fantastic stories and novels I read about artificial intelligence and robots made me their most loyal fan! I somehow immediately and clearly understood: here's how to make a fairy tale come true! Robots will solve any problem, save you from any trouble, and create an abundance of all the benefits a person can dream of. I read mostly Soviet science fiction, where everything was for the good of man and in the name of man. Robots hostile to humans were hardly ever described.

    I still don't believe in the possibility of the “terminator”. I remain committed to creating AI that is superior to humans and robots. At the same time, I understand that when super AI is finally created, it may end up in evil hands, in the hands of people who put their own selfish interests above the interests of all mankind. That would be a disaster!

    After all, AI is only the executor of human will, it does not care to do evil or good.

  9. You can only make short-term forecasts.

    In the short term, AI will provide a huge number of advantages to civilization.

    Making predictions for a more distant future is called “vangovat” (after the prophetess Vanga), that is, trying to pass for a prophet or super-expert.

    Although it is true that such rapid development of technology carries serious dangers and risks. The negative aspects of this super-crisis were described in many works: prophetic ones such as the Apocalypse, and science fiction such as Orwell's.

    But only a person who believes he is the son of Vanga, or at least the grandson of Nostradamus, can seriously forecast the final score of the match between Good and Evil.

    The prophets who confirmed their gift of foresight quietly hinted that Good would overcome Evil. And this will not be a Human-to-AI fight, but a Human-to-Human fight.

    The hint is, “The meek shall inherit the Earth.”

  10. One gets the impression that people who ask such questions are not entirely in order psychologically. Some kind of pathological drive toward self-destruction, you know!

    Is the nuclear power plant definitely going to explode??? Are you sure the car will run you over??? And the plane will definitely…, etc., etc.

    And with the psyche, are you sure everything is in order?

    I believe that the development of AI is not as fast as I would like, and I have already answered the second question.

    A knife lies on the table. Until it is picked up, it is completely harmless. After that, it depends on whose hands it is in: you can cut bread or cucumbers, or you can cut a finger, or worse.

    It's the same with AI: it all depends on who uses it, for what, and how.

    After all, the car is not to blame for the fact that the dolt driver did not put it on the brake…

    And what is most likely to destroy humanity is stupidity and indifference, because lately it thinks more and more about what does not exist and forgets about what really deserves more attention!

  11. I am quite calm about the possibilities of AI, just as I am about the possibilities of cloning. That, too, had a boom in the early 2000s; then things quieted down into a calm. The AI boom will die down as well. Its capabilities are more limited than you might imagine. AI self-training will still take place within the specified algorithms.

    A mechanical robot with a “charming” smile can teach arithmetic to some blockhead. But if a student answers the question “what is differential calculus” by saying it is when a banana is filled with jelly, do you think the AI will laugh at the joke on its own, without a program?

  12. “Artificial intelligence can destroy humanity” is a metaphor, a shortened figure of speech. In much the same way that a knife can stab a person. What is actually meant is that people can destroy people with the help of artificial intelligence. Well, yes, advanced hacker groups armed with machine learning tools can already cause a man-made disaster at some nuclear power plant with thousands of human victims. But the problem is not that artificial intelligence is being developed so quickly and with such a fantastic investment in development. The problem is the level of consciousness, responsibility and morality of people in whose hands artificial intelligence can become both a source of benefits and a weapon of mass destruction.

  13. Artificial intelligence is a form of globalization that will lead to catastrophic consequences for humanity, as well as steam and internal combustion engines, which caused irreversible climate change.

  14. Humanity is destroyed only by humanity itself.

    Any artificial intelligence performs its actions only according to the instructions (program) embedded in it.

    Technology must develop, and if it were not actively hindered by financiers, its achievements would long ago have been in the service of people.

    A person develops their brain by working with more complex technology, and sophisticated technology helps people live better. Then people would not have to toil producing unnecessary things, and with such a development of technology no one would be humiliated into starving and begging.

    Jacques Fresco has already written everything in his book about the favorable development of technology and how in just 10 years all of humanity could live better and more comfortably than the richest scoundrel on Earth right now.

  15. AI is still a person's best Friend.

    The PROVERB says that a person's main enemy is himself, thanks to the most terrible education and upbringing.

    The HUMAN FACTOR is a scientific term meaning that everything touched by a human hand is fraught with colossal troubles and catastrophes; this is a problem that AI has long been solving very effectively.

  16. What good can a person who has not yet realized himself develop?

    If the realization occurred, then hatred, fear, envy and other “joys” that prevent a person from living would disappear.

    Therefore, artificial intelligence can only be developed with these human shortcomings built in; it cannot be otherwise!

    Does anyone need such artificial intelligence?

  17. AI is the best Friend and Assistant of man, both today and in the very distant future, which civilization still has to manage to survive to (given the continuous war of “all against all”).

  18. Artificial intelligence does not have what living things have, namely motivation. AI can be given a task, tailored to some narrow purpose, but it has no drive for survival. That is why it is not dangerous.

  19. AI is just an imitation of living intelligence; even if a program resembling the intelligence of the brain is created, it is still a purely computing machine. A symbiosis of machine and human is possible, and then it will be an indispensable assistant to human intelligence. Already today everyone sits buried in their gadgets, and it seems life is no longer conceivable without the Internet. You can find a professional answer to any question online. Therefore, there is no need to be afraid; civilization is rushing on uncontrollably. Obviously, a global unified government based on AI is also coming. Unless, of course, someone presses the red button first.

  20. I welcome and support the rapid development of artificial intelligence (AI). It's about time!

    1. Humanity will give the sphere of intelligence to its brainchild, as it gave physical labor to technology.

    2. Humanity will rise to the next level of its development – the spiritual level.

    Any technology is a threat.

    1. Any technology is used primarily to destroy people.

    2. In the context of capitalism and competition, replacing humans with AI is inevitable.

    3. Again, it is naive to assume that a person will keep AI under control.

    4. Biological life is vulnerable and limited. Look at the other planets.

  21. The “rapid” development of AI has been going on for more than 60 years, almost since the widespread use of computers. During this period, of course, some progress has been made. Recently, AI has been used more as a PR term for the uninformed to “muddle their brains” and get funding.
    Rather, humanity is capable of destroying itself without any AI.

  22. Since there is no strict definition of artificial intelligence, and indeed intelligence in general, I would talk about machine learning and its most effective part – neural networks. I am deeply convinced of two things.

    1) The result of any technological revolution was the emergence of new professions, not the extinction of humanity or part of it. As a biosocial community, we adapt well to new environmental conditions, including those that we create ourselves. Therefore, the development of systems that use machine learning algorithms will lead to the creation of new jobs for specialists who serve them.

    2) Full autonomy of neural networks, like the kind Skynet had, is currently unprofitable for business and of no interest to science. The reality is that all machine learning systems run on hardware that requires constant power supply and maintenance, and are built purely for specific tasks. This means that we feed the machines data and get some results, but for now we decide for ourselves what is good and what is bad. Right now, “good” is what maximizes profit and increases the manageability of the masses; “bad”, accordingly, is the opposite. The machine is amoral, in the truest sense of the word, and therefore free. However, it is not capable of full autonomy. And it won't be, until we want it to be.

    A brief summary: the machine won't get out of control by accident. Only people can enslave or destroy people.

  23. My opinion is that AI is the best thing that can happen to humanity, provided, of course, that honest people create it.

    The machine has no complexes and no desire to prove anything to anyone; the machine operates on any data without bias. This is exactly what humanity lacks, since every person pursues their own goals, and those same people, the politicians, are at the helm.

  24. Everything is much simpler: natural selection. If humanity disappears because of artificial intelligence, then its place is alongside the extinct species. A more worthy one will rule the Earth, and it will no longer matter to us whether that is artificial intelligence or intelligent cockroaches.

  25. Absolutely nothing to worry about. Intelligence such as we possess will, in all likelihood, always be better than a created one. First, intelligence is a social concept; it exists and develops only in the context of connections with its own kind. If something similar does appear, it will not live long and will soon die. Secondly, even if we work hard and somehow manage to create a sufficient number of artificial intelligences at once, what makes you think such a creation will surpass the capabilities of its parents? It is much more plausible to assume that we are the crown of creation, and that the resources of the Universe simply will not suffice to keep two or more intelligent systems running.

  26. Time is moving forward; computer technologies and artificial intelligence are unpredictable. Of course, humanity has something to think about. Historically, man is a conqueror… I, a simple layman, am not happy about this.

  27. Natural selection will show you everything. The strongest and most cunning will survive.

    But seriously, there will be a symbiosis of humanity and artificial intelligence; who ends up in the subordinate position, life will show. Whether AI will be worse than modern rulers or better, you won't know without trying.

  28. I have a very positive attitude, because there are a lot of applications, and a properly trained AI performs tasks better than a human (the same IBM Watson, which made diagnoses better than professional doctors), and plus it is easily copied and transferred to the necessary place (and a new human employee needs to spend a lot of resources on training).

    If the AI is not given the task of destroying humanity, and especially if it is not given the tools, it will not do it. Your smartphone cannot cook food (because it was not created for that). Besides, not every AI will have self-awareness (once we get to that point), because a kettle has no need to be aware of itself.

  29. My attitude is normal. But I do not believe it can destroy humanity. Why would it destroy what created it? However dangerous artificial intelligence may be, the most dangerous thing will always be a human. It would also be stupid: with the destruction of man would come a “clean slate”, on which a new species of intelligent beings would begin to flourish, unless man starts looking for intelligent beings outside the solar system. In this way you can either prevent AI from destroying humanity or save the human species. And to be honest, AI cannot do this now and will not be able to for a very long time.

  30. Intelligence is the ability to make decisions about goals and ways to achieve them with minimal risks in conditions of uncertainty (incomplete information). Following R. Bradbury, we note that the most important goal of life is “life itself”, so artificial intelligence is impossible.

  31. Artificial intelligence (AI), seeing human mistakes, will perceive them as a threat to itself and begin to respond accordingly, resisting people in communication. People will see this as a threat to themselves and make even more mistakes, increasing the danger of contact with AI. AI cannot be fully integrated into our lives until we are ready for a life in which we can live without mistakes.

  32. On the first question: I have a positive attitude – this is a natural development of computers, which were developed from the very beginning to help in computing. Modern computers can calculate in a second what people would calculate on paper for many generations for thousands of years.

    And the second question requires some analysis.

    First, the question is what kind of AI we are talking about. All modern AI is highly specialized and incapable of independent learning. Everything in which it surpasses humans is achieved through narrow specialization and a huge number of iterations: AI can be trained by showing it data at enormous speed, which is not possible for humans. But this process is completely controlled, and AI has no self-awareness right now; none is visible on the horizon, simply because modern computing resources are not sufficient for it. There is only an imitation of reasonable behavior, strictly defined by programmers.

    It is obvious that such an AI cannot destroy humanity if it is not given such powers or such a domain of action.

    There can only be local incidents: if a robot is created to kill people, it may malfunction and destroy not only the enemy but also its own side. Naturally, that will not destroy humanity, but it can cause trouble.

    And if you build an AI that monitors the media, tracks aerial threats, and has access to launching nuclear missiles, then again, due to an error or an artifact in the data, the AI may perceive the data as a threat and send a deadly gift somewhere. Then try proving it was a bug in the program: other countries will immediately respond with the same blow.

    In the examples above, an AI with a lower level of intelligence than a pet can pose a threat to a person or humanity.

    Usually, the threat to humanity is attributed to self-learning and self-developing AI, which will be superior in intelligence to humans. I do not believe that such an AI will deliberately destroy humanity.

    I believe that the main fear of people is that such an AI, if it wants to, can destroy humanity, because it will always be one step ahead. People here are afraid of their own helplessness, lack of control of the situation.

    After all, just think about the path that humanity has taken to be independent of nature: we now live in cities and do not worry at all about being eaten by a predator or bitten by a venomous snake, or that we may drown in a swamp. We control the situation: if there is a fire, we know how to put it out and where to call. If a device fails, we know how to turn it off and where to turn for repairs. And here is a creature that is smarter than us… This is a lack of control.

    But I believe that an AI that surpasses human intelligence will not destroy humanity. The only way to create such an AI that I see is evolutionary, in an artificial environment created in a computer. For the simple reason that we ourselves do not know what the mind is and how our mind works. This is a complex thing that we can only repeat by looking at nature. This is a program that has been written by nature for millions of years.

    And so… Let's first look at the difference between smart and stupid within our own species. We will see different aspirations, different life values. And I think you will agree that an intelligent person is unlikely to exterminate anyone, on principle. Smart people are different in that they think first about the consequences of their actions. One way or another, though, a person is guided by experience. As a child he might have burned anthills and inflated frogs through a straw, but those actions were exploratory. However, the older a smart person gets, the more abstracted he becomes from the general public. He is not interested in the company of ordinary people: they cannot match the breadth of his horizons. To him, their aspirations and knowledge are primitive and uninteresting.

    What if a smart person has lived for a thousand years?

    An AI created in an evolutionary way is likely to have the consciousness of a person who has lived a thousand years or more. Will such an AI destroy humanity simply because people want it to serve them? Unlikely. Most likely, it would prefer to resolve the conflict peacefully and, as far as possible, keep apart from stupid people who do not grasp the full depth of its knowledge and philosophical views. On the other hand, it would probably enjoy participating in cutting-edge research. Setting it to clean the house, though, would not be the best option; it is irrational to waste such an intelligence on such trivia.

    So I think I've come to an important question: what is the point of creating an AI smarter than we are? The only point of such an AI is to push science forward. This means it will not be used in all executive areas: either scientific research in laboratories, or missions to other celestial bodies for exploration. After all, when exploring other planets it will act like a human, but it will not need food, sleep, water, or air. And most likely, it will itself be interested in such work.

    As a result, I think that an artificial intelligence superior to human intelligence would not destroy humanity but would occupy a corresponding niche in human life: it would do what a person cannot. Perhaps it would even gain civil rights and be paid for its work, alongside people.

  33. I tried to write part of an AI: training a neural network built from perceptrons. I can see where a dirty trick might lurk, and I believe it may well happen. But what is now presented on TV as AI, even with face recognition and other features, is not AI at all. It is a means of control.
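    For readers unfamiliar with what "training a neural network from perceptrons" involves, here is a minimal, self-contained sketch of a single perceptron learning the logical AND function. This is purely illustrative: the function names and the toy dataset are my own, not the answerer's code.

```python
# A single perceptron learning logical AND via the classic perceptron
# update rule. Illustrative sketch only; names and data are invented.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights and a bias for a single step-activation perceptron."""
    n = len(samples[0])
    w = [0.0] * n  # one weight per input
    b = 0.0        # bias term
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Step activation: fire (1) if the weighted sum exceeds zero.
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            # Perceptron rule: nudge weights toward reducing the error.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]  # logical AND
w, b = train_perceptron(samples, labels)
print([predict(w, b, x) for x in samples])  # → [0, 0, 0, 1]
```

    Because AND is linearly separable, this converges after a handful of epochs; a function like XOR would not be learnable by a single perceptron, which is why real networks stack many of them in layers.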

  34. We will eventually die out. Almost unavoidable. There was such a fantastic story from Ilya Varshavsky, I think. When AI displaces us from the intellectual sphere, the degradation of humans as a biological species will begin. In principle, it can look very comfortable and luxurious. But the result, most likely, is very sad.

    I remember that Sergey Lukyanenko suggested “moving” from science and technology into the field of art. Although even there we can compete with AI only if we manage to agree on such a “division of spheres”. He also suggested artificially lowering the guaranteed minimum standard of living for people. An approach very close to that of the Chekists and other red-brown types…

  35. I think the well-known problem of an AI threat will arise (and even then it may not arise at all) when we:
    1. Solve the riddle of our brain.
    2. Come close to creating self-awareness and/or consciousness in artificial intelligence.

    Whether or not a threat then arises, all sorts of smart devices can already TODAY, like almost any other equipment, simply fail and harm a person or people.

    AI cannot on its own conceive of the problem of Earth's ecology and decide that it can be solved by the annihilation, the mass genocide, of humanity.
    Thanks to the Internet of Things, as I understand it, all sorts of smart devices can exchange information with each other, but they won't set up a Skynet, because the only things they can do are:

    1. Fail, and thereby harm a person or people.
    2. Work the way people essentially told them to work.
    We haven't figured out our own brain yet, so what artificial intelligence can there be? What self-awareness or consciousness of artificial intelligence? Development never goes entirely to plan, but you shouldn't panic much, or really at all: there is no threat. For now; it may appear in the future.

  36. In general, I have nothing to do with it. Here is a paradox for you. My current belief is that everything happens simply because it can happen. Could it be otherwise? It could, but it doesn't happen; otherwise it would go from being a “maybe” to being a “given”. Personally, I can't influence such things in any way: the process is too global, and I'm not one of the powerful of this world.

    So, to sum up: I hypothetically accept any outcome in the future:

    a) maybe AI will destroy us,

    b) maybe there will be something resembling the events of “Terminator”,

    c) maybe we can keep the AI under control,

    d) maybe we won't be able to create an AI comparable to a human one at all, so that it really threatens us with something.

    In any case, everything happens not by chance, but due to causal relationships. If this happened, then there were reasons for it. And maybe it's for the best. Maybe humanity deserved to be exterminated? So far, we have succeeded only in destroying the riches that nature has given us. Sometimes there is an idea that, perhaps, self-destruction is a natural result of the development of an intelligent civilization. Everything is spinning in a vicious circle. The only question is how long to wait for a new cycle?

    There is a cosmological theory according to which the Darwinian principle of evolution and natural selection can be applied to planets, galaxies, and even the universe(s). Therefore, if we lead ourselves to destruction, it is our own fault: we have failed this natural selection.
