Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe.

It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.

The probability of this type of scenario is widely debated, and hinges in part on differing scenarios for future progress in computer science. Once the exclusive domain of science fiction, concerns about superintelligence started to become mainstream in the 2010s and were popularized by public figures such as Stephen Hawking, Bill Gates, and Elon Musk.

Two sources of concern are the problems of AI control and alignment: controlling a superintelligent machine, or instilling it with human-compatible values, may be a harder problem than naively supposed. Many researchers believe that a superintelligence would naturally resist attempts to shut it off or change its goals, as this would prevent it from accomplishing its present goal, and that it would be extremely difficult to align a superintelligence with the full breadth of important human values and constraints. In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation.

A second source of concern is that a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise. To illustrate: if the first generation of a computer program able to broadly match the effectiveness of an AI researcher can rewrite its own algorithms and double its speed or capabilities in six months, then the second-generation program is expected to take three calendar months to perform a similar chunk of work.
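The doubling arithmetic behind the intelligence-explosion illustration can be sketched as a geometric series: if each generation halves the calendar time of the previous one, total elapsed time converges rather than growing without bound. This is a toy model, not anything from the literature; the function name and the six-month first interval (taken from the example above) are illustrative assumptions.

```python
def time_to_generation(n: int, first_interval_months: float = 6.0) -> float:
    """Total calendar months elapsed after n self-improvement cycles,
    assuming each cycle runs twice as fast as the one before it
    (so each takes half the previous calendar time)."""
    return sum(first_interval_months / 2**k for k in range(n))

# Generation 1 takes 6 months, generation 2 a further 3 months, and the
# running total 6 + 3 + 1.5 + ... approaches a limit of 12 months.
```

Under this (highly idealized) assumption of clean doubling, arbitrarily many improvement cycles complete within a bounded window, which is one way to make the "sudden and unexpected" framing concrete.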