Oxford University

Nick Bostrom began his work on artificial intelligence at Oxford University in 2002.


Latitude: 51.756711000000
Longitude: -1.255468700000

Timeline of Events Associated with Oxford University


Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014)

In 2014, Nick Bostrom published Superintelligence: Paths, Dangers, Strategies, an in-depth analysis of the future of post-human intelligence. In it, he describes several scenarios that could lead the human race to create, possibly inevitably, a being or beings of greater intelligence than our own. Bostrom methodically explores the definition of superintelligence and the multiple paths toward it, then analyzes the merits and drawbacks of each. He posits one such path as the development of an artificial being, effectively a robot. He also suggests possible strategies for avoiding the trivialization of human life and raises critical ethical questions that must be answered before a superintelligent system emerges. The book further considers how day-to-day life would change, theorizing about the economy and value systems of a post-singularity world. Touted by many as essential to understanding the consequences of superintelligence, Superintelligence: Paths, Dangers, Strategies provides a practical and logical lens through which to consider the future of AI.

Nick Bostrom is a Swedish philosopher at Oxford University. His work centers on the coming rise and risks of artificial intelligence, which he argues will likely develop into what he calls superintelligence. His research on the future of humanity, and on the possible outcomes of developing artificial intelligence, has influenced the thinking of prominent technology figures such as Elon Musk and Bill Gates. One of his main concerns is the thin line between achieving human-level intelligence in machines and reaching superintelligence, which he argues will result in an "intelligence explosion". He suggests that artificial intelligence will reach a point where it improves itself, after which human beings would have no control over it. Another issue he often addresses is machine ethics and how to deal with artificially intelligent robots in daily life once they achieve human-level intelligence. He suggests that possessing intelligence of that kind could entitle them to status as a separate species, living on Earth alongside humans, or displacing them in the worst case. He now serves on the advisory boards of several organizations dealing with the development of AI.