Sunday, November 2, 2014

The Philosophy of Intelligence Explosions and Advanced Robotics (Series Index)


HAL, from 2001: A Space Odyssey


Advances in robotics and artificial intelligence are going to play an increasingly important role in human society. Over the past two years, I've written several posts about this topic. The majority of them focus on machine ethics and the potential risks of an intelligence explosion; others look at how we might interact with and have duties toward robots.

Anyway, for your benefit (and for my own), I thought it might be worth providing links to all of these posts. I will keep this list updated as I write more.


  • The Singularity: Overview and Framework: This was my first attempt to provide a general overview and framework for understanding the debate about the technological singularity. I suggested that the debate could be organised around three main theses: (i) the explosion thesis -- which claims that there will be an intelligence explosion; (ii) the unfriendliness thesis -- which claims that an advanced artificial intelligence is likely to be "unfriendly"; and (iii) the inevitability thesis -- which claims that the creation of an unfriendly AI will be difficult to avoid, if not inevitable.

  • The Singularity: Overview and Framework Redux: This was my second attempt to provide a general overview and framework for understanding the debate about the technological singularity. I tried to reduce the framework down to two main theses: (i) the explosion thesis and (ii) the unfriendliness thesis.


  • AIs and the Decisive Advantage Thesis: Many people claim that an advanced artificial intelligence would have decisive advantages over human intelligences. Is this right? In this post, I look at Kaj Sotala's argument to that effect.

  • Is there a case for robot slaves? - If robots can be persons -- in the morally thick sense of "person" -- then surely it would be wrong to make them cater to our every whim. Or would it? Steve Petersen argues that the creation of robot slaves might be morally permissible. In this post, I look at what he has to say.

  • The Ethics of Robot Sex: A reasonably self-explanatory title. This post looks at the ethical issues that might arise from the creation of sex robots.



  • Bostrom on Superintelligence (2): The Instrumental Convergence Thesis: The second part in my series on Bostrom's book. This one examines the instrumental convergence thesis, according to which an intelligent agent, no matter what its final goals may be, is likely to converge upon certain instrumental goals that are unfriendly to human beings.

  • Is anyone competent to regulate AI? - Second post looking at Matt Scherer's work. This one looks at the three main regulatory bodies in any state (the legislature; specific regulatory agencies; and the courts) and examines their competencies. It ends with a brief evaluation of Scherer's proposed regulatory model.