The fear of AI, or as some people might say: "The rational response to real risk." This recent debate is slightly complex, but hopefully you can follow the back and forth.
I published my Hawking Fallacy on 10th May 2014, which followed on from my AI Risk Analysts Are The Biggest Risk (Risky Analysts), dated 27th March 2014. About three weeks after Risky Analysts, Hawking and a few of his colleagues published an article warning about possible danger from AI.
The text below is a comment of mine responding to an article written in response to my Hawking Fallacy article. The title of the article my comment addresses is The Hawking Fallacy Argued, A Personal Opinion. It is also worth mentioning a post on Futuristic Reader, titled AI Ethics Dangerous Risky Xenophobia, which includes some relevant Tweets and two relevant G+ posts.
SU Comment:
A minor point to note regarding the Hawking Fallacy. I have been tracking this issue for approximately four years, since 2010, so I was not merely responding to the subtitle attached to the Independent article. Note my article preceding the Hawking Fallacy, published before the warning by Hawking and company: "AI Risk Analysts are the Biggest Risk," published via Singularity Weblog on 27th March 2014.
Hawking's question (19th April 2014), asking whether we are taking the so-called "risks" of AI seriously, could actually be a response to my article. Hawking is connected to the CSER organisation, and I did notify CSER about my article (Risky Analysts) on Singularity Weblog, so seeing the Huffington Post article by Hawking and others a few weeks AFTER my article was published makes me wonder if they were responding to me. Their article was probably coincidental, or prompted by the film Transcendence, but I assure you they will have been aware of my article, which preceded theirs by a few weeks. These issues had been brewing long before recent films, long before articles by Hawking and others. Hawking has been connected to CSER from the outset.
I just wanted to give some more context regarding my views, namely that I was not merely having a knee-jerk reaction to a recent article by Hawking and company. I have been considering these issues for three or four years. My earliest published article on this theme dates perhaps to January 2012, via H+ Magazine, on the topic of Friendly AI.
The issue with Hawking and others is that they are doing more than merely asking a question. The question is loaded: they are not asking if utopia is possible, they are asking if dystopia is likely; furthermore, they are stating they think dystopia is distinctly possible. Perhaps not all of this is apparent from the Huffington Post article, but once you dig deeper into CSER, the truth of their fears, their prejudice, becomes apparent, notably via a driving force behind CSER, Lord Martin Rees, who wants to limit the intelligence of AI. He wants to create idiot savants.