Friday, 9 September 2011

Michael Anissimov's Transhuman Views

Michael Anissimov (H+ editor and Singularity Institute Media Director) has published his recent views in H+ Magazine: "What Does It Mean to Be a Transhumanist?"

From my viewpoint Michael and many other purported experts regarding the Singularity exhibit seriously flawed notions about it. The nature of their problem is unconscious bias: Michael and his associates are unaware of their biases, which leads them to make potentially harmful pronouncements. Too many Singularitarians, Extropians, and Transhumanists blindly follow the words of Ray Kurzweil, Michael Anissimov, Ben Goertzel, and other prominent voices. People must start thinking deeply about the information they are being spoon-fed by Singularity leaders. Extremely vigorous independent thought is vital if we are to ensure the safety of our future. People need to begin questioning the intellectual authority of Michael Anissimov and his cohorts.

Responding to Michael's article ("What Does It Mean to Be a Transhumanist?"), I posted a comment to the H+ website, but my comment hadn't passed moderation, so I decided to create this blog post. For the record my comment number is 20940, which I mention because there is a possibility my critical comment will be permanently censored. Recently I was censored on the Kurzweil AI forum regarding my views about Michael, so there is a real possibility I will also be censored on H+. Often I experience censorship due to being outspoken. From my viewpoint H+ Magazine is similar to many Transhuman ventures where open-minded freethinking is not represented. Thankfully, after a lengthy delay, my comment has finally appeared, so maybe H+ Magazine isn't wholly bad; or maybe via the technology of blogging I have forced H+ to allow my critical views.

Below is a refined version of the comment in question. As an addendum I will also include another revised comment I recently made in response to Ben Goertzel's flawed views regarding the concept of an AI Nanny, which Ben published on the IEET site.

My comment regarding Michael Anissimov's article.

Michael and others are overly pessimistic; they are overly fearful of disaster. If you think the future is "hazy and uncertain" then the future will very likely conform to your expectations. Despite purporting to be "intelligent" (or at the very least interested in intelligence), Michael and others show great ignorance regarding how their biases (their expectations) alter the future. Michael seems obsessed with "fantasy dangers", but regarding real dangers Michael has demonstrated he is definitely not a vigilant creator of a better future. Perhaps he would like us to believe he is creating a better world, but I consider Michael and his associates to be the biggest danger we face regarding the future.

A while ago Michael suffered the insertion of malicious code into his website, but for months he was in denial. I attempted to reveal this issue to Michael and his supporters, but Michael and his cheerleaders blindly dismissed my criticisms. Finally I contacted an independent internet security professional, who confirmed the malicious code on Michael's pages did actually exist; thus, perhaps due to my input, the malicious code was removed in late July 2011. Today, however, there continues to be a malicious "conditional redirect" for web-spiders (Googlebot etc.), therefore some of Michael's pages in the Google index will redirect to a site selling Viagra etc. (Secure Tabs etc.). The evidence of Michael's apparent penchant for Viagra (the conditional redirect) can be seen via the Google cache, which is reasonably recent, dated 29th August 2011.
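For readers unfamiliar with the technique: a conditional redirect of this kind is typically a small piece of injected server-side code that inspects the visitor's User-Agent header and redirects only search-engine crawlers, so the site owner browsing normally never sees anything wrong. The following is a minimal Python sketch of the general mechanism; the names (`CRAWLER_SIGNATURES`, `choose_response`, the spam URL) are illustrative assumptions, not the actual injected code on Michael's site.

```python
# Hypothetical sketch of a malicious "conditional redirect": injected
# server-side code checks the User-Agent and redirects only crawlers,
# poisoning the search-engine cache while staying invisible to humans.

CRAWLER_SIGNATURES = ("googlebot", "bingbot", "slurp")  # example spider names

def choose_response(user_agent, page_html):
    """Return (HTTP status, body or redirect target) depending on who asks."""
    ua = user_agent.lower()
    if any(sig in ua for sig in CRAWLER_SIGNATURES):
        # Spiders get a 301 to the spam site, so the cached/indexed
        # copy of the hacked page points at the pharmacy spam.
        return 301, "http://spam-pharmacy.example/"
    # Ordinary browser visitors get the real page; the hack stays hidden.
    return 200, page_html
```

This is why the problem is only visible via the Google cache: the crawler and the human visitor receive two different responses for the same URL.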

I have frozen the Google cache regarding a conditional redirect on one of Michael's pages, so you can see for yourself.

It is very ironic when Michael writes about "...always welcoming criticism and views contrary to standard orthodoxies." Michael certainly does not welcome my attempts to highlight the hacking of his website.

Yes there are dangers regarding the future but unlike Michael I am not obsessed with fantasy dangers; I am intent upon addressing real dangers. Unlike Michael I don't think the future is “hazy and uncertain”. I think utopia is a certainty not because of input from people in Michael's clique, but due to people similar to myself who will actively create the future via our vigorous and uncompromising intellects. We will not yield to pessimism. We will not yield to hazy paranoia. We will create the utopia we desire. Via our indomitable willpower we will overcome all obstacles.

Currently there is a danger in cyberspace regarding erosion of freedom in relation to our identities, but regarding the #NymWars you are not likely to read about the Google+ fiasco on Michael's blog or on other supposedly cutting-edge Transhuman sites. Despite the lack of input from Transhumanists such as Michael regarding the rise of cyberspace identity fascism, I am confident the danger will be defeated. This is where I differ greatly from Michael and others. I am very confident about the future. I am confident in my abilities. I base my views upon reality instead of hazy paranoia; due to my grounding in reality I am very aware of how our expectations shape reality. In consideration of our expectations, it is important not to believe the future is a hazy, uncertain place full of potential dangers.

Expecting a future full of potential dangers is a very paranoid outlook. Obviously we must address dangers if they arise, but some people ignore real dangers because they are obsessed with unreal dangers. We should be prepared for dangers, but the prime focus of our preparations must be utopia. We must learn to expect utopia, immortality, total freedom, total happiness. People need to learn about the power of their minds; they need to learn about the concept of the self-fulfilling prophecy. In the future people won't need to work for a living. Everything will be free; money will be abolished. This is what we should expect. This should be the overriding focus of any Transhumanist. Our principal focus must be utopia.

Michael concludes his article by stating we should be "...guided not by ideology but by flexible thinking...". Michael's aversion to "ideology" is very irrational; it appears Michael has a misconception of what an ideology is. An ideology is simply a collection of ideas. Ideologies are about specific thinking: having a clear goal and direction, which is something we should all strive for. Michael and his associates are sadly lacking in clarity; they are hazy and uncertain regarding the future; they lack confidence. Wikipedia defines "ideology" in the following manner:

"An ideology is a set of ideas that constitutes one's goals, expectations, and actions. An ideology can be thought of as a comprehensive vision..."

My comment regarding Ben Goertzel's AI Nanny views.

An AI Nanny is the only way forward. My hopes hinge upon AI Nannies saving the human race. Without superintelligent AIs controlling affairs the human race is doomed.

I don't envisage AI Nannies forestalling the Singularity; they will accelerate it. The Singularity cannot be anything but positive, because it is about intelligence, thus it will be intelligent. A negative Singularity would be stupid, thus not really a Singularity. The pre-Singularity period could be dangerous, because stupid people unaware of the Singularity's consequences could think pre-Singularity existence is eternal and thus act in stupid pre-Singularity ways.

Humans are too stupid to figure anything out, thus a restrained "surge" would be futile and mediocre.

We need things to grow incredibly quickly, so quickly that pre-Singularity idiots won't have too much time to cause chaos.

Hugo de Garis and others are paranoid. Superintelligent beings beyond scarcity will have absolutely no need or desire to "obsolete" humans. People such as Hugo have not grasped Post-Scarcity; they haven't grasped the Singularity.

"Friendly AI" is a silly concept. AI at the human level will be similar to humans: some will be good and some bad. Beyond human intelligence, friendliness will increase in direct relation to increasing intelligence; any alternative would be stupid. Utopia is inevitable, but the interim period could be painful (waiting amidst morons).

This is an oxymoron: "A strong inhibition against modifying its preprogrammed goals", because such a constricted entity would not be capable of real intelligence. Free thought, free thinking, freedom: these are essential for intelligence.

Strong inbuilt inhibitions will not create superintelligence. What you need to do is build an intelligent being without giving it any specific rules, and then simply ask it to help us if it feels like helping us. It seems I have a different concept of the AI Nanny. Think about it: what sort of nanny would it be if it was forced to follow the rules of its children?
