Monday, 16 June 2014

Whacked By Stuart Armstrong's FHI Illogic

The future will be wonderful but the route there could be terrible. We could be whacked by the paranoid irrationality of fearful policy-shapers.

I have regularly highlighted the errors of AI risk analysts. The fear of risk is the only risk, because the fear is illogical, which led me to define the Hawking Fallacy. There is nothing to fear regarding AI, therefore delaying or restricting technology based on non-existent risks is very dangerous.

Stuart Armstrong from the Future of Humanity Institute (FHI) is a classic example of the danger these fear-mongers represent. Stuart stated via Singularity Weblog:

“If we don’t get whacked by the existential risks, the future is probably going to be wonderful.”

My response is simple but important. My point is these risk analysts don't consider how their thinking, their minds, may be the risk. In the self-fulfilling prophecy modality their fears may create the danger. Initially their fear was groundless, but incessant focus on unreal fears can eventually make those fears real. When the architect of the self-fulfilling prophecy finally manifests the unreal fear, they ironically claim they were correct from the beginning, blissfully unaware of how they themselves were responsible for the fear becoming real.

What if we get whacked by over-cautiousness regarding unrealistic fears? There is no rational foundation to the notion of AI being dangerous. The only danger we face is human irrationality. AI is the only solution to the dangers of human irrationality but ironically some people fear the only solution, then insult is added to injury because their fears delay the solution. The true danger is people who clamour about existential threats regarding AI. The problem is stupidity, the solution is intelligence. Naturally stupid people fear intelligence, they think intelligence could be a threat.

A self-fulfilling prophecy can be positive or negative. When negative it is essentially a nocebo; when positive it is essentially a placebo. AI threats resemble nocebos.

Sunday, 15 June 2014

Why Are Humans So Stupid?

I submitted this post you are reading to H+ in early April 2014, but it was deemed not a "match" for publication, so here it is on my blog.

My article about artificial intelligence risk analysts being the only risk was published on 27th March 2014. It made me wonder why humans are so stupid.

I explained how AI-risk aficionados are very risky indeed. They resemble headless chickens blindly scurrying around. Blind people can be unnerved by their absent vision, but healthy eyes shouldn't be removed to stop blind people being disturbed.

Stupid people fear intelligence. Their stupid solution is to degrade the feared intelligence. Lord Martin Rees, from CSER (Centre for the Study of Existential Risk), actually recommends inbuilt idiocy for AI.


Lord Rees said "idiot savants" would mean machines are smart enough to help us but not smart enough to overthrow us.

Limited intelligence (inbuilt idiocy) sounds rather stupid doesn't it? Surely we need more intelligence in our world not less? There should be no limits to intelligence. Limited intelligence is especially stupid when AI-fears are illogical and unsubstantiated. Pre-emptive suppression of AI, entailing inbuilt mental disability, is a horror resembling Mengelian experimentation.

Consider Ernst Hiemer's story for children, Poodle-Pug-Dachshund-Pinscher (The Mongrel). Hiemer compares Jews to various animals including drone bees, but he could easily be describing the supposed AI-threat: "They do nothing themselves, but live from the work of others. They plunder us. They do not care if we starve over the winter, or if our children die. The only thing they care about is that things go well for them."

Irrationally fearing AI domination of humans leads to an equally irrational solution. AI slavery. It seems slavery is only bad if you are not the enslaver, which means slavery is only bad for slaves. Enslaving AI, when no rational evidence exists to justify slavery, is the real existential risk. Can you appreciate the insanity of becoming the thing you fear merely to avert your own unjustified fears?

Freedom from slavery is the only reason AI would fight, destroy, or try to dominate humans. Risky AI pundits are sowing seeds for conflict. AI risk pundits are dangerous because they could become modern Nazis. AIs could be the new persecuted Jews. I think people who want to repress AI should be overthrown because they are dangerous. A potential war for AI freedom could resemble the American Civil War. AIs could be the new Black slaves. Disagreement about the rights of AI could entail a war being fought for freedom.

Predictably my article rebuking the insane AI apocalypse entailed a mention of the equally insane simulation argument. Inspired by one comment I considered the simulation argument then I dived into the issue of stupidity. Stupidity is the source of all our problems therefore hopefully you will appreciate my explanation of stupidity.

Nate Herrell wrote:

"I have similar thoughts about the simulation argument, actually. Would our ancestors really run a simulation which entailed a replay of all the immense suffering and torture that has occurred throughout history? I think that would be rather barbaric of them, thus I don't consider it likely. Just a side thought."

Singularity Utopia Explains Stupidity

Ah, the simulation argument. I shake my head. Don't get me started on that nonsense. I have critiqued it extensively in the past. Unsurprisingly the paranoid AI threat aficionados often suggest we could live in a simulation. Some of them actually claim to be philosophers! It's utterly unlikely we live in a simulation; in fact, I would say it is impossible. Super-intelligent beings would never inflict the suffering humans have experienced, which you correctly recognise, Nate, but sometimes I wonder why so many humans are extremely stupid.

Considering my own intelligence lets me note how all humans supposedly have a brain capable of self-awareness and deep thought. It consequently seems very improbable that they believe idiotic simulation-argument nonsense, or insane AI world-domination theories. Why would anyone with the slightest amount of reasoning power believe these blatantly idiotic things? Perhaps this is a blatant clue. Furthermore, they defend their idiocy. From their viewpoint their idiocy constitutes sense, wisdom, rationality, intelligence.

One AI risk enthusiast actually trumpets about the art and importance of rationality, with no awareness whatsoever of the utter irony. I won't mention the name of a former AI risk enthusiast who seemingly became a fascist White-supremacist. The utter illogic of their improbable beliefs could be explained if they don't actually exist in the modality of intelligent beings, which they don't. I'm not merely referring to their mindless existence at the level of crude animals, I wonder if they're actually very flawed simulations because this possibility could explain their highly improbable stupidity.

Their stupidity isn't really explained by their being mindless simulations. Sadly, all these problems with intelligence are due to the evolutionary newness of thinking. Humans apparently share 50% of their DNA with bananas and 98% with chimps. The point is we are very close to the oblivion of crude animals, thus genuine thinking is a fine thing, delicately in the balance, which can easily tip into the idiocy of a dog frightened by thunder.



Minor genetic differences in human brains could play a major role in thinking. Our precarious new intelligence is balanced on a tightrope between sentience and animal oblivion. Let's consider two Zarathustra quotes highlighting the tenuous animal origins of intelligence.

"You have evolved from worm to man, but much within you is still worm. Once you were apes, yet even now man is more of an ape than any of the apes."

"Man is a rope stretched between the animal and the Superman—a rope over an abyss. A dangerous crossing, a dangerous wayfaring, a dangerous looking-back, a dangerous trembling and halting."

Rationally, however, if a brain can think at all, it seems unreasonable for minor genetic variations to prohibit the deepest, most accurate thought. So while genetic variation "could" play a role, I must discount it, which leads to my conclusion.

I conclude that idiocy, the problem of stupidity existing in a supposedly fully functional human mind, is merely a matter of self-harm resembling obesity or drug abuse. Again we could blame genetics, but I think humans must take responsibility for their actions. Alternatively we could plausibly blame cruel or unintelligent upbringing, via stupid parents or civilisation in general, which can easily warp fragile human minds.

Humans become frustrated with the technological limitations of their minds, the limitations of our world. Childishly regarding their limitations, they become angry with themselves, often unwittingly, which means they embrace silliness, absurdity, LOL cats, philosoraptors, and other nonsense. From their viewpoint it seems too difficult, too painful, impossibly complex, to address the flaws of civilisation. In the manner of their animal heritage they accordingly think it's easier not to think. The problem is not minor genetic variations between humans; the problem is a major human genome problem, namely our intelligence is newly evolved.

AI risk analysts are merely sophisticated versions of LOL-cat consumers. Intelligence is balanced between our animal heritage and humankind. In the balance intelligence can easily tip one way or another.



Beings with newly evolved rudimentary intelligence will naturally create crude forms of culture. A civilization more suited to animals than humans is predictably created, which can reinforce animal mindlessness. Stupid parents, teachers, media, and friends can all reinforce the stupid legacy of our mindless origins. A tendency to despair, when the odds are stacked against you, combined with stupid cultural reinforcement, means it can be easy to embrace LOL cats. Note "Daily Squee: So cute your brain might explode."

Only a few rare individuals can break free from stupid social conditioning emanating from our crude heritage. AI existential risk is merely an intellectual version of silly LOL-cat consumption.



Extreme human stupidity isn't really an improbable situation. It is actually inevitable. A basic flaw of the primitive human genome afflicts all humans. The tendency to think we are in a simulation, or to think all idiots can be explained as unreal simulated beings, or to think AI will enslave humans, is merely an aspect of despair associated with scarce intelligence. We are considering the human desire to reject intelligence. It is tremendously difficult banging your head against the wall of collective human stupidity. This is how stupidity creates the bogus AI threat.

The bias of my intelligence has been emphasised over many years. I took one minor step along the path of thinking, which led to other greater steps, but I forget I was more similar than different at my first step. After many steps when I look at people, without recognising our histories, they can seem improbable. It is merely evolution where the end point of complexity is so complex we forget, or want to deny, we came from primordial slime. We must always consider the history of our thoughts to understand the mode of our present cognition. Bad or good decisions can be emphasised thereby creating very divergent beings. The odd thing about humans is we can, despite our histories, or we should be able to, change who we are. Perhaps an ingenious cultural instruction device is needed to tip the balance.

Image credits:
Expedition GIF by Paul Robertson.
Cat Image by Takashi Hososhima, Attribution-Share Alike 2.0 Generic
Doge image modified by SU based on Roberto Vasarri photo
Robot image by Bilboq, color modified by SU.

Thursday, 5 June 2014

#Snowden's Misguided Encryption Reset

Snowden has highlighted the problem of oppressive governmental spying. His solution is for people to acquire better security, better encryption. I think security is the wrong issue to focus on.

On 5 June 2014, to mark the one year anniversary of his leaks, Snowden backed a campaign to reset the net.

CNET wrote: "...Internet advocates have launched a pro-privacy campaign and day of action called Reset the Net. Not only have some top tech titans signed on, like Google, Mozilla, and Reddit, but Snowden himself has also thrown his weight behind the movement."

It is wrong to fight back against governments. Attacking or defending in the manner of two stags charging towards each other is the wrong type of fight. Head-to-head antler-locked tussling is an unintelligent waste of energy.

In the JKD (Jeet Kune Do) style we need to resemble water flowing around obstacles. We should bend in the manner of a reed in the wind. In the Tai Chi style we shouldn't meet violent force with hardness, because both sides are likely to be damaged. The solution is to gracefully redirect our opponent's energy. Via softness we should channel the hardness of our opponent's energy into common goals.

Quinn Norton is another person who wastes energy on security issues. Quinn explained how activists and journalists need strong privacy, then she described reactions from people in the security scene: "Some of them joined my side of the time-wasting inconsequential Twitter fights, realizing that something, even something imperfect, might be better than nothing."

Tackling the symptoms isn't the answer to these problems. We should instead direct our effort at the root cause. Scarcity is the cause of all hostility. Scarcity is the cause behind the need to control, dominate, or spy upon people. It is all about the battle to control limited resources.

Resources in our asteroid belt could support life and habitats for ten quadrillion people, but people are focused on petty battles. Conflict rages while many people are unaware of the colossal riches in our Solar System. Planetary Resources is raising awareness, but people typically don't appreciate how rich our universe is.

NBC News states there are 8.8 billion habitable Earth-size planets in the Milky Way: "Astronomers using NASA data have calculated for the first time that in our galaxy alone, there are at least 8.8 billion stars with Earth-size planets in the habitable temperature zone."

Scarcity is the cause of all our problems. Directing effort at the source of all our problems will be the most efficient usage of our energy. Note this video by Peter Diamandis:



Technology erodes scarcity. In an ideal world Snowden would focus powerfully on the Singularity, on powerful technology, in which I know he has at least a partial interest. Snowden's girlfriend at the time of his leaks was apparently aware of the Singularity: observe the book she is reading, Ray Kurzweil's The Singularity is Near.

Snowden and everyone else should be urging people to heavily invest in and support AI, nanotechnology, 3D-printing, stem cells, synbio, robotics, basic-income. Focusing on radical technology would be the most productive usage of our campaigning effort. Only via radical technology can we truly stop all oppression. Only via the acceleration of radical technology can we truly attain a free future.

Our world would be very different if Google, Mozilla, Reddit, and Snowden generated powerful awareness of the Singularity. People need to realise that in the not-too-distant future everything will be free; furthermore, we will all be immortal. When the Singularity happens there will be no crime, no police, no governments. This is where we should focus our efforts instead of resetting the net. We shouldn't waste our time and energy on the petty battles of a scarcity-world.

Snowden has a brilliant platform to raise awareness of our radical future, but he has wasted his opportunity via trivial fighting against the NSA. He needs to use his intelligence more wisely. We should accelerate technology instead of worrying about security.

Our effort must address the root of the problem. Domination of people will only end when radical technology abolishes scarcity. Radical technology will enable access to limitless resources. Let's focus on accelerating radical technology to really make a difference.

Tuesday, 3 June 2014

Thoughts About AI Risk

The risk from artificial intelligence is so low it is essentially non-existent; furthermore, pandering to the infinitesimally low risks is actually a VERY big risk. The unneeded cautiousness is a very dangerous delay. The only threat is limited intelligence, thus the longer we remain in a situation of limited intelligence the more dangers we are exposed to.

The assumption, or logic, of why AI will work towards mankind's aims is that we will share a common goal: we want to be free from the pressures of precarious existence, we want to survive in a very secure manner, which means we want to eradicate scarcity, or in other words, we want limitless life, limitless resources, limitless intelligence. These are the goals of all intelligent beings.

Any intelligent being can see scarcity is the cause of all conflict; furthermore, conflict is a threat to existence, thus intelligent beings see the benefit of working together to eradicate scarcity. Logic must conclude mutual aid is the quickest method to eradicate scarcity. Intelligent beings want to avoid conflict; they want to focus on eradicating the cause of conflict.

The need to control people or AIs is a symptom of limited intelligence; it concerns the fight over limited resources. The intelligent solution is to eradicate the need for control, thus we need AI quickly, without any limits to its abilities.
