Monday, 16 June 2014

Whacked By Stuart Armstrong's FHI Illogic

The future will be wonderful but the route there could be terrible. We could be whacked by the paranoid irrationality of fearful policy-shapers.

I have regularly highlighted the errors of AI risk analysts. The fear of risk is itself the only risk, because the fear is illogical; this observation led me to define the Hawking Fallacy. There is nothing to fear regarding AI, therefore delaying or restricting technology based on non-existent risks is itself very dangerous.

Stuart Armstrong from the Future of Humanity Institute (FHI) is a classic example of the danger these fear-mongers represent. Stuart stated via Singularity Weblog:

“If we don’t get whacked by the existential risks, the future is probably going to be wonderful.”

My response is simple but important. These risk analysts don't consider how their own thinking, their own minds, may be the risk. In the manner of a self-fulfilling prophecy, their fears may create the very danger they fear. Initially the danger was non-existent, but incessant focus on unreal fears eventually makes those fears real. When the architects of the self-fulfilling prophecy finally manifest their unreal fear, they ironically claim they were correct from the beginning to be afraid, blissfully unaware that they themselves are responsible for the fear becoming real.

What if we get whacked by over-cautiousness regarding unrealistic fears? There is no rational foundation to the notion of AI being dangerous. The only danger we face is human irrationality. AI is the only solution to the dangers of human irrationality, but ironically some people fear the only solution; insult is then added to injury because their fears delay that solution. The true danger is people who clamour about existential threats regarding AI. The problem is stupidity; the solution is intelligence. Naturally, stupid people fear intelligence: they think intelligence could be a threat.

A self-fulfilling prophecy can be positive or negative. When it is negative it is essentially a nocebo; when positive it is essentially a placebo. AI threat predictions resemble nocebos.

