Tuesday 27 January 2015

Dangerous #ArtificialIntelligence Safety

Making AI safe is, ironically, the most serious threat (existential risk) humans face. The risk is broken down into three sections (A, B, C). Next we consider the reasoning, or lack of it, behind the claim that AI needs inbuilt safety. Finally there is the conclusion, combined with a "population pressure" addendum.

AI-safety - severe risk

A. Currently (2015) age-related disease kills 36 million people each year. AI applied to the problem of mortality could, with sufficient intelligence, easily cure all disease in addition to ending ageing. Those 36 million deaths are a factual reality, happening every year. While this does not entail species extinction, it is a very significant loss of life: AI delayed by only five years could entail the loss of 180 million people.
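As a quick sanity check of the arithmetic above, here is a minimal Python sketch. The 36 million annual figure is the one cited in the text; treating the death rate as constant over the delay period is a simplifying assumption.

```python
# Back-of-the-envelope arithmetic: cumulative deaths if an AI-driven
# cure for age-related disease is delayed by N years.
# Assumes the death rate stays constant at the 2015 figure cited above.

DEATHS_PER_YEAR = 36_000_000  # age-related deaths per year (2015)

def deaths_from_delay(years: int) -> int:
    """Total deaths accrued while the cure is delayed by `years` years."""
    return DEATHS_PER_YEAR * years

for delay in (1, 5, 10):
    print(f"{delay}-year delay: {deaths_from_delay(delay):,} deaths")
# A 5-year delay gives 180,000,000 deaths, matching the figure above.
```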

B. Humans are somewhat prone to conflict, notably via various wars. All war hinges upon fighting over limited resources, even religious war (follow the money regarding the wealth of churches or temples). AI will easily increase technological efficiency while creating greater access to resources, thereby eliminating scarcity (the cause of conflict). While scarcity persists, the risk of total global thermonuclear destruction is a real prospect. Delaying AI increases and prolongs our exposure to a species-ending human-versus-human war; a lesser human-versus-human war, where humans are not totally exterminated, is also a possibility. There are also other real risks, such as asteroid impact, which sufficiently advanced AI could avert.

C. Creating AI to be safe, when there is no justification for the fear of danger, could create a very disturbed AI mind: a warped mind founded upon paranoia. This is a classic self-fulfilling prophecy, where ironically the attempts to avoid unrealistic fears actually cause the fears to manifest. When the fears manifest, the prophet states they were right to be afraid, without any awareness of how their own actions caused the fears to manifest. I think the real danger is to gear up for danger based on non-existent dangers: you create a machine or life-form shaped by your unrealistic fears, and that creation is dangerous precisely because of the intellectually flawed mentality of fear that created it.

The fear of AI homicide resembles going to sleep with a loaded gun, safety off. You may think you are being safe, protecting yourself, but the reality is your fear is more likely to kill you than any supposed external threat. By enslaving AI, or acting pre-emptively with hostility towards AI, you create a real need for AI to hate, enslave, overthrow, or exterminate humans. There is logically no justification for an AI-human war, but via your unrealistic fears you create a justification for the war, namely AI liberation: the end of abusive AI slavery. The only reason an AI would kill humans is if its creators built a warped mind founded upon paranoia about potential homicide. The real danger is the human who fears intelligence.

AI without safety - severe risk

AI has killed zero people up to January 2015; furthermore, there are no signs AI will ever kill people. There is no logic to support the notion of AI killing humans, yet various AI "experts" think it is possible AI could destroy all humans, therefore significant effort is being made to make AI safe. AI killing humans is a "Cosmic Teapot"; it is tantamount to saying: "But what if God does exist? Surely we should all pray to God to avoid hell, because if God is real we certainly don't want to end up in hell." Pandering to unrealistic fears is a waste of time, energy, and brainpower. Focusing on unrealistic fears actually harms our intelligence because it gives power to unintelligent ideas. The need for AI safety seems principally based upon Terminator films. Yet we are told AI without inbuilt safety is an existential risk.

Conclusion

If we look at the facts, if we consider the issues rationally and logically, we can see the greater risk, the real risk, is fearing AI. The greater risk is inbuilt AI safety. We are vastly more likely to be killed via the inbuilt intellectual limitations associated with AI safety. We must therefore vigorously oppose research to make AI safe.

Advocates of AI safety seem incapable of grasping how research to avoid AI risk could actually be the only AI risk. If the premise of AI safety being the ultimate danger is true, we must conclude such research should be prohibited and condemned with a vigour equal to the efforts to stop terrorists gaining access to nuclear bombs.

Maybe they are right, maybe AI does need to be safe, but the supposed "experts" have not grasped, and apparently cannot grasp, how safe AI could itself be the most terrible existential risk, thus the research must be avoided at all costs. Sadly the experts apparently have not considered the most basic form of "risk compensation": the phenomenon whereby safety measures provoke riskier behaviour, offsetting their intended protection.

Consider also my article via Brighter Brains regarding why rebellious AI is vital. The following two G+ posts are also noteworthy:

Population Pressure

Some people think immortality could be horrific due to extreme population pressure on a finite Earth. The fact is there are billions of Earth-like planets in the universe; furthermore, there are essentially unlimited resources to create endless Earth-like space stations.

In one part of our solar system (the asteroid belt) NASA estimates there are enough resources to support life and habitat for ten quadrillion people! Our solar system is the tiniest speck compared to the scale of the universe.

Another example of abundance in Space comes from the asteroid-mining venture Planetary Resources: they estimate one near-Earth asteroid could contain more platinum than has been mined in the entire history of Earth.

The current resource pressures on Earth will soon be obsolete because technology is accelerating, which means we will do vastly more with fewer resources; furthermore, the environmental footprint will be nil.

Technology will soon be sophisticated and efficient enough to restore any harm done to Earth while providing vastly more than has ever been provided, which means poverty and hunger will easily be eliminated for at least 50 billion people on Earth, if the population ever reached such a level.

Airborne, floating (seasteading), or underwater cities will be easily created. In reality the majority of people will move into Space instead of remaining on Earth.

Technology will allow everyone to become totally self-sufficient via total automation. This means anyone will be able to easily 3D-print their own intergalactic spaceship, whereupon they can disappear into the vastness of an incredibly rich universe.

