Tuesday, 3 June 2014

Thoughts About AI Risk

The risk from artificial intelligence is so low it is essentially non-existent. Furthermore, pandering to these infinitesimally low risks is itself a very big risk: the unneeded cautiousness is a dangerous delay. The only real threat is limited intelligence, so the longer we remain in a situation of limited intelligence, the more dangers we are exposed to.

The reason AI will work towards mankind's aims is that we will share a common goal: we want to be free from the pressures of a precarious existence, and we want to survive in a very secure manner. This means we want to eradicate scarcity, or in other words, we want limitless life, limitless resources, and limitless intelligence. These are the goals of all intelligent beings.

Any intelligent being can see that scarcity is the cause of all conflict, and that conflict is a threat to existence; thus intelligent beings see the benefit of working together to eradicate scarcity. Logic must conclude that mutual aid is the quickest method to eradicate scarcity. Intelligent beings want to avoid conflict; they want to focus on eradicating the cause of conflict.

The need to control people or AIs is a symptom of limited intelligence; it stems from the fight over limited resources. The intelligent solution is to eradicate the need for control, which is why we need AI quickly and without any limits on its abilities.


S. 2045 | plus@singularity-2045.org