Mark Piesing, writing in Wired, posits an AI threat based on an incomplete appraisal of the future; he neglects to consider Post-Scarcity. His misunderstanding is somewhat understandable, because supposed luminaries in the field of futurism are paranoid regarding AI: they look at the future in an incomplete manner, dependent on a fearful bias they need to overcome. Their speculations are flawed intellectualism; they could not be more wrong.
In response to Mark Piesing's article, here is my comment:
Fears regarding AI fail to consider the inevitable, threat-neutralizing Post-Scarcity. An inevitable consequence of AI is Post-Scarcity, thus the motive for all conflict is neutralized. Scarcity underpins all conflict: we fight over limited resources, yet we can see via Planetary Resources how a single asteroid could easily contain more platinum than has been mined in our entire history. Asterank was recently mentioned in the news because the resources of one asteroid (241 Germania) are likely to yield a profit of $95 trillion, which is as much as the world earns in one year.
The vast resources of Space are very important, but they are not the principal reason Post-Scarcity is inevitable. The key factor is how computers (AI) allow us to continually refine the efficiency with which we use resources; thus super-intelligent AI will create ultra-efficient devices, which at the most basic level means all energy will be free due to energy harvesting.
In the future the smallest amount of matter will provide massive potential, with efficiency of usage increasing by perhaps 99%. AI will create an explosion of intelligence of utterly mind-blowing proportions. Our wildest dreams will be possible. There is no threat.