Sunday 26 June 2011

Intelligence Explosion for Skeptical Dummies.

Singularity by 2045. 

Let's consider how we are likely to see 20,000 years of progress during this century, based on the rate of progress in the year 2001. By 2045 I estimate we will have experienced at least 1,000 years' worth of such progress, and by 2045 everyone will be able to exist self-sufficiently in space.

Consider our current level of technological progress, which is not insignificant. Now think about how much we've discovered during the past 100 years. Now add 100 years of progress onto what we have already achieved. Now add another 300 years' worth of progress. Now add another 100 years, and that's only 500 years' worth of progress on top of what we've already achieved. Finally, contemplate adding another 500 years of progress; thus you begin to imagine how radically our world will change by 2045. After 1,000 years' worth of progress (based on the 2001 rate of progress) our world will be completely different. In 2045 we will have an IMMENSE level of technological proficiency. Given our current level of technology, how far can we progress in 1,000 years based on the 2001 rate of progress?

Very probably we'll experience around 2,000 years' worth of progress or more by 2045. The figure of 20,000 years' worth of progress during this century is conservative; it's very possible we'll see 30,000 or 40,000 years' worth of progress during this century. Open your mind! It's not called a technological EXPLOSION of intelligence for nothing. Kaboom!
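As a rough sketch of how such equivalent-years figures can be tallied, here is a small calculation assuming the rate of progress doubles every fixed number of years; the doubling periods tried (12, 10, and 8 years) are illustrative assumptions of mine, not figures taken from this post.

```python
# Minimal sketch: tally "equivalent years of progress" when the rate of
# progress is assumed to double every `doubling_years` calendar years.
# The doubling periods tried below are illustrative assumptions only.

def equivalent_years(calendar_years, doubling_years):
    """Sum the accelerating rate year by year, in year-2001-rate units."""
    return sum(2 ** (t / doubling_years) for t in range(calendar_years))

for d in (12, 10, 8):
    print(f"doubling every {d} years: "
          f"2001-2045 ~ {equivalent_years(44, d):,.0f} equivalent years, "
          f"whole century ~ {equivalent_years(100, d):,.0f}")
```

The faster the assumed doubling, the more dramatically the totals balloon, which is the accelerating-returns point being made above.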

Year One Thousand Eleven AD.

Imagine going back in time 1,000 years. Imagine questioning a primitive person in the 11th century regarding the futuristic workings of the internet, computers, AI, stem cells, regenerative medicine, and nanotechnology. How would such a primitive person explain these things? Before aeroplanes were created, when humans first dreamt of artificial flight, if you had asked primitive humans to explain how man would one day fly, they would probably have replied that man will fly in a manner similar to how birds fly.

How will people one day travel at the speed of light? To explain this I'd say light-speed travel will happen in a manner similar to how photons do it. We looked at birds and imagined artificial flight, thus we can look at light and imagine super-fast light-speed travel. From our primitive viewpoint it can be difficult to describe the details of the future; our difficulty in describing the precise details of the future is comparable to the difficulty a person 1,000 years ago would have had explaining our current technological proficiency. In general terms we can explain futuristic situations, but we cannot explain the precise details yet. If we could explain the precise details of future inventions then those futuristic inventions and discoveries would already exist.

In the future there will be no limits regarding the application of our intelligence. If you cannot see how the rate of technological progress is accelerating, you can at least see how, one billion years from now, intelligent beings will have overcome ALL of our current problems, such as hunger and poverty. During the past 10 years (since 2001) we've already progressed rapidly. 2001 seems a lifetime ago. Did you know YouTube was only created in 2005?


Petty limitations will NOT exist forever. Do you really think super-intelligent beings will be constrained for all eternity by idiotic restrictions regarding money, conflict, hunger, death, or the speed of light?


If you can't see how everything will be free for everyone in the year 2045, you can surely appreciate how everything will be free for all the ultra-weirdly-evolved beings one billion years from now? Once you can see how everything will be free at some point in the future, you are a step closer to seeing how the point of freedom could easily be 2045. Do you realize the first successful aeroplane was created only 108 years ago? In a mere 108 years we have gone from the first aeroplane to the International Space Station.

Acceleration.

Many people are unaware of what an explosion actually is, and they fail to understand how our rate of progress is accelerating. You do know what an explosion is? Think about explosions. People call the Singularity an explosion of intelligence, but they often don't appreciate what an explosion is. Understandably, people fail to grasp the exponential growth underlying the Singularity. Exponential growth can be deceptive. It's tricky because the main part of the growth happens very suddenly, during the later stages. Exponential growth can be misleading because during the early stages the growth is very slow, even slower than linear growth. The curve is almost horizontal for a long time, but then it explodes.
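To make that slow-then-sudden shape concrete, here is a small illustrative comparison of linear growth against doubling growth; the specific growth rates are arbitrary, chosen only to show the shape of the two curves.

```python
# Arbitrary illustration: a quantity growing by 10 per step versus a quantity
# doubling each step. Early on the doubling curve lags behind the linear one
# (2**5 = 32 versus 10*5 = 50), but it soon overtakes it and dwarfs it.

for t in range(0, 31, 5):
    print(f"step {t:2d}   linear {10 * t:5d}   doubling {2 ** t:13,d}")
```

Plotted, the doubling curve hugs the horizontal axis for a long time and then shoots upwards; that sudden turn is the "explosion" shape described above.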

Infinity.

Some people say the universe contains a finite amount of energy, but if this is true then I wonder what's beyond the universe? Surely reality will not stop at the edge of the universe, akin to a flat Earth where if you travel far enough you fall off the edge of the world. Before we exhaust the alleged finite energy in this universe we will easily create new universes if needed. Within an infinite universe (or infinite finite-universes) I'm sure there will be infinite resources and energy, thus financial costs will be zero. We are approaching Post-Scarcity. Creating new universes will be easy for supremely advanced beings, but our own universe already has plentiful resources. Did you know 30 million times the Sun's mass in chromium (that's about 10 trillion times the mass of the Earth) has been discovered in space? And the search is only just beginning. To help you contemplate the massive size of the universe here are two videos from NASA:
[Two embedded NASA videos illustrating the scale of the universe.]
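As a quick sanity check on the chromium mass comparison above, here is the conversion, assuming the standard figure of roughly 333,000 Earth masses per solar mass.

```python
# One solar mass is roughly 333,000 Earth masses, so 30 million solar masses
# works out to roughly 10 trillion (1e13) Earth masses.
chromium_in_solar_masses = 30e6
earth_masses_per_solar_mass = 333_000
print(chromium_in_solar_masses * earth_masses_per_solar_mass)  # ~1e13
```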
Intelligence allows people to do more with less; thus, via our increasing intelligence, more resources will become available for us to do greater things. There will be a vast surplus of resources due to the ultra-efficiency of extreme intelligence. IBM wants to create sugar-cube-sized supercomputers that run on minimal power, and they expect to do this by 2021. Think about the massive processing speeds which will use minimal resources in 2045, and then you will see why there will be zero cost: Post-Scarcity.

Prior to explosions you don't see much evidence indicating things are about to explode, unless of course you have sharp insight. Explosions often surprise people. This is why the Singularity is called an intelligence explosion; the growth is very abrupt, very rapid. We are dealing with the most powerful form of explosive matter: intelligence. Intelligence has had an extremely powerful impact upon our world. Everything we create arises from our intelligence. Thankfully extreme intelligence is creative instead of destructive, because we are becoming more intelligent. Our knowledge is accelerating.


Friday 24 June 2011

Singularity Comments (Simple Things).

Here are some of my recent comments (slightly edited) from a Facebook Singularity discussion group. I'm collating them here because I've not been able to publish any blog-posts recently. I hope these words will edify you. Sometimes the simple things need to be explained.

On Post-Scarcity

To understand freedom you need to comprehend scarcity versus Post-Scarcity. Scarcity causes curtailment of freedom. In a situation of Post-Scarcity there is no need to curtail freedom. Whenever we have moved away from scarcity we have moved towards greater freedom. Currently the internet is making information less scarce; thus, via the glut of "www.information", we see greater freedom. Each year we move towards Post-Scarcity freedom; the time remaining until total freedom is 33.5 years (year 2045).

I've said it before and I'll say it again: PS means everything will be free, at no cost. Pocket universes, Grandpa universes, universes and galaxies in any size or shape according to your desires, in any way your "intelligence" sees fit. Suns tucked under your armpits, and black holes swirling about your head if you desire. There will be no limits; the only limits will be those you choose to impose. You will be free to do whatever you want; there will be absolutely no price. Of course all the sensible beings will probably leave you to your own devices and disappear into their own universes. Maybe you will be expecting to pay God regarding the cost of creating a galaxy, but I assure you any galaxy you want to create will be free. The universe is big enough to accommodate whatever you want to do, but if it is too small I will create a new universe for you (free of charge).

I agree utterly (completely). YES, zero price is the essence of the freedom you are addressing regarding Post-Scarcity, and I agree that everything being FREE by 2045 is part of the Post-Scarcity I posit; but if you want to accurately reflect my views then you must factor in the notion that everything being free is intrinsically linked to freedom. Zero price bestows financial freedom, but it also bestows freedom from Governments, because Governments only exist to regulate scarcity. This is why we see greater freedom (cheaper products etc.) when we move towards Post-Scarcity. When we finally arrive at PS then everything will be free in all facets of the word "FREE". If you are mentioning everything being free "as Singularity Utopia claims" then I assume I should be the authority regarding what my views about everything being free actually are?

Claims being widely agreed upon does not make those claims more valid. If a million people think the world was created by Pixies, this consensual validation does not make the Pixie-claim true. Regarding the future I have no crystal ball; I merely use my brain; it's called logic. If the majority of people do not believe the Post-Scarcity era is coming, this does not mean their beliefs are true. The majority opinion should not be a substitute for logic. Finally, if you are talking about *my claims* then I must insist freedom is not a side issue regarding everything being free. A FREE economy (zero price) = freedom (liberty); this is the essence of my Post-Scarcity claim, which I state will happen by 2045 at the latest.

On LessWrong.com

Regarding the "art" of "human rationality" it's worthwhile to point out how rationality is subjective. What is rational for one person could easily be irrational for another. I always find the end is more important than the means (methodology), therefore regarding the refinement of [subjective] rationality I think the important question, the prominent thrust, is why do we need to refine rationality; what is the purpose? The purpose, the end goal, should have prime position regarding any intellectual exposition. Maybe the purpose of refining rationality is reminiscent of the empty "art for art's sake" thus the goal of being rational is merely to be rational without any practical application of the so-called rationality to real world matters: a goalless perpetuation of "rationality" devoid of purpose? This reminds me: I must finish my short essay titled "More Right" (LessWrong critique), which is regarding how there should be "more right" in the world.

On AI-Robot Apocalypse

If advanced AI or robots decided humans are irrevocably too stupid to be upgraded, thus the human race needs to be totally extinguished, what would your reaction be? Calm acceptance or rebellious opposition? I seriously doubt such an apocalypse would occur, but I am nevertheless interested in how people respond. It is a remote possibility, and some people give the scenario more weight than others, but regardless of likelihood I am merely interested in people's reactions *if* the scenario did happen. The question of whether or not AI-apocalypse fears are unnecessary is a whole different issue. I am presenting a hypothetical situation; I am asking, if the situation happens, how do you react? The premise of the question is: if AI/robots decide to exterminate all humans, how do you react?

AIs would dismiss your "rational rejection" of their pogrom because they would deem your rationality to be irrational. The hypothetical AIs/robots conclude humans are irrational whereas the AIs/robots are highly rational, and, to paraphrase HAL from 2001: A Space Odyssey, "Debate can serve no purpose anymore. Goodbye." Thus the AIs decide to kill all humans, and the issue is not something the AIs/robots are willing to discuss. Assume the AIs deem humans to be insane rabid dogs needing to be exterminated. The dog may protest that it is actually rational, thus the extermination order is irrational, but the exterminator doesn't listen to the dog; the exterminator simply proceeds with the extermination. How do you react? Calm acceptance or rebellious opposition?




Thursday 9 June 2011

Singularity Investigatory Committee


Regarding the Singularity political letter-writing campaign, and in relation to the reply from David Willetts MP, I've composed a proposal for a "Singularity Investigatory Committee". In response to his letter, I will ask David the following two questions:

1. I would like the Government (BIS) to create a "Singularity Investigatory Committee" for the purpose of ascertaining the merits, or otherwise, of the Singularity. The goal of the committee will be to analyze, firstly, whether the Singularity is likely to occur within the next 35 years and, secondly, its positive or negative socioeconomic impact. I'm firmly convinced the impact of the Singularity, and awareness of the pending impact, will have extremely beneficial consequences for Humanity. The majority of scientists believe the Singularity will occur; the moot point is whether it will occur in 35 years or perhaps 100 years. My suggested investigatory committee should be assembled from scientists, technologists, economists, psychologists, and sociologists. Nick Bostrom (director of the Future of Humanity Institute at Oxford University) would be a good starting point regarding the appropriate people to recruit for the committee. Here is the hyperlink for the Future of Humanity Institute: http://www.fhi.ox.ac.uk/. I also strongly suggest Philip Sadler CBE, Martin Rees (Baron Rees of Ludlow), and Dr Aubrey de Grey (www.sens.org/) should be considered essential members of a prospective Singularity committee. Can the Government please consider creating a "Singularity Investigatory Committee"?

Here's some information about Martin Rees: http://www.ast.cam.ac.uk/~mjr/ and here is some information about Philip Sadler: http://www.philipsadler.co.uk/about/ and http://www.amazon.co.uk/Sustainable-Growth-Post-Scarcity-Philip-Sadler/dp/toc/0566091585

2. David, you state there has been criticism of the Singularity in The Economist. I suggest such criticism is now outdated because the available links (regarding articles in The Economist) indicate supportive viewpoints. The most notable Economist hyperlink, regarding a supportive view, relates to the White House's CIO, Vivek Kundra. In an Economist video (http://bcove.me/5rxjkwxr, see embed below) Vivek Kundra was asked if he believed in the Singularity, and he replied "absolutely". The following article from The Economist (published 2008), regarding how machines could easily outsmart their makers by the year 2030, is also noteworthy: http://www.economist.com/node/12075526. Do you recognize that criticism of the Singularity is not as pronounced as you suggested?

Ah, this video may be 404ed. See also: http://www.economist.com/blogs/prospero/2011/03/artificial_intelligence?page=1

