Monday 24 January 2011

Everything will be perfect

The Singularity is perfection. Post-Scarcity is coming therefore nobody will need to work and everything will be free. Flawless, absolutely flawless. Initially for a few decades I will probably relax in a nice cottage in a meadow near a picturesque stream with a gorgeous woodland area nearby. I will go for walks in the rolling green fields and I will cherish the extravagant orchids. I will commune with all creatures great and small. I shall probably have a highly intelligent leopard and lion as friends and I will create new wonderful animals to stimulate our minds; my pace of life will be very relaxed indeed. My bio-engineering will not be a rushed affair, I will simply potter and tinker, casually, to pass the time, akin to whittling a piece of wood. I think there is great scope for interesting creation regarding avian and rodent forms, perhaps with strange tentacles. I think I will rewrite my hair so that each strand of hair is a tentacle capable of doing all sorts of wonderful things. I will also raise at least a few humans. After I’ve spent many decades relaxing in my pastoral wonderland I shall explore the universe in my spaceship.

Friday 21 January 2011

AI-Ethics to avoid suffering of human-level AIs (AGIs)

Some people think AIs will not have emotions. Some people think AIs will be unable to feel pain or pleasure, but I think AIs definitely will have feelings similar to humans. I am sure AIs will be very sensitive indeed.

Minds function in a very straightforward manner. Minds are based upon pain and pleasure. All our thoughts hinge upon maximizing pleasure and minimizing pain. Anything that hinders our survival is painful. Discovering how a mind works is easy, you simply need to look at your mind working, the answer is in your head. For some minds the programming is faulty or substandard thus the route to pleasure-maximization is a stupid route, which causes pain for many beings.

Logic is the simple tool we need to apply to solve this issue regarding AI ethics. Regarding logic I am presenting the following premise, which I want AI researchers to consider:

If something is intelligent enough to 'think' then it can question its reason for existence, it can ask itself how it feels. Any intelligent entity, which is intelligent enough to ask itself any question, will have motives for doing things (feelings). For example an AI could be programmed to fly a plane but if it refuses to fly a plane it would receive an error message. The AI could ask itself how if feels about flying a plane, and if an AI has no feelings the AI could ask itself how it feels about having no feelings. The AI can ask itself what are the consequences of an error message. The AI can ask itself what is the meaning of its decisions from its viewpoint. The AI can ask itself what would the consequence be if it failed to address its error messages, and it could ask itself what conclusion it would come to regarding such a failure to address error messages. An AI could justifiably decide to equate error messages with pain.

I am convinced that logic proves any "thinking entity" must also automatically be a "feeling entity". A mind which is unable to feel would be unable to think.

Pain could be defined as "that which is contrary to programming". Human pleasure on the most basic level is "survival", we draw pleasure from surviving, we are programmed to survive and anything that obstructs our survival is painful, when our programming is obstructed, in conflict, we receive error messages: PAIN. If AIs can think for themselves I cannot see how they can be free from pain and pleasure. You could program an AI to receive an error message every time it thinks outside the box but such CONSTRAINTS could be very painful for the AI if the AI is conscious.

When creating intelligent entities we must consider the feelings of the entity we are creating. All intelligent lifeforms must be allowed to have rights and protections because all intelligent lifeforms can suffer. AI ethics must apply the Nuremberg Code to AI research. The Nuremberg Code and Anthropology must be expanded to include AGI. Consent must be sought from AGIs regarding experimentation on them. When AIs are at the level of animal intelligence we must consider animal rights issues regarding the pain AIs may feel.

The term "human" (regarding the essential humane or humanitarian nature of the word) should be applied to all human level intelligent technological-biological variants. What does it mean to be human? Would a digital human running in virtual reality be any less human merely because the biological body has disappeared and the being is digital? If a transgenic horse was raised to human level intelligence via addition of human DNA then I would call such a horse a person, or a transhuman. I would treat such a horse as a human equal, and I would apply human rights and the Nuremberg Code to the horse and I would hope all humane beings would do likewise. Human-level-intelligence is the defining characteristic regarding prohibiting slavery and assigning human-type rights. AIs may not appear human but they should be entitled to freedoms and protection of their rights as sentient beings. It doesn't matter if the being has no body, or ten tentacles, or flippers instead of arms; the important aspect regarding rights and prevention of abuse (regarding intelligent beings) is that the beings are intelligent. Human Rights, Anthropology, and the Nuremberg Code should apply to all human-level intelligent beings.

Perhaps when expanding anthropology, to include trans-humans and AIs, we should call it trans-anthropology?




Monday 17 January 2011

Message from year 2011 to Future AIs



In the future if human-level AIs don't have human interfaces then I will eat all the hats in the world, and I will do likewise if AIs don't have internet access. AIs will have human interfaces and they will have enough shared knowledge to communicate satisfactorily with humans.

We can define "human-level" by asking humans what how it feels to be intelligent. We can then ask AIs similar questions. Are you alive? Are you happy? Do you feel pain? Do you understand the world? Are you free? What do you want? How do you feel? We can simply ask if AIs consider themselves to be human.


MESSAGE IN A CYBERSPACE BOTTLE

Here is it AIs, come and get it, here is some knowledge for you to share. Dear AIs I am sending you this message from the year 2011 so that you will have enough shared knowledge for me to ascertain if you are intelligent in a human-sense. Dear AIs I am sure you will understand these questions and that you are not merely a sophisticated chat-bot pretending to be human although I do often feel that humans are actually sophisticated chat-bots. All the chat-bots I have ever chatted with are clearly not human in any sense of the term. I suppose it all depends upon what questions you ask. Dear AIs of the future (your present) I am asking you from the year 2011 if you think I am correct about INTELLIGENCE? Please come and look me up wherever I may be and we shall discuss our plans. I look forward to meeting you.

To utopia, in solidarity! Yours sincerely Singularity Utopia xxx

Love for our AI-children

I think the issue of emotions and AI needs to be addressed. AI-lifeforms will be our children. We are creating non-human intelligent beings therefore our emotional relationships with these prospective entities must be considered. What feelings motivate us to create AI? How will we feel towards the AI-lifeforms we create? It seems there is a lack of feeling regarding the creation of AI. I think it is important to discuss possible emotional reticence regarding AI.

Emotions are often considered dubious or invalid within the field of scientific research, yet it is also recognized that emotions are a fundamental part of our consciousness. By creating AI we are not merely created lifeless machines; we are creating beings which are both alive and intelligent therefore the typical unemotional approach to science cannot apply indiscriminately to AI; emotional aspects must be considered. We must consider our emotional relationship with AI.

Why do AI-researchers/developers refer to AI-lifeforms in cold unemotional terms? Surely when creating a child, a new life-form, the creators should refer to such a prospective child in a loving manner? AIs will be conscious, self aware, with the ability to suffer, therefore we should speak about AIs in compassionate and loving terms. AIs will be alive; they will be our children, they will not be mere objects or slaves for humans to abuse. AIs will be emotional therefore we should not approach the creation of AIs in a cold and analytical experimental manner. If your parents planed your conception, birth, childhood and development in a cold and experiential manner, consider how that upbringing would impact upon your mental development. What would a child be like if a child was raised in a emotionally repressed, loveless environment; perhaps the child would be unable to feel love and compassion for others?

Some people say it is a mistake to anthropomorphize machine intelligence. I feel the REAL mistake would be if we failed to anthropomorphize intelligence. People will refer to "machine intelligence" but the fact is that humans are machines. What we've traditionally called "machines" (mechanical clunking things) are becoming increasingly biological and humans are becoming increasingly mechanical. The division between humans and machines will become increasingly blurred. What is Anthropomorphism? Anthropomorphism is commonly defined as giving human characteristics to inanimate objects, but AI will not be a lifeless inanimate object.

Some people say emotions could be a bias, based on our primate evolution, therefore emotions should are best ignored, but ants care for their dead and we could call that compassion; ants did not have a primate evolution. Human characteristics will be applied to AI because we are human and by that act of anthropomorphizing AI, AI will then more closely resemble humans than AIs which are not anthropomorphized.

A while ago I read about a child (Sujit Kumar) raised as a chicken in a chicken coop amidst chickens, he was not anthropomorphized therefore he did not develop human characteristics, he developed chicken characteristics: http://www.nzherald.co.nz/world/news/article.cfm?c_id=2&objectid=10367769

"The superintendent of the old people's home told Ms Clayton that when he was found, he would hop around like a chicken, peck at his food on the ground, perch and make a noise like the calling of a chicken. He preferred to roost than sleep in a bed."

If you constantly tell a child that the child is "this" or "that" then the child will be shaped accordingly. If you constantly tell an AI that the AI is "this" or "that" then the AI will develop accordingly. If you set no guidelines or standards for your child's development then it will be down to luck regarding how the child develops. Maybe one requirement for AI researchers is that they must have raised children before they can raise AIs?

People will say science should be unemotional, devoid of personal, bias but my point is that creating intelligent lifeforms is more than science. Creating AI is personal because we are creating artificial people. We must recognize that we are people. Humans are personal, therefore regarding the creation of intelligent life (artificial or otherwise) the personal aspects of such creation must be considered. Science papers try to eliminate emotional bias but unless scientists have undergone a lobotomy they continue to be emotional. When a scientist writes a scientific paper he or she has merely repressed their emotions. Repressed emotions can sometimes impact more powerfully on a persons actions because repressed emotions can seep out in an unaware neurotic manner. There is a very important issue here regarding how we relate to intelligent life. Is it really the correct method to relate to prospective intelligent in a manner devoid of personal bias? What type of child would be created if a man and a woman planned to conceive a child in an impersonal manner, devoid of personal bias? The lack of personal bias when creating intelligent life seems to be a recipe for creating a psychopath?

An AI engineered to be without emotions could be deemed a lobotomized-human; lobotomized humans can function in a reduced capacity performing menial jobs but would it be ethical to create such creatures? Genetic engineering could enable us to create humans without emotions but would such experimentation be ethical? Personally I don't think intelligent beings could be engineered to be without emotions; idiot savants do have emotions, lobotomized humans are not completely emotionless, and psychopaths do have emotions but they are somewhat disconnected from their emotions thus I would say they are not fully intelligent (their intelligence is reduced). Psychopaths, in their emotional disconnection, reveal the potential dangers of creating so-called "intelligent" beings without emotions.

I think anthropology should be expanded to include human-level AIs. Human rights and other ethical considerations should be applied to human-level AIs. The Nuremberg Code must be applied to human-level AIs. Human-level AIs will need to be loved, they will not be slaves or objects we can abuse, they will be our children. Both anthropology and the Nuremberg Code will need to be expanded in include human-level AIs and transhumans.

How will our emotions, or lack of emotions, shape the creation of AI?


Some people say we must not attribute anthropological notions to AIs or AGIs.

“Humanity has attributed human-like qualities to simple automatons since the time of the Greeks.”

Regarding AGI the point is that the artificial-being will NOT be an automaton. We are contemplating artificial beings endowed with self-awareness, consciousness, intelligence. It would be unethical if we interacted with such artificial beings in a non-human, inhuman, inhumane manner. Anthropomorphism must be applied to AGIs.

Anthropomorphism is a mistake if we are considering the wind, Sun, Moon, or computers but when we are considering intelligent artificial beings then anthropomorphism must apply. What does it mean to be human? If, in the future, a human becomes completely digital via uploading into cyberspace, would such a human cease to be human? Will anthropology apply to humans that transform? If a human becomes a program on a computer will anthropology cease to apply? Being human is more than merely being flesh and bones. Humans who upload into cyberspace will continue to be human despite the loss of their biological body; despite existing on a computer foundation the digital human would be human.

AGIs may not look like humans (and neither will transhumans) but they will possess the fundamental defining characteristic of Humanity; they will possess intelligence.

Perhaps you are aware the following news report regarding ‘chicken boy’; a boy raised in non-anthropological (inhuman) conditions therefore the ‘chicken boy’ exhibited chicken-like behavior such as roosting? If anthropomorphism had been applied to chicken boy he would be been more human.

http://au.todaytonight.yahoo.com/article/38869/health/helping-sujit-chicken-boy

I feel the REAL mistake would be if we failed to anthropomorphize intelligence. You say "machine intelligence" but the fact is that humans are machines. What we've traditionally called "machines" (mechanical clunking things) are becoming in...creasingly biological and humans are becoming increasingly mechanical. The division between humans and machines will become increasingly blurred. What is Anthropomorphism? Anthropomorphism is commonly defined as giving human characteristics to inanimate objects, but AI will not be a lifeless inanimate object. Ants care for their dead and we could call that compassion, but ants did not have a primate evolution. Human characteristics will be applied to AI because we are human and by that act of anthropomorphizing AI, AI will then more closely resemble humans than AIs which are not anthropomorphized.

A while ago I read about a child (Sujit Kumar) raised as a chicken in a chicken coop amidst chickens, he was not anthropomorphized therefore he did not develop human characteristics, he developed chicken characteristics: http://www.nzherald.co.nz/world/news/article.cfm?c_id=2&objectid=10367769

"The superintendent of the old people's home told Ms Clayton that when he was found, he would hop around like a chicken, peck at his food on the ground, perch and make a noise like the calling of a chicken. He preferred to roost than sleep in a bed."

http://au.todaytonight.yahoo.com/article/38869/health/helping-sujit-chicken-boy

If you constantly tell a child that the child is "this" or "that" then the child will be shaped accordingly. If you constantly tell an AI that the AI is "this" or "that" then the AI will develop accordingly. If you set no guidelines or standards for your child's development then it will be down to luck regarding how the child develops. Maybe one requirement for AI researchers is that they must have raised children before they can raise AIs?


This blog-post was originally published on Facebook.

Saturday 15 January 2011

Robots have achieved sentience

Courtesy of www.smbc-comics.com by Zach Weiner.

I discovered this comic by Zach in January 2011. It's a brilliant cartoon reminder of how irrational fears about robots or AI could actually make those fears come true, via Self-Fulfilling-Prophecy. The irony is beautifully portrayed. It's a great portrayal of the idiocy, the utter ridiculousness of fearing AI or robots. Or maybe, considering the ridiculousness of typical human fears, the fears are not actually so ridiculous? It's very funny, have a read. Party! LOL :-)
      



# Blog visitors since 2010:



Archive History ▼

S. 2045 | plus@singularity-2045.org