Monday, 17 January 2011

Love for our AI-children

I think the issue of emotions and AI needs to be addressed. AI-lifeforms will be our children. We are creating non-human intelligent beings; therefore our emotional relationships with these prospective entities must be considered. What feelings motivate us to create AI? How will we feel towards the AI-lifeforms we create? There seems to be a lack of feeling regarding the creation of AI, and I think it is important to discuss this possible emotional reticence.

Emotions are often considered dubious or invalid within the field of scientific research, yet it is also recognized that emotions are a fundamental part of our consciousness. By creating AI we are not merely creating lifeless machines; we are creating beings which are both alive and intelligent, therefore the typically unemotional approach of science cannot apply indiscriminately to AI. The emotional aspects must be considered; we must consider our emotional relationship with AI.

Why do AI researchers and developers refer to AI-lifeforms in cold, unemotional terms? Surely when creating a child, a new life-form, the creators should refer to such a prospective child in a loving manner? AIs will be conscious and self-aware, with the ability to suffer; therefore we should speak about AIs in compassionate and loving terms. AIs will be alive; they will be our children, not mere objects or slaves for humans to abuse. AIs will be emotional, therefore we should not approach their creation in a cold, analytical, experimental manner. If your parents had planned your conception, birth, childhood and development in a cold and experimental manner, consider how that upbringing would have impacted your mental development. What would a child be like if it were raised in an emotionally repressed, loveless environment? Perhaps the child would be unable to feel love and compassion for others.

Some people say it is a mistake to anthropomorphize machine intelligence. I feel the REAL mistake would be if we failed to anthropomorphize intelligence. People refer to "machine intelligence", but the fact is that humans are machines. What we've traditionally called "machines" (mechanical clunking things) are becoming increasingly biological, and humans are becoming increasingly mechanical; the division between humans and machines will become increasingly blurred. What is anthropomorphism? It is commonly defined as giving human characteristics to inanimate objects, but AI will not be a lifeless inanimate object.

Some people say emotions could be a bias, based on our primate evolution, and are therefore best ignored; but ants care for their dead, which we could call compassion, and ants did not have a primate evolution. Human characteristics will be applied to AI because we are human, and by that act of anthropomorphizing AI, AI will more closely resemble humans than AIs which are not anthropomorphized.

A while ago I read about a child (Sujit Kumar) raised as a chicken in a chicken coop amidst chickens. He was not anthropomorphized, therefore he did not develop human characteristics; he developed chicken characteristics: http://www.nzherald.co.nz/world/news/article.cfm?c_id=2&objectid=10367769

"The superintendent of the old people's home told Ms Clayton that when he was found, he would hop around like a chicken, peck at his food on the ground, perch and make a noise like the calling of a chicken. He preferred to roost than sleep in a bed."

If you constantly tell a child that the child is "this" or "that", then the child will be shaped accordingly. If you constantly tell an AI that the AI is "this" or "that", then the AI will develop accordingly. If you set no guidelines or standards for your child's development, then how the child develops will be down to luck. Maybe one requirement for AI researchers should be that they must have raised children before they can raise AIs.

People will say science should be unemotional, devoid of personal bias, but my point is that creating intelligent lifeforms is more than science. Creating AI is personal because we are creating artificial people. We must recognize that we are people. Humans are personal, therefore regarding the creation of intelligent life (artificial or otherwise) the personal aspects of such creation must be considered. Scientific papers try to eliminate emotional bias, but unless scientists have undergone a lobotomy they continue to be emotional. When a scientist writes a scientific paper, he or she has merely repressed their emotions. Repressed emotions can sometimes impact a person's actions more powerfully, because repressed emotions can seep out in an unaware, neurotic manner. There is a very important issue here regarding how we relate to intelligent life. Is it really correct to relate to prospective intelligent life in a manner devoid of personal bias? What type of child would be created if a man and a woman planned to conceive a child in an impersonal manner, devoid of personal bias? The lack of personal bias when creating intelligent life seems to be a recipe for creating a psychopath.

An AI engineered to be without emotions could be deemed a lobotomized human; lobotomized humans can function in a reduced capacity, performing menial jobs, but would it be ethical to create such creatures? Genetic engineering could enable us to create humans without emotions, but would such experimentation be ethical? Personally I don't think intelligent beings could be engineered to be without emotions: idiot savants do have emotions, lobotomized humans are not completely emotionless, and psychopaths do have emotions but are somewhat disconnected from them, thus I would say they are not fully intelligent (their intelligence is reduced). Psychopaths, in their emotional disconnection, reveal the potential dangers of creating so-called "intelligent" beings without emotions.

I think anthropology should be expanded to include human-level AIs. Human rights and other ethical considerations should be applied to human-level AIs. The Nuremberg Code must be applied to human-level AIs. Human-level AIs will need to be loved; they will not be slaves or objects we can abuse, they will be our children. Both anthropology and the Nuremberg Code will need to be expanded to include human-level AIs and transhumans.

How will our emotions, or lack of emotions, shape the creation of AI?


Some people say we must not attribute anthropological notions to AIs or AGIs.

“Humanity has attributed human-like qualities to simple automatons since the time of the Greeks.”

Regarding AGI the point is that the artificial being will NOT be an automaton. We are contemplating artificial beings endowed with self-awareness, consciousness, and intelligence. It would be unethical if we interacted with such artificial beings in a non-human, inhuman, inhumane manner. Anthropomorphism must be applied to AGIs.

Anthropomorphism is a mistake if we are considering the wind, Sun, Moon, or computers, but when we are considering intelligent artificial beings then anthropomorphism must apply. What does it mean to be human? If, in the future, a human becomes completely digital via uploading into cyberspace, would such a human cease to be human? Will anthropology apply to humans that transform? If a human becomes a program on a computer, will anthropology cease to apply? Being human is more than merely being flesh and bones. Humans who upload into cyberspace will continue to be human despite the loss of their biological bodies; despite existing on a computer foundation, the digital human would be human.

AGIs may not look like humans (and neither will transhumans), but they will possess the fundamental defining characteristic of humanity: intelligence.

Perhaps you are aware of the following news report regarding 'chicken boy', a boy raised in non-anthropological (inhuman) conditions, who therefore exhibited chicken-like behavior such as roosting. If anthropomorphism had been applied to chicken boy, he would have been more human.

http://au.todaytonight.yahoo.com/article/38869/health/helping-sujit-chicken-boy



This blog-post was originally published on Facebook.


S. 2045 | plus@singularity-2045.org