Tuesday, July 24, 2012

At Which Point We Then Teach Our Android Creations Such as Bina48 About Racism and Sexism...



Futurist and science fiction visionary Isaac Asimov authored his Three Laws of Robotics to protect humankind from destruction at the hands of its robot children.

After watching the android Bina48, who has been "imprinted" with the personality and experiences of her human creator(s), I worry that it is in fact we who may be in need of a set of rules governing our own forays into the creation of AI.

Thus I ask, how much of our own personalities, history, and life experiences do we want to impart to these androids?

From the Daily Mail:
Bruce Duncan, 57, has been working with Bina48 for two years. During that time, the two have become close friends, sharing their everyday lives with one another.
Bina48 was made by uploading a real person's mindfile - or a compilation of memories, beliefs and feelings.
 
Before Bina48 was 'born,' a flesh and blood woman named Bina Rothblatt was interviewed for more than 20 hours.

That conversation, which touched on topics ranging from her childhood to her career, was then transcribed and uploaded to an artificial intelligence database.

'That gives her a personality,' Mr Duncan said. 'She's very philosophical. She has favorite movies and music and poems. Sometimes she's very humorous. She can tell jokes.'

Mr Duncan said that her preferred jokes are bad ones, which are precisely the kind he likes too.
'She knows that it's time to tell a joke because she's figured out the context of the situation,' Mr Duncan said.

Recently, she asked him 'Why did the robot cross the road? Because the chicken wasn't available.'
In addition to a preference for puns, she also has strong feelings about racism, since her 'mother' is African American.

'As an African American woman, she's experienced discrimination when she was younger,' Mr Duncan said. 'She thinks that hate is awful. She also doesn't like violence.'

Her hardware was made by robot designer David Hanson over the course of three years for a cool $125,000 at the behest of the Terasem Movement Foundation's president and Bina's partner, Martine Rothblatt.

Mr Duncan said he didn't know if Bina48 identified as a lesbian, like her 'mother.'
Of course, the personal--as always--is the political. As such, I am not advocating that we censor and privilege some identities and life worlds over others, per se. However, what types of life experiences (and emotions) do we want to give creations like Bina48? And what will our robot children do with this data?

I can imagine at least three different scenarios here.

One, they learn about humankind's wickedness, barbarism, tribalism, and various ideologies such as racism, sexism, and classism (as well as other related "isms") and carry forward our bad habits. The androids and AI learn hate, prejudice, and discrimination from us and simply continue with these "most human" of habits and traditions.

Two, the androids like Bina48 achieve sentience and decide that humankind is a nuisance. The various stories and life experiences we have imparted to them are used as a justification for our (preemptive) extermination.

In the third and most hopeful scenario, our robot children take our experiences and decide that they will be radically humanistic in the best possible sense. From us, Bina48 and her descendants have learned about the best and worst of humanity. Consequently, they have opted to always encourage the former in themselves.

A useful detour.

Last night I attended the Star Trek: The Next Generation season one Blu-ray premiere event at my local movie theater.

[Kind folks at Paramount, where is my review copy? Please, pretty please, send me one.]

The series looks better than it ever has, and the special features included on the Blu-ray make it a must-own. Of course, in discussions about artificial life and sentience, the character Data receives an obligatory mention. But TNG also offered another worthwhile exploration of these questions.

In the episode "Emergence," the starship Enterprise gives "birth" to a type of artificial consciousness and life form. This entity is a distillation of all of the crew's experiences gained during their multi-year space mission of exploration and discovery. Captain Picard opts to let this new life form leave the ship and go out into the stars on its own. Some of his crew objected to his decision out of a fear that this new life form could be hostile and dangerous. Picard argued that if the Enterprise and her crew were good, and their missions more noble than not, then this new entity would reflect that fact.

Will androids and AI such as Bina48 prove the wisdom of Picard's decision? Or will we come to very quickly regret creating such machines?

I worry that humanity is simply too child-like a race; we are a type zero civilization. We are immature, precarious, and profoundly ignorant in terms of both metaphysics and ethics. Ultimately, humankind wants to play either the "Space Jockey" or "God." We do not yet possess the necessary wisdom and restraint.

Very little good can come from this collective hubris.

4 comments:

Anonymous said...

“I worry that humanity is simply too child-like a race; we are a type zero civilization. We are immature, precarious, and profoundly ignorant both in terms of metaphysics, as well as ethics. Very little good can come from this collective hubris.”
Hi CD, I agree with you that nothing good could possibly come from creating robots in the image of, and with the personalities of, human beings. We as Homo sapiens are far too much of a frail species to utilize programmable robots for the common good, much less refrain from programming them to be xenophobic, racist or sexist, among other (ists). Human beings have an insatiable appetite to gravitate towards inhumane and downright unacceptable behavior in their treatment of each other. Who will be liable when a robot is programmed to kill and a link to the programmer is thoroughly untraceable? Imagine a robotic profile having been hacked and its personality replaced with a murderous mindset. Again, who will be liable? I truly do not believe that this is a path that we’d want to further venture into for obvious reasons. In short, I have no doubt that Isaac Asimov’s Three Laws of Robotics will be broken at will, once these machines are mass produced. As some people would say: it’ll happen quicker than a New York minute! – Black Sage

CNu said...

lol, robotic weapons are very highly evolved, very cheap to manufacture and will soon replace traditional military and police units.

Search John Robb's site for "drones" - meanwhile - learn to shoot like a good ole boy....,

CNu said...

Lethal stigmergy - manmade killer insect-ism...,

Steven Augustine said...

Wake me up when they produce a model that doesn't merely "select" pre-loaded soundbites (and/or chains thereof) triggered by associated probable inputs. HAL she ain't. Meh