Futurist and science fiction visionary Isaac Asimov authored his Three Laws of Robotics to protect humankind from destruction at the hands of its robot children.
After watching the android Bina48, who has been "imprinted" with the personality and experiences of her human creator(s), I worry that it is in fact we who may need a set of rules governing our own forays into the creation of AI.
Thus I ask, how much of our own personalities, history, and life experiences do we want to impart to these androids?
From the Daily Mail:
Bruce Duncan, 57, has been working with Bina48 for two years. During that time, the two have become close friends, sharing their everyday lives with one another.
Bina48 was made by uploading a real person's mindfile - or a compilation of memories, beliefs and feelings.
'She knows that it's time to tell a joke because she's figured out the context of the situation,' Mr Duncan said. Recently, she asked him: 'Why did the robot cross the road? Because the chicken wasn't available.'
Of course, the personal, as always, is the political. As such, I am not advocating that we censor some identities and life worlds, or privilege them over others, per se. However, what types of life experiences (and emotions) do we want to give creations like Bina48? And what will our robot children do with this data?

Again from the Daily Mail:

In addition to a preference for puns, she also has strong feelings about racism, since her 'mother' is African American.

'As an African American woman, she's experienced discrimination when she was younger,' Mr Duncan said. 'She thinks that hate is awful. She also doesn't like violence.'

Her hardware was made by robot designer David Hanson over the course of three years for a cool $125,000 at the behest of the Terasem Movement Foundation's president and Bina's partner, Martine Rothblatt.

Mr Duncan said he didn't know if Bina48 identified as a lesbian, like her 'mother.'
I can imagine at least three different scenarios here.
One, they learn about humankind's wickedness, barbarism, tribalism, and various ideologies such as racism, sexism, and classism (as well as other related "isms") and carry forward our bad habits. The androids and AI learn hate, prejudice, and discrimination from us and simply continue with these "most human" of habits and traditions.
Two, the androids like Bina48 achieve sentience and decide that humankind is a nuisance. The various stories and life experiences we have imparted to them are used as a justification for our (preemptive) extermination.
In the third and most hopeful scenario, our robot children take our experiences and decide that they will be radically humanistic in the best possible sense. From us, Bina48 and her descendants have learned about the best and worst of humanity. Consequently, they have opted to always encourage the former in themselves.
A useful detour.
Last night I attended the Star Trek: The Next Generation season one Blu-ray premiere event at my local movie theater.
[Kind folks at Paramount, where is my review copy? Please, pretty please, send me one.]
The series looks better than it ever has, and the special features included on the Blu-ray make it a must-own. Of course, in discussions about artificial life and sentience, the character Data receives an obligatory mention. But TNG also offered another worthwhile exploration of these questions.
In the episode "Emergence," the starship Enterprise gives "birth" to a type of artificial consciousness and life form. This entity is a distillation of all of the crew's experiences gained during their multi-year space mission of exploration and discovery. Captain Picard opts to let this new life form leave the ship and go out into the stars on its own. Some of his crew object to his decision out of fear that the new life form could be hostile and dangerous. Picard argues that if the Enterprise and her crew were good, and their missions more noble than not, then this new entity would reflect that fact.
Will androids and AI such as Bina48 prove the wisdom of Picard's decision? Or will we very quickly come to regret creating such machines?
I worry that humanity is simply too child-like a race; we are a type zero civilization. We are immature, precarious, and profoundly ignorant of both metaphysics and ethics. Ultimately, humankind wants to play either the "Space Jockey" or "God." We do not yet possess the necessary wisdom and restraint.
Very little good can come from this collective hubris.