The Three Laws of Humanotics: An Algorithmic Constitution
"The starry heavens above me and the moral law within me, the two things that fill the mind with ever new and increasing admiration and awe, the oftener and the more steadily we reflect on." (Immanuel Kant, from the conclusion of the second Critique, inscribed on his tomb)
Weeks ago, a question slipped into my cranium and refused to leave. It never stops bothering me, buzzing like a trapped fly that never pauses to rest. The question came in a reverie late one night after reading my fill of Rousseau's Social Contract. I had begun wondering, what will happen as computers become super-intelligent? How will interconnected, super-fast thinking machines affect the way we humans govern ourselves? What will happen as robots begin walking among us, robots that are continually connected wirelessly to this intelligence? How will that affect what we are, what we aspire to do? This, Socrates taught, is the noblest of all questions,
"Of all inquiries, Callicles, the noblest is that which concerns ... what a man should be, and what he should practice, and to what extent, both when old and when young." (Plato, Gorgias, 488a)
The imminent birth of the super-intelligence -- I can only call it super-intelligence because it is neither artificial nor human, but each magnifying the other -- has to change the equation of what we should do, and what we should practice, whether young or old, as individuals and as groups. Surely there will emerge a common language, not a human language or a computer language, not mathematics, the language of nature, but some combination of all three, that will guide and direct the plans of this networked super-intelligence. What will it be? And this (the other questions are just leading up to this question, the fly in my cranium) is the big question that has been bugging me:
"Will there ever be a cosmopolitan constitution written not in words but as an algorithm?"
As noted here not long ago, mathematics describes static relations, while algorithms are better suited to dynamic relations. An algorithm is like a recipe: a set of procedures carried out one by one, each action following the one before.
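To make that concrete, here is a small sketch of my own in Python (the compound-interest example is mine, not taken from anything I have been reading): the same fact can be stated as a static relation or carried out as a recipe, one step after another.

    # A static relation: the closed-form compound-interest formula.
    def balance_formula(principal, rate, years):
        return principal * (1 + rate) ** years

    # The same fact as an algorithm: a recipe applied step by step,
    # where each action follows the one before and could, in principle,
    # react to changing conditions along the way.
    def balance_recipe(principal, rate, years):
        balance = principal
        for _ in range(years):
            balance += balance * rate
        return balance

    print(balance_formula(100, 0.05, 10))  # about 162.89
    print(balance_recipe(100, 0.05, 10))   # the same result, computed dynamically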
By algorithmic I am not talking about a single program, but a program of programs, a language of languages, an interconnected, autonomous internet of thought, of no fixed size or dimension, a morphing entity operating in many minds and computers, one in use wherever people meet and consult, working all kinds of intelligence in concert to find and remember the best solution to every problem of governance.
Then this article from the New York Times shot through the pipes: "Entrepreneurs See a Web Guided by Common Sense," by John Markoff,
"Underscoring the potential of mining human knowledge is an extraordinarily profitable example: the basic technology that made Google possible, known as Page Rank, systematically exploits human knowledge and decisions about what is significant to order search results. (It interprets a link from one page to another as a vote, but votes cast by pages considered popular are weighted more heavily.) One example that hints at the potential of such systems is KnowItAll, a project by a group of
This direction of research offers the prospect of a tremendous leap in human intelligence. In the near future every move we make, every choice, every glance, every micro-expression on our face, will be a vote duly noted and used by somebody, whether we know it or not. For over two decades, scientists have been studying chaos theory to understand how swarming behavior works and how it can be harnessed. Swarming takes place throughout nature; chaos exists even in simple systems, and swarming emerges from it. It is the old myth of Uranus emerging from chaos, played over and over again wherever we look, in nature and in human affairs. Swarming affects our buying choices in a store, our voting in an election, even the clotting of our blood when we are wounded.
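What the swarming researchers keep finding is that collective order needs no central plan; it emerges from simple local rules. The toy sketch below, in Python, follows Craig Reynolds' classic "boids" rules in a simplified form (cohesion and alignment only). It is my own illustration, not something from the article, and every parameter in it is an arbitrary assumption.

    import random

    # Each agent looks only at nearby neighbors, yet the group as a whole
    # begins to move coherently: swarming emerging from disorder.
    def step(positions, velocities, neighbor_radius=0.2, weight=0.05):
        new_velocities = []
        for i, (p, v) in enumerate(zip(positions, velocities)):
            neighbors = [j for j, q in enumerate(positions)
                         if j != i and abs(q[0] - p[0]) + abs(q[1] - p[1]) < neighbor_radius]
            if neighbors:
                # Cohesion: drift toward the average position of neighbors.
                cx = sum(positions[j][0] for j in neighbors) / len(neighbors)
                cy = sum(positions[j][1] for j in neighbors) / len(neighbors)
                # Alignment: drift toward the average heading of neighbors.
                ax = sum(velocities[j][0] for j in neighbors) / len(neighbors)
                ay = sum(velocities[j][1] for j in neighbors) / len(neighbors)
                v = (v[0] + weight * (cx - p[0]) + weight * (ax - v[0]),
                     v[1] + weight * (cy - p[1]) + weight * (ay - v[1]))
            new_velocities.append(v)
        new_positions = [(p[0] + v[0], p[1] + v[1])
                         for p, v in zip(positions, new_velocities)]
        return new_positions, new_velocities

    # Start from disorder: random positions and headings.
    random.seed(0)
    positions = [(random.random(), random.random()) for _ in range(30)]
    velocities = [(random.uniform(-0.01, 0.01), random.uniform(-0.01, 0.01)) for _ in range(30)]
    for _ in range(100):
        positions, velocities = step(positions, velocities)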
One day every question we ask, be it a whim, a passing query, or a gadfly-in-the-brain question like my computerized constitution question, will be registered and will subtly change the interface between man and machine, and man and man. One project, dubbed Cyc, was begun two decades ago to amass common sense in a huge encyclopedia of artificial intelligence. Although its focus is now very different, Cyc too is mentioned as cutting edge in the New York Times article.
"There is debate over whether systems like Cyc will be the driving force behind Web 3.0 or whether intelligence will emerge in a more organic fashion, from technologies that systematically extract meaning from the existing Web. Those in the latter camp say they see early examples in services like del.icio.us and Flickr, the bookmarking and photo-sharing systems acquired by Yahoo, and Digg, a news service that relies on aggregating the opinions of readers to find stories of interest."
The article describes how Google researchers have begun to figure out ways to tap the enormous unused work potential in adults' casual play. It seems that the time and mental energy people lavish on the internet every day, playing puzzles and other trivial pastimes, add up to enough staff hours to rebuild the twin WTC towers from foundation to top, every single day.
So the question is: can making politics into a game or puzzle, played by millions of people in their spare time and feeding their results back to the internet super-intelligence, be the way to an algorithmic world constitution? Is this the primrose path to cosmopolitanism? If so, we must agree upon a prime directive; we must determine first things and put them first.
The reason that science fiction writers so often conjure up the nightmare of machines becoming smarter than us and then threatening to usurp human self-rule is that we feel, deep down, that we have no idea what is essential, what our prime directive should be. This is because, as a race, humanity has not addressed Socrates' Most Noble Question: "What should we be, and what should we practice, and to what extent?" Failure to answer this question cogently and concisely means that we can never safely program an autonomous robot. It will always be subject, under some concatenation of circumstances, to inflicting harm on human beings. Isaac Asimov tried to solve this problem with his three laws of robotics, and the Will Smith movie "I, Robot" offered an effective refutation of Asimov's formulation. The question remains: how can we enter into dialogue with intelligent machines when we ourselves have no commonly agreed upon "three laws" of human moral agency?
Actually, maybe we do. We have the Golden Rule, a moral dictum that is truly cosmopolitan in the fourth of my dictionary's four definitions of the word: "found in most parts of the world and under varied ecological conditions, as, a cosmopolitan herb." The rule to do unto others as we would have them do unto us is found in every major religious tradition, and therefore it can be taken as the first cosmopolitan law of moral behavior, be it instantiated by human or artificial intelligence. It is unlikely to be refuted by a clever screenwriter, as Asimov's laws of robotics were.
The Golden Rule has stood on its own throughout the ages without need of revision. However, it was "prepared" for use by the super-intelligence, as two more laws of cosmopolitan algorithmic law, by Immanuel Kant, whose formulation has stood for over two centuries. This is how my Encyclopaedia Britannica sums up Kant's ethical philosophy:
"The essence of morals is the commandment not to perform any act that one would not want to become a precedent for all human action and always to consider an individual as an end in himself, not as the instrument of another's purpose."
These then would be the first three laws of humanity and robotics: one, the Golden Rule, to do unto others; two, to do nothing that will set a bad precedent if universalized; three, to avoid instrumentalizing, to treat each human always as an end in herself.
If the goal is unity in diversity, then all must be firm on clear essentials, something like these three laws, and at the same time very broad-minded about non-essentials, whatever falls outside their purview. Most important of all, each and every one of us must be clear on, and skilled at, distinguishing between the two. Each human, computer and robot can work the three laws out in each particular situation, but the results, that is, the amount of unity in diversity in the broad picture, would be fed back into the super-intelligence for assessment and refinement of the algorithmic constitution.
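What might such a feedback loop look like in code? The sketch below, in Python, is purely hypothetical, a thought experiment of my own: the three "laws" are reduced to crude placeholder checks, and the "refinement" is nothing more than a growing record of assessments. It gestures at the shape of the thing, not at how it could actually be built.

    # A purely hypothetical sketch of the feedback loop described above.
    # The three checks and the record of results are placeholders of my own
    # invention; nothing here claims to formalize the Golden Rule or Kant.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Assessment:
        action: str
        passed: bool
        reasons: List[str]

    def evaluate(action: str, laws: List[Callable[[str], bool]]) -> Assessment:
        reasons = []
        for law in laws:
            if not law(action):
                reasons.append(f"{law.__name__} rejects: {action}")
        return Assessment(action, passed=not reasons, reasons=reasons)

    # Crude stand-ins for the three laws; a real constitution would need far
    # richer models of context, consent and consequence.
    def golden_rule(action: str) -> bool:
        return "harm" not in action

    def universalizable(action: str) -> bool:
        return "exception for me" not in action

    def never_merely_a_means(action: str) -> bool:
        return "use person as tool" not in action

    history: List[Assessment] = []  # the record fed back for refinement

    for proposed in ["share the data", "harm the bystander", "claim an exception for me"]:
        history.append(evaluate(proposed, [golden_rule, universalizable, never_merely_a_means]))

    for a in history:
        print(a.passed, a.action, a.reasons)

Even this toy makes the difficulty plain: the laws are easy to state and hard to operationalize, which is exactly why the results of every particular judgment must keep flowing back into the super-intelligence for assessment and refinement.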