Why I stopped working on the Bongard Problems
by Harry Foundalis
Please note: this text was written in the summer of 2007, and it explains why I, the author, took a long respite from working on my research topics after the summer of 2006. At the time I wrote this, and up until around 2011, I really had stopped my research, so the title was correct up to then. Later, after 2011, I timidly started working again, on my own, for reasons explained near the end of this article, and obtained some results. So the title is not correct anymore; it should be: “Why I Forced Myself to Take a Long Break from My Research”. Still, I preferred to keep this article on the web, so as to document an important period of my life and to explain my attitude and thoughts on scientific research in general.
A web page that I wrote some 10 years ago (this one) explains the wonderful domain of my research in cognitive science, the Bongard problems. Today, a decade later, for all the keen interest that I still retain in Bongard problems, I have stopped working in this domain for ethical reasons. I would like to explain what these ethical reasons are, because I think they concern every one of us: not just cognitive scientists like me, not even just scientists in general, but literally everybody. Here is a brief overview.

First, what are the Bongard problems? You can best understand the answer by clicking on the link above, but to save you from going off on a tangent, I’ll simply say here that Bongard problems are visual puzzles: you look at something like the drawing with the 6+6 boxes below, and you try to figure out why the six boxes on the left have been separated from the six boxes on the right. What is it that the six boxes on the left have in common?
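In computational terms, the question amounts to searching for a rule that holds for every box on the left and fails for every box on the right. The following is a minimal sketch, in Python, of the trivial, brute-force version of that search; it rests on a big assumption, namely that each box has already been reduced to a hand-coded set of features (the feature names below are hypothetical). As the next paragraph explains, this mechanical shortcut is precisely what my research was not about.

```python
def solve_bongard(left, right):
    """Find a feature shared by all left boxes and absent from all right boxes."""
    shared = set.intersection(*left)        # features common to every left box
    for feature in sorted(shared):          # sorted() only makes output deterministic
        if all(feature not in box for box in right):
            return feature                  # this feature separates the two sides
    return None                             # no single-feature rule was found

# Hypothetical, hand-coded feature sets; a real solver must extract
# such features from raw pixels, which is where the hard work lies.
left  = [{"triangle", "outline"}, {"triangle", "filled"}, {"triangle", "small"}]
right = [{"quadrilateral", "outline"}, {"quadrilateral", "filled"}, {"quadrilateral", "small"}]

print(solve_bongard(left, right))   # prints: triangle
```

Phaeaco, of course, does nothing of the sort: it has to build such descriptions starting from raw pixels, and that is where the real cognitive work lies.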
My research focused on writing a computer program, which I called Phaeaco, that could solve such problems automatically. Actually, writing just any program that can do that is not remarkable at all; how it is done is of utmost importance, because on one hand there are trivial, mechanical, and uninteresting programs, and on the other hand there are more human-like ways to solve such problems. My dissertation describes a computational architecture for cognition (that is what Phaeaco is) that, among other things, can solve Bongard problems with a more-or-less human-like performance. In other words, the goal of my research was not simply to write a program that solves Bongard problems, but to write a program that implements some fundamental principles of cognition, which help it exhibit, in a rudimentary way, some aspects of human behavior, or of human-like thinking.

Okay. So where are the ethical issues in all that? They are in the remote possibility of building intelligent machines that act, and even look, like humans. If this is achieved, intelligent weapons of mass destruction will eventually be built, without doubt. That is what I would like to explain below.
An actroid (credit: GNU license)
Take a look at the picture of the woman on the left. Does she look real? She’s not. She’s a doll, called an “actroid”. You can learn more about actroids by clicking on the image, but in summary, an actroid is “a humanoid robot with strong graphic human-likeness” that “can mimic such lifelike functions as blinking, speaking, and breathing”. The specific one shown in the picture is an interactive robot with the ability to recognize and process speech, and respond in kind. Such robots include enough “artificial intelligence” to fend off intrusive motions, such as a slap or a poke, but react differently to gentler kinds of touch, such as a pat on the arm. “The skin is composed of silicone and appears highly realistic.” (Without useless details under the clothes, I suppose.)

Now, picture this kind of robot in the not-so-remote future. Imagine that she (or he, or it) has enough computational power in the computer situated in her skull (or in her thorax, it doesn’t matter where) to allow her to behave naturally, like a human being. And also imagine that in her belly she harbors not guts (which would be useless to her), but a small nuclear bomb. Impossible? Why? I am not talking about current technology, in which nuclear weapons are monstrously large; I am talking about future technology, when downscaling everything will appear reasonable. Nor am I talking about a nuclear bomb capable of annihilating a whole city, but about one that could turn a few city blocks to smoke and render a great many more uninhabitable.
Let’s see... I suspect you don’t really object that this is a plausible scenario. What you really believe (or maybe just hope) is that it will be us, our side, our army, that will acquire such marvelous weapons. The enemy won’t have them, and so we, with our superior technology, will emerge victorious and live happily ever after, having crushed the barbarians. Yay!

It is typically Americans who display this attitude regarding high-tech weapons. (If you are an American and are reading this, what I wrote doesn’t imply that you necessarily display this attitude; note the word “typically”, please.) American culture has an eerily childish approach to weapons, and also an outlandish (but likewise child-like) disregard for human life. (Once again, you might be an intelligent, mature American who respects life deeply; it is your average compatriot I am talking about.) Here is what an American journalist wrote in the Washington Post on May 6, 2007:
Yes, just as you read it: a number of human beings were turned to smoke and smithereens, and this pathetic journalist, whoever he is, speaking with the mentality of a 10-year-old who blows up his toy soldiers, reports in cold blood how people were turned to ashes by his favorite (“impressive”, yeah) military toys. Of course, for overgrown pre-teens like him, the SUV was not full of human beings but of “al-Qaeda leaders” (as if he knew their ranks), of terrorists, sub-humans who aren’t worthy of living, who don’t have mothers to be devastated by their loss. Thinking of the enemy as subhuman scum to be obliterated without second thoughts was a typical attitude displayed by the Nazis against Jews (and others) in World War II. (The full article, to which I once supplied a link here but later removed it so as not to reveal the name of its author, explains how soldiers become sentimentally attached to their robots, extensions of their childhood toys, obviously ascribing to them a higher value than human life.)

If this attitude were marginal among Americans, if the above story were a fluke, I wouldn’t worry at all. Any moron can say anything they like in a free society, and even have their imbecilic thoughts appear in print. The problem, from my point of view, is that I saw the above attitude again and again in the years that I lived in the U.S.A. Once, the janitor of the building where I used to do my research, having just learned some sad news about American soldiers killed in Iraq, wondered in a discussion with me: “Why don’t we just nuke ’em all? Just turn the damn desert into glass and be done with those ___” (I don’t remember what word he used). You might think the janitor wasn’t very sophisticated in his approach to war or human lives. But a few days later I was reading another article on a web site (whose address, unfortunately, I didn’t save) that reported on an issue similar to the one above: how to use robots to enter caves (it was known that the al-Qaeda leader, Osama bin Laden, was hiding in caves at the Afghanistan–Pakistan border back then), search for terrorists, and blast the place, terrorists, robot, and all; “to smoke them out of their holes”, as that pinnacle of wisdom, the American president G. W. Bush, said immediately after the 9/11 attacks.
So, back to our subject: how nice it would be to have “actroids” pregnant with nuclear or biological bombs, right? Perhaps “nuctroids”, how about that? Of course, only we would know they are actually nuctroids; to the terrorists they would pass for normal people. How immature must a person be to believe something like that!

Think of nuclear weapons. When they first appeared on the scene, in the second half of the 20th century, only five nations possessed them: the U.S.A., the U.K., France, the Soviet Union, and China (the victors of WWII). Gradually, more countries entered the nuclear club, some of them openly (India, Pakistan), others secretly (Israel). Now every pariah state can have its nuclear toys, or dream of acquiring them. At the time the present article was written, there was a strong fear in the international community that Pakistan’s American-supported dictator would be overthrown by extremist Islamists, and that the nuclear bombs Pakistan possesses would fall into the hands of terrorists. It is no secret that Iran, an avowed enemy of the U.S.A., is planning to build its own nuclear weapons. Turkey, now an ally of the U.S., is planning to build its own “energy-only” nuclear plants; but in a decade or two Turkey might turn into a hub of radical Islamism, due to its gradually changing demographics, and so in the future we might have another Iran-like nuclear wannabe in the same region. So by what stretch of the imagination, by what crooked logic, will it become impossible for pariah states, or even individuals, to possess and command “nuctroids” in the foreseeable future? Technology spreads; it is not something that can be confined within national borders. Especially now, when we talk about globalization, we must understand that knowledge “goes global” too, and this includes the specialized knowledge needed to build an extra-small weapon of mass destruction, or a human-like deadly robot.

So how does working on innocent projects such as the automation of Bongard problems tie into all this? As I explained earlier, it is not just the automation of Bongard problems that is involved; it is the automation of cognition. Here is what the program I developed, Phaeaco, would in principle be able to do:
The above animated image starts with a picture of some city blocks, taken from Google Earth. It then shows the result of successively applying various “filters” that do rudimentary “image processing” on it. The image is then processed further in ways specific to Phaeaco (which treats its input as a Bongard problem), until we finally arrive at an image in which the city blocks have been identified and their locations are known. (A rough sketch of this generic kind of “filter, then locate” processing is given a little further below.)

Now imagine that this is your city (your home, perhaps, is in one of those blocks), and that a drone is flying over it, identifying the blocks as the above image suggests, but very quickly, in real time. The drone could be armed with bombs, and could be “intelligent” enough to select a targeted city block and direct a bomb to it: “intelligent”, but morally blind. Anyone who works toward making machines intelligent, and especially anyone who wants machines to “come alive”, must understand the grave ethical issues involved in such an endeavor. Consider the following email message sent by a student at Indiana University (IU, the academic institution where I did my Ph.D.) in 2008 (my emphasis):
Does anyone at IU realize the ethical issues that these kids are toying with? Is it really more important to be concerned with cloning and stem-cell research? Does it not matter at all that these kids, or maybe their children, might be turned into a loose collection of quantum particles some time in the not-so-remote future by the fruits of their own toy-making? Or is it that the indifference is caused by the remoteness of the future, whereas other ethical issues in science are present here and now? But don’t the seriousness of the nuctroid threat and its logical inevitability make any impression on anyone?
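To make the earlier description of the animated image concrete, here is a minimal sketch of the generic “apply successive filters, then locate regions” processing, assuming the OpenCV library and a hypothetical aerial image file city.png. It is only an illustration of how ordinary image processing can end with “the blocks are at these coordinates”; it is not Phaeaco’s actual algorithm, let alone targeting software of any kind.

```python
import cv2  # OpenCV

img    = cv2.imread("city.png")                 # hypothetical aerial photograph
gray   = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # filter 1: discard color
smooth = cv2.GaussianBlur(gray, (5, 5), 0)      # filter 2: suppress pixel noise
edges  = cv2.Canny(smooth, 50, 150)             # filter 3: detect edges

# From edges to discrete regions: trace contours and report each region's
# bounding box, i.e., the point at which "locations are known".
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 500:                             # ignore tiny fragments
        print(f"region at ({x}, {y}), size {w}x{h}")
```

The unsettling part, as noted above, is that nothing in such code knows or cares what is done with the coordinates afterwards.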
My former research advisor recently offered the following counter-argument (or rather, a hope of his):
A footnote, unrelated to the content of the above argument: it came to my attention, after a web search on a related topic, that someone expressed his shock at the above picture, which shows a half-naked young girl. I must say, I was shocked, too, learning about that person’s shock. When the finger points at the light, some people focus on the finger. What I see in the above picture is a healthy, young, and quite beautiful black girl, living her life in a society that seems to be an example of precisely what I want to show: a technologically unsophisticated society, but one that probably receives the benefits of occasional items of modern technology. The person who was shocked looked at this picture and perhaps saw only a pair of breasts. I pity him. The girl’s breasts, in my mind, have no sexual connotation; they underline the idea that she looks healthy, confident of herself (just look at that face!), and perhaps blissfully unaware of the vast technology that lies behind the mere chewing gum in her mouth, and of a huge world of mostly well-dressed people who are masters of that technology. She would probably be shocked, too, learning that some of those well-dressed people can be shocked looking at her two ornaments, a most natural part of her feminine body. I would agree with her, and would try to explain to her that some people of that society (the one that makes the chewing gums and is an ardent consumer of them), and specifically males, never really grow up. A pair of breasts exerts on them the magical lure that a carrot exerts on a rabbit: they focus on it, and their minds from then on can think of only one thing: “breasts!” But I wrote this article for more mature readers.
I am fully aware of this. A single person’s abstinence can have absolutely no consequence in the overall scheme of things. My purpose is not to hide my head in the sand like the ostrich, but to raise people’s awareness of this problem. Nor do I expect that a single person’s voicing of his concern is enough. (I don’t know if I am alone; I suspect I am not, but I haven’t heard anyone else’s voice on this issue; contact me if you are aware of others speaking about it.) If others want to undertake the development of nuctroids, let them feel free to do it (and face the consequences), but count me out. I choose to “cast my vote” in favor of voicing my concern. My hope is that in this way a larger percentage of people will see the seriousness of this matter and join their voices, putting pressure on society and administrations to take some measures. In the late 19th and early 20th centuries, with the anticipated spread of the use of electricity, some people were afraid that others would be electrocuted, and so electricity was perceived as a public threat. Such worries, which appear even funny today, were not baseless or useless: it was because of such worries that measures were taken and technology was developed that made it possible to build essentially harmless electrical devices.
I agree that skepticism is a healthy attitude, and I myself am skeptical about many issues. I am skeptical even about the threat that I foresee and explain in the present article. But, weighing on one hand what I know about human nature and about how far technology can reach, and on the other hand any objections that I might have due to skepticism, I find that the “nuctroid threat” weighs more. It is up to the reader to perform their own weighing before reaching any conclusions. Just a word of caution: the fact that some imaginative doomsday scenarios did not materialize in the past does not imply that there is no real danger ever; one might be caught in the trap of “the boy who cried wolf”, in other words. It is a fundamental feature of the human mind to try to categorize, to pigeonhole situations; so if one has seen a number of failed doomsday scenarios, one is strongly tempted to categorize everything that appears similar as “Oh, it’s just one of those”. I believe this feature of our cognition does not help us in this case; one must rationally list the reasons why this would be “just one of those”, and ponder the accuracy of such reasoning.

A related issue, specific to people in the U.S.A., is that after the 9/11 attacks Americans went through a period of intense fear, an apprehension about anything that could upset their easy and cozy way of living. Now, after several years without another attack on U.S. soil, they have timidly started exiting this period of apprehension, and the first articles that look at their “age of fear” with a critical eye (and even a sense of humor) have started appearing (e.g., read this one). The danger here is that they will experience what I call the “swing of the pendulum to the other extreme”: when you release a pendulum from one extreme position it does not stop at the equilibrium point but swings all the way to the other extreme first. Similarly, Americans might now feel disdain for the kind of danger that I describe here, and treat it as just another one of those hateful scenarios that used to send chilling sparks of fear up and down their spines in the past. It is a natural psychological reaction to turn one’s face away from what causes discomfort. But Americans can sense that this is not a case like those they are familiar with, if they realize that the “reign of terror” was a cheap trick employed for years by their post-9/11 administrations in order to reduce civil liberties and pass antidemocratic policies with no resistance. I am not a member of their administration, not even an American; I am speaking as a person concerned about fellow people and the future of humanity as a whole.
This is the argument that some correspondents have put forth and, honestly, the one I find hardest to counter. The argument says that the individual who invented the knife (supposing there was such a prehistoric individual) cannot be held responsible for all the stabbings that have taken place since then. Sir Isaac Newton cannot be held accountable for others using calculus to compute with precision the parabolic trajectory of a hurled object, such as a cannonball. James Clerk Maxwell cannot be blamed for the electrocution of criminals (or suspected criminals) in various states of the U.S.A. The more general a discovery is, the more likely it is that some application of it will be found that results in loss of life. One should draw a line between creating a scientific theory and consciously manufacturing weapons using that theory, with the express intent to kill. OK, so where does research in theoretical cognition fall? On which side of the line? Looking at it coolly, it seems to fall on the same side as Newton’s calculus, Maxwell’s electromagnetism, and even the unknown ancestor’s blade-making. Designing cognitive architectures, and implementing ideas in computer programs in order to see whether the ideas work or not, without having in mind how these ideas could be used against humanity, does not seem to be a culpable activity. It is just that, although I have no misapplications in mind, I cannot help thinking that others will find misapplications in their own minds, without doubt. So, though I have now started working in cognition again (but in isolation), I cannot avoid seeing the problem coming.
All right, all right, I admit: that finally did it. I throw in the towel. First, allow me to note that this section (up to the horizontal line) was added in late 2013, after I had restarted my research as I explained in the top note (the one in red letters), and after reconsidering the situation in the world as it unfolded after 2006, the year I quit my research. The part of this document that follows the horizontal line belongs to my original train of thought.

But let me comment on the above argument. Several people pointed out to me (and I thank them for their earnest willingness to discuss and consider these issues) that we do not need to wait for the high-tech nuctroids/humanoids/whatever-oids to arrive for the mayhem that I fear to be realized: human bombs are already a reality! Indeed, even several years before 2006 (actually right after G. W. Bush invaded Iraq and caused the demolition of Saddam Hussein’s security forces), “human bombs” appeared, in the form of Islamist suicide bombers: people who (1) are dead-certain that there is an afterlife; (2) are dead-certain that their Prophet will receive them with open arms in paradise if they blow up those they consider their enemies (including women with their babies, children, and old people); and (3) are dead-certain that their Prophet will reward them with virgins (whose number varies, but according to some venerable ancient Islamic holy texts is equal to 72), who will allow them to have what they were denied in this life and missed badly: sex, which in paradise will be non-stop and eternal. Having all those dead-certain convictions, they go and happily blow people to smithereens. Oh, I forgot a fourth dead-certain conviction: (4) that their enemies will go to hell no matter what, so the bombers actually help them arrive there earlier, and this is something Allah approves of.

Yes, Islamist suicide bombers existed before I started writing this article. But because they typically cause mayhem in the Muslim world only, and because they belong to that side, not to the side that I fear will abuse its technological prowess, they somehow did not fit in my picture of “technologically caused disaster”. Fine. But I failed to realize (at least soon enough) that they represent a danger which is much more immediate and real, and which does all that I fear, without even the help of technology! Wouldn’t it be better to focus one’s attention on the here and now, instead of on the possibly/maybe/perhaps danger that I described in the present text? And which tools, other than those that technology provides, can one hope to use to educate people, hoping to pull them out of their dead-certain convictions; or, if that proves impossible or of very limited effect, hoping to have as few young people as possible attracted to such self-destructive ideas?

So, finally, such thoughts got the better of me, and I gave in. I restarted my research in 2011, and had some nice results that will be announced in due time. I do feel a little bit like a traitor to my own ideas, but when I think about it rationally I know that my critics were right. As I said, what follows belongs to my original train of thought.
Seeing the problem coming is one thing, but figuring out what to do is quite another. I do not want to give the impression that I know how we can deal with the nuctroid threat. All I can do is propose the following.

Research that aims at making machines appear human must be marked as highly dangerous, or at least as ethically suspect, and should not be funded. Note that I am not advocating enmity toward all research in artificial intelligence and cognitive science; only a discouragement of the research that explicitly leads to machines that can deceive humans and pass as humans. To have computers that can compose high-quality music, for instance, or translate between languages, is not directly dangerous. Nor is it dangerous to have self-aware, conscious machines. If anything, a self-aware machine that places a high value on its own preservation, and on the preservation of humanity, is probably more difficult to persuade to go and explode itself among people (Islamist suicide bombers notwithstanding). This last thought implies that sophistication is probably a desirable attribute of machines: the more self-conscious, the less of a nuctroid threat; but self-conscious they must be by law, not by the goodwill of the free market; and “self-conscious” means human-like in mind, not in form and external appearance. We do need robots that work for us, but not robots that trick us into mistaking them for humans.

High-school children, undergraduates, graduate students, and in general all people involved in the educational systems of the U.S.A., Europe, Japan, Australia, and elsewhere, must abandon the naïve attitude of “Let’s make stuff come alive” and become aware of the seriousness of such an endeavor. Children cannot discover the seriousness of this matter by themselves, so it is up to the academic institutions to educate their students and take appropriate measures. If universities, such as my own IU, can be so serious about the ethics of procedures that involve psychological experiments on human subjects, and even on animals (as I know from first-hand experience), then it is high time they became equally serious about what machines their students of artificial intelligence and cognitive science are experimenting with.

Americans should grow up and abandon their juvenile treatment of weapons, high technology, and the value of “non-American human life” (which, sadly, to many of them is synonymous with “lowlife”). This is the hardest part of my proposal: one can’t just tell an entire culture “do this, don’t do that”. In this case matters are complicated by the existence of an elite rich class in the U.S.A., in whose interest it is to keep the public uninformed, at my janitor’s “Let’s nuke ’em all” level, because the lack of public awareness increases that class’s short-term benefits (they support wars, which help them manufacture, advertise, and sell more weapons abroad, for instance). This is compounded by the American myth that it doesn’t matter if you are poor, because if you are capable enough you can raise your social status all the way to the top. Having believed this myth for decades, the poor among the Americans don’t mind much being kept down (i.e., being poor and thus staying at my janitor’s level of political and educational sophistication), because the notion that anyone can rise to the top sweetens the pill and makes it more palatable.
I must note here that I believe the so-called American dream of “rags to riches” is a myth for at least two reasons. First, it does matter if you are born Black, Hispanic, etc.: after all, what is the percentage of non-white billionaires in the U.S.A.? And second, it is analogous to winning the jackpot in a lottery: yes, you can be the sole lucky winner, and you can even nurture your ego by thinking that winning in life is not a matter of mere luck, but the existence of the jackpot itself presupposes a vast number of losers. What are the chances that you will have the guts and wits to be the sole winner and rise high in social status? And is it really so attractive to live in a society with a handful of winners and millions of losers, resulting in hordes of homeless people who search through the garbage while you, the “good Christian”, look at them with disdain because they did not make it and ended up among the losers?
For the above reasons, I realize the tremendous difficulty of talking to an entire culture. So the only course of action that seems conceivable to me is to raise awareness by having as many voices as possible talk about this issue in unison, and to do so particularly in the most anarchic, relatively uncensored-from-above medium of communication that humanity has ever known: the Internet.