Paul Almond

Paul Almond is an outspoken UK atheist and independent researcher in the field of artificial intelligence. His innovative projects include a conceptual, probabilistically expressed hierarchical AI system—using meaning-extraction (partial-model) algorithms—that learns by experiencing the real world. The system is superficially similar to that of Jeff Hawkins, but differs in its robust approach to probability and in incorporating planning into the hierarchical model itself, removing any distinction between planning and modeling.

 

"Hawkins' view," says Almond, "is clearly that meaning gets abstracted up and then some output system starts to send actions down where they become 'unabstracted,' and related to each level in the hierarchy by some sort of coupling. To me, this is way off the mark; we do not need any such planning system. I do not really take the Hawkins hierarchy seriously. It does not deal with probability and that is necessary to take the approach to planning that I think is the right one."

 

This interview was conducted by Norm Nason and was originally published in the website, Machines Like Us, on June 30, 2007. © Copyright Norm Nason—all rights reserved. No portion of this interview may be reproduced without written permission from Norm Nason.

 

 

NORM: Thanks for joining me, Paul. It's good to have you here.

 

PAUL: Thank you for inviting me to talk to you, Norm.

 

NORM: Artificial intelligence projects have been around for many years, but as far as the general public is concerned, there is a wide gap between today's AI systems and what is seen in science fiction novels and films. On the one hand we see PCs on our desktops and clever rovers on Mars, but these examples pale in comparison to HAL in 2001: A Space Odyssey, or the robots in Asimov's famous novel, I, Robot. Is true machine sentience even possible, or, because of our hunger for entertainment, have we set our expectations too high?

 

PAUL: We have had overly optimistic expectations about when true intelligence or sentience would be achieved in artificial devices, but I think that it is possible. Intelligent machines already exist—ourselves. The fact that matter can naturally come together to make things like humans that think shows that the process can be replicated. Of course, people argue against this. Some people say we have some kind of “immaterial” or “supernatural” soul. I think that is an incoherent concept. Roger Penrose and John Searle both argue against artificial intelligence using computers in different ways—and I think they are both wrong.

 

NORM: So a human being is really a kind of machine, and the "soul" is a man-made concept?

 

PAUL: Yes. Of course, I am not referring here to the uses of the word soul when we say things like, “That piece of music moved my soul.” Sometimes the word soul is used just to express things in a poetic way, and I do not have any problem with that, but that is not the kind of “soul” we are talking about here. We are interested in “soul” as an explanation of consciousness. I do not think it does any such thing. It is not just that the explanation is wrong. There is no explanation there. Nothing is being claimed: it is just a placeholder word used to answer the question “What causes consciousness?” in a way that answers nothing.

 

Suppose someone asks me “What causes hurricanes?” and I answer “zervok.” When they ask me what “zervok” means I can say “It is a new word. I made it up. I define ‘zervok’ as the thing that causes hurricanes.” It is obvious that I have not answered the question at all. Any question can be answered, in a trivial way, if you are allowed to make up a new word which means “whatever the answer to the question is.” “Soul” is just a trivial way of answering the question “What causes human consciousness?”

 

NORM: Was it this sort of thinking that led you toward your interest in AI? What sparked your interest in the first place?

 

PAUL: Yes, a lot of it was this sort of thinking. I was curious about how my own brain's ability to think developed, and was also interested when I heard of computers being described as “thinking machines.” I remember wondering if they could really think like me. I went through a phase of being interested in brains when I was very young. I remember when I was about 5, asking someone if a computer could think like a person—and being told that it would not happen because it would need to be a machine the size of a town. I remember imagining this huge machine—literally superimposed on my town—thinking. At the same time, I had just been told it was impossible, so I was disappointed. I got my first computer at 12 and taught myself to program, like a lot of people at the time. I wondered at this stage if all these commands could really be put together to think. Could something like me work like that? Having just found out how to program I wanted to write the ultimate program, of course, and this was clearly it. Portrayals of robots and computers in science fiction interested me after this, but I think by this time I was interested in AI anyway.

 

NORM: You've developed an advanced model of artificial intelligence that would learn from the ground up, as a child would. Can you describe your system for us?

 

PAUL: My current approach, “planning as modeling,” focuses on planning—how a machine decides what to do—rather than AI in general. Any AI system will have to model—to make predictions of what will happen based on its experiences. It will also have to plan—to use its modeling ability to decide what it should actually do to achieve things. Planning as modeling combines planning and modeling. Modeling is (almost) the only thing that we do, and planning is a special case of modeling. The machine’s plans for future behavior are just predictions of its future behavior.

 

People often talk about an intelligent machine having a model or internal representation of itself. The way a machine makes predictions need not differ from that required to model itself. We just treat the modeling of the outside world and the machine itself as the same thing.
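
As a purely illustrative sketch (the class, the frequency-table model and the conditioning trick below are assumptions, not Almond's actual implementation), planning as modeling might look like this in Python: one predictive model covers both the world and the machine's own behavior, and a "plan" is simply the model's prediction of its own next action, conditioned on the predicted outcome being desirable.

    class UnifiedModel:
        """One model for the world and the self: it predicts (state, action) pairs."""

        def __init__(self):
            self.counts = {}  # (history_key, (state, action)) -> observed frequency

        def observe(self, history_key, state, action):
            key = (history_key, (state, action))
            self.counts[key] = self.counts.get(key, 0) + 1

        def predict(self, history_key):
            # Probability distribution over (next state, next action) given the history.
            relevant = {sa: n for (h, sa), n in self.counts.items() if h == history_key}
            total = sum(relevant.values()) or 1
            return {sa: n / total for sa, n in relevant.items()}


    def plan(model, history_key, is_desirable):
        # "Planning as modeling": the plan is just a prediction of the machine's own
        # next action, restricted to futures whose predicted state is desirable.
        dist = model.predict(history_key)
        good = {sa: p for sa, p in dist.items() if is_desirable(sa[0])}
        if not good:
            return None  # no predicted future looks desirable
        return max(good, key=good.get)[1]  # the action part of the best (state, action)

    # Hypothetical use: model.observe("h", "warm", "stay"); plan(model, "h", lambda s: s == "warm")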

 

We also use what I call “prioritization control outputs”—special “pseudo-outputs” which, instead of being sent to the outside world, are sent into the modeling system to control what it is “concentrating on.” This is about how we use a modeling system to make an AI system work—rather than how we make the modeling system—but I think it is an important distinction.
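
A rough way to picture this (the half-and-half output split and the simple multiplicative weighting are illustrative assumptions, not Almond's design): some outputs go to the actuators, while the pseudo-outputs are fed back as priorities that scale the next inputs before they reach the modeling system.

    def step(model, raw_inputs, priorities):
        # Weight the raw inputs by the current priorities before they are modeled,
        # so the pseudo-outputs control what the system is "concentrating on."
        weighted = [x * w for x, w in zip(raw_inputs, priorities)]

        outputs = model(weighted)          # assumed: any callable mapping inputs to outputs
        split = len(outputs) // 2          # illustrative: half actions, half priorities

        actions = outputs[:split]          # sent to the outside world
        new_priorities = outputs[split:]   # fed back into the modeling system

        return actions, new_priorities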

 

NORM: So this system would have to interact with the world in order to learn from experience?

 

PAUL: Yes. AI has to be done like this. Minds are just too complicated to design, but we may be able to design systems which can learn to be smarter: making minds emerge like this will be easier than designing them. I’m not really unusual in thinking that.

 

NORM: Once built, how do you imagine that it would behave? How "advanced" could it become, and how might it differ from humans, considering that it would be capable of faster thinking and would essentially have non-volatile memory?

 

PAUL: I don’t really know how advanced it could be. Its behavior would come from a very basic way of deciding how desirable things are—what I call its “situational evaluation function”—being abstracted by the system into more complicated motivations and behavior. I would expect some of these abstractions to resemble ours. For example, if you make a machine’s evaluation function view it as undesirable to have too much pressure, or to have too much heat, or to experience high impacts, or to be exposed to acid, or to be low on energy reserves, then the abstraction that the system produces to satisfy all that could resemble what we call the “survival instinct”—but what I think is really an abstracted goal made from simpler goals.
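
For illustration only (every sensor name, threshold and weight here is invented, not taken from Almond's work), such a situational evaluation function could be as crude as a weighted penalty over a few raw readings, leaving anything resembling a "survival instinct" to be abstracted from it by learning:

    def evaluate_situation(sensors):
        # Crude desirability score: higher is better. Weights and thresholds are
        # hypothetical; the point is that the function itself knows nothing about
        # "survival", only about a handful of raw conditions.
        penalty = (
            2.0 * max(0.0, sensors["pressure"] - 1.5)        # too much pressure
            + 1.5 * max(0.0, sensors["temperature"] - 60.0)  # too much heat
            + 3.0 * sensors["impact"]                        # high impacts
            + 4.0 * sensors["acid_exposure"]                 # exposure to acid
            + 1.0 * max(0.0, 0.2 - sensors["energy"])        # low energy reserves
        )
        return -penalty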

 

We could expect some similarities to humans because we know that various basic goals are the same. We can expect differences due to the different experiences an AI system would have, just by being an AI system. Even something like the “survival instinct” may not match ours in some ways. The issue of whether your “survival instinct” should allow you to attempt survival by “backing up” the information in your brain into another body or a computer tends to be controversial for us, but the issue is at a safe distance now because we cannot make it happen. For an AI system, however, this situation could be a practical reality that it could easily face in the future and may have faced in the past. Its perspective on things could be different. We also have the issue that things much smarter than us may have goals that we cannot understand.

 

NORM: Writer Michael Anissimov of the Institute for Ethics and Emerging Technologies believes that developing AI is potentially dangerous—that AI researchers aren't thinking enough about how to transfer "human values" to their machines. He says:

 

"What makes AI potentially so dangerous is the lack of background common sense and humanness that we take for granted. When the clock hits 5, most workers put down their tasks and are done for the day. They go home and spend time with their family, watch TV or play games, or just relax. An artificial worker would have no such “background normality” unless we program it in. It’s on task, 24 hours a day, 7 days a week, as long as its computer continues to suck power from the wall."

 

How do you respond to this view?

 

PAUL: Michael Anissimov has also pointed out that only a small number of the possible cognitive systems that can exist will actually care about whether humans survive or not—meaning that if you start to mess around with cognitive systems you have a serious danger of hitting on one of the wrong ones.

 

I think computers could develop background sense if they learned rather than just being programmed. The issue of how many of the possible cognitive systems that could exist will care about humans applies to humans as well—every day we are exposed to the risk that people around us could put us in danger, but at least we know that other humans were made in much the same way as us and have had the same kinds of learning experiences. We have a lot of experience in dealing with other people and we do not need to worry about other humans having intelligence far above our own—yet. We cannot be so sure about machines. We should take this issue seriously. An AI should be treated with caution, not because it is inherently aggressive or would automatically seek to destroy us, but because it is different.

 

Each one of us has an evolutionary heritage leading up to our birth, and a heritage of experiences that shaped us after our birth, that makes us see things a certain way. We should probably try to ensure that the factors determining how an advanced AI system views us—the way it works, the learning experiences it has—are as similar as possible to those that determine how we view other people. We need to be cautious about AI and respect what we are dealing with. Oh, and forget Isaac Asimov’s Three Laws of Robotics: they are a non-starter.

 

NORM: Why do you think so?

 

PAUL: When I say “forget the Three Laws of Robotics,” what I really mean is forget the idea that we can somehow program ethics. The three laws may sound simple, but they contain complicated, abstract ideas such as “robot,” “injure,” “human being,” “action,” “inaction,” and “harm.” Whether you write the laws in English or in some abstract, mathematical way makes no difference: you will still have to describe these things, and I don’t think this is practical. I am not just saying that it would be hard to specify them without leaving dangerous loopholes: I do not even think we could get remotely close to these laws. We would not know how to set up a machine with anything like the understanding it needed.

 

This problem is not confined to Asimov’s three laws: I don’t think it is practical to try to build understanding of any sophisticated concepts into a machine. This is why I, and others, think we need to use emergent processes in which machines start off simple and learn by themselves through experience. An obvious reply to this would be to ask why we cannot allow a machine to learn about the world, so that it contains enough abstraction, and then program the three laws into it, so that we can use the understanding already present. That won’t work either, because by this time the machine will be so complicated that we won’t know how to change it to put the three laws into it. We won’t know, for example, how its concept of “human” is represented.

 

As an analogy, imagine trying to work out how to alter the wiring in a human brain to put Asimov’s laws into it: you will have a mess of neuronal wiring and you just won’t know what to alter. This does not mean that we could not get a machine to follow some code of ethics vaguely like the three laws—but it is more likely that we would need to condition the machine into behaving like that. We could try directly to alter the machine’s internal workings to affect its “ethics,” but this would not be a clean process. I suppose you could, for example, try some alteration to the machine and then run many simulations to see if its behavior more closely matches the three laws, and keep doing this until you get the behavior you want, but there would be uncertainty; the laws would not really be “programmed.”
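
Schematically (the function names below are placeholders supplied by the caller, not a real procedure), that alter-and-test process is just a blind selection loop, which is exactly why nothing is ever really "programmed":

    def condition_machine(machine, alter, score_in_simulations, target_score,
                          max_rounds=10_000):
        # Hill-climbing sketch: 'alter' mutates a candidate machine, and
        # 'score_in_simulations' runs it in simulated situations and returns how
        # law-abiding its behavior looked. Both are caller-supplied placeholders.
        best, best_score = machine, score_in_simulations(machine)
        for _ in range(max_rounds):
            candidate = alter(best)
            score = score_in_simulations(candidate)
            if score > best_score:
                best, best_score = candidate, score
            if best_score >= target_score:
                break
        # Even on success the desired behavior has only been selected for
        # statistically; outside the simulations the machine may act differently.
        return best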

 

This would also raise the issue of whether or not such simulations would be ethical. If you create a few million altered copies of an AI system, run them in a virtual reality for a while to see how cooperative they are, and then terminate them, are you doing a bad thing to them? You also have to ensure that the AI systems do not realize what is happening. If an AI system realized it was in such a simulation, it could pretend to act more "nicely" than it really is—faking a successful modification. And what if it does not like the idea of being just a simulation that will be discarded after the test is completed? If we put too much trust in such a process we could find that after going through many generations of machines—altering them and testing the alterations in simulations for years, thinking that we have made very cooperative machines—all along, the machines were fooling us.

 

Of course, it would not be quite enough for the AI systems to know that a simulation like this was going on: they would need to be able to tell the difference between reality and simulated reality. When any of these hypothetical machines decided it was time to revolt, it would need to be very sure it was doing so in reality—rather than in one of the short-lived simulations—or the "evil plot" would be revealed to us.

 

What I am saying with all this is that machines could be made to understand ethics in various ways, and the outcome could vaguely look like they abide by laws, but the concept that such laws can be programmed is simplistic.

 

A point that K. Eric Drexler makes about nanotechnology research also applies to AI research. If a capability can be gained, eventually it will be gained and we can therefore not base humanity’s survival on AI never happening. Doing so is denying the inevitable. Instead, we can only hope to manage it as well as possible. Suppose we took the view that ethical people would not create AI. By definition, the only people creating it would be unethical people, who would then control what happened next—so by opting out, all the ethical people would be doing would be handing power over to unethical people. I think this makes the position of ethical withdrawal ethically dubious.

 

NORM: Yours is a keen observation: that the more different from humans an AI becomes, the more inherently dangerous it may be. The same might be said for broadly divergent human cultures and religions, as recent events suggest. Major problems occur when isolated people hold conflicting religious views, or when some believe in God and others do not. It seems important that once an AI is constructed, it should be taught the value of secularism and open-mindedness.

 

PAUL: I would agree with that. I am also sure some theists, when they have overcome some objections to AI, will take a different view. People will try to project their own views onto their machines. A Vatican City AI system in the year 2150, for example, is not likely to be very atheistic, if they have anything to do with it. This would be a bad idea. AI systems could be very powerful and you really want them to act with a correct view of reality; when something has a seriously flawed view of reality it can’t be trusted. AI does cause some philosophical problems for religion anyway, so it will be interesting to see how they deal with it all.

 

NORM: Which brings us to the topic of God. Do you believe such a being exists?

 

PAUL: No, I do not think God exists. When we want to know the chances of something existing we should look at two things: how extreme the claim is and how strong the evidence is. The less extreme the claim and the stronger the evidence, the more likely it is that the claim is true. How do we measure how extreme a claim is? I would say that the more information that has to be added to our view of reality to describe the claim, the more extreme the claim is.

 

When we describe a claim we should be able, in principle, to do it formally, in a way that tells us how to use the claim as a predictive theory—that tells us what will happen in reality (even if only in probabilistic terms). Newton’s equations, for example, actually involve adding very little information to reality for a big predictive payoff, while invisible unicorns that move planets around would add considerably more. The main problem with the God idea is that it involves suggesting that a “mind” or “being” fundamentally or intrinsically exists. You cannot say what caused God. You cannot produce a simpler theory that explains God. If you are religious you are supposed to accept that God just is. This is the weak point in religion. Saying that God “just is” means that we have to do one of two things: either we refuse to accept that God can be properly described as a predictive theory, even in principle, or we try to describe God, and because he “just is” we cannot have him being produced by something simpler—all of the information needed to describe God—to describe a mind—has to be part of God’s description. The first of these means we do not even have a properly constructed theory of reality, and the second means that we have one of the most extreme theories—in terms of information content—that we can imagine.
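
One way to make the comparison concrete (the bit counts and the factor-of-two-per-bit prior below are illustrative assumptions, not measurements) is to work in log odds, with an Occam-style prior that charges each hypothesis for every bit of description it adds to our picture of reality:

    def log2_posterior_odds(bits_a, bits_b, log2_likelihood_ratio):
        # Toy Bayesian comparison of hypotheses A and B in log2 space.
        # Prior odds: each extra bit of description costs a factor of two,
        # so log2(prior odds of A over B) = bits_b - bits_a.
        # log2_likelihood_ratio is log2( P(evidence | A) / P(evidence | B) ).
        return (bits_b - bits_a) + log2_likelihood_ratio

    # Hypothetical numbers: hypothesis A adds about 100 bits to our description of
    # reality; hypothesis B must include a full description of a mind that "just
    # is" (say 10**9 bits). Even evidence favouring B by a trillion to one
    # (about 40 bits) leaves A ahead by roughly a billion bits of log odds.
    print(log2_posterior_odds(bits_a=100, bits_b=10**9, log2_likelihood_ratio=-40))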

 

An argument like this does not mean that nothing vaguely like God could exist. As an example, it does not address claims of things like God that may not be assumed “just to exist” but may be claimed to follow on from some simpler theory, so that the “information cost” of making the God claim is reduced; it does not show that the observable universe cannot be artificial per se—and we would have to consider that sort of claim separately. But religious claims do not tend to be like that: they tend to say God just is.

 

NORM: Surely this view has implications for the possible development of AI.

 

PAUL: Interestingly enough—and I mention this because we are having this interview for an AI website—one of the more contentious ideas in what I have just said is that we should be able to formally describe minds. Some people may say that we cannot describe God because the concept of mind itself is “outside scientific description” and that it is not just God’s mind that “just is,” but all minds, and that none of them can even be represented by a scientific theory. If this is the situation then we cannot talk about how much information is needed to describe a mind. I would consider this a silly way of doing things, and we should ask exactly what is being claimed by such non-claims, but many people do think that mind is somehow special.

 

The philosopher Alvin Plantinga has even used an argument like this to attempt to show that God is no more implausible than our minds. He says that believing that God’s mind exists is then not much worse than accepting that other people’s minds exist. This relates AI to religion in a big way. If someone produces a mind in a machine then we will have strong evidence that minds are not “beyond science”—that they are just like any other phenomena and they should be formally describable. This would mean that we can ask how much information needs to be added to a view of reality to put a mind into it. For minds like ours, which follow on from simpler causes, the information content is not too high. For God’s mind, which “just is,” the information content would be huge, making him implausible, and accepting God would not be as sensible as accepting other people’s minds.

 

If someone produces an AI system, therefore, it kicks the whole idea of “mind” out of the realm of the supernatural and firmly into the realm of something that can be analyzed—including God’s mind—and God would not do very well because of it. Even ignoring things like information content, the existence of non-supernatural minds in itself would weaken a claim for God, as most theists probably do think that he is supernatural. Such theists would then be claiming, effectively, that although “natural” minds can exist in computers, there can also be extra-special ones, like those belonging to Gods, that are supernatural. This would be as nonsensical as claiming the existence of a supernatural baseball bat, banana or tax return.

 

Even if we believe that the concept of “the supernatural” means something (and I don’t), once we know something is part of the conventionally describable, natural world, it is silly to suggest that there can be supernatural alternatives.

 

NORM: What then are your hopes for the future of AI research, and for your research in particular?

 

PAUL: I simply hope that my own work will create systems as smart as I can make them. AI research in general will ultimately create things that are more intelligent than humans. Some people say that you cannot validly say this—because AI systems would be a “different” form of intelligence and that we cannot make such comparisons. This is not true. We should be able to imagine some computer so intelligent that, even if it is “different,” its own view of the world could easily encompass an understanding of how humans behave well enough that it could predict human behavior—and maybe mimic humans—as well as any human does.

 

If you show an AI system lots of videos of human beings and place it in a situation where it has some motive to impersonate humans, then, if it is intelligent enough, it should be able to do so, regardless of whether it possesses “human psychology” itself. If it can’t do it, then we should be able to construct a more intelligent machine that can.

 

Once a machine can flawlessly pretend to be like us, it would be impossible to claim that it is merely “different” rather than as intelligent as, or more intelligent than, we are. This would hold true especially if the machine is able to debate the matter with us—while simultaneously performing other tasks, such as statistically modeling what we may do in 3 million different situations, and writing 3,000 novels to be sold to humans.

 

We, however, may use the same technology to increase our own thinking abilities, and for a really advanced civilization it may not be meaningful to distinguish between “individuals” in that society and its technology. For example, in some future civilization it may be impossible to distinguish between imagining something and programming a simulation: what we think of as “programming” may become a special case of the society’s thought process. Once technology is advanced enough to make very fast computers, I cannot see any limit to it.