The potential of artificial intelligence has always been tempered by the limits of computation, a notion that dates back at least to the 1930s work of Alan Turing and his famous thought experiment that gave us the abstract Turing machines.
Jonathan Waskan is a professor of philosophy - and the Beckman Institute's lone philosopher - but his perspective on the computational issues surrounding artificial intelligence (AI) is something computer scientists, cognitive scientists, and AI researchers may want to consider.
Waskan is an Assistant Professor of Philosophy at the University of Illinois and a member of the Cognitive Science group at the Beckman Institute. Some may not know, or think it unusual, that Beckman has a philosopher as one of its faculty members. But Waskan's Ph.D. from a unique program at Washington University in St. Louis that combines philosophy, neuroscience, and psychology attests to his interdisciplinary credentials. At Illinois his work focuses on the philosophy of cognitive science, including issues that apply to artificial intelligence.
In his writings, especially his 2006 book Models and Cognition (MIT Press), Waskan delves into the intersection of philosophy, neuroscience, and artificial intelligence. He says the logic metaphor in philosophy is the proposal that "the way we reason is very similar to what we do when we construct formal logic proofs," i.e., deducing conclusions from propositions, and that the metaphor also applies to the workings of computers.
"One of the nice things about computers is that they are able to implement that kind of process, to mechanize that sort of formal reasoning," Waskan said.
Creating a truly effective artificial intelligence system supported by logic-based computing, however, runs into problems that Waskan says are probably "insurmountable." He offers a different approach - one that looks to the kind of computer models used by hurricane predictors and systems designers.
Turing, working in 1936 before the advent of the modern computer, explored the limits of computation through abstract computational devices known as Turing machines. These machines could be used to simulate the logic of computers; the mathematical abstractions later provided clues for computer scientists who, equipped with newer algorithms and ever-increasing processing power, started to contemplate the possibility of creating artificial intelligence systems.
"In the early days of artificial intelligence, a lot of work was being done trying to figure out how we could use this fact about computers to model human cognition," Waskan said. "People were trying to get systems to engage in simple, practical reasoning in real-world environments. Like if a system wants a glass of water you want it to be able to figure out, for example, a situation where it can get a glass of water in its claw, or whatever. You have to give the system a bunch of rules and each of these rules has to have a huge number of qualifications."
As Waskan wrote in Models and Cognition, the huge number of qualifications needed to compute all the variables involved in a potential task makes the qualification problem an overwhelming one for logic-based AI systems: "... in order to embody what we know about the consequences of alterations to the world, not only would an infinite number of rules be required, but each rule would also have to be qualified in a seemingly infinite number of ways."
The qualification problem is but one part of an overall difficulty in getting a logic-based system to respond to all the possible alterations to a situation.
"It's impossible to give a rule-based system, a logic-based system all of the knowledge that we have about alterations," Waskan said.
This is what is known as the "frame problem," which has been defined as "the challenge of getting a representational system to predict what will change and what will stay the same following alterations to the state of the world" (Bechtel, Abrahamsen & Graham 1998).
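The qualification and frame problems can be made concrete with a toy sketch. The code below is purely illustrative and hypothetical (it is not from Waskan's work or any real AI system): a rule-based planner whose rules must be qualified for every exception, and whose "frame axioms" simply assert that everything not mentioned stays the same.

```python
# Hypothetical toy example of a rule-based (sentential) system.

def can_grasp(cup):
    """Rule: the system can grasp a cup... unless some qualification applies."""
    qualifications = [
        cup.get("glued_down"),   # each exception needs its own explicit clause
        cup.get("too_hot"),
        cup.get("behind_glass"),
        # ...in practice this list never ends: the qualification problem
    ]
    return not any(qualifications)

def apply_action(state, action):
    """The frame problem: after an action, which facts change, which persist?"""
    if action == "move_cup":
        state = dict(state, cup_location="table")
        # Every other fact persists only because the axioms say so.
        # Any consequence not written down (the water sloshes? the shelf
        # tips?) is simply invisible to the system.
    return state

state = {"cup_location": "shelf", "water_level": "full"}
new_state = apply_action(state, "move_cup")
print(new_state)
```

The point of the sketch is what it *cannot* do: every consequence the system can report had to be hand-written in advance, which is exactly the burden Waskan argues becomes insurmountable for realistic scenarios.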
In Models and Cognition, Waskan writes that this is one of the great shortcomings of logic-based systems, even for a task as relatively simple as planning your day, the example he uses in the book.
"From an engineering standpoint," he writes, "the problem that quickly arises is that no matter how many alteration/consequence pairs one builds into the knowledge base of one's model, there will generally be many more that have been overlooked. ... In this case, we are still dealing with a fairly simple physical system; it is far simpler, in fact, than the scenarios that humans generally confront. Where more realistic systems are concerned, the challenge of specifying the consequences of each possible alteration looks to be insurmountable."
Waskan then challenges the notion of using logic-based systems to fashion a viable AI system, writing, "Our knowledge of the consequences of worldly alterations is, however, immeasurably more complex than this. In order to embody what the average human knows about the consequences of worldly alterations, a frame axiom system would have to contain rules specifying how countless objects, both familiar and novel, will behave relative to one another following each of the consequently infinite number of possible alterations. What started off as an engineering problem therefore gives way to serious a priori concerns about the viability of the logic metaphor itself, for no finite set of frame axioms would ever suffice to express what we know about the way the world will change following various alterations."
While there are limits to what even highly advanced computers can do, a human brain is able to account for millions of potential permutations in a given scenario. Even with greatly improved processing power, computers based on sentential reasoning, the "if this happens, do this" type of logic, cannot match the ability of our brains to deal with alterations in a given situation. In his work Waskan looks to the power of non-sentential representations, similar to scale models, rather than sentential reasoning, to contend with such alterations. He believes these types of computer models offer a much more promising path to viable artificial intelligence systems.
Waskan said neuroscientists know that our brains harbor representational models of everything from tabletops to apples, and can apply rules to those representations in order to predict consequences of alterations, such as the placement of an apple on a tabletop signaling it's OK to eat the apple.
"We know that brains can harbor these sorts of representations and that's the old argument: computers can do this logic-based processing and if a computer can do it, the brain can do it as well," he said. "Now we have computers that harbor models, like the computers over at the National Center for Supercomputing Applications that are harboring really complicated models of physical systems: weather systems, geological systems, automobiles, anything you like. These models are a lot like scale models in that once you've constructed the model you can manipulate it in countless ways in order to predict the consequences of alterations to the physical system."
Representational models can be used to infer consequences from alterations, rather than depending on a multitude of rules required to adapt to millions of possibilities. Waskan says that the representational computer models "don't suffer from the frame problem because they're inferentially productive.
"You don't have to build into the system beforehand what you want to know about the consequences of alterations. You just construct the system and then you let the consequences play out. You let the model produce those consequences for you. Scale models are inferentially productive in a way that these logic models are not."
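The contrast Waskan draws can be sketched in code. In this hypothetical example (an assumption for illustration, not code from Waskan or NCSA), a tiny physics simulation answers questions nobody wrote a rule for: you alter the model, run it forward, and the consequences play out.

```python
# Hypothetical sketch of an "inferentially productive" model: a simple
# forward simulation of a falling object. No alteration/consequence pairs
# are listed anywhere; consequences emerge from stepping the model.

def simulate_fall(height_m, dt=0.01, g=9.81):
    """Drop an object from height_m and step the physics until it lands.

    Returns the approximate time to impact in seconds.
    """
    h, v, t = height_m, 0.0, 0.0
    while h > 0:
        v += g * dt   # gravity accelerates the object each step
        h -= v * dt   # position follows velocity
        t += dt
    return round(t, 2)

# Alter the scenario (a new height) and simply let the model produce the
# consequence, rather than looking it up in a hand-built rule base.
print(simulate_fall(1.0))
print(simulate_fall(20.0))
```

The design point mirrors Waskan's claim: the model builder specifies only the mechanism (here, gravity), and any of countless alterations can then be posed to the model after the fact, which is why such systems sidestep the frame problem.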
Waskan's writings on artificial intelligence are one part of his interest in bringing together neuroscience, psychology, and philosophy. He believes that confluence will add to our knowledge in all three areas.
"In addition to explaining our limitless knowledge of the consequences of alterations, one thing I'm hoping to accomplish is to make sense for the first time of how it could be that the brain could harbor pictures or scale model-like representations," he said. "Philosophers since the beginning have been wondering how it is that the brain can harbor images or scale models. If you look in the brain you just see meat, you don't see any models. What could it mean for the brain to harbor pictures or scale model-like representations? There are a whole bunch of researchers in cognitive science who operate on the assumption that the brain does harbor model-like representations."
Waskan said that showing for the first time that the brain "could harbor such representations would obviously be of some benefit to those researchers."
It could also add to the literature in an area of philosophy known as epistemology, which explores the nature of knowledge.
"What sort of things are we meant to know about? To answer that question philosophers from the very beginning have offered theories about how the mind works," Waskan said. "They are trying to figure out what sort of instrument is the mind, such that it can lay hold of knowledge about the world."
The advances in cognitive neuroscience, especially in technological areas such as imaging techniques, may one day provide answers to questions that were once the domain of philosophy.
"With cognitive science just now developing, starting really in the 50s, it was one of the last sciences to develop," Waskan said. "Now that we are gaining knowledge of this mechanism, this strip of knowledge, it can help us figure out some of these answers to epistemological questions."
The knowledge gained goes both ways, Waskan says.
"As far as philosophers helping science I think the answer to that is definitely yes, as well," he said, adding that philosophy could help clarify neuroscience issues such as how the brain harbors scale models.
"The problem isn't really an empirical one, it's a conceptual one," Waskan said. "The frame problem is really a conceptual problem itself. How could you conceivably get a system to embody our boundless knowledge? You really need a conceptual leap of sorts to try to be able to figure out how to answer that question. It helps if you have people who are sort of in tune with developments in lots of different fields and not focused on one. To be able to figure out what's going on with people modeling geological systems might help to answer cognitive science problems."
That's an interdisciplinary approach that fits in well at the Beckman Institute. Waskan's position at the University of Illinois and at Beckman came about because of his unique background melding philosophy and neuroscience, and because of a timely faculty opening. He was looking for a professorship and the Institute was looking to carry on its tradition of having a resident philosopher.
"I'm the third person to sit in this seat," Waskan said.
Waskan said that while the Ph.D. program he completed at Washington is still somewhat rare, it is "becoming less unique. A lot of deans are very interested in interdisciplinary work so we are finding there are more philosophers doing work in cognitive science.
"I didn't know what my research would ultimately be on when I started the program," Waskan added. "But I knew I was interested in the intersection of philosophy and the cognitive sciences because it seemed like a lot of the questions they were asking (in philosophy) could possibly be answered with cognitive science."
And Waskan just might have some answers for those in the cognitive science and computer science fields.
This article is part of the Summer 2007 Synergy Issue, a publication of the Communications Office of the Beckman Institute.