Brute Force

Thinking some more about the Logic Games project, I've decided that my first attempt is going to be brute force. Each of the statements in the logic game text can be converted into a series of Boolean rules (rules whose conditions evaluate to either true or false), and then I can build a rule engine that iteratively generates new facts until the question has been answered.
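To make that concrete, here's a minimal sketch in Ruby of the kind of engine I have in mind. The two facts and the single transitivity rule below are invented for illustration; real game statements would be encoded the same way:

    require 'set'

    # A rule pairs a Boolean condition (a test against the current facts)
    # with a conclusion (the fact to assert when the condition is true).
    Rule = Struct.new(:condition, :conclusion)

    # Forward-chaining engine: keep firing rules until a full pass
    # produces no new facts, then return everything we know.
    def run(rules, facts)
      loop do
        new_facts = rules.
          select { |rule| rule.condition.call(facts) }.
          map(&:conclusion).
          reject { |fact| facts.include?(fact) }
        break if new_facts.empty?
        facts.merge(new_facts)
      end
      facts
    end

    # Hypothetical encoding of two game statements, "X is before Y" and
    # "Y is before Z", plus one transitivity rule deriving "X is before Z".
    facts = Set.new([[:before, :x, :y], [:before, :y, :z]])
    rules = [
      Rule.new(
        ->(f) { f.include?([:before, :x, :y]) && f.include?([:before, :y, :z]) },
        [:before, :x, :z]
      )
    ]

    p run(rules, facts).to_a
    # => [[:before, :x, :y], [:before, :y, :z], [:before, :x, :z]]

The engine is just forward chaining: fire every rule whose condition holds against the current facts, collect anything new, and repeat until a pass adds nothing.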

Here's an example, taken from another LSAT question:

Logic Games Project

Lately I've become interested in knowledge representation as it applies to computer science. I'm no expert on the subject. The opposite, really. But interest is a good place to start, I guess. So, I wanted to start a project that would give me a good excuse to really explore the topic.

I've been following Ola Bini's blog, and he's had a really interesting series of posts where he has been porting the code from Peter Norvig's book, Paradigms of Artificial Intelligence Programming, from Common Lisp to Ruby. I've been sitting on that book for a couple of years now, and every time I pick it up, my lousy Lisp skills get in the way, and I eventually drop it. But having Bini's Ruby code to compare it with has re-inspired me to pick my way through it again.

One of the things that really stands out in early AI research is how intelligence gets defined. The Turing Test is one of the best-known intelligence indicators. It goes like this: a human participates in two conversations, one with another human and the other with a computer program. If the person can't tell the difference between the two, then the computer program has demonstrated intelligence.

There's a competition every year, the Loebner Prize, where programmers try to pass the Turing Test. No one has won yet, but people are coming close. What I really find interesting, though, is how people are coming close. If you look at some of the code that is producing human-like conversations, you find that a lot of it doesn't seem "intelligent" at all, at least not in the way that we intuitively think of "intelligence": it has no understanding of the conversation. Take a look at Eliza, for example. Eliza is a very early (1966) attempt at passing the Turing Test. Eliza basically looks for patterns in the human's statements and maps those patterns to responses. As a simple example, you might say to Eliza, "I want a new car", and Eliza's response might be "What would it mean to you if you got a new car?". So "I want X" gets mapped to "What would it mean to you if you got X?".
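As a toy illustration (my own sketch in Ruby, not Weizenbaum's actual script), the whole trick boils down to a table of patterns mapped to response templates. The patterns here are invented for illustration:

    # Each pattern captures the "X" and splices it into a canned response.
    PATTERNS = {
      /\AI want (.+)\z/i => "What would it mean to you if you got %s?",
      /\AI am (.+)\z/i   => "How long have you been %s?",
      /\AMy (.+)\z/i     => "Tell me more about your %s."
    }

    def respond(input)
      PATTERNS.each do |pattern, template|
        match = pattern.match(input.strip)
        return format(template, match[1]) if match
      end
      "Please go on."  # stock reply when nothing matches
    end

    puts respond("I want a new car")
    # => What would it mean to you if you got a new car?

The real program also swapped pronouns ("my" becomes "your") and ranked keywords by priority, but the mechanism is the same: match and substitute, with no model of what a car is.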

This is obviously a very simple example, and Eliza is capable of a lot more complexity, but it is still pattern matching, not understanding. Besides, Eliza is 40 years old; a lot has happened in AI in the last 40 years. Still, thinking about this approach to passing the Turing Test got me thinking about other measures of intelligence, and whether it would be possible to solve various intelligence tests programmatically without actually achieving anything the average person would intuitively recognize as "intelligence". So here's what I landed on: I want to try to solve certain types of standardized test questions, specifically the type of logic games you find on the LSAT and GRE.

Here's an example (pulled from an old LSAT):

Welcome to Jason's Blog

Hello! Welcome to Jason's blog!