AGI and System 2 learning
You may be worried that AI will come and replace all the jobs. While I have no consolation to offer, I’ll show you a riddle that you can easily answer but that large language models find difficult. Hope it’ll cheer you up in these trying times.
The Winograd schema
The Winograd schema challenge is a series of pronoun-resolution questions whose answers are obvious to us but difficult for even modern AI models to get right. Here’s an example:
The ball broke the table because it was made of styrofoam. What does “it” refer to?
If you’re wondering how GPT does with the styrofoam riddle, you can paste it into the OpenAI Playground chat, though I’ll save you a few clicks. With the temperature set to zero, GPT-3.5 incorrectly says the answer is “ball”, whereas GPT-4 correctly says “table”. What a riddle!
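If you’d rather script the check than click around the Playground, here’s a minimal sketch. It assumes the current openai Python SDK, an OPENAI_API_KEY in your environment, and that the “gpt-3.5-turbo” and “gpt-4” model names are still being served; adjust as needed.

```python
# Minimal sketch: ask both models the styrofoam riddle at temperature 0.
# Assumes the openai Python SDK (v1-style client) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

riddle = ('The ball broke the table because it was made of styrofoam. '
          'What does "it" refer to?')

for model in ("gpt-3.5-turbo", "gpt-4"):  # model names may change over time
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": riddle}],
        temperature=0,  # greedy-ish decoding, as in the Playground test
    )
    print(f"{model}: {response.choices[0].message.content}")
```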
You can come up with your own Winograd tests, swap “styrofoam” with “steel” to see how the model adapts, or just look at the GPT-4 whitepaper, where OpenAI reports an average Winograd-style performance of 87.5%. While it’s impressive that GPT-4 performs at around the 90th percentile on the LSAT and the bar exam, I find it more interesting that its performance is far from 100% on Winograd questions, which are much easier for you and me.
If this AI blunder surprised you, try to explain in your own words why it must be the table that’s made out of styrofoam. What you did in the back of your head was combine two key facts (chained together in the sketch after this list):
- In general, styrofoam is more brittle than steel (or more brittle than the average material).
- In general, things that are brittle tend to break.
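Here is the sketch promised above. It is my own toy illustration, not anything from a real system: the brittleness scores and the consistency check are invented, but it shows how chaining the two facts picks out the right referent, and what happens when you swap the material for steel:

```python
# Toy sketch: chain fact 1 ("styrofoam is brittle") with fact 2 ("brittle
# things break") and keep whichever reading of "it" stays consistent with
# the event "the ball broke the table". All values are made up.
BRITTLENESS = {"styrofoam": 0.9, "steel": 0.1}   # fact 1, crudely quantified

def breaks_easily(material: str) -> bool:        # fact 2
    return BRITTLENESS[material] > 0.5

def consistent(referent: str, material: str = "styrofoam") -> bool:
    """Is '<referent> was made of <material>' a coherent explanation?"""
    if referent == "table":   # the table is the thing that broke
        return breaks_easily(material)
    if referent == "ball":    # the ball is the thing that did the breaking
        return not breaks_easily(material)
    return False

print([r for r in ("ball", "table") if consistent(r)])           # ['table']
print([r for r in ("ball", "table") if consistent(r, "steel")])  # ['ball']
```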
LLMs don’t take these steps. They work much like predictive text on a mobile keyboard: given the previous words in the sequence, they produce words that have a high probability of occurring next. As a result, GPT-3.5 may be giving the incorrect answer partly because balls made out of styrofoam are somewhat more common in the training data than tables made out of styrofoam. That makes sense; I can’t remember the last time I saw a table made out of styrofoam. It may be possible to teach the model the correct answer; in fact, we can already see that GPT-4 has corrected this particular mistake. However, our world is inundated with Winograd-like realities and micro-facts that are constantly evolving and not always written down, so I doubt this fix will scale.
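To make the predictive-text analogy concrete, here is a sketch that inspects next-word probabilities directly. It uses GPT-2 via the transformers library purely because it is small and open; it is far weaker than GPT-3.5, so the exact numbers won’t reproduce the blunder, but the mechanism it shows, picking whichever word is more probable next, is the same.

```python
# Sketch of the predictive-text view using GPT-2 (small, open, and much weaker
# than GPT-3.5/4; the numbers only illustrate the mechanism).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = ('Q: The ball broke the table because it was made of styrofoam. '
          'What does "it" refer to?\nA: The')
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]       # scores for the very next token
probs = torch.softmax(logits, dim=-1)

# The model's "answer" is simply whichever continuation is more probable.
for word in (" ball", " table"):
    token_id = tokenizer.encode(word)[0]         # first BPE piece of the word
    print(f"P({word.strip()!r} next) = {probs[token_id].item():.4f}")
```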
In contrast, look at the way you answered the Winograd question. You quickly came up with those two key facts, perhaps visualized the two scenarios, mentally swapping the materials, before reaching a deduction. A styrofoam table is a very obscure thing to exist, yet you got it right because it’s the only explanation for the riddle. Had you been given a question that calls for an even more obscure explanation, you would still have answered it correctly, because your approach does something beyond word prediction: it involves reasoning.
Drawing from Daniel Kahneman’s Thinking, Fast and Slow, I’d like to introduce you to the two forms of thinking: System 1 and System 2.
System 2 reasoning
Kahneman says there are two forms of thinking: fast and slow. The fast one is called System 1 and is quick to jump to conclusions, whereas you engage System 2 as you work through a problem that requires a little extra thought. The Wikipedia page for the book has several examples.
Some examples of what System 1 can do:
- determine that an object is at a greater distance than another
- localize the source of a specific sound
- complete the phrase “war and …”
- display disgust when seeing a gruesome image
- solve 2+2=?
- read text on a billboard
- drive a car on an empty road
The third bullet point should look familiar by now. When we were talking about predictive text, we described LLMs as essentially machines that complete phrases.
Some examples of what System 2 can do:
- prepare yourself for the start of a sprint
- direct your attention towards the clowns at the circus
- direct your attention towards someone at a loud party
- look for the woman with the grey hair
- try to recognize a sound
- sustain a faster-than-normal walking rate
- determine the appropriateness of a particular behavior in a social setting
The effort involved in System 2 should feel similar to the reasoning steps you employed to solve the Winograd question.
I hope I’ve convinced you that LLMs are capable of System 1 thinking but lacking in the System 2 department. While each GPT release gets bigger, becomes more capable, and learns to approximate some System 2 reasoning, my contrarian argument against this type of growth is the following:
An LLM must be paired with an algorithm that performs System 2 learning in order to reach AGI.
There is no consensus on what such an algorithm will look like, yet prominent researchers have been thinking about System 2 in different forms. One common thread is that the proposals mostly involve a minimally constrained, editable graph structure, like a knowledge graph. Another is that these System 2 graphs are often explainable, unlike their System 1 siblings. Because humans can explain their reasoning step by step, it doesn’t surprise me that the same capability should be expected of a reasoning algorithm. One of my favorite examples of System 2 is Judea Pearl’s do-calculus. I especially like how it naturally extends Bayesian statistics.
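As a toy illustration of the intervention idea behind do-calculus (the three-variable graph and every number below are entirely made up), here is how conditioning on an observation differs from forcing a variable’s value, via the backdoor adjustment:

```python
# Toy model (all numbers invented): Z -> X, Z -> Y, X -> Y, with Z a confounder.
p_z = {1: 0.3, 0: 0.7}                        # P(Z = z)
p_x1_given_z = {1: 0.9, 0: 0.2}               # P(X = 1 | Z = z)
p_y1_given_xz = {(1, 1): 0.8, (1, 0): 0.6,    # P(Y = 1 | X = x, Z = z)
                 (0, 1): 0.5, (0, 0): 0.1}

def p_y1_given_x1():
    """Observation: P(Y=1 | X=1) = sum_z P(Y=1 | X=1, z) * P(z | X=1)."""
    joint = {z: p_x1_given_z[z] * p_z[z] for z in (0, 1)}   # P(X=1, Z=z)
    total = sum(joint.values())                             # P(X=1)
    return sum(p_y1_given_xz[(1, z)] * joint[z] / total for z in (0, 1))

def p_y1_do_x1():
    """Intervention: P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, z) * P(z)."""
    return sum(p_y1_given_xz[(1, z)] * p_z[z] for z in (0, 1))

print(f"P(Y=1 | X=1)     = {p_y1_given_x1():.3f}")  # ~0.732, inflated by Z
print(f"P(Y=1 | do(X=1)) = {p_y1_do_x1():.3f}")     # 0.660, the causal effect
```

The whole calculation is readable off the graph, which is the explainability point above: each step cites an edge or a probability table rather than an opaque weight matrix.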
That being said, I believe almost any System 2 algorithm could thrive in this space right now, because no reasoning algorithm is currently deployed alongside an LLM. The development effort has been asymmetric: System 1 AI has gotten much stronger than System 2 AI, and it’s time for System 2 to catch up.
I had a couple of System 2 algorithm ideas based on converting a sentence into a small graph by parsing its grammar, but I wasn’t sure how to pass a graph to an LLM. When I shared the idea with my friend Ege, he engineered a very nice prompt that proves out the main idea without having to create the graph. We iterated on this prompt, and it makes the model break down its reasoning steps much the way you did when you answered the Winograd question. And yes, using this prompt we got GPT-3.5 to correct itself to “table” on the styrofoam question. That’ll be another post.
Until then, cheers!