The intentional stance
Dan Dennett explained that it began as a survival mechanism. It’s important to predict how someone else is going to behave. That tiger might be a threat; that person from the next village might have something to offer.
If we simply wait and see, we might encounter an unwelcome or even fatal surprise. The shortcut that the intentional stance offers us is, “if I were them, I might have this in mind.” Assuming intent doesn’t always work, but it works often enough that all humans embrace it.
There’s the physical stance (a rock headed toward a window is probably going to break it) and the design stance (this ATM is supposed to dispense money, let’s look for the slot). But the most useful, and now problematic, shortcut is imagining that others are imagining.
There used to be a chicken in an arcade in New York that played tic-tac-toe. The best way to engage with the chicken game was to imagine that the chicken had goals and strategies and that it was ‘hoping’ you would go there, not there.
Of course, chickens don’t do any hoping, any more than chess computers are trying to get you to fall into a trap when they set up an en passant. But we take the stance because it’s useful. It’s not an accurate portrayal of the state of the physical entity, but it might be a useful way to make predictions.
There’s a certain sort of empathy here, extending ourselves to another entity and imagining that it has intent. But there’s also a lack of empathy, because we assume that the entity is just like us… but also a chicken.
The challenge kicks in when our predictions of agency and intent don’t match up with what happens next.
AI certainly seems like it has earned both a design and an intentional stance from us. Even AI researchers treat their interactions with a working LLM as if they’re talking to a real person, perhaps a slightly unbalanced one, but a person nonetheless.
The intentional stance brings rights and responsibilities, though. We don’t treat infants as though they want things the way an adult might, which makes it easier to live with their crying. Successful dog trainers don’t imagine that dogs are humans with four legs; they boil behavior down to inputs and outputs, and use operant conditioning, not reasoning, to change it.
Every day, millions of people are joining the early adopters who are giving AI systems the benefit of the doubt, a stance of intent and agency. But it’s an illusion: the AI isn’t ready for rights and can’t take responsibility.
The collision between what we believe and what will happen is going to be significant, and we’re not even sure how to talk about it.
The intentional stance is often useful, but it’s not always accurate. When it stops being useful, we need a different model for understanding what we’re dealing with and what to expect from it.