

Reasons to reason

[Editor’s note: Dan Sperber was interviewed by Charles Stafford in Budapest, Hungary, on March 26, 2013.]

AOTC: I suppose most of us think that humans reason in order to figure out how things work, or in order to take good decisions, and so on. But why do you think we reason?

DAN SPERBER: A classical view of human reason is that reason is what makes us superior to other animals. It’s a cognitive capacity that allows us to go beyond our senses, perceptions, instincts, intuitions, and so on, in order to arrive at better knowledge and better decisions. And so it’s seen as a tool for individual thinking, allowing indeed our thinking to be superior to that of all other animals.

But it’s not the only possible view one can take of it. Hugo Mercier and I have been developing a different approach to reasoning, arguing that it’s a social activity fundamentally.[1] So let me try to explain the argument in very simple terms.

Our cognitive abilities, those of humans and other animals, have evolved over millions of years and are typically adapted to the kinds of problems found in our ancestral environment. Of course, the environments in which we live have been very much modified by human beings themselves, faster than biology could adapt to. But still we have rich cognitive abilities – indeed richer than those of other animals, because humans have evolved cognitive abilities to the highest level among earthly animals. And these capacities, which do not involve reasoning – in the sense of a conscious process of going through reasons – but take much more the form of intuitive inferences, coming to a conclusion without consciously going through the steps, things seeming to you to be obviously the way you interpret them – these capacities on the whole are not bad. And we rely on them.


Dan Sperber

So let’s just take a few examples. We share many of our inferential abilities with other animals – e.g. the way we move around in space implies some understanding of the geometry of space, of gravity, of the kind of physics that allows us not to bump into things, not to fall all the time, to expend our efforts in an efficient manner, and so on. We interact with lots of living kinds – plants and animals – some of them provide us with food, others may be dangerous for us, and we are quite spontaneously good at recognising plants and animals, identifying them and having appropriate expectations, hopes and fears when we see them. Of course this develops, I’m not saying this is simply innate; it involves acquisition of knowledge, and much of this happens in social contexts.

But still the kind of knowledge and thinking that is involved there needn’t be really reflective, it needn’t be reasoning – if by ‘reasoning’ we mean being aware of reasons while you arrive at a certain conclusion. Things seem obvious. So much of our thinking consists in attending to things in a manner where what follows follows spontaneously, as a matter of course. And that’s made possible because of rich cognitive abilities which are part evolved, part developed.

Our basic cognitive abilities are there to serve us. They are part of us. They don’t have interests separate from our own, and this is true of other animals as well. And so in a way these abilities are reliable as far as they go. Of course, there are limits to their reliability, because they’re not perfect. But the notion that by having another faculty – reasoning, as it might be – to double-check them, we would correct the mistakes of our intuitions is not that obvious, because that higher-order faculty would itself be imperfect too. There’s no reason we would suddenly be endowed, either through learning or by some kind of special endowment that would make human beings different from other animals, with a capacity that is more reliable than our intuitive abilities and could double-check them and go way beyond them.

On the other hand, what I’ve described so far is only one part of the thinking that we do. Much of the information that we get, much of the competencies and skills we acquire – most of them, one might quite sensibly argue – come from others. Not from direct experience of the world through trial and error, but from being informed by others, communicating with others who show us how to do things, teach us, and so on. And so human beings – and that’s the first major difference with other species – depend massively on communication from others.

To develop, to become a competent adult – there are a few stories of children who survive in the woods, but they don’t become normal adults on their own, there’s just no way – we depend completely on learning a lot from others. But we also depend on information provided by others for everything we do every day. In modern life, we read newspapers, we go on the web, we listen to television, we talk to our friends, face to face or on the telephone. You really have to think of the trapper on his own in the woods for six months to think of somebody who is not drawing on new social knowledge that is provided every day. But even that guy has learned skills and competencies, he couldn’t invent them on his own, he’s typically not a child abandoned in the woods. He is a very social animal.

Others are a wonderful source of information. We get so much from them. But then they have their own interests. Unlike my own senses, my own instincts that work for me, others have their own interests. So when they communicate it’s not a pure altruistic concern for our being well informed, it’s also to influence us or to manipulate us. So communication is in part cooperative, and because we are involved in cooperative ventures we have a real interest in sharing information, but it’s also in part competitive and in part manipulative.

To the same extent that you stand to benefit from communication with others, you run the risk of being manipulated and misinformed. Not necessarily lied to – many forms of misinformation are subtler than that, but they may still cause you to do what’s in the interests of others and not necessarily in your own. So there are good reasons to be attentive regarding the information that’s provided to you and sometimes to double-check it. This is where there is good reason to pay attention to the reliability of information.

On the whole, humans are probably more trustful than any other mammals. Sarah Hrdy’s book, Mothers and Others, starts by imagining a group of chimpanzees in an airplane. When humans fly together in commercial airplanes, they’re polite to each other, there’s no problem, and they land at the end without being frantic from having been obliged to sit next to 60 other people. Chimpanzees, in a few hours, would have killed each other, maimed each other; they would be in a state of total panic and frenzy. Human beings are remarkably cooperative, including with strangers.

But part of what makes this cooperation possible is also that we’re fairly vigilant about the intentions of others, their competence, their good-will, their honesty and so on. So there’s a basic low-level social vigilance. I’m not saying we’re paranoid, it’s just an awareness that people have a variety of motivations.

We have this competence of ‘epistemic vigilance’, which takes a lot of different forms culturally, of both trusting people and not trusting them – that is, knowing which people to trust, when, and how. This is something you find in every culture, but developed in quite different manners, and indeed trust is more fundamental, expected and justified in some cultural contexts than in others.

Take now a situation in which people tell you things you’re not going to accept on trust. It’s not that we just trust some people in everything or nothing; we trust lots of people on lots of things, nobody on everything, etc. So if people tell you things which, if they were true, would really be worth knowing, there are cases where the matter is too momentous to take on their authority alone. That puts a limit on the success of communication, because I, as a communicator, cannot persuade you of things just because I say so. And for you as the audience, you’re told things that, if true, would be nice to know, but the person telling you this cannot be taken as a reliable authority on the matter – and maybe nobody can. So how can we overcome this trust bottleneck?

This is where arguments come in. In presenting you with an argument, I show you that – given what you already believe, and what you accept from me on the basis of trust – there are further consequences that you should also accept, even if you would not accept them on the basis of trust. They follow from what you already accept. And once you see the relationship between what you already believe and these further consequences, you might think it more coherent to accept these conclusions than to deny or doubt them, which would mean revising premises you had good reasons to accept.

So reasoning on this view has argumentation aimed at persuasion as its main function. From the point of view of the communicator, it’s a way to convince people who would not accept what you say on trust. From the point of view of the audience, it’s a way to evaluate the arguments, the reasons that people give to you. Reasoning so understood is first and foremost a tool for communication.

And in a way that’s going back to the origins of the philosophy of reasoning. Go back to Aristotle and his predecessors: what motivated them to think about reasoning, logic and so on were highly social activities, mostly legal procedures and political debates. That’s where people developed rhetorical skills to convince others, and logic came as a branch of rhetoric; it involved the art of persuasion. So the social dimension was there originally in the study of reasoning, but got progressively lost. And if you go to Descartes you get the idea of reasoning as an individual thing which allows you to go on your own and ignore the others. His project was to ignore everything he had learned and reconstruct knowledge on his own, just on the basis of reasoning – which, in a way, is crazy.

AOTC: Could you say something about the empirical evidence in support of the argumentative theory of reasoning?

DS: Yes. In fact, part of what motivated us to study this is the fact that there are deep puzzles in the psychology of reasoning. If you take this Cartesian view of reasoning as a tool for coming to better decisions and beliefs – on your own – well, if that was the function we should be quite good at it. At least, it should not have features which are defeating that purpose. If something has a function, it may do it more or less well, but if it doesn’t do it well it will be because the task is too difficult and not because it goes in another direction. But reasoning seems to go in another direction.

Let me just mention one type of evidence that is striking, related to the so-called ‘confirmation bias’. People start with certain views, and when they think about them, and try to deepen them, typically what people do is they look just for arguments that will strengthen their ideas, and they ignore evidence that would cause them to revise them.  They seem to make logical mistakes that serve their own opinions. In theory, reasoning is a way to get better beliefs. Given that we have a lot of beliefs without reasoning, of course, what reasoning should do is allow us to double check, verify and go beyond what we already believe intuitively. But if reasoning is in the business of just confirming what we already believe, strengthening it, it works very much against that function. It’s not that it’s not performing well enough, it’s actually going against it!

If, on the other hand, you think of reasoning as being in the job of convincing others, then when you are the communicator, that’s exactly what you should do. You’re an advocate. If you pay an advocate to defend you in court, you don’t expect them to be balanced, to put forward the arguments for and against you, you expect them to pile up all the arguments they can on your side.

And that’s what we do when we argue. At that point, the confirmation bias makes sense. Instead of being a bug it becomes a feature, as the phrase goes. And then the predictions become more precise. If what I have just told you were the whole story then it wouldn’t make much sense. Put yourself in the shoes of the audience. Why should you be convinced by people arguing in a biased fashion? And if you’re not convinced, how could reasoning help convince others? The prediction however is not only that when you produce arguments you’ll have a “my side bias”, but also that when you’re evaluating the arguments of others you won’t be biased. Of course if you evaluate the arguments of others just in order to argue against them, you are still an advocate, and a biased one. But if I listen to your arguments in order to maybe learn from them, to arrive myself at a sound opinion, then as an evaluator I should be much more objective. And the experimental evidence is again going in that direction. In the literature, production and evaluation of arguments are often treated as equivalent, and, by the way, most of the studies are about production. But in evaluation – when you’re really careful to have a genuine evaluation task – people do not have the confirmation bias, they only have it in production.

That’s a very direct prediction of our theory, with a lot of empirical evidence, and no other theory makes that particular prediction. So we explain this confirmation bias – which in the literature is treated as a disease, why should it be so? And not only do we explain why it’s there, we also predict when it will be there and when it won’t. And we have confirmation of these predictions.

AOTC: So you’ve already said a few things related to what’s known as epistemic vigilance. I wonder if you could say something more about the real world situations in which this phenomenon can be seen and observed?

DS: You can see that at different levels, e.g. it emerges in children pretty young – about the same age that they start lying, which is around 4 or 5. They discover this fantastic possibility that they can mislead others, but also that they can be misled by others. So they start having this background awareness that not everything that’s being said is true. Vigilance is not the same thing as being distrustful, it’s just not automatically taking everything at face value.

Think of the relationship between partners. On the one hand, if they live together, they probably trust each other on a lot of things. But when it comes to fidelity they are vigilant, typically. This is an obvious case in which we may have an interest in misleading our partners, maybe even to protect our partners. And there are also lots of ways in which, without exactly lying, one is led to distort things, to avoid things which might make things harder. And the person who you’re both protecting and misleading in this way may be complicit in this. So you can get vigilance which might take a more dramatic form, as in jealousy, but then also a more subtle form, e.g. in which we pretend that it’s like this – “What’s the matter?”, “No, no, nothing’s the matter!” – and then I just let it go. But it’s not that I really believe it. And on other occasions I may feel it’s important to pry and find out what’s really the matter.

But if we look more at cultural and institutionalized forms of vigilance – certainly, there are really different styles of trust and mistrust across cultures. Where I did my fieldwork among the Dorze of Ethiopia, being truthful was the same as being a good person. And of course this went together with an expectation of trust. The basic explanation of misfortune was in terms of breach of taboos and if you’ve done something prohibited and some misfortune came, the only way to get things right was to publicly declare what you’ve done wrong. So indeed people had strong reason to speak the truth and generally did, and therefore typically trusted each other. Of course, I think there was vigilance, but it’s a culture where trust was highly valued and practiced.

Take, at the other extreme, the famous paper by Michael Gilsenan about lying in a Lebanese village, where not lying, telling things the way they are, is taken as a lack of sophistication. Of course, people there also have genuine, sincere, trustful relations with one another. But the default mode of communication is one where just saying things the way they are – what’s the sense of doing that?  So among brothers, on a variety of issues of common interest, you might communicate without thinking too much about how to present things. But as soon as you move to a larger social sphere there is a staging – and speech is a way of staging yourself – where lying is a perfectly normal and indeed sophisticated way of managing relationships.

Of course, this doesn’t exhaust the issue of trust and distrust in either society. In a trust-based society you can have huge treasons, and in a society where you never say things exactly the way you think they are you may end up having relationships which are nevertheless very trustful.  So you get cultural differences in epistemic vigilance, and there is a lot of evidence of this in the anthropological literature, but in my view it could be studied more directly and in a richer fashion than it has been.

But even within one culture it’s not that there is just a general style of trust – there are institutions, there are frameworks, where vigilance is crucial. Thinking of Western societies, you find puritanical religious people who in their family life think it’s a terrible thing to lie. There’s a duty to tell the truth. And these same people dress up, go to their office in the city, and do business where they’re dealing with competitors, and there it’s ok to do whatever it takes, to lie – now you use information as a way to manipulate others as much as you can. Of course, you’re limited because others are doing the same thing, and are vigilant. So the only limit on the manipulative use of information is the fact that everybody is both trying to manipulate others and being vigilant too.

Think of the real estate business. Here it seems appropriate to lie – you can make a big sale, which would make your month successful, by convincing somebody this flat doesn’t have any defects when in fact it does, and they may find out only after they sign the contract. As a result, there are a number of spheres of social life where it’s ok to question people’s honesty. If they’re smart in their line of business, they’re not too honest! Whereas if you question their competence they would take it very badly. They’ve achieved their social position because they’re competent. And competence in this case involves misleading others.

Think of the academic world to which we belong. In a way it’s the opposite. If you look at the reviewing process, we can say of somebody’s work, “Your view is completely wrong, you don’t understand a thing”. We can really insult each other’s ideas, questioning intellectual competence. But only very rarely do we question honesty. If we say of an anthropologist that she invented her fieldwork data, or of an experimental psychologist that he fabricated his experimental data – that’s tragic. Honesty is taken for granted, and when you question it you question something absolutely fundamental. But you don’t spend your time doubting; you don’t think the work is invented in the way you would when reading the description of a flat! So it takes very special circumstances to seriously question scientific honesty, and when a scientist is discovered to have been dishonest it becomes a major event. Imagine a major event in real estate: it’s been discovered that an agent lied in saying there were no noisy neighbours! Would that make the front page of the newspaper? But if you do this in academia you do get the front page.

So these are examples from our own societies showing that vigilance has to do with special types of activities, interactions, professions, etc., and you’ll find this across cultures in ways that are worth investigating. And there’s a tension here, because on the one hand you’ve got a morality about how you should communicate – being truthful is often viewed as a trait of your person – but on the other it’s understood that what you do in the family, what you do in real estate, what you do in academia, etc., differs. So how you handle this across contexts varies greatly.

AOTC: On the topic of morality, you’ve written recently about the mutualistic approach to morality and cooperation. Could you explain what is meant by this?

DS: First let me say something about the notion of morality. On the one hand, there are very striking cultural differences in norms, to the point that the label “moral” may not be worth keeping in anthropology. I should note that I find being a relativist on epistemic matters – thinking that people come up with completely different knowledge of the world – not very plausible. Of course, there are huge cultural differences in the ways people represent the world but there are deep, and there have to be deep, commonalities too. A more plausible argument relates to moral relativism. The position of not being relativistic regarding knowledge, but being very relativistic regarding morality, seems to me a defensible one – I don’t myself hold that position, but it makes sense to me.

There are some reasons however to explore the possibility of deep moral commonalities across cultures. One can think of moral norms as having to do with the fact that humans cooperate – again, we are an incredibly cooperative species and it’s not just that we cooperate in set activities, like bees, we invent new forms of cooperation every day.

And there is a general problem of cooperation much discussed in economics and evolutionary biology, which is this: In most cases, cooperation is in the interest of each of the co-operators, but for each one of them it would be even better to take as much as they can of the benefit and pay as little of the cost as possible – that is, to free ride, to cheat. And so much so that, if people were just self-interested, cooperation itself wouldn’t endure, because if everybody cheats the whole thing ceases to be advantageous to anybody. If you have a small minority of free riders, they might do particularly well, but the others will do well too. If everybody cheats – well, then there is nobody left to be cheated!

So this is a puzzle in game theory and in evolutionary biology: how can cooperation evolve? In the human case, it is generally agreed that there must exist certain normative dispositions that are crucial to cooperation. People cooperate not just out of a calculated self-interest but also because they think they ought to, they ought to respect their word, to behave so that others can trust them, to enter into contractual relationships and be faithful to the contract, and, in many situations, to behave as if there were a contract even when there isn’t one. And so they seem to be motivated not just by interests – a rational choice calculus – but also by valuing the cooperative form of interaction in itself. And again there are lots of different ways of doing so across cultures, and within cultures across different types of interactions, but wherever there is cooperation, there are norms regarding what to demand and expect of co-operators, and these norms are typically ‘internalised’.

There are two main approaches to this question of why we value cooperation, being good to one another, something that seems typically human. One approach, which has been quite influential, follows much older anthropological literature on group solidarity, cohesion, etc., but is now inspired by the evolutionary theory of group selection. The idea here is that we’ve evolved under selection pressure to indeed value the benefit of the group. So there’s a commitment to what is good for the group, possibly at the expense of what’s good for yourself, possibly going all the way to self-sacrifice, or at least to a willingness to pay individual costs for the collective good. This goes well with a group-oriented morality of the type which in the history of philosophy has been developed by Utilitarians. The idea is that the good thing is to do what will contribute most to the collective benefit.

The problem with this is that if you study in detail both moral behaviour and moral judgement across cultures, utilitarian morality is a very rare form.  It’s been developed by philosophers fairly recently in the West, but it’s not a kind of moral sense you find commonly guiding people across cultures. It’s not that it’s impossible, maybe some people live by it, but it’s not that appealing to most people, it’s not the basis of their spontaneous moral judgements and practical choices.

So the alternative is the idea of mutualism. The simplest form of mutualism is reciprocity. Moral sense has to do with reciprocal relationships and with what people who are in such relationships owe one another. You give me something, I give you something. But that’s the morality of direct exchange, and morality can, indeed must, go beyond that, so you get what has then been described as extended or generalised reciprocity. I prefer to think of this as mutualism: you behave towards others in the way you want them to behave towards you, a ‘golden rule’ kind of approach. Rather than thinking of mutualism as generalised reciprocity, I prefer to think of reciprocity as a special two-actor case of mutualism.

Another way to describe mutualistic morality is that we value behaving as good, reliable cooperators. We look for partners for all kinds of activities – economic activities, building a family, going to war together, etc. For each type of venture where we might want to choose partners there are desirable qualities which have to do with the special character of the venture. If you want partners to go to war, you want them to be brave. If you want business partners, you want them to be competent. If you want a partner to have children with, still other qualities are needed.

But across all these activities there are some qualities that are always desirable: reliability, that you should be able to trust what they say, that they will contribute to the common goals, take their share but no more, etc. So we desire fairness in others and others desire it in us. One could say – oh, I’ll be fair, because it’s advantageous, and I will then be chosen by others as a partner, I’ll be preferred. But the simpler thing is to internalise fairness as a genuine value. Not doing it through a calculus but basically to have the disposition to behave as a desirable partner.

So that’s a certain view of mutualism, which could have an evolved basis, and this may help explain why human beings, whatever their culture, are incredibly cooperative. If you evolve to be like that, you will indeed be somebody who is preferred as a partner, and we depend so much for our survival and success on cooperation that it’s a great advantage. It’s also something that is culturally fostered, so how much fairness needs cultural fostering is something to be studied.  And of course it can be fostered in a variety of directions, depending on the type of cooperation that takes place in different cultures.

Think of societies in which hunting and war are the central activities. Being brave – today we don’t think much of it as a moral quality. But in lots of societies it would have been a perfect moral quality, because you were not just being brave for yourself, you were being brave for those with whom you took part in hunting, warfare, etc. So you can see why it can be a moral quality in some society, whereas here being physically brave is close to being fool-hardy, more willing to take risks, but that in itself isn’t particularly admirable, at least not from a moral point of view. So the type of cooperative ventures in which people go in different societies will favour certain aspects of what makes you a good partner, but, in one form or another, fairness will be at the centre.

On all this I owe a lot to a former student of mine, Nicolas Baumard.  He has been developing an approach to morality as basically being about fairness, and I and others have contributed to this work.[2] My own take on it is a bit more anthropological than Nicolas’s, i.e. I’m more interested in the different ways it plays out across cultures. For me, it’s not important to say “Oh, that’s morality”. It is crucial to understand that the disposition to fairness, which itself we can unpack as being a good partner for the kind of cooperative venture that takes place in the society in which you are, is something that in one way or another you’ll find across societies. Whether this defines ‘morality’ is, for me, almost a terminological consideration. What I find crucial is the idea that fairness is a core component of all systems of norms, and the most important one when it comes to explaining the very possibility of cooperation.

Of course fairness interacts with a lot of other normative considerations. To what extent do these other norms follow from the norm of fairness? Baumard is exploring the idea that they do. I find the idea worth exploring, but I am somewhat sceptical and more inclined to think that norms have different bases and interact, rather than all following from a fairness morality interacting with beliefs about how the world is. Take purity. You might say that in a society where there are strong beliefs about the consequences of impurity, you don’t want to cooperate with people who are impure and will compromise your cooperative ventures; you see people who behave impurely as being thereby unfair to others. But I am not convinced that this fairness-based approach can help that much with explaining norms of purity. To me there’s an interaction between this fairness value, which I take to be a human universal, and other types of normative concerns, related not just to purity but also to group identification and solidarity. These are all open questions with speculative answers. I do agree with Baumard that by thinking about fairness – how it evolved, how it can be acquired, how it functions in society – we may at least go a long way in explaining not just fairness in the strict sense but also the way other norms often described as ‘moral’ actually function. How far we can go in this direction we will only find out by working more.

AOTC:  You’ve just posted your books Rethinking symbolism and On anthropological knowledge on your website.[3] How do you feel about that work now? It’s almost forty years since the first was published and almost thirty years for the second.

DS: Well, first, I don’t dislike it to the point of hiding it! For me, it’s been a long project which began to take form with the book on symbolism. The long project was to investigate what I take to be the common basis of the social and psychological sciences. I find it deeply misguided to try to derive the social from the psychological, and equally misguided to try to do social science without psychology. Because what’s social and cultural goes in and out of the minds of individuals, and it doesn’t go in and out as it would of a vessel. Things happen to ideas, to skills, to values, whatever, not just in the environment but also in the minds of individuals. In a nutshell, that’s why we need an integrated approach.

By “we”, I don’t just mean social scientists, I also mean psychologists or more generally cognitive scientists. If you’re a psychologist working on humans, to study human psychology but leaving out society and culture doesn’t make much sense. We’re such a hugely social species, that there is very little if anything about humans that you can study in any interesting way if you don’t take that into consideration. If you study babies, they show dispositions to pay attention to certain types of social information from the moment they’re born. And they start having others “in the mind”, so to speak, well before they’re one, and that’s very striking. It’s not just that we live in a social context, we have others in our minds all the time, in a variety of ways.

So just as it’s essential for the social sciences to pay attention to the cognitive sciences, the psychology of human beings will have a very narrow agenda if it doesn’t draw from the social sciences.

So that was my basic attitude very early on, largely influenced by Chomsky and by Lévi-Strauss. Lévi-Strauss was somebody who had no problem moving between the psychological and the social and cultural. But his view of the psychological was that of his time – not the best period in the history of the field. And what I got from Chomsky, one of the fathers of the Cognitive Revolution, was the idea that you could ask much deeper and more interesting questions about the human mind. And Chomsky was, in any case, studying language, which is obviously both a cognitive and a social phenomenon. It doesn’t make sense to think of language as just psychological or just social; it’s completely both.

So that was my concern and I’ve continued in the same vein.  Of course, on the details of the books my ideas may have changed, but they were, I believe, steps in the right direction.

AOTC: What’s the thing you’ve published that’s attracted the most readers?

DS: I’m not sure. It may be the book with Deirdre Wilson, Relevance – that’s certainly the one that’s most quoted. In French, I’ve actually gone down all the way, because before that I published a book called Le structuralisme en anthropologie in 1968. And that has had better sales than anything! I’ve just been going down in sales ever since…

AOTC: But seriously, through your much more recent publications in psychology and philosophy journals you must have more readers now than ever.

DS: Yes, globally, there’s been a progressive impact and indeed across disciplines. And in some disciplines there’s a wider audience. If you do anthropology for anthropologists, you don’t have as big an audience as psychologists do, so publishing in psychology brings a bonus automatically.

AOTC: You mentioned your fieldwork in Ethiopia. Does that early fieldwork experience continue to influence you?

DS: The book Rethinking symbolism came directly from my fieldwork. In fact, it came from a dream. There I was among the Dorze, studying rituals in a fairly standard way, with the then relatively original influence of Lévi-Strauss. And indeed I found lots of rich rituals and a variety of symbols, complex prohibition rules, and so on, and the obvious question was what does it all mean, what do these symbols mean. And I never got an answer. They never, ever had an answer to that.

So if I asked “Why do you put butter on your heads?”, the answer was “Well, we do that because our fathers did it”. And then I would look for another informant – but with the worry that if only a few people knew why it was being done, then why did the other people bother to do it?  And then I had this dream one night, I don’t remember the details, but the gist of it was: listen to what they’re telling you!  They are answering with the right answer and you’re pushing back with the wrong kind of question. These symbols don’t have meanings and the reason they do it is because their fathers did. And this was more cogent than I was taking it to be. So that was my starting point.

And I took a break and came back and wrote the book saying that cultural symbols are not there to convey meaning. The so-called meaning, in cultures where, unlike the Dorze, they do provide you with one, is as much in need of an explanation as the symbol itself. So that’s where I turned to what there was of cognitive psychology at the time, and the attempt to understand cultural symbolism in terms of what it did to people’s minds, what it evoked, how it re-focused their attention, etc. Much if not all of what symbolism does goes well beyond its effects on individuals, and has to do with how people interact with one another. But not in terms of sharing meaning, rather of sharing an orientation, of aligning oneself more or less with one another. Indeed people are able to use the same cultural objects, practices, artefacts in different ways – and their commitment to them is, if anything, strengthened by the fact that they can interpret them individually in a creative manner, rather than just decode a meaning.

I was trying both to describe positively the cognitive processes involved in symbolic behaviour, and to criticise other approaches to cultural symbolism that focused on either looking for the meaning of symbols, as in Turner’s work, or describing symbols as a kind of language, as in Lévi-Strauss’s work.

I still agree with what I said back then, but the detail of the cognitive description I was giving I find simple-minded today. And much of what I’ve been doing since has hopefully contributed to a much better understanding of the way in which we can communicate in lots of ways, we influence each other, we converge, we diverge, we share, with a range of effects that go from sharing a meaning as classically understood to causing a mere convergence of interests and viewpoints.

AOTC: And what about On anthropological knowledge?

DS: This book created more hostility among anthropologists than anything I’ve written. Many people, including friends, disliked it very much. In those days I was in the department of anthropology at Nanterre, and when the book was published, I sent copies to most of my colleagues. At the next seminar, where we would all chat with one another before it started, nobody mentioned the book. Nobody, that is, but one of my colleagues who took me aside at a safe distance from the others and whispered … “I like it!”  So that was more telling than anything.

In the first chapter, I’m defending ethnography – saying it should be pursued for its own sake, just as historians do history without worrying all the time where it fits in the theoretical disciplines. And I’m promoting anthropology, the asking of theoretical questions, which can’t be done properly if it’s always tethered to ethnographic goals. So the two disciplines should cease being a monogamous couple and develop some sort of open relationship or productive friendship. And that was seen as intolerable because for anthropologists, fieldwork is the crucial thing – and of course I understand why – and everything else should come from it. We could proudly say, we’re ethnographers, just as others proudly say they are historians! I don’t see that as being inferior to theoretical anthropologists. It’s harder and in many ways more important. But in the eyes of many field anthropologists being an ethnographer is not good enough. Your ethnography has to be a contribution to a more general discourse – in those days it was to contribute to some kind of general anthropological theory. And it didn’t work very well. Anthropological theory has never been very good. Much of the ethnographic literature pays a heavy price for this; it could be better if the author hadn’t felt obliged to somehow fit into some kind of theoretical anthropology which often boils down to a classification scheme.

Even later, when the scientific approach became less respectable, ethnography was still taken as a contribution to ‘theory’ or to social criticism. It’s good, of course, if the study of some special case makes a contribution of a general kind. But it’s not necessary, it’s not easy, and in any case you owe it to the people you’ve studied to give a good account of what you’ve understood of how they are, and this may not be theoretically momentous, but it is relevant to our understanding of our common humanity.

So that book was not well received. Maybe I didn’t anticipate enough what the reaction would be, how much misunderstanding would be involved; I should have been more prudent and careful. But I still stand by the book. And I do really defend ethnography!  A lot of people saw this as an attack on ethnography, because they see ethnography as the ancillary discipline to the noble discipline of anthropology. But it’s a different discipline, to be defended in its own right.

AOTC: And how did people react to the chapter on Lévi-Strauss?

DS: Not much reaction. Lévi-Strauss had invited me to present my previous book on structuralism in four meetings at his seminar in 1968. He was not convinced, but I think he appreciated it (and he never discussed me in print). And I was protected in a way by the fact that he was not attacking me, on the contrary he had given signs of being interested. As you know, I got the first Lévi-Strauss Prize, when he was 100 years old, and he wrote me an extremely nice note saying he was really pleased that I’d gotten it, that it made sense to him, and I think he meant that sincerely. So I was never a Lévi-Straussian but I admire him immensely. And there are a number of things I did learn from him. In particular, the kind of theoretical ambition he had I think is worth trying to pursue.

AOTC: You’ve also interacted a lot with Maurice Bloch over the years, has that been a useful thing for you in terms of what you’ve done?

DS: Absolutely. Friendships of course don’t have to be useful to be rewarding. With Maurice, it is both. There’s enough convergence and enough difference to make the conversation an endless one!  And I appreciate immensely his person and his mind and the kind of questions he asks, and his positive contribution. He has failed to convince me of many things and I’ve failed to convince him of many things. But it’s always been worth trying because there were real issues and it’s not obvious that one is right and the other wrong.

[1] See Hugo Mercier & Dan Sperber (2011), “Why do humans reason? Arguments for an argumentative theory”, Behavioral and Brain Sciences 34: 57-111.
[2] See Nicolas Baumard, Jean-Baptiste André & Dan Sperber (2013), “A mutualistic approach to morality: the evolution of fairness by partner choice”, Behavioral and Brain Sciences 36: 59-122.
[3] http://www.dan.sperber.fr/
