
Why Being Wrong Should Be Celebrated in School

Madhav Kaushish · 12 min read

In the Kalama Sutta, the Buddha tells a group of villagers not to accept any teaching simply because it comes from a tradition, a scripture, or a respected authority. Instead, he says, examine it for yourself. See whether it leads to harm or to benefit. Only then should you accept or reject it. Socrates, in a very different tradition, made a similar point — that the beginning of wisdom is knowing what you do not know, and that the unexamined belief is not worth holding.

These are old ideas. But they are surprisingly difficult to put into practice, especially in school. Most classrooms are organised around getting the right answer. Students who get wrong answers are penalised — through grades, through public correction, through the quiet shame of falling behind. The message, whether intended or not, is clear: being wrong is a failure. The goal is to be right as quickly as possible.

I want to argue that this gets things backwards. In my experience running a theory building course with 12-to-15-year-old students in Pune, some of the most valuable moments were moments of error. Not because errors are inherently good, but because the right kind of error, handled well, reveals something about how thinking works. An error can show you where your reasoning breaks down, what assumption you were relying on without noticing, and where the gap is between what you believe and what you can justify. These are things that being right does not teach you.

What generative errors look like

Let me give some concrete examples.

In the podgon module — a game where students try to figure out two secret definitions by proposing test cases — a student named Imran drew an open figure and was told it was not a podgon. He immediately concluded that all podgons must be closed figures. But that does not follow. All he had established was that one particular open figure was not a podgon. A different open figure might have been. His reasoning was a textbook case of generalising from a single example — concluding that because one instance of a category failed, the entire category fails.

Now, in a traditional classroom, this might be marked as wrong and corrected. But in the context of the activity, Imran's error was visible and discussable. I could draw a different open shape and ask: "How do you know this one is not a podgon?" The question does not tell Imran he is wrong. It asks him to examine his own reasoning. The gap between "this open shape is not a podgon" and "no open shape is a podgon" is the kind of gap that matters far beyond geometry. It is the same gap between "this person from group X behaved badly" and "people from group X behave badly." Learning to see that gap — to notice when you are jumping from a specific case to a general conclusion without justification — is a skill worth developing.
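
To make the gap explicit, it helps to write the inference down. Using Open and Podgon as predicates (my notation, not something the students saw), Imran's evidence and his conclusion are:

\[
\text{Open}(f_0) \wedge \neg\,\text{Podgon}(f_0)
\quad\not\vdash\quad
\forall f\,\big(\text{Open}(f) \rightarrow \neg\,\text{Podgon}(f)\big)
\]

The left side records a fact about one figure, $f_0$. The right side quantifies over every open figure. Nothing licenses the jump from one to the other.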

In the triangle theory building module, a student named Anya presented a proof that the angle subtended by a diameter of a circle, at any point on the circle, is 90 degrees. Her argument used the fact that the side opposite a larger angle in a triangle is longer. This is true — for sides and angles of the same triangle. But Anya was applying it across two different triangles, which is outside the scope of the theorem. Her reasoning would have been valid if the claim held across triangles, but it does not.

What made this productive was not the error itself but what happened next. When the problem was pointed out, Anya could see that her conclusion might still be correct but her reasoning did not support it. The issue was not that she had the wrong answer but that her argument had a structural flaw. In her end-of-course reflection, Anya described what she valued about the course as "finding proofs, proving them wrong, again finding another proof." She had, it seems, internalised something about the iterative nature of mathematical reasoning — that the path to a correct proof often passes through incorrect ones, and that the process of finding the flaws is where the learning happens.
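
For contrast, one standard proof of the theorem (not the route Anya took) applies each fact within a single triangle. Let $O$ be the centre of the circle, $AB$ a diameter, and $C$ any other point on the circle, so that $OA = OB = OC = r$. Triangles $OAC$ and $OBC$ are then isosceles, giving

\[
\angle OAC = \angle OCA = \alpha, \qquad \angle OBC = \angle OCB = \beta
\]

and since the angles of triangle $ABC$ sum to $180^\circ$,

\[
\alpha + \beta + (\alpha + \beta) = 180^\circ \quad\Rightarrow\quad \angle ACB = \alpha + \beta = 90^\circ.
\]

Anya's instinct that her conclusion might still be correct was right; what her argument lacked was a chain of steps like this, each valid within its own scope.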

A different kind of productive error came up when students tried to define "straight line." A student named Gauri proposed defining straight lines in terms of collinear points. When asked what collinear means, she said: points that lie on a straight line. The circularity was immediately visible. Students had defined straight lines in terms of collinearity and collinearity in terms of straight lines.

This was not a failure of reasoning. It was a discovery. Students had run into one of the deepest problems in the foundations of mathematics — the fact that you cannot keep defining objects in terms of other objects without eventually going in circles. The error, if you want to call it that, led directly to the concept of undefined terms and axioms. They needed to be wrong in order to understand why the right answer is what it is.

Four flavours of flawed reasoning

Looking across the course, I noticed four recurring types of flawed reasoning:

The first is applying a claim outside its scope, as Anya did with the triangle theorem. A statement that is true in one context gets used in a context where the conditions for its truth are not met. This is extremely common in everyday reasoning too — taking a generalisation that works in one domain and assuming it works everywhere.

The second is confusing a statement with its converse. One group of students claimed that their conclusion was the same as another group's conclusion, when in fact it was the converse. "If A then B" is not the same as "if B then A," but the two get conflated surprisingly often.
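
A concrete instance makes the asymmetry visible: every square is a rectangle, so "if square then rectangle" is true, while its converse is false. What a conditional is equivalent to is its contrapositive, not its converse:

\[
(A \rightarrow B) \;\equiv\; (\neg B \rightarrow \neg A), \qquad (A \rightarrow B) \;\not\equiv\; (B \rightarrow A)
\]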

The third is circular reasoning, as in the straight line example. You define X in terms of Y and Y in terms of X, and the whole thing goes nowhere. When students spotted this circularity, they immediately realised something was wrong but did not know how to fix it. That recognition — knowing something is wrong without yet knowing the solution — is itself a valuable intellectual state.

The fourth is generalising from too few examples, as Imran did. A single confirming or disconfirming case gets treated as proof of a general claim. I found that students were generally better with counterexamples than with confirming evidence — I could not find even one instance where a student persisted with a definition after seeing a clear counterexample. But many students treated a single confirming example as proof that their definition was correct.
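
The logic behind this asymmetry is worth spelling out. A universal claim is refuted by a single counterexample but not established by a single confirming instance:

\[
\exists x\,\neg P(x) \;\Rightarrow\; \neg\,\forall x\,P(x), \qquad P(a) \;\not\Rightarrow\; \forall x\,P(x)
\]

Students' intuitions tracked the implication on the left reliably; it was the non-implication on the right that tripped them up.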

Each of these flaws could stem from how students reasoned, from how they communicated their reasoning, or from both. And in every case, the flaw was more useful than a right answer handed over at the start would have been: it created a situation where there was something specific to examine, discuss, and learn from.

The trouble with being right

There is a deeper point here. In a traditional mathematics class, the implicit goal is to get to the right answer with the right method as quickly as possible. Errors are obstacles to be avoided. But this creates a problem: if you always get the right answer, you never learn what happens when reasoning goes wrong. And reasoning goes wrong all the time — in mathematics, in science, in everyday life.

Dewey says that deep knowledge "is the fruit of the undertakings that transform a problematic situation into a resolved one." The key phrase is "problematic situation." If there is no problem — if the answer is given to you and your job is merely to reproduce it — then there is no transformation, and therefore no deep learning. You need the confusion. You need the perplexity. Dewey is explicit about this: the process begins with "some perplexity, confusion, or doubt" and continues as students try to fit things together.

Polya makes a complementary point about the teacher's role in this process. The student should acquire as much experience of independent work as possible, he writes. But if left alone with a problem without any help, they may make no progress at all. If the teacher helps too much, nothing is left to the student. The balance is delicate. You want students to struggle, but productively — making progress, learning from their errors, building understanding. Not pointlessly, banging against a wall with no tools and no idea why they are stuck.

In my implementation, I saw both kinds. Some students found the struggle engaging and creative. Devika described the discrete geometry module as "very different from what we usually learn" and said that even the irritation of "never getting it right" was fun. But others found the same experience frustrating and opaque. The difference, I think, had less to do with ability and more to do with how comfortable students were with not knowing the answer. Some students can sit with confusion and trust that it will resolve itself. Others need the reassurance of knowing they are on the right track. This is not a fixed trait — it is something that can be cultivated. But it takes time, and a single course is not enough to develop it reliably.

What we are pushing against

The psychology literature makes clear that the habit of examining your own beliefs critically does not come naturally. We all engage in motivated reasoning — we seek out evidence that supports what we already believe and discount evidence that contradicts it. We do not change our minds as often as we should. And we accept claims from authority figures — whether politicians, religious leaders, textbook authors, or teachers — without subjecting them to the level of critical evaluation we ought to.

These are not defects of particular individuals. They are features of how human minds work. The best we can do, it seems, is develop mindsets that push back against these tendencies. The mindset of being okay with being wrong — in fact, even revelling in admitting you were wrong, because it means you have learned something. The mindset of wanting to put your own beliefs through critical evaluation, by yourself and by others. And what I would call intellectual scepticism: the habit of not blindly accepting claims made by authority, but also not blindly rejecting them, as conspiracy theorists do. Rather, subjecting claims to an adequate level of scrutiny before accepting them.

Several students in my course mentioned something like this in their reflections. Zoya said, "Earlier my mind used to believe everything that came in front of me. But now I ask questions like why." Tanya said the course taught her "not to assume the given facts to be true. We should cross-check and go to the root of things to understand them better." Tarini said she learned "not to take anything given in the textbooks for granted." These are encouraging statements, though I should be careful about how much weight to put on them — students may say what they think the facilitator wants to hear, and a stated intention to think critically is not the same as actually doing it.

The ethical dimension

I want to add a complication that I think is important. The tools of critical thinking — questioning assumptions, spotting flawed reasoning, demanding evidence — can be used for good or bad ends. Someone trained in rigorous argumentation can use those skills to defend a position they know to be false, to find flaws in others' reasoning while refusing to examine their own, or to construct sophisticated justifications for harmful actions. Without an ethical sense — a genuine concern for truth and for the consequences of one's reasoning — the thinking tools on their own could cause harm.

This is not something I have worked on in depth, and I do not have solutions to offer. But I think it is dishonest to talk about celebrating error and developing intellectual scepticism without acknowledging that these dispositions need to sit alongside certain moral commitments. The goal is not just a more rigorous thinker. It is a more rigorous thinker who cares about getting things right for the right reasons.

What this asks of schools

If we take seriously the idea that being wrong is often where the learning happens, then several things about how schools operate would need to change. Assessment would need to reward the quality of reasoning, not just the correctness of answers. Classroom culture would need to make errors visible and discussable rather than shameful. Teachers would need to be comfortable saying "I do not know" and modelling what it looks like to work through uncertainty. And perhaps most difficult, the pace of instruction would need to slow down enough for students to actually struggle with ideas rather than being rushed to the next topic.

None of this is easy, and I do not want to pretend that my course achieved all of it. Some students left the course with something like the intellectual dispositions I have been describing. Others found parts of it frustrating or too hard. The mindsets I am talking about — comfort with being wrong, intellectual scepticism, the habit of examining your own reasoning — are not things that develop in nine sessions. They require sustained practice and a culture that values them. But I do think the theory building approach creates genuine opportunities for these mindsets to develop, in a way that a traditional mathematics classroom — focused on correct answers and pre-set procedures — does not.