Cunningham’s Law
Cunningham’s Law is an empirically observed rule of online communication: the best way to get the right answer online is not to ask a question, but to post the wrong answer. The principle reflects an asymmetry in human motivation: the urge to correct someone else’s error is a far stronger driver than the willingness to answer an open-ended question.
Contents
1 History of origin
2 Ward Cunningham and his contributions
3 Psychological mechanisms
4 Analogues in different cultures
5 Manifestations on specific platforms
6 Practical application
7 Limits of applicability and criticism
8 Related concepts and laws
9 Ethical dimensions
10 Law in the context of collective knowledge production
11 Connection with the phenomena of propaganda and disinformation
12 Constancy of effect over time
History of origin
The law is named after American programmer Howard Ward Cunningham (born May 26, 1949), the inventor of wiki technology. Cunningham created the first wiki system, WikiWikiWeb, in 1994–1995 and published it on the website of his consulting firm, Cunningham & Cunningham (c2.com), on March 25, 1995. This platform became the forerunner of Wikipedia and the entire culture of collaborative editing that followed.
The law itself was formulated not by Cunningham but by his colleague Stephen McGeady, who observed Ward’s communication style in early online communities in the early 1980s. According to McGeady, Cunningham had a knack for extracting useful information from his interlocutors not by asking for help directly, but by gently provoking them into elaboration. McGeady distilled this observation into an aphorism and dubbed it Cunningham’s Law, although Ward himself later disclaimed authorship, calling it a "misquote."
Initially, the law covered user behavior on Usenet, the text-based precursor to modern forums. Later, the law extended to Wikipedia, Stack Overflow, Reddit, and other platforms for shared knowledge.
Ward Cunningham and his contributions
Howard Ward Cunningham is one of the few programmers whose work has changed the very architecture of public knowledge. In addition to wikis, he co-authored the Agile Manifesto, developed the concept of "technical debt," and made significant contributions to the theory of object-oriented programming patterns, working jointly with Kent Beck as early as 1987.
The idea for a wiki grew out of the offline HyperCard system Cunningham used to track ideas within his company. The name "WikiWikiWeb" arose by chance: at the Honolulu airport, Cunningham took the "Wiki-Wiki" shuttle — in Hawaiian, "wiki" means "fast," and "wiki-wiki" means "very fast." Thus, the concept of speed became woven into the very name of the technology.
Cunningham’s career spans a stint at the Tektronix Computer Research Lab, an architectural role in Microsoft’s Patterns & Practices group, and the position of CTO at AboutUs. The wiki he invented paved the way for Wikipedia — which, according to McGeady, became the most illustrative global implementation of Cunningham’s Law.
Psychological mechanisms
The need to be right
The law works not in spite of human nature, but precisely because of it. Psychologists have long described a persistent need for cognitive superiority in humans — the desire to feel like the bearer of correct knowledge. When a false statement appears before one’s eyes, this desire is activated almost reflexively: the person sees a goal, not a plea for help.
An open-ended question doesn’t create such a stimulus. It requires effort, research, and formulation — and yet doesn’t guarantee a feeling of being right. A false statement, on the other hand, provides an immediate reward: correcting someone is much more satisfying than simply helping them.
This mechanism aligns well with Leon Festinger’s theory of cognitive dissonance. A person who discovers a discrepancy between their knowledge and what they see in a text experiences internal discomfort. The only quick way to resolve this tension is to write a correction. The more confident a person is in their rightness, the stronger the impulse.
The Dunning–Kruger effect in the online environment
Interestingly, it’s not just experts who post corrections online. The Dunning–Kruger effect describes a cognitive bias in which people with low levels of knowledge in a given field systematically overestimate their competence. In Dunning and Kruger’s study, participants who scored in the bottom quartile of test results estimated themselves to be around the 62nd percentile, while actually ranking around the 12th.
In practice, this means that Cunningham’s Law attracts not only those who actually know the correct answer, but also those who only think they do. As a result, an initial erroneous statement often receives several competing "corrections," some of which themselves contain errors.
Disinhibition in the network
In 2004, psychologist John Suler described the phenomenon of the "online disinhibition effect": the anonymity and asynchronous nature of online communication reduce the social inhibitions that prevent people from making harsh statements in real life. David Dunning noted that in real-life conversations, norms of politeness and mutual respect prevail, while in asynchronous social networks, interlocutors "proclaim" rather than communicate — the rules of mutual tact simply don’t apply.
This explains why correcting others’ mistakes online is faster and more aggressive than in a face-to-face conversation. An anonymous user correcting a stranger on a forum incurs minimal social costs while receiving public confirmation of their own competence.
Analogues in different cultures
The principle described by Cunningham’s Law is not new at all; it appeared in a wide variety of traditions long before the advent of the Internet.
In French, there’s a saying: "prêcher le faux pour savoir le vrai" — "preach a lie to learn the truth." It’s used in rhetorical and didactic contexts, implying the deliberate advancement of a patently dubious thesis in order to provoke a reaction from the interlocutor.
The Chinese idiom 拋磚引玉 ("to throw a brick to attract jade") describes a similar tactic: offering something imperfect or of little value to entice others to share something significantly more valuable. The emphasis here is not on the error, but on the incompleteness of the original offer.
This technique is also used in literature. In Conan Doyle’s novel "The Sign of Four," Sherlock Holmes deliberately makes false or inaccurate inferences to provoke his interlocutor to elaborate. It’s a classic detective technique: a false theory draws more information from the source than a direct question.
Manifestations on specific platforms
Usenet and early forums
Usenet, a networked newsgroup system that had existed since 1980, was the first environment in which McGeady observed this effect. Usenet users were technically literate and highly motivated to be precise. An inaccurate statement in such an environment provoked an immediate and often extensive response — complete with citations, sources, and counterarguments.
Wikipedia
McGeady considered Wikipedia "the most famous embodiment" of the law. Wikipedia’s mechanism, by its very nature, assumes that someone will write an initial, imperfect version of an article, and the community will correct it. In practice, articles with factual errors often receive edits significantly faster than articles with gaps and omissions — the incompleteness is less noticeable than the error.
Stack Overflow
The Stack Overflow platform, launched in 2008, has incorporated this principle into its architecture. A programmer who posts a broken code fragment with the question, "Why isn’t this working?" receives diagnostics, fixes, and often several alternative approaches within minutes. The impulse to fix it serves as a driver of collective learning.
Reddit
On Reddit, confidently false claims posted with apparent conviction often spawn entire threads of detailed rebuttals, source links, and explanations. The result is often a more comprehensive and informative treatment of the topic than a search of knowledge bases would yield.
xkcd and popular culture
The webcomic xkcd dedicated a strip, "Duty Calls," to this phenomenon, with the famous line "Someone is wrong on the Internet." Its character is unable to go to bed because someone online has written an incorrect statement. The strip became an internet meme, perfectly capturing the irrational nature of this reflex.
Practical application
In search of information
The law is actively used as a deliberate strategy. Researchers, journalists, and simply curious users sometimes publish inaccurate versions of facts to elicit corrections from those with actual knowledge. This method works particularly well in highly specialized communities where expertise is high and the barrier to intervention is low.
An example would be any discussion of historical dates, technical specifications, or geographical names: just write "the capital of Australia is Sydney," and corrections from knowledgeable people will quickly follow. The correct answer — Canberra — will appear instantly, usually with an explanation.
In marketing and PR
Communications experts observe a similar dynamic in public debate. A provocative, deliberately oversimplified, or partially inaccurate statement from a brand or public figure generates significantly more discussion than a balanced and accurate one. This creates obvious ethical contradictions: attracting attention through deliberate inaccuracy blurs the line between communication and manipulation.
In intelligence practice
The strategy of deliberately disseminating false information to provoke a reaction is also well-known in intelligence operations. By releasing a knowingly false fact into circulation, one can expect informed individuals to refute it — thus revealing information that would otherwise be unobtainable. This technique is fundamentally different from conventional disinformation: the goal is not to mislead, but to provoke a reaction that will lead to disclosure.
In pedagogy
Teachers have long used a similar method, calling it "Socratic provocation" or the fallacious thesis method. The teacher deliberately formulates an incorrect statement, challenging students to refute it. This approach increases engagement: a student who proves a mistake learns the material more deeply than one who passively accepts the correct answer.
Limits of applicability and criticism
The problem of incompetent amendments
Cunningham’s Law assumes that those correcting a statement know the correct answer. In practice, this is far from always the case. A confidently written correction, itself containing an error, creates a double problem: the original misconception is reinforced by the authority of the correction. Readers who see the "correction" are inclined to trust it more than the original statement.
On Reddit and similar platforms, this effect is well known by the informal name "confidently incorrect": an answer written with a confident tone gets more upvotes than an accurate but uncertain one.
The backfire effect
Research on disinformation refutation has shown that correcting a false fact in some cases strengthens rather than weakens belief — especially when it touches on worldview-relevant topics. This phenomenon, described by Nyhan and Reifler in 2010 and dubbed the "worldview backfire effect," significantly limits the range in which Cunningham’s law reliably holds.
However, a later meta-analysis covering 31 studies and 72 dependent variables showed that backfire effects are far less common than initially assumed and are often artifacts of measurement error. In most of the situations studied, correcting misinformation does reduce the degree of false belief.
Spreading the error to a new audience
A separate problem is that publishing an incorrect claim potentially introduces it to a new audience — those who read the original post but never see the correction. In algorithmically driven feeds, where every reaction increases a post’s visibility, an incorrect claim that attracts many corrections can reach far more people than the correct answer does.
This turns the method into a double-action tool: it can accelerate the search for truth among an informed audience and simultaneously spread error among the uninformed.
Aggressiveness of the amendments
The reflex to correct someone else’s mistake is often accompanied by disdain or ridicule. A correction written with superiority accomplishes one goal — demonstrating competence — but fails to accomplish another: imparting knowledge. Someone who feels humiliated often defends their original position rather than revising it. Thus, the effectiveness of this method depends not only on the presence of an error in the post but also on the tone of the responder.
Related concepts and laws
Godwin’s Law
In 1990, Mike Godwin formulated the observation that as an online discussion grows, the probability of comparisons to Nazis or Hitler approaches one. This observation describes the decay of discussion over time — a process inverse to that predicted by Cunningham’s Law. While Cunningham describes how error triggers a movement toward truth, Godwin documents how a long discussion moves from subject matter to emotion.
Pixie Hollow Effect (anecdotal name)
A number of sources describe an informally observed effect: a single, confidently incorrect answer on a forum is enough to trigger the emergence of experts who would otherwise remain silent. The mechanism is similar to Cunningham’s Law, but emphasizes not the informational value of amendments, but their social function — demonstrating one’s membership in "those in the know."
Goodhart’s and Gall’s laws
In the broader context of network interaction, Cunningham’s Law sits alongside other observations on the irrationality of collective behavior. Goodhart’s law — when a measure becomes a target, it ceases to be a good measure — applies where engagement metrics reward provocative inaccuracy. Gall’s law states that a functioning complex system invariably evolved from a functioning simple system. Applied to collective knowledge, this means that Wikipedia became functional precisely because it began as a collection of simple, editable pages, not as a pre-designed encyclopedia.
Ethical dimensions
Intentionally publishing an inaccurate statement in order to obtain a correction raises questions about the integrity of communication. On the one hand, if the original statement is labeled as a hypothesis or speculation, the problem is minimal. On the other hand, intentional lying, even for informational purposes, violates the basic norms of public discourse.
This issue is even more pressing in a context where the audience doesn’t have time to read the correction. Journalists have repeatedly documented situations in which a refutation of a viral misconception itself became a source of its spread: content ranking mechanisms fail to distinguish between "false assertion" and "refutation of a false assertion" — both forms increase the visibility of the original thesis.
Finally, Cunningham’s Law is a convenient self-justification for those who publish erroneous statements without the intention of seeking correction. The retrospective explanation of "I was testing a hypothesis" has become a standard rhetorical device after exposure.
Law in the context of collective knowledge production
Cunningham’s wiki model and the law that bears his name point to one thing: collective knowledge operates according to rules very different from those of academic peer review. In an academic environment, an expert answers a question when asked. In an online environment, an expert intervenes when they see an error.
This distinction has an important consequence. A question is filtered by the questioner’s reputation: a stranger without status rarely receives a detailed answer. An error is not filtered at all: it affects the reader’s competence, regardless of their relationship to the author. This is why the law operates horizontally — through communities of strangers — much more effectively than direct inquiries to experts.
Asymmetry of help and criticism
Social psychologists note that criticism and assistance activate different motivational systems. Assistance requires empathy — the ability to put yourself in the other person’s shoes and understand what they need. Criticism requires only confidence in one’s own rightness. Criticism is psychologically less costly, which is why critical responses statistically outnumber helpful ones online.
This doesn’t mean that people are inherently malicious. Rather, the specific configuration of incentives in the online environment — anonymity, asynchrony, publicity of correction — systematically biases behavior toward criticism, even among people with good intentions.
The role of algorithms
Modern platforms amplify this effect through algorithmic ranking. A post that generates many reactions and corrections receives higher reach, regardless of its credibility. This creates a feedback loop: the more an error provokes corrections, the more people see it — and the more new corrections come.
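The feedback loop can be illustrated with a minimal sketch. This is a hypothetical scoring rule, not any platform’s real algorithm: posts are ranked purely by total engagement, so every correction counts toward the visibility of the very claim it disputes.

```python
# Hypothetical engagement-based ranking: reactions of any kind,
# including corrections, increase a post's reach.

def reach(post):
    # Total engagement: upvotes plus comments (corrections included).
    return post["upvotes"] + post["comments"]

wrong_post = {"text": "The capital of Australia is Sydney",
              "upvotes": 40, "comments": 120}  # 120 corrections
right_post = {"text": "The capital of Australia is Canberra",
              "upvotes": 25, "comments": 3}

feed = sorted([wrong_post, right_post], key=reach, reverse=True)
print(feed[0]["text"])  # the disputed claim ranks first
```

Under this rule the wrong post scores 160 against 28, so the wave of corrections is precisely what pushes the error to the top of the feed.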
Platforms such as Stack Overflow partially addressed this problem by introducing a voting system for answers. An accepted answer receives a visual marker, reducing the likelihood that a user will mistake an erroneous "correction" for the correct one. Most social platforms, however, lack such a mechanism.
Connection with the phenomena of propaganda and disinformation
The logic of Cunningham’s Law explains one of the paradoxes of information warfare: debunking a fake often spreads it more widely than ignoring it. Intelligence communities and information operations researchers have long described this effect as "amplification by rebuttal."
A separate and little-studied phenomenon is when a provocative false claim is deliberately published by state actors or political groups to force opponents to waste time and resources on corrections instead of advancing their own agenda. Here, Cunningham’s Law transforms from a tool for knowledge acquisition into a tool for information depletion.
Constancy of effect over time
Cunningham’s Law was described for Usenet in the 1980s, verified on Wikipedia in the 2000s, Stack Overflow in the 2010s, and remains observable on modern forums and messaging apps in the 2020s. This consistency suggests that this isn’t a random artifact of a specific platform, but rather a consistent pattern in how people behave in situations of public disagreement.
Technologies change, interfaces become more complex, but the basic instinct — the urge to correct someone who makes a public mistake — remains unchanged. This makes Cunningham’s Law not just an observation about early networks, but a description of a structural property of any public environment where errors are visible and the cost of correction is low.