Computational complexity and philosophy
What might computational complexity have to do with philosophy? This is a write-up of my usual (informal) explanation at parties to people who have no knowledge of computational complexity.
In short: philosophical theories often require a notion of the difficulty of an information-processing task, and computational complexity is a well-developed measure of such difficulty. Unfortunately, many existing philosophical appeals to complexity theory have abstracted away from details of individual cases that turn out to be quite important.
Philosophy and difficulty.
In philosophy and allied disciplines, questions about the difficulty of reasoning and other forms of information processing often arise. I don’t propose here to demonstrate any exciting philosophical claims, but rather to show that what I hope are fairly plausible philosophical arguments lead us to the question: how hard is such-and-such a task?
- Right and wrong. It's uncontroversial that knowledge and ignorance have at least some significance to questions of right and wrong. But what is that significance?
Consider: I slam open a door, ignorant that someone is behind it, and knock them over.
A straightforward, if clearly wrong, principle is that ignorance is a complete excuse: a person who does something ignorant of a fact, who would not have done it had they known that fact, thereby does no wrong. To return to the case above, if the door is in a crowded public place, I ought to have opened the door slowly and looked—even though I wouldn’t have slammed the door open had I known someone was outside.
So it seems it is important to ask how reasonable it is to expect knowledge rather than ignorance. Had the door been to my own bedroom, it wouldn’t have been reasonable to expect me to have checked.
Sometimes, we are ignorant of facts because it would be impossible to know any better. In cases more analogous to moral decisions we routinely have to make, however, it would be difficult (perhaps too difficult) to know any better, rather than impossible. Suppose a parent must choose between two medical procedures that haven’t been scientifically compared in detail, and, through bad luck, chooses one that happens to be worse. In principle, it might not be impossible to determine which is better. But, at least for the purposes of attributing blame, the parent has not obviously done something wrong.
- Rationality. A roughly analogous line of argument applies to philosophical investigation of rationality. Some philosophers are interested in practically achievable notions of rationality, which must take into account our limited cognitive resources. Some forms of reasoning are easier than others, and it is the easier ones that can reasonably be expected of any rational agent.
- Cognition. We process information through various cognitive processes: perception, inference, language processing, and so on. From auditory inputs we come to awareness of sentences uttered by others; from knowledge of some facts we come to know other facts (e.g. if we know that either A or B is the case, and come to know that A isn't the case, we typically infer B); and from rays of light striking the retina, we apprehend ordinary objects (tables, chairs) before us.
Philosophers (and others), in investigating the nature of the mind, are often interested in how we carry out such information processing. A natural, if inchoate, view is that theories which attribute to us very difficult forms of information processing should be treated with suspicion. Consider a (quite incredible) theory that says: (1) we retain a record of all utterances we ever hear, and (2) our competent use of language involves exhaustively searching these past utterances: e.g., in determining whether a sentence is grammatical, we exhaustively search our memory for a similar sentence we have heard before. To retain and search such a record would be very difficult, and we might suspect, on those grounds, that this is not a very likely theory.
This is a rather inchoate principle in that (1) it does not tell us how suspicious we should be; but, leaving that aside, (2) to apply this principle more generally, we need to know what should count as difficult.
Computational complexity.
Computer science offers one measure of the difficulty of a task: computational complexity.
Consider a list containing two numbers. The task is to sort the list in ascending order, by repeated use of the following operation: one compares two adjacent numbers, and, if they are in the wrong order, one swaps them (let us call these ‘comparisons’). How many comparisons do we need to sort this list of two elements? Obviously: one.
Suppose that the list contains three elements. How many comparisons do we need to sort the list? One comparison isn't enough: it doesn't even allow us to consider one of the numbers. What about two comparisons? If we are lucky, two comparisons might work: to verify that the list [1, 2, 3] is sorted, we only need to note that 1 < 2 and 2 < 3. But two comparisons will not suffice in general. Every time we make a comparison, there are at most two possibilities as to what we do (either we swap the numbers or we don't). With two comparisons, there are therefore at most four possibilities as to how we rearrange the initial elements. But there are six possible initial orders (3 × 2 × 1), and each may require a different rearrangement. So we need three comparisons.
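To make the counting concrete, here is a minimal sketch, in Python (my choice of language; nothing in the argument depends on it), of sorting by comparing and swapping adjacent numbers. On each of the six initial orders of three elements, it finishes within three comparisons:

```python
from itertools import permutations

def adjacent_sort(xs):
    """Sort by repeatedly comparing adjacent pairs, swapping those out of order.
    Returns the sorted list and the number of comparisons used."""
    xs, comparisons = list(xs), 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comparisons

# Each of the six initial orders is sorted using exactly three comparisons.
for p in permutations([1, 2, 3]):
    print(p, adjacent_sort(p))
```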
What if we don't know how many elements there are in the list? Suppose that there are n elements; there could be five, or a hundred, or a million. How many comparisons, in terms of n, do we need? A very pessimistic answer would be: 2^n. That would mean that the number of comparisons would double every time we add a number to the list. A very optimistic answer would be: n. That would mean that the additional number of comparisons needed from adding an element to the list would be fixed. That is the sort of question that computational complexity answers. A classic result in complexity theory is that sorting a list requires fewer than 2^n but more than n comparisons: rather, it takes (roughly) n log n (Hoare).
Complexity theory does not only concern lists; it considers the difficulty of problems in general, in terms of their size. For example: what's the shortest route that visits each of n cities at least once? Given n statements, are they contradictory?
In these terms, what counts as easy? The answer is typically: polynomial time. What does that mean? A polynomial-time algorithm solves a problem of size n in (at most) n^k steps, where k is an arbitrary but fixed number. List-sorting algorithms that take n or n log n comparisons to finish take polynomial time. If they take 2^n comparisons, they don't.
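As a rough numerical illustration (the sizes are arbitrary), here is how these step counts grow; n log n stays comfortably under the polynomial n^2, while 2^n is out of reach almost immediately:

```python
import math

# Growth rates: n and n*log2(n) are polynomially bounded; 2**n is not.
for n in (10, 100, 1000):
    print(f"n = {n:>4}:  n*log2(n) = {n * math.log2(n):>8.0f},  "
          f"n**2 = {n**2:>7},  2**n = 10^{n * math.log10(2):.0f}")
```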
Polynomial time is mathematically very elegant. A natural worry arising from the foregoing is: when should we consider an operation really to be a single operation, and when should we consider it really to be several distinct operations? If I sort cards with numbers on them into a single list, I might look at three cards at the same time, rather than the two that the model above allowed. The set of problems that can be solved in polynomial time does not change when we make seemingly reasonable changes to the model like this. (I think that, largely unnoticed, a philosopher was the first to formulate something like this view.) Polynomial time therefore seems a fairly reasonable way of answering the question: what counts as easy, and what counts as difficult?
Applications.
Although philosophers are usually more familiar with logic and probability than with complexity theory, some philosophical work has appealed to it. (See Aaronson, Szymanik and Verbrugge, van Rooij et al. and Dean for surveys.) Here are some examples.
- Stenseke: most moral systems require us to solve computationally intractable problems. Complexity theory therefore shows that the best we can feasibly achieve will be quite far from what these moral systems prescribe in principle.
- Levesque: classical logic cannot yield good models of human reasoning, because reasoning in it is too difficult in complexity-theoretic terms. But modified logical systems might yield better models that respect complexity-theoretic limits.
- Van Rooij proposes a tractable cognition thesis, a standard to which theories in cognitive science should be held. Such theories should not attribute to us forms of information processing that take longer than polynomial time (roughly).
- Ristad (§ 6.3) argues that the forms of cognition required for competent use of language are complexity-theoretically difficult. The central question in studying language should not, therefore, be why we make mistakes, but how we manage to produce broadly correct output in the first place. This undermines the (contested) distinction in linguistics and philosophy of language between idealised language (competence) and our actual, mistake-ridden linguistic output (performance).
- Some Chomskyans argue that complexity theory shows that we must have innate knowledge of language, since learning it without innate knowledge would be too complexity-theoretically difficult.
Some objections.
Here are two natural worries about the identification of complexity-theoretic classes (e.g. polynomial time) with what is feasible or practicable.
- Can't polynomial functions turn out to be very large (e.g. n^100), larger than many reasonable exponential functions on even medium-sized inputs? And, similarly, can't exponential functions turn out to be quite reasonable on medium-sized inputs (e.g. 1.01^n rather than 2^n)? (See the sketch after this list.)
- Complexity theory concerns the difficulty of problems with respect to inputs of unbounded size. But some problems typically have inputs of bounded size, so complexity theory erroneously gives significance to overly large cases.
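To put rough numbers on the first worry, here is a sketch comparing a large polynomial with a mild exponential (the exponents 100 and 1.01 are my illustrative choices); it works with log10 of the step counts to avoid overflow:

```python
import math

# Compare a large polynomial, n**100, with a mild exponential, 1.01**n,
# via the base-10 logarithms of their step counts.
for n in (10, 100, 1000, 10_000):
    poly = 100 * math.log10(n)    # log10 of n**100
    expo = n * math.log10(1.01)   # log10 of 1.01**n
    print(f"n = {n:>6}:  n**100 = 10^{poly:>3.0f},  1.01**n = 10^{expo:.1f}")
```

On these figures, the polynomial algorithm is hopeless at every size shown, while the exponential one stays cheap into the low thousands.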
In the case of computers, much of this objection has in practice been misplaced. Most polynomial algorithms we've encountered have ultimately been reasonable (n^3, let's say, rather than n^100). And thanks to improvements in computing power, what were once unrealistically large inputs are now commonplace (Moore). However, some exponential-time algorithms really are used in practice on medium-sized inputs.
These responses are less satisfactory in the human case. Humans are often confronted with problems of fairly bounded size. Consider the visual processing of retinal input into ordinary objects. It doesn't seem that we have substantially more retinal input than our ancestors had a thousand or a million years ago. Nor is it clear that we need to resolve these inputs into substantially more objects. In the case of a computer, we are generally justified in assuming that we'll eventually want to process inputs of, say, double the existing size. No such rule seems to apply to the human case.
van Rooij et al. (5 ff) argue by example that anything worse than polynomial time will restrict us to really rather small inputs. The examples they give are exponential (2^n) and factorial (n!) time; in their examples, inputs become unreasonably large at quite small values of n. There are a few problems with this argument.
- It is possible to take longer than polynomial time without taking exponential or factorial time. For example, quasi-polynomial time, n^(log n), is far more forgiving than 2^n.
- 2^n is not a very helpful example of exponential time. The best known algorithm for one important problem (a form of Boolean satisfiability) takes roughly 1.307^n time (Hansen et al.); see the sketch below.
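A quick comparison of these growth rates (the quasi-polynomial example n^(log n) and the 1.307 constant are as above), again via log10 of the step counts:

```python
import math

# log10 of step counts: a quasi-polynomial bound versus two exponentials.
for n in (10, 100, 1000):
    quasi = math.log2(n) * math.log10(n)   # log10 of n**log2(n)
    sat   = n * math.log10(1.307)          # log10 of 1.307**n
    expo  = n * math.log10(2)              # log10 of 2**n
    print(f"n = {n:>4}:  n**log2(n) = 10^{quasi:>4.0f},  "
          f"1.307**n = 10^{sat:>5.1f},  2**n = 10^{expo:>5.1f}")
```

At n = 1000 the quasi-polynomial count, though still infeasible, is hundreds of orders of magnitude below 2^n.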
van Rooij et al. (§ 9.9) concede that 'if the input size is small', complexity-theoretic considerations are irrelevant; but they suggest that in practice, 'for many cognitive capacities the size of the input as a whole is not small'. This assumes that 'small' has some fixed meaning across different cases.
That is where I think their approach fails. Rather, complexity theory can show, on a case-by-case basis, that given
- certain computational resources, and
- the complexity of a certain problem
there will be a bound on feasible input sizes. This applies to polynomial functions, and indeed to all complexity classes. For some machine-learning datasets, n^2 is prohibitively large; 2^n is prohibitively large in much smaller cases; and n! in cases smaller still.
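Here is what that case-by-case calculation might look like; the budget of 10^12 operations is an arbitrary stand-in for 'certain computational resources':

```python
import math

BUDGET = 10 ** 12   # hypothetical resource bound: total operations affordable

def largest_feasible_n(steps):
    """Largest n whose step count stays within the budget."""
    n = 1
    while steps(n + 1) <= BUDGET:
        n += 1
    return n

print(largest_feasible_n(lambda n: n ** 2))             # 1,000,000
print(largest_feasible_n(lambda n: 2 ** n))             # 39
print(largest_feasible_n(lambda n: math.factorial(n)))  # 14
```

On this budget, even the quadratic algorithm caps feasible inputs at about a million items; the exponential and factorial ones at a few dozen and a dozen or so respectively.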
The attractiveness of identifying polynomial time with feasibility is that this standard is mathematically simple to use. It allows us to abstract away from specific details of the model of computation involved in a particular cognitive capacity, since these don’t affect whether a problem takes polynomial time. Unfortunately, the price of this mathematical convenience is accuracy. Complexity theory may be useful to philosophy, but only by close examination of individual cases.
References.
Aaronson, Scott, 'Why Philosophers Should Care about Computational Complexity', Computability, ed. by B. Jack Copeland, Carl J. Posy and Oron Shagrir, The MIT Press, 2013, 261–328.
Dean, Walter, 'Computational Complexity Theory', The Stanford Encyclopedia of Philosophy, ed. by Edward N. Zalta, Fall 2021, Metaphysics Research Lab, Stanford University, 2021.
Hansen, Thomas Dueholm et al., 'Faster k-SAT algorithms using biased-PPSZ', Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, New York: Association for Computing Machinery, 2019, 578–589.
Hoare, C. A. R., 'Quicksort', The Computer Journal 5.1 (1962), 10–16.
Levesque, Hector J., 'Logic and the Complexity of Reasoning', Journal of Philosophical Logic 17.4 (1988), 355–389.
Moore, Gordon E., 'Cramming more components onto integrated circuits', Electronics 38.8 (1965).
Ristad, Eric Sven, The Language Complexity Game, Artificial Intelligence Series, Cambridge (MA), London: MIT Press, 1993.
Stenseke, Jakob, 'On the computational complexity of ethics: moral tractability for minds and machines', Artificial Intelligence Review 57.4 (2024).
Szymanik, Jakub and Rineke Verbrugge, 'Tractability and the computational mind', The Routledge Handbook of the Computational Mind, Routledge, 2018.
Van Rooij, Iris, 'The Tractable Cognition Thesis', Cognitive Science 32.6 (2008), 939–984.
Van Rooij, Iris et al., Cognition and Intractability: A Guide to Classical and Parameterized Complexity Analysis, Cambridge: Cambridge University Press, 2019.
About.
Tags.
computational complexity, philo, in progress.
Updates.
- J.P. Loo (17 August 2025): Second draft (complexity/philo)
- J.P. Loo (17 August 2025): Copying an old first draft (philo and complexity).