Computational complexity and philosophy

What might computational complexity have to do with philosophy? This is a write-up of my usual (informal) explanation at parties to people who have no knowledge of computational complexity.

In short: philosophical theories often require a notion of the difficulty of an information-processing task, and computational complexity is a well-developed measure of such difficulty. Unfortunately, many existing philosophical appeals to complexity theory have abstracted away from details of individual cases that turn out to be quite important.

Philosophy and difficulty.

In philosophy and allied disciplines, questions about the difficulty of reasoning and other forms of information processing often arise. I don’t propose here to demonstrate any exciting philosophical claims, but rather to show that what I hope are fairly plausible philosophical arguments lead us to the question: how hard is such-and-such a task?

Computational complexity.

Computer science offers one measure of the difficulty of a task: computational complexity.

Consider a list containing two numbers. The task is to sort the list in ascending order, by repeated use of the following operation: one compares two adjacent numbers, and, if they are in the wrong order, one swaps them (let us call these ‘comparisons’). How many comparisons do we need to sort this list of two elements? Obviously: one.
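
To make the model concrete, here is a minimal Python sketch of the single allowed operation (the function name and the example list are mine, purely for illustration):

```python
def compare_and_swap(lst, i):
    """The single allowed operation: compare the adjacent items at
    positions i and i+1, and swap them if they are out of order."""
    if lst[i] > lst[i + 1]:
        lst[i], lst[i + 1] = lst[i + 1], lst[i]

two = [2, 1]
compare_and_swap(two, 0)   # one comparison is enough for two elements
print(two)                 # [1, 2]
```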

Suppose that the list contains three elements. How many comparisons do we need to sort the list? One comparison isn’t enough: it doesn’t allow us to even consider one of the numbers. What about two comparisons? If we are lucky, two comparisons might work: to verify that the list [1,2,3] is sorted, we only need to note that 1 < 2, and 2 < 3. But two comparisons will not, in fact, suffice. Every time we make a comparison, there are at most two possibilities as to what we do (either we swap the numbers or we don’t). With two comparisons, there are therefore at most four possible rearrangements of the initial elements. But there are six possible initial orderings (3 × 2 × 1), each of which needs a different rearrangement to sort it. So we need three comparisons.
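
For the three-element case one can also check this by brute force. The sketch below (illustrative Python, names my own) applies every fixed schedule of two adjacent compare-and-swaps to all six starting orders and shows that each schedule leaves at least one of them unsorted; the counting argument above rules out adaptive two-comparison procedures as well.

```python
from itertools import permutations, product

def run_schedule(order, schedule):
    """Apply a compare-and-swap at each adjacent position in `schedule`."""
    lst = list(order)
    for i in schedule:
        if lst[i] > lst[i + 1]:
            lst[i], lst[i + 1] = lst[i + 1], lst[i]
    return lst

# Every fixed schedule of two adjacent comparisons on a three-element
# list fails to sort at least one of the 3 x 2 x 1 = 6 starting orders.
for schedule in product([0, 1], repeat=2):
    failures = [p for p in permutations([1, 2, 3])
                if run_schedule(p, schedule) != [1, 2, 3]]
    print(schedule, "fails on", failures)
```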

What if we don’t know how many elements there are in the list? Suppose that there are n elements; there could be five, or a hundred, or a million. How many comparisons, in terms of n, do we need? A very pessimistic answer would be: 2^n. That would mean that the number of comparisons would double every time we add a number to the list. A very optimistic answer would be: 2n. That would mean that the additional number of comparisons needed from adding an element to the list would be fixed. That is the sort of question that computational complexity answers. A classic result in complexity theory is that sorting a list requires fewer than 2^n comparisons but more than 2n: roughly, it takes n log n (Hoare).
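
To get a feel for the gap between these answers, here is a small Python comparison of the optimistic count 2n, the (roughly) actual n log n, and the pessimistic 2^n; the values of n are arbitrary, and 2^n is reported as a number of digits because it is astronomically large.

```python
from math import log2, log10

# How the three candidate comparison counts grow with the list length n.
for n in (10, 100, 1000, 10**6):
    digits = int(n * log10(2)) + 1   # 2**n written out has this many decimal digits
    print(f"n={n:>8}  2n={2 * n:>9}  n*log2(n)~{n * log2(n):>12.0f}  2^n: {digits} digits")
```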

Complexity theory does not only concern lists; it considers the difficulty of problems, in general, in terms of their size. For example: what’s the shortest route that visits each of n cities at least once? Given n statements, are they contradictory?
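
To make ‘difficulty in terms of size’ concrete, here is a deliberately naive brute-force sketch of the route problem (a toy distance table of my own, and the ‘visit each city exactly once and return’ variant): it tries every ordering of the cities, so the work grows like n!.

```python
from itertools import permutations

def shortest_tour(distances):
    """Brute force: try every ordering of the cities (n! of them) and keep
    the cheapest round trip. distances[i][j] is the cost from city i to j."""
    n = len(distances)
    best_order, best_cost = None, float("inf")
    for order in permutations(range(n)):
        cost = sum(distances[order[i]][order[(i + 1) % n]] for i in range(n))
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order, best_cost

# A toy instance with four cities.
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(shortest_tour(d))
```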

In these terms, what counts as easy? The answer is typically: polynomial time. What does that mean? An algorithm solves a problem of size n in polynomial time if it takes at most n^c steps, where c is an arbitrary but fixed number. List-processing algorithms that take n^5, 2n + 7, n + log n, or 3 comparisons to finish run in polynomial time. If they take 2^n comparisons, they don’t.
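
The dividing line matters because any fixed polynomial is eventually overtaken by 2^n. A quick check for n^5 (the exponent 5 is just an example):

```python
# Find the point past which 2^n permanently exceeds n^5.
crossover = max(n for n in range(2, 100) if n**5 >= 2**n) + 1
print(crossover)   # 23: from n = 23 onwards, 2^n > n^5
```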

Polynomial time is very mathematically elegant. A natural worry from the foregoing is: when should we consider an operation to really be a single operation, and when should we consider it really to be several distinct operations? If I sort cards with numbers on them into a single list, I might look at three cards at the same time, rather than the two that the model above required. The problems that can be solved in polynomial time do not change when we make seemingly reasonable changes to the model like this. (I think that, unnoticed, a philosopher was the first to formulate something like this view.) Polynomial time therefore seems a fairly reasonable way to answer the question: what counts as difficult?
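
One way to see why such changes don’t matter: a ‘look at three cards at once’ step can be simulated by a constant number of two-card comparisons, so anything polynomial in the more generous model stays polynomial in the stricter one. A minimal sketch (function name mine):

```python
def order_three(a, b, c):
    """Simulate 'looking at three cards at once' using at most three
    pairwise comparisons; the constant overhead cannot turn a polynomial
    running time into an exponential one."""
    if a > b:
        a, b = b, a
    if b > c:
        b, c = c, b
    if a > b:
        a, b = b, a
    return a, b, c

print(order_three(3, 1, 2))   # (1, 2, 3)
```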

Applications.

Although philosophers are usually more familiar with logic and probability than complexity theory, some philosophical work has appealed to it. (See Aaronson, Szymanik and Verbrugge, van Rooij et al. and Dean for surveys.) Here’re some examples.

Some objections.

Here are two natural worries about the identification of complexity-theoretic classes (e.g. polynomial time) with what is feasible or practicable.

In the case of computers, much of this objection has in practice been misplaced. Most polynomial-time algorithms we’ve encountered have ultimately been reasonable (n^3, let’s say, rather than n^1000). And due to improvements in computing power, what were once unrealistically large inputs are now commonplace (Moore). However, some exponential-time algorithms really are used in practice on medium-sized inputs.

These responses are less satisfactory in the human case. Humans are often confronted with problems of fairly bounded size. Consider visual processing of retinal input into ordinary objects. It doesn’t seem that we have substantially more retinal input than our ancestors a thousand or a million years ago did. Nor is it clear that we need to resolve these inputs into substantially more objects. In the case of a computer, we are generally justified in assuming that it’s likely that we’ll eventually want to process inputs of, say, double the existing size. Such a rule doesn’t seem to apply to the human case.

Van Rooij et al. (5 ff) argue by example that anything worse than polynomial time will restrict us to really rather small inputs. The examples they give are 2^n and n!; in those examples, inputs become unreasonably large somewhere between about n = 5 and n = 20. There are a few problems with this argument.
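
For reference, here is how those two functions behave over that range (plain Python; whether a given count is ‘unreasonable’ of course depends on how long one takes a single step to be):

```python
from math import factorial

# 2^n and n! over the range of input sizes van Rooij et al. discuss.
for n in (5, 10, 15, 20):
    print(f"n={n:>2}  2^n={2**n:>10,}  n!={factorial(n):,}")
```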

Van Rooij et al. (§ 9.9) agree that ‘if the input size is small’, complexity-theoretic considerations are irrelevant; but they suggest that in practice, ‘for many cognitive capacities the size of the input as a whole is not small’. This suggests that ‘small’ has a single, fixed meaning across different cases.

That is where I think their approach fails. Rather, complexity theory can show, on a case-by-case basis, that given a particular rate of growth and a particular budget of time or resources, there will be a bound on feasible input sizes. This applies to polynomial functions, and indeed to every complexity class. For some machine-learning datasets, n^2 is prohibitively large. 1.307^n (Hansen et al.) is prohibitively large at much smaller sizes; and 2^n at sizes smaller still.
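
A sketch of that case-by-case calculation: given a budget of elementary steps (the budgets below are illustrative assumptions of mine, not figures from any of the works cited), find the largest input size each growth rate allows.

```python
def max_feasible_n(steps, budget):
    """Largest n for which steps(n) still fits within the budget."""
    n = 1
    while steps(n + 1) <= budget:
        n += 1
    return n

growth_rates = {
    "n^2":     lambda n: n**2,
    "1.307^n": lambda n: 1.307**n,
    "2^n":     lambda n: 2**n,
}

# Illustrative budgets: a million, a billion, a trillion steps.
for budget in (10**6, 10**9, 10**12):
    bounds = {name: max_feasible_n(f, budget) for name, f in growth_rates.items()}
    print(budget, bounds)
```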

The attractiveness of identifying polynomial time with feasibility is that this standard is mathematically simple to use. It allows us to abstract away from specific details of the model of computation involved in a particular cognitive capacity, since these don’t affect whether a problem takes polynomial time. Unfortunately, the price of this mathematical convenience is accuracy. Complexity theory may be useful to philosophy, but only by close examination of individual cases.

Aaronson, Scott, ‘Why Philosophers Should Care about Computational Complexity’, Computability, ed. by B. Jack Copeland, Carl J. Posy and Oron Shagrir, The MIT Press, 2013, 261–328.

Dean, Walter, ‘Computational Complexity Theory’, The Stanford Encyclopedia of Philosophy, ed. by Edward N. Zalta, Fall 2021, Metaphysics Research Lab, Stanford University, 2021.

Hansen, Thomas Dueholm et al., ‘Faster k-SAT algorithms using biased-PPSZ’, Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, New York, NY, USA: Association for Computing Machinery, 2019, 578–589.

Hoare, C.A.R., ‘Quicksort’, The Computer Journal 5.1 (1962), 10–16.

Levesque, Hector J., ‘Logic and the Complexity of Reasoning’, Journal of Philosophical Logic 17.4 (1988), 355–389.

Moore, Gordon E., ‘Cramming more components onto integrated circuits’, Electronics 38.8 (1965).

Ristad, Eric Sven, The language complexity game, Artificial intelligence series, Cambridge (MA), London: MIT Press, 1993.

Stenseke, Jakob, ‘On the computational complexity of ethics: moral tractability for minds and machines’, Artificial Intelligence Review 57.4 (2024).

Szymanik, Jakub and Rineke Verbrugge, ‘Tractability and the computational mind’, The Routledge Handbook of the Computational Mind, Routledge, 2018.

Van Rooij, Iris, ‘The Tractable Cognition Thesis’, Cognitive Science 32.6 (2008), 939–984.

Van Rooij, Iris et al., Cognition and Intractability: A Guide to Classical and Parameterized Complexity Analysis, Cambridge: Cambridge University Press, 2019.
