In the last post, I explained how I conceive of physical theories: roughly, a vocabulary and axioms, from which we build models applicable to particular types of situations. Models can be mapped to concrete situations in the world, and we can compare their predictions to data models extracted from these situations. From this picture, we can ask: what is it, for a theory, to be empirically adequate?
Van Fraassen, one of the main contemporary defenders of empiricism, proposes the following definition of empirical adequacy:
"A theory is empirically adequate exactly if what it says about the observable things and events in this world, is true—exactly if it ‘saves the phenomena’. A little more precisely: such a theory has at least one model that all the actual phenomena fit inside. I must emphasize that this refers to all the phenomena; these are not exhausted by those actually observed, nor even by those observed at some time, whether past, present, or future."
This definition dates back to his 1980 book "The Scientific Image" (you can find a similar one in a footnote of his 1989 "Laws and Symmetry"). What he means by "fit inside" is explicated later in the book in terms of isomorphism between data models and the "empirical substructures" of theoretical models. I think this is not the right way to understand empirical adequacy, for several reasons, and although his notion of "fit inside" was challenged by some authors, who proposed more sophisticated accounts, this is hardly my main point of contention.
The first reason why this definition of empirical adequacy is wanting (a reason emphasised by many authors) is that it rests on the notion of "observable", and thus on a distinction between what is directly observable, without the help of instruments, and what is not. This distinction is not self-evident, and it is a matter of degree rather than a sharp divide.
To be sure, van Fraassen does not hold the naive view that this distinction is purely a priori or linguistic (as did the logical empiricists), or that something is unambiguously given in experience (as if direct observations were not just as revisable, and sometimes more so, than observations made with instruments). He explicitly says that theories themselves can help us know which phenomena are directly observable and which are not. Yet he maintains that observability is an objective property of phenomena, not a mere theoretical posit. What seems puzzling is how we could accept that our theories inform us about this objective feature without also accepting that these theories are true, not merely empirically adequate.
As Paul Churchland notes (in "Images of Science", 1985), one could also wonder why we focus on what we could observe (if we were at different places) and not on what we could detect (if our senses were more acute). The problem is not that his definition of empirical adequacy depends on our epistemic situation, which should come as no surprise, but that the way our epistemic community is delimited seems arbitrary: the fact that we have more control over our capacity to move to different places than over our capacity to enhance our senses seems contingent. Perhaps we could also think of measuring apparatuses as extensions of our senses, in line with Paul Humphreys. That does not amount to being a realist and accepting that our theories are true, even if our theories are involved in the functioning of our apparatuses: at most, we are accepting that our theoretical vocabulary is a good (albeit revisable) way to describe our experiences, thus extended. After all, data models are expressed in a theoretical vocabulary.
Another big problem with this focus on the observable is that it restricts experience to observation, to the detriment of interventions. In the last post, I explained how applying a model to a concrete situation involves adjustments in both directions (calibration, and so on). This aspect is completely absent from van Fraassen's definition of empirical adequacy: the experimenter is reduced to the position of a spectator of natural phenomena, which is not how empirical confrontation works in practice. It is far from obvious that all the conditions of application of models to concrete situations (recognizing that a system is of the right type, that it interacts in such-and-such a way with its environment, etc.) can be reduced to a question of fit between data models and predictions. Rather, we should make sure that our models apply before comparing their predictions to data models. Otherwise, having an empirically adequate theory without any constraints on the conditions of applicability of models could be all too easy: couldn't we always find, for any theory, some model, however contrived or irrelevant, such that all phenomena in the universe would fit inside it?
Now perhaps all these conditions of application could in principle be reduced to observable things (we would integrate all measuring apparatuses inside our theoretical models, and so on), without interventions playing a crucial role. But I find this idea a bit idealistic. More generally, the idea that our theories are only about explaining directly observable phenomena seems a bit naive. Take a biological theory, for example: do we posit proteins to explain something observable, like the behaviour of living organisms? Not really: in order to bring out the observable manifestations of proteins, several manipulations are required. We must first disrupt a cell's membrane, then purify the resulting mixture by ultracentrifugation, then isolate the proteins by salting out, and only then might we contemplate their observable manifestations through chromatography. It is not as if we passively observed living organisms and inferred the existence of proteins on this basis. The same could be said about the particles of contemporary physics, which are brought about in particle accelerators that require an enormous amount of finely tuned manipulations. I see no reason to think that these manipulations play a less central role in experimentation than the resulting observations.
There is a further difficulty with observability: "observable" means "could be observed in the right circumstances", which is a modal notion. This conflicts with van Fraassen's claim that modal statements have no truth value (that they are mere pragmatic devices): how could his definition of empirical adequacy make sense if "observable" is modal?
As a solution, he claims that "observable" is only apparently modal, but is actually an objective feature of phenomena. The modal statement "something observable is something that we could observe" is not a definition of observability, but a statement that we can derive from it, as when we say that "there's a letter in this drawer" implies that if we opened the drawer, we'd see it. But then, as Ladyman (2004) insists, he should tell us what he means by observable without using modal statements, or his definition threatens to be vacuous. One could, again, expect our theories to tell us more about this (in terms of photons and retina, say), but the same kind of difficulty as before arises: don't we have to assume that these theories are true, for the notion of observable to make sense?
Imagine a theory that says that phenomena of type A are observable. If we only know that this theory is empirically adequate, that is, that it is true of observable phenomena, then all we know about phenomena of type A is that when they are observable, they are observable, which is a tautology. Sure, we also know that "observable" implies "actually observed when in the right conditions", but then we only know that our theories are true of what is observed in the right conditions, and his definition of empirical adequacy reduces to a theory being true of actually observed phenomena only, not of all observable phenomena.
So, as empiricists, either we restrict empirical adequacy to what is actually observed, which seems too restrictive to account for scientific practice, or we must resort to modalities to explain how our theories still make good predictions when we do not observe anything. This is the price to pay once we give up an a priori, quasi-linguistic distinction between observable and unobservable.
Model of the universe vs models as indexicals
So much for the notion of observability. My main grievance with van Fraassen's definition of empirical adequacy has to do with the "model of the universe" view: according to van Fraassen, in order to be empirically adequate, a theory must have at least one model such that all phenomena, past, present and future, fit inside it.
I find this view at odds with his later writings, where he puts the emphasis on the notion of representation as a "three-place relation" (between an intentional agent, a model and the represented object) and systematically takes his examples from contextual applications to particular experimental situations. Yet to my knowledge, he never updated his old definition of empirical adequacy.
I touched upon the problem of the "model of the universe" view at the end of the last post. Remember that we are not talking about cosmological models, but about models that would contain at once the biggest structures and the tiniest phenomena on distant planets.
One problem is that on this view, empirical adequacy is not a transparent notion, since we do not know what happens or will happen everywhere in the universe. The view does have tangible consequences, since it constrains what can be observed in any part of the universe (it must be embeddable in a model of the universe that respects the laws of the theory), but wouldn't it be more reasonable to focus on these constraints directly, rather than hypothesizing that we could, in principle, build a model of the universe? We do not need this abstract assumption to be confident that our theories are empirically adequate. At most, the possibility of building such a model seems to be a far-fetched consequence of empirical adequacy. When scientists test a theory, they never note the exact place and time of each experiment with the project of building one big model in which all these particular experiments would be embedded, so this view is disconnected from scientific practice. What really counts is that each particular experiment was successful, no matter where in the universe.
This aspect can be related to the fact that scientific theories, in general, are indifferent to the place, time or particular object to which they apply (which translates into certain space-time or permutation symmetries). It seems to me that theoretical models work the way indexicals do in the philosophy of language: the truth value of a sentence like "I am eating now" is determined in context by the speaker and the time of utterance. It is not true or false absolutely. Similarly, a theoretical model, construed as a mathematical structure, does not say when or where it applies, and its predictive success is determined in context, when some aspects of the model (the frame of reference, ...) are mapped to objective aspects of concrete situations, to which we refer directly (this measuring apparatus, ...). Just as with "I" and "now", the reference of elements of the model, such as the origin of its frame of reference, is only fixed in context. If models are indeed indexical, then we should not give credence to the idea that there could be such a thing as a model of the universe, which would correspond to a "view from nowhere". At least not when empirical adequacy is at stake.
The idea that empirical adequacy should be explicated in terms of a model of the universe is probably inherited from the logical empiricists, who thought that theories were akin to general statements about the world. Taking into account the indexical aspects of theoretical models, we could reach a more sensible and modest account of empirical adequacy, in terms of the predictive success of models in various concrete situations rather than in terms of a single model of the universe.
Empirical adequacy as predictive success of models
I think there is a simple solution, which is to express empirical adequacy in terms of the predictive success of theoretical models when applied to various situations. The idea is to rely on the notions of application and prediction, which refer directly to experimental practice, rather than on the dubious notion of the observable. This also takes into account the experimental interventions involved in applying models to concrete situations, and not only observations. Meanwhile, we do not need to mention any model of the universe, and we stay close to scientific practice. Yet we do not have to restrict ourselves to what has been experimented on so far. The idea is quite simple: a theory is empirically adequate exactly if, every time we were to test it, it would make good predictions. The only remaining issue, shared with van Fraassen's position, is the modal aspect involved, but obviously, this is not an issue for someone who accepts that modal statements have a truth value.
All this, of course, needs to be developed in more detail, but let me reserve these developments for a future post.