Israel-Jost, Vincent
(2009)
Data processing in observation.
Abstract
Although the observational status of data produced by instruments has been widely discussed among philosophers of science, those who defend it (e.g. Shapere (1982), Hacking (1983), Humphreys (2004) and many others) still do not fully account for contemporary practices of observation. Indeed, such data are very often computationally processed before scientists examine them, and increasingly so, as most detectors now produce data in digital form. Hence, the raw data (the data as detected and not yet modified) are stored as matrices or vectors that can easily be processed mathematically. In addition, although computational data processing shares important features with simulation, since both practices rest on solving equations associated with models, philosophical analyses of simulation (e.g. Humphreys (1994, 2004), Hartmann (1996)) cannot account for data processing in observation, because in these studies of simulation the model aims to describe the very phenomenon at the center of the scientific investigation. By contrast, in data processing for observation, scientists make use of two types of models, both of which are neutral with respect to the object or phenomenon under study.

The first type of model describes the successive steps of data acquisition and allows the scientist to predict the data corresponding to a given phenomenon. When used the other way around, in an inverse problem, this type of treatment allows one to recover the original phenomenon from the data, either with greater purity or in a spatial representation that the observer can grasp more easily. Hence, one can "deblur" (deconvolve) images that are blurry because the detector is not accurate enough (e.g. in microscopy), or produce a 3D representation of a phenomenon for which only 2D images could originally be obtained (e.g. in CAT-scan imaging). The second type of model, which deals more specifically with images, aims to describe some mechanism of vision, such as the demarcation (segmentation) of objects or the simplification of images, for example by rendering homogeneous regions that are not so but that we would tend to see as such. This facilitates the reading of images and improves the agreement between what two different observers see.

While the inferential nature of the treatments applied to data is not in doubt (see Delehanty (2005) on positron emission tomography (PET) images), the fundamental distinction in the context of observation is not between the inferential and the non-inferential, but rather between inferences that concern the very object of the scientific inquiry and those that concern data acquisition and perception processes, since only the latter two kinds of inference are compatible with observation. More specifically, I shall argue that computer treatments that involve models of data acquisition raise no additional difficulty for the observational status of the data, compared with the raw data produced by the same instrument, since these treatments only make explicit use of knowledge of processes to which the observer already adheres (explicitly or implicitly). By implementing this knowledge in a systematic and reliable way, they also reduce the gap between (raw) data and phenomena. However, the role in observation of treatments that make use of models of perception is much harder to defend, since the resulting images often lack many of the features of the original raw data.
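To make the first type of model concrete, the short Python sketch below (not from the paper; the Gaussian point-spread function, the 64x64 toy image and the Richardson-Lucy iteration are illustrative assumptions) shows a forward model of data acquisition and its use "the other way around" to deconvolve a blurred image, as in the microscopy example above.

    import numpy as np
    from scipy.signal import fftconvolve

    def forward_model(phenomenon, psf):
        # Type-1 model: predict the (blurred) data the detector would record
        # for a given phenomenon.
        return fftconvolve(phenomenon, psf, mode="same")

    def richardson_lucy(data, psf, n_iter=30):
        # Inverse problem: iteratively estimate the phenomenon that, once passed
        # through the forward model, best reproduces the raw data.
        estimate = np.full_like(data, data.mean())
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            predicted = fftconvolve(estimate, psf, mode="same")
            ratio = data / np.maximum(predicted, 1e-12)
            estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

    # Toy scene: a single bright point, blurred by a Gaussian point-spread
    # function standing in for an imperfect detector (e.g. a microscope).
    x, y = np.meshgrid(np.arange(64), np.arange(64))
    phenomenon = np.zeros((64, 64))
    phenomenon[32, 32] = 1.0
    psf = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / (2 * 3.0 ** 2))
    psf /= psf.sum()

    raw_data = forward_model(phenomenon, psf)    # what the instrument records
    deblurred = richardson_lucy(raw_data, psf)   # sharper estimate of the phenomenon

The same inverse-problem structure underlies tomographic reconstruction, where the forward model maps a 3D distribution to its 2D projections; only the form of the acquisition model changes.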