Aronson, J., Auker-Howlett, D., Ghiara, V., Kelly, M. and Williamson, J. (2020). The use of mechanistic reasoning in assessing coronavirus interventions. Journal of Evaluation in Clinical Practice [Online]. Available at: http://dx.doi.org/10.1111/jep.13438.
Evidence-based medicine (EBM), the dominant approach to assessing the effectiveness of clinical and public health interventions, focuses on the results of association studies. EBM+ is a development of EBM that systematically considers mechanistic studies alongside association studies. In this paper we provide several examples of the importance of mechanistic evidence to coronavirus research. (i) Assessment of combination therapy for MERS highlights the need for systematic assessment of mechanistic evidence. (ii) That hypertension is a risk factor for severe disease in the case of SARS-CoV-2 suggests that altering hypertension treatment might alleviate disease, but the mechanisms are complex, and it is essential to consider and evaluate multiple mechanistic hypotheses. (iii) To be confident that public health interventions will be effective requires a detailed assessment of social and psychological components of the mechanisms of their action, in addition to mechanisms of disease. (iv) In particular, if vaccination programmes are to be effective, they must be carefully tailored to the social context; again, mechanistic evidence is crucial. We conclude that coronavirus research is best situated within the EBM+ evaluation framework.
Wallmann, C. and Williamson, J. (2020). The Principal Principle and subjective Bayesianism. European Journal for the Philosophy of Science [Online] 10. Available at: https://doi.org/10.1007/s13194-019-0266-4.
This paper poses a problem for Lewis’ Principal Principle in a subjective Bayesian framework: we show that, where chances inform degrees of belief, subjective Bayesianism fails to validate normal informal standards of what is reasonable. This problem points to a tension between the Principal Principle and the claim that conditional degrees of belief are conditional probabilities. However, one version of objective Bayesianism has a straightforward resolution to this problem, because it avoids this latter claim. The problem, then, offers some support to this version of objective Bayesianism.
Samet, J., Chiu, W., Cogliano, V., Jinot, J., Kriebel, D., Lunn, R., Beland, F., Bero, L., Browne, P., Fritschi, L., Kanno, J., Lachenmeier, D., Lan, Q., Lasfargues, G., Curieux, F., Peters, S., Shubat, P., Sone, H., White, M., Williamson, J., Yakubovskaya, M., Siemiatycki, J., White, P., Guyton, K., Schubauer-Berigan, M., Hall, A., Grosse, Y., Bouvard, V., Benbrahim-Tallaa, L., Ghissassi, F., Lauby-Secretan, B., Armstrong, B., Saracci, R., Zavadil, J., Straif, K. and Wild, C. (2019). The IARC Monographs: Updated procedures for modern and transparent evidence synthesis in cancer hazard identification. JNCI: Journal of the National Cancer Institute [Online]. Available at: https://doi.org/10.1093/jnci/djz169.
The Monographs produced by the International Agency for Research on Cancer (IARC) apply rigorous procedures for the scientific review and evaluation of carcinogenic hazards by independent experts. The Preamble to the IARC Monographs, which outlines these procedures, was updated in 2019, following recommendations of a 2018 expert Advisory Group. This article presents the key features of the updated Preamble, a major milestone that will enable IARC to take advantage of recent scientific and procedural advances made during the 12 years since the last Preamble amendments. The updated Preamble formalizes important developments already being pioneered in the Monographs Programme. These developments were taken forward in a clarified and strengthened process for identifying, reviewing, evaluating and integrating evidence to identify causes of human cancer. The advancements adopted include strengthening of systematic review methodologies; greater emphasis on mechanistic evidence, based on key characteristics of carcinogens; greater consideration of quality and informativeness in the critical evaluation of epidemiological studies, including their exposure assessment methods; improved harmonization of evaluation criteria for the different evidence streams; and a single-step process of integrating evidence on cancer in humans, cancer in experimental animals and mechanisms for reaching overall evaluations. In all, the updated Preamble underpins a stronger and more transparent method for the identification of carcinogenic hazards, the essential first step in cancer prevention.
Tonelli, M. and Williamson, J. (2019). Mechanisms in clinical practice: use and justification. Medicine, Health Care and Philosophy [Online]. Available at: https://doi.org/10.1007/s11019-019-09915-5.
While the importance of mechanisms in determining causality in medicine is currently the subject of active debate, the role of mechanistic reasoning in clinical practice has received far less attention. In this paper we look at this question in the context of the treatment of a particular individual, and argue that evidence of mechanisms is indeed key to various aspects of clinical practice, including assessing population-level research reports, diagnostic as well as therapeutic decision making, and the assessment of treatment effects. We use the pulmonary condition bronchiectasis as a source of examples of the importance of mechanistic reasoning to clinical practice.
Williamson, J. (2019). Evidential Proximity, Independence, and the evaluation of carcinogenicity. Journal of Evaluation in Clinical Practice [Online]. Available at: https://doi.org/10.1111/jep.13226.
This paper analyses the methods of the International Agency for Research on Cancer (IARC) for evaluating the carcinogenicity of various agents. I identify two fundamental evidential principles that underpin these methods, which I call Evidential Proximity and Independence. I then show, by considering the 2018 evaluation of the carcinogenicity of styrene and styrene‐7,8‐oxide, that these principles have been implemented in a way that can lead to inconsistency. I suggest a way to resolve this problem: admit a general exception to Independence and treat the implementation of Evidential Proximity more flexibly where this exception applies. I show that this suggestion is compatible with the general principles laid down in the 2019 version of IARC's methods guide, its Preamble to the Monographs.
Williamson, J. (2019). Establishing causal claims in medicine. International Studies in the Philosophy of Science [Online]. Available at: https://doi.org/10.1080/02698595.2019.1630927.
Russo and Williamson (2007) put forward the following thesis: in order to establish a causal claim in medicine, one normally needs to establish both that the putative cause and putative effect are appropriately correlated and that there is some underlying mechanism that can account for this correlation. I argue that, although the Russo-Williamson thesis conflicts with the tenets of present-day evidence-based medicine (EBM), it offers a better causal epistemology than that provided by present-day EBM because it better explains two key aspects of causal discovery. First, the thesis better explains the role of clinical studies in establishing causal claims. Second, it yields a better account of extrapolation.
Williamson, J. (2019). Calibration for epistemic causality. Erkenntnis [Online]. Available at: https://doi.org/10.1007/s10670-019-00139-w.
The epistemic theory of causality is analogous to epistemic theories of probability. Most proponents of epistemic probability would argue that one's degrees of belief should be calibrated to chances, insofar as one has evidence of chances. The question arises as to whether causal beliefs should satisfy an analogous calibration norm. In this paper, I formulate a particular version of a norm requiring calibration to chances and argue that this norm is the most fundamental evidential norm for epistemic probability. I then develop an analogous calibration norm for epistemic causality, argue that it is the only evidential norm required for epistemic causality, and show how an epistemic account of causality that incorporates this norm can be used to analyse objective causal relationships.
Williamson, J. (2018). Establishing the teratogenicity of Zika and evaluating causal criteria. Synthese [Online]. Available at: https://doi.org/10.1007/s11229-018-1866-9.
The teratogenicity of the Zika virus was considered established in 2016, and is an interesting case because three different sets of causal criteria were used to assess teratogenicity. This paper appeals to the thesis of Russo and Williamson (2007) to devise an epistemological framework that can be used to compare and evaluate sets of causal criteria. The framework can also be used to decide when enough criteria are satisfied to establish causality. Arguably, the three sets of causal criteria considered here offer only a rudimentary assessment of mechanistic studies, and some suggestions are made as to alternative ways to establish causality.
Aronson, J., La Caze, A., Kelly, M., Parkkinen, V. and Williamson, J. (2018). The use of evidence of mechanisms in drug approval. Journal of Evaluation in Clinical Practice [Online]. Available at: https://doi.org/10.1111/jep.12960.
The role of mechanistic evidence tends to be under-appreciated in current evidence-based medicine (EBM), which focusses on clinical studies, tending to restrict attention to randomized controlled studies (RCTs) when they are available. The EBM+ programme seeks to redress this imbalance, by suggesting methods for evaluating mechanistic studies alongside clinical studies. Drug approval is a problematic case for the view that mechanistic evidence should be taken into account, because RCTs are almost always available. Nevertheless, we argue that mechanistic evidence is central to all the key tasks in the drug approval process: in drug discovery and development; assessing pharmaceutical quality; devising dosage regimens; assessing efficacy, harms, external validity, and cost-effectiveness; evaluating adherence; and extending product licences. We recommend that, when preparing for meetings in which any aspect of drug approval is to be discussed, mechanistic evidence should be systematically analysed and presented to the committee members alongside analyses of clinical studies.
Romeijn, J. and Williamson, J. (2018). Intervention and Identifiability in Latent Variable Modelling. Minds and Machines [Online] 28:243-264. Available at: https://doi.org/10.1007/s11023-018-9460-y.
We consider the use of interventions for resolving a problem of unidentified statistical models. The leading examples are from latent variable modelling, an influential statistical tool in the social sciences. We first explain the problem of statistical identifiability and contrast it with the identifiability of causal models. We then draw a parallel between the latent variable models and Bayesian networks with hidden nodes. This allows us to clarify the use of interventions for dealing with unidentified statistical models. We end by discussing the philosophical and methodological import of our result.
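The identifiability problem described above can be made concrete with a minimal sketch (an illustration of my own, not an example from the paper): a hidden binary variable H with a single observed binary child X. Two distinct parameterisations, chosen here purely for illustration, induce exactly the same distribution over the observable X, so observational data alone cannot distinguish them.

```python
# Minimal latent variable model: hidden binary H, observed binary X.
# By the law of total probability:
#   P(X=1) = P(H=1) * P(X=1|H=1) + P(H=0) * P(X=1|H=0)
def p_x1(p_h, p_x1_given_h1, p_x1_given_h0):
    return p_h * p_x1_given_h1 + (1 - p_h) * p_x1_given_h0

# Two different parameter settings for the hidden node...
model_a = p_x1(0.5, 0.8, 0.2)
model_b = p_x1(0.25, 0.8, 0.4)

# ...yield the same observable distribution: the model is unidentified.
assert abs(model_a - model_b) < 1e-9
```

An intervention that set H directly would tell the two parameterisations apart, which is roughly the role interventions play in the resolution the paper proposes.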
Williamson, J. (2018). Justifying the Principle of Indifference. European Journal for the Philosophy of Science [Online]. Available at: https://doi.org/10.1007/s13194-018-0201-0.
This paper presents a new argument for the Principle of Indifference. This argument can be thought of in two ways: as a pragmatic argument, justifying the principle as needing to hold if one is to minimise worst-case expected loss, or as an epistemic argument, justifying the principle as needing to hold in order to minimise worst-case expected inaccuracy. The question arises as to which interpretation is preferable. I show that the epistemic argument contradicts Evidentialism and suggest that the relative plausibility of Evidentialism provides grounds to prefer the pragmatic interpretation. If this is right, it extends to a general preference for pragmatic arguments for the Principle of Indifference, and also to a general preference for pragmatic arguments for other norms of Bayesian epistemology.
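The pragmatic reading admits a small numerical illustration (a sketch of my own, not the paper's argument): over a finite outcome space with no evidence, the uniform distribution recommended by the Principle of Indifference minimises worst-case logarithmic loss, so any departure from indifference worsens the worst case.

```python
from math import log

def worst_case_log_loss(dist):
    """Worst-case log loss: the penalty incurred if the least-expected outcome occurs."""
    return max(-log(p) for p in dist)

uniform = [0.25] * 4            # the indifferent assignment over four outcomes
biased = [0.4, 0.3, 0.2, 0.1]   # an arbitrary non-uniform alternative

# The biased forecaster risks a loss of -log(0.1), far worse than -log(0.25).
assert worst_case_log_loss(uniform) < worst_case_log_loss(biased)
```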
Williamson, J. (2017). Models in Systems Medicine. Disputatio [Online] 9:429-469. Available at: https://content.sciendo.com/view/journals/disp/9/47/article-p429.xml.
Systems medicine is a promising new paradigm for discovering associations, causal relationships and mechanisms in medicine. But it faces some tough challenges that arise from the use of big data: in particular, the problem of how to integrate evidence and the problem of how to structure the development of models. I argue that objective Bayesian models offer one way of tackling the evidence integration problem. I also offer a general methodology for structuring the development of models, within which the objective Bayesian approach fits rather naturally.
Hawthorne, J., Landes, J., Wallmann, C. and Williamson, J. (2015). The Principal Principle Implies the Principle of Indifference. The British Journal for the Philosophy of Science [Online] 68:123-131. Available at: http://dx.doi.org/10.1093/bjps/axv030.
We argue that David Lewis’s principal principle implies a version of the principle of indifference. The same is true for similar principles that need to appeal to the concept of admissibility. Such principles are thus in accord with objective Bayesianism, but in tension with subjective Bayesianism.
Landes, J. and Williamson, J. (2015). Justifying Objective Bayesianism on Predicate Languages. Entropy [Online] 17:2459-2543. Available at: http://doi.org/10.3390/e17042459.
Objective Bayesianism says that the strengths of one’s beliefs ought to be probabilities, calibrated to physical probabilities insofar as one has evidence of them, and otherwise sufficiently equivocal. These norms of belief are often explicated using the maximum entropy principle. In this paper we investigate the extent to which one can provide a unified justification of the objective Bayesian norms in the case in which the background language is a first-order predicate language, with a view to applying the resulting formalism to inductive logic. We show that the maximum entropy principle can be motivated largely in terms of minimising worst-case expected loss.
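The calibration and equivocation norms can be illustrated in the simplest finite case (a toy sketch under assumptions of my own, not the predicate-language setting of the paper): suppose evidence fixes the chance of an event A at 0.7. The maximum entropy distribution respects that constraint and is otherwise maximally equivocal, spreading probability uniformly within A and within its complement.

```python
from math import log

outcomes = ["w1", "w2", "w3", "w4"]
event_a = {"w1", "w2"}
chance_a = 0.7  # evidence of chances: P(A) = 0.7

# Maximum entropy subject to P(A) = 0.7: uniform within A, uniform outside A.
maxent = {w: chance_a / 2 if w in event_a else (1 - chance_a) / 2 for w in outcomes}

def entropy(dist):
    """Shannon entropy of a probability distribution given as a dict."""
    return -sum(p * log(p) for p in dist.values() if p > 0)

# Another distribution that is also calibrated (P(A) = 0.5 + 0.2 = 0.7)
# but less equivocal than the maxent assignment:
other = {"w1": 0.5, "w2": 0.2, "w3": 0.2, "w4": 0.1}

assert abs(sum(maxent.values()) - 1) < 1e-9
assert entropy(maxent) > entropy(other)
```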
Williamson, J. (2015). Deliberation, Judgement and the Nature of Evidence. Economics and Philosophy [Online] 31:27-65. Available at: http://dx.doi.org/10.1017/S026626711400039X.
A normative Bayesian theory of deliberation and judgement requires a procedure for merging the evidence of a collection of agents. In order to provide such a procedure, one needs to ask what the evidence is that grounds Bayesian probabilities. After finding fault with several views on the nature of evidence (the views that evidence is knowledge; that evidence is whatever is fully believed; that evidence is observationally set credence; that evidence is information), it is argued that evidence is whatever is rationally taken for granted. This view is shown to have consequences for an account of merging evidence, and it is argued that standard axioms for merging need to be altered somewhat.
Williamson, J. (2014). How Uncertain Do We Need to Be? Erkenntnis [Online] 79:1249-1271. Available at: http://dx.doi.org/10.1007/s10670-013-9516-6.
Expert probability forecasts can be useful for decision making (Sect. 1). But levels of uncertainty escalate: however the forecaster expresses the uncertainty that attaches to a forecast, there are good reasons for her to express a further level of uncertainty, in the shape of either imprecision or higher order uncertainty (Sect. 2). Bayesian epistemology provides the means to halt this escalator, by tying expressions of uncertainty to the propositions expressible in an agent’s language (Sect. 3). But Bayesian epistemology comes in three main varieties. Strictly subjective Bayesianism and empirically-based subjective Bayesianism have difficulty in justifying the use of a forecaster’s probabilities for decision making (Sect. 4). On the other hand, objective Bayesianism can justify the use of these probabilities, at least when the probabilities are consistent with the agent’s evidence (Sect. 5). Hence objective Bayesianism offers the most promise overall for explaining how testimony of uncertainty can be useful for decision making. Interestingly, the objective Bayesian analysis provided in Sect. 5 can also be used to justify a version of the Principle of Reflection (Sect. 6).
Clarke, B., Leuridan, B. and Williamson, J. (2014). Modelling Mechanisms with Causal Cycles. Synthese [Online] 191:1651-1681. Available at: http://dx.doi.org/10.1007/s11229-013-0360-7.
Mechanistic philosophy of science views a large part of scientific activity as engaged in modelling mechanisms. While science textbooks tend to offer qualitative models of mechanisms, there is increasing demand for models from which one can draw quantitative predictions and explanations. Casini et al. (Theoria 26(1):5–33, 2011) put forward the Recursive Bayesian Networks (RBN) formalism as well suited to this end. The RBN formalism is an extension of the standard Bayesian net formalism, an extension that allows for modelling the hierarchical nature of mechanisms. Like the standard Bayesian net formalism, it models causal relationships using directed acyclic graphs. Given this appeal to acyclicity, causal cycles pose a prima facie problem for the RBN approach. This paper argues that the problem is a significant one given the ubiquity of causal cycles in mechanisms, but that the problem can be solved by combining two sorts of solution strategy in a judicious way.
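The acyclicity requirement at issue can be made concrete with a short sketch (the variable names are illustrative choices of my own, not the paper's examples): a standard depth-first search shows that a feedback mechanism, such as mutual glucose-insulin regulation, cannot be represented directly in a directed acyclic graph, while a simple causal chain can.

```python
def has_cycle(edges):
    """Detect a directed cycle by depth-first search over an edge list."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    visiting, done = set(), set()

    def visit(node):
        if node in visiting:
            return True   # back edge reached: a cycle exists
        if node in done:
            return False
        visiting.add(node)
        if any(visit(nb) for nb in graph.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(visit(n) for n in list(graph))

# A feedback loop violates the DAG requirement; a simple chain does not.
feedback = [("glucose", "insulin"), ("insulin", "glucose")]
chain = [("smoking", "tar"), ("tar", "cancer")]
assert has_cycle(feedback)
assert not has_cycle(chain)
```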
Clarke, B., Gillies, D., Illari, P., Russo, F. and Williamson, J. (2014). Mechanisms and the Evidence Hierarchy. Topoi [Online] 33:339-360. Available at: http://dx.doi.org/10.1007/s11245-013-9220-9.
Evidence-based medicine (EBM) makes use of explicit procedures for grading evidence for causal claims. Normally, these procedures categorise evidence of correlation produced by statistical trials as better evidence for a causal claim than evidence of mechanisms produced by other methods. We argue, in contrast, that evidence of mechanisms needs to be viewed as complementary to, rather than inferior to, evidence of correlation. In this paper we first set out the case for treating evidence of mechanisms alongside evidence of correlation in explicit protocols for evaluating evidence. Next we provide case studies which exemplify the ways in which evidence of mechanisms complements evidence of correlation in practice. Finally, we put forward some general considerations as to how the two sorts of evidence can be more closely integrated by EBM.
Williamson, J. (2013). From Bayesian Epistemology to Inductive Logic. Journal of Applied Logic [Online] 11:468-486. Available at: http://dx.doi.org/10.1016/j.jal.2013.03.006.
Inductive logic admits a variety of semantics (Haenni et al. (2011) [7, Part 1]). This paper develops semantics based on the norms of Bayesian epistemology (Williamson, 2010 [16, Chapter 7]). Section 1 introduces the semantics and then, in Section 2, the paper explores methods for drawing inferences in the resulting logic and compares the methods of this paper with the methods of Barnett and Paris (2008). Section 3 then evaluates this Bayesian inductive logic in the light of four traditional critiques of inductive logic, arguing (i) that it is language independent in a key sense, (ii) that it admits connections with the Principle of Indifference but these connections do not lead to paradox, (iii) that it can capture the phenomenon of learning from experience, and (iv) that while the logic advocates scepticism with regard to some universal hypotheses, such scepticism is not problematic from the point of view of scientific theorising.
Clarke, B., Gillies, D., Illari, P., Russo, F. and Williamson, J. (2013). The Evidence that Evidence-based Medicine Omits. Preventative Medicine [Online] 57:745-747. Available at: http://dx.doi.org/10.1016/j.ypmed.2012.10.020.
According to current hierarchies of evidence for EBM, evidence of correlation (e.g., from RCTs) is always more important than evidence of mechanisms when evaluating and establishing causal claims. We argue that evidence of mechanisms needs to be treated alongside evidence of correlation. This is for three reasons. First, correlation is always a fallible indicator of causation, subject in particular to the problem of confounding; evidence of mechanisms can in some cases be more important than evidence of correlation when assessing a causal claim. Second, evidence of mechanisms is often required in order to obtain evidence of correlation (for example, in order to set up and evaluate RCTs). Third, evidence of mechanisms is often required in order to generalise and apply causal claims. While the EBM movement has been enormously successful in making explicit and critically examining one aspect of our evidential practice, i.e., evidence of correlation, we wish to extend this line of work to make explicit and critically examine a second aspect of our evidential practices: evidence of mechanisms.
Williamson, J. (2013). How Can Causal Explanations Explain? Erkenntnis [Online] 78:257-275. Available at: http://dx.doi.org/10.1007/s10670-013-9512-x.
The mechanistic and causal accounts of explanation are often conflated to yield a 'causal-mechanical' account. This paper prizes them apart and asks: if the mechanistic account is correct, how can causal explanations be explanatory? The answer to this question varies according to how causality itself is understood. It is argued that difference-making, mechanistic, dualist and inferentialist accounts of causality all struggle to yield explanatory causal explanations, but that an epistemic account of causality is more promising in this regard.
Williamson, J. (2013). Why Frequentists and Bayesians Need Each Other. Erkenntnis [Online] 78:293-318. Available at: http://dx.doi.org/10.1007/s10670-011-9317-8.
The orthodox view in statistics has it that frequentism and Bayesianism are diametrically opposed: two totally incompatible takes on the problem of statistical inference. This paper argues to the contrary that the two approaches are complementary and need to mesh if probabilistic reasoning is to be carried out successfully.
Russo, F. and Williamson, J. (2012). EnviroGenomarkers: The Interplay between Mechanisms and Difference Making in Establishing Causal Claims. Medicine Studies [Online] 3:249-262. Available at: http://dx.doi.org/10.1007/s12376-012-0079-7.
According to Russo and Williamson (2007, 2011a,b), in order to establish a causal claim of the form 'C is a cause of E', one needs evidence that there is an underlying mechanism between C and E as well as evidence that C makes a difference to E. This thesis has been used to argue that hierarchies of evidence, as championed by evidence-based movements, tend to give primacy to evidence of difference making over evidence of mechanism, and are flawed because the two sorts of evidence are required and they should be treated on a par.
An alternative approach gives primacy to evidence of mechanism over evidence of difference making. In this paper we argue that this alternative approach is equally flawed, again because both sorts of evidence need to be treated on a par. As an illustration of this parity we explain how scientists working in the 'EnviroGenomarkers' project constantly make use of the two evidential components in a dynamic and intertwined way. We argue that such an interplay is needed not only for causal assessment but also for policy purposes.
Illari, P. and Williamson, J. (2012). What is a Mechanism? Thinking about Mechanisms across the Sciences. European Journal for Philosophy of Science [Online] 2:119-135. Available at: http://dx.doi.org/10.1007/s13194-011-0038-2.
After a decade of intense debate about mechanisms, there is still no consensus characterization. In this paper we argue for a characterization that applies widely to mechanisms across the sciences. We examine and defend our disagreements with the major current contenders for characterizations of mechanisms. Ultimately, we indicate that the major contenders can all sign up to our characterization.
Williamson, J. (2011). Mechanistic Theories of Causality. Philosophy Compass [Online] 6:421-447. Available at: http://dx.doi.org/10.1111/j.1747-9991.2011.00400.x.
Part I of this paper introduces a range of mechanistic theories of causality, including process theories and the complex-systems theories, and some of the problems they face. Part II argues that while there is a decisive case against a purely mechanistic analysis, a viable theory of causality must incorporate mechanisms as an ingredient, and describes one way of providing an analysis of causality which reaps the rewards of the mechanistic approach without succumbing to its pitfalls.
Osimani, B., Russo, F. and Williamson, J. (2011). Scientific Evidence and the Law: An Objective Bayesian Formalisation of the Precautionary Principle in Pharmaceutical Regulation. Journal of Philosophy, Science and Law [Online] 11. Available at: http://www.miami.edu/ethics/jpsl/.
The paper considers the legal tools that have been developed in German pharmaceutical regulation as a result of the precautionary attitude inaugurated by the Contergan decision (1970). These tools are (i) the notion of "well-founded suspicion", which attenuates the requirements for safety intervention by relaxing the requirement of a proved causal connection between danger and source, and the introduction of (ii) the reversal of proof burden in liability norms. The paper focuses on the first and proposes seeing the precautionary principle as an instance of the requirement that one should maximise expected utility. In order to maximise expected utility certain probabilities are required and it is argued that objective Bayesianism offers the most plausible means to determine the optimal decision in cases where evidence supports diverging choices.
Williamson, J. (2011). Objective Bayesianism, Bayesian Conditionalisation and Voluntarism. Synthese [Online] 178:67-85. Available at: http://dx.doi.org/10.1007/s11229-009-9515-y.
Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating. The paper also reviews some existing criticisms and justifications of conditionalisation, arguing in particular that the diachronic Dutch book justification fails because diachronic Dutch book arguments are subject to a reductio: in certain circumstances one can Dutch book an agent however she changes her degrees of belief. One may also criticise objective Bayesianism on the grounds that its norms are not compulsory but voluntary, the result of a stance. It is argued that this second objection also misses the mark, since objective Bayesian norms are tied up in the very notion of degrees of belief.
Casini, L., Illari, P., Russo, F. and Williamson, J. (2011). Models for Prediction, Explanation and Control: Recursive Bayesian Networks. Theoria [Online] 26:5-33. Available at: http://www.ehu.es/ojs/index.php/THEORIA/article/view/1192/825.
The Recursive Bayesian Net (RBN) formalism was originally developed for modelling nested causal relationships. In this paper we argue that the formalism can also be applied to modelling the hierarchical structure of mechanisms. The resulting network contains quantitative information about probabilities, as well as qualitative information about mechanistic structure and causal relations. Since information about probabilities, mechanisms and causal relations is vital for prediction, explanation and control respectively, an RBN can be applied to all these tasks. We show in particular how a simple two-level RBN can be used to model a mechanism in cancer science. The higher level of our model contains variables at the clinical level, while the lower level maps the structure of the cell's mechanism for apoptosis.
Russo, F. and Williamson, J. (2011). Generic versus Single-Case Causality: The Case of Autopsy. European Journal for Philosophy of Science [Online] 1:47-69. Available at: http://dx.doi.org/10.1007/s13194-010-0012-4.
This paper addresses questions about how the levels of causality (generic and single-case causality) are related. One question is epistemological: can relationships at one level be evidence for relationships at the other level? We present three kinds of answer to this question, categorised according to whether inference is top-down, bottom-up, or the levels are independent. A second question is metaphysical: can relationships at one level be reduced to relationships at the other level? We present three kinds of answer to this second question, categorised according to whether single-case relations are reduced to generic, generic relations are reduced to single-case, or the levels are independent. We then explore causal inference in autopsy. This is an interesting case study, we argue, because it refutes all three epistemologies and all three metaphysics. We close by sketching an account of causality that survives autopsy—the epistemic theory.
Darby, G. and Williamson, J. (2011). Imaging Technology and the Philosophy of Causality. Philosophy & Technology [Online] 24:115-136. Available at: http://dx.doi.org/10.1007/s13347-010-0010-7.
Russo and Williamson (Int Stud Philos Sci 21(2):157–170, 2007) put forward the thesis that, at least in the health sciences, to establish the claim that C is a cause of E, one normally needs evidence of an underlying mechanism linking C and E as well as evidence that C makes a difference to E. This epistemological thesis poses a problem for most current analyses of causality which, in virtue of analysing causality in terms of just one of mechanisms or difference making, cannot account for the need for the other kind of evidence. Weber (Int Stud Philos Sci 23(2):277–295, 2009) has suggested to the contrary that Giere’s probabilistic analysis of causality survives this criticism. In this paper, we look in detail at the case of medical imaging technology, which, we argue, supports the thesis of Russo and Williamson, and we respond to Weber’s suggestion, arguing that Giere’s account does not survive the criticism.
Russo, F. and Williamson, J. (2011). Epistemic Causality and Evidence-Based Medicine. History and Philosophy of the Life Sciences [Online] 33:563-582. Available at: http://www.hpls-szn.com/articles.asp?id=146&book=31.
Causal claims in biomedical contexts are ubiquitous albeit they are not always made explicit. This paper addresses the question of what causal claims mean in the context of disease. It is argued that in medical contexts causality ought to be interpreted according to the epistemic theory. The epistemic theory offers an alternative to traditional accounts that cash out causation either in terms of “difference-making” relations or in terms of mechanisms. According to the epistemic approach, causal claims tell us about which inferences (e.g., diagnoses and prognoses) are appropriate, rather than about the presence of some physical causal relation analogous to distance or gravitational attraction. It is shown that the epistemic theory has important consequences for medical practice, in particular with regard to evidence-based causal assessment.
McKay Illari, P. and Williamson, J. (2010). Function and Organization: Comparing the Mechanisms of Protein Synthesis and Natural Selection. Studies in History and Philosophy of Science Part C [Online] 41:279-291. Available at: http://dx.doi.org/10.1016/j.shpsc.2010.07.001.
In this paper, we compare the mechanisms of protein synthesis and natural selection. We identify three core elements of mechanistic explanation: functional individuation, hierarchical nestedness or decomposition, and organization. These are now well understood elements of mechanistic explanation in fields such as protein synthesis, and widely accepted in the mechanisms literature. But Skipper and Millstein have argued (2005) that natural selection is neither decomposable nor organized. This would mean that much of the current mechanisms literature does not apply to the mechanism of natural selection.
We take each element of mechanistic explanation in turn. Having appreciated the importance of functional individuation, we show how decomposition and organization should be better understood in these terms. We thereby show that mechanistic explanation by protein synthesis and natural selection are more closely analogous than they appear—both possess all three of these core elements of a mechanism widely recognized in the mechanisms literature.