EPIB 706: Doctoral Seminar
Winter 2025
| About me | About class |
|---|---|
| James Brophy | |
| james.brophy@mcgill.ca | mycourses2.mcgill.ca |
| Hours: by appointment | Hours: Tuesday/Thursday 8:35h-9:55h |
| Office: Homeless (visiting faculty area) | |
1 Course Description
EPIB 706 is a PhD-level seminar aimed at providing space for students to engage with overarching concepts critical to the theory and practice of epidemiology, as well as to explore recent controversies and debates in the field. The purpose of this course is to reinforce your formal methodological coursework by making space to develop and sharpen your critical thinking skills. We will review a selection of papers that range across methods, principles, arguments, and debates in epidemiology and the wider scientific community.
2 Eligibility
Registration in the PhD program in Epidemiology and successful completion of the course sequence in epidemiologic methods (EPIB 703 and EPIB 704) are required. Students who have not completed EPIB 703 and EPIB 704 must obtain the instructor’s permission to take the course.
3 Course Format
This is a discussion-based course and, quite frankly, it simply won’t work well without engagement and participation from all of us (including me). Of course, everyone has their own level of comfort speaking up, as well as varying levels of interest in some of these topics, so we (I use first-person plural pronouns because I largely share the viewpoints of Sam Harper, who structured this course and taught it for the last several years but is presently on sabbatical) have no expectation that everyone participates equally. What we do ask is that you make a sincere effort to engage with the material, both in terms of the reading and in the different forums for discussion. Learning how to respectfully express your opinion about conceptual and methodological issues, and to respectfully listen, engage, and respond to the opinions of others, is a core part of being a scientist.
4 Evaluation
4.1 Reading
The assigned readings are the core of the course material, and students are expected to carefully and critically read each assignment before class. To facilitate student engagement with the reading we will use the online tool Perusall for all required readings. Perusall is a reading platform in which students (and faculty) annotate texts collaboratively alongside one another. More information on how Perusall works and how it is integrated into the course is available here (thank you Arizona State!). To access Perusall through MyCourses, navigate to Content > Readings > Perusall, and then click the “Open Link” button. This will take you to the Perusall site and automatically register you as a member of the course. If you are having any trouble accessing the readings through Perusall, contact me right away. I will not be using Perusall’s grading features, but I expect you to read, post questions, respond to other students’ questions and answers, and take an active role in generating productive discussion.
4.2 Writing
The discussions in the course are meant to activate your critical thinking skills and to encourage you to synthesize your own thoughts on the material; these insights should be transferable to your area of research interest. Toward that end, over the course of the semester you will be asked to submit one original, critical essay that explores a topic of relevance to epidemiologic science. It may be a direct response to material that we read or discuss in class, or it may be an essay exploring other topics relevant to your work that demonstrates a good-faith effort to engage with the class material. These should take the form of a commentary similar (in spirit) to those we have read during the semester, and should be no longer than 1500 words, or approximately 5 pages of double-spaced material. While this may seem unnecessarily constrained, the limit has a latent objective of encouraging precise synthesis and focused writing, emphasizing the key arguments you want to make. An outline of the essay is due on March 10, 2025, and the final essay is due on April 10, 2025. I will provide examples of what I think are good pieces of writing to aspire to.
4.3 Engagement
Leading Discussion
In addition to the writing assignment, each student will be asked to lead at least one day of discussion among the topics that we will cover (and probably more than one for most of you). For that session, you will come prepared to briefly summarize the material we have read and to prepare some discussion points to help keep the conversation moving. I have created a Google spreadsheet with the current days for each topic here. Please sign up for a session, and we can then have a discussion about the readings and where to draw on other resources for the topic. Note: We have 25 sessions and 18 students this year, so not everyone is required to lead 2 sessions. You should be able to sort this out, but if the remaining slots don’t get filled I will happily assign them.
Although each class session is 1.5 hours, there will inevitably be topics that come up that we can’t fully address in class. I encourage you to use the Discussion section of MyCourses to post questions or comments there. I may also post links to additional pertinent readings for those interested.
Participation
Real engagement means active participation. This can take many forms, but for some general guidance, this means:
Showing up for each class having read and engaged with the assigned material. It will help facilitate discussion if you aim to contribute at least 2-3 discussion points or questions about the material in Perusall, and bring those to class;
Focusing during class discussion, avoiding distractions, and being present and intellectually engaged;
Asking questions about anything in the readings that seems unclear or objectionable (in class, outside class, online). This can include both specific ideas from the readings, as well as synthesizing or finding themes common to different readings or our discussion;
Offering respectful arguments and responses, and respectfully listening to the arguments and responses of others. Contributions should be relevant and helpful and demonstrate that you are engaging with the material being discussed at the time, and that you are well-prepared for class.
5 Grading
The course is pass-fail.
6 Course Outline (12 “questions” to consider)
A note about the outline. In an effort to make this course as dynamic and helpful to students as possible, the list of topics and readings below is subject to change. Enthusiasm (or lack thereof) for certain topics may lead us to revise, drop, add, or replace some readings or entire topics as we go. I promise to entertain any suggestions for changes, but may also disagree if I feel certain topics or readings are too important to replace.
6.1 Week 1: What is the present and future of epidemiology as a discipline?
Why are we doing this?
Because this is not a didactic course that is focused on learning methods or technical skills, and because in the past this course has often been critiqued for not providing a solid rationale for why it even exists, I owe it to you to provide some justification for the topics and readings I’ve chosen, as well as for why I think this material would be useful for your doctoral training. For each set of assigned material (the ‘What’), I’ve included a brief rationale (the ‘Why’). I hope you find it helpful.
Tuesday 2025-01-07: Course introduction
- Administrative aspects of the course.
- Round table – introductions.
- Discussion of objectives and competencies.
In the first session we will talk generally about high-level questions regarding the discipline of epidemiology as a whole. Although you are early on in your training, I think it is valuable for you to be aware of these broader discussions about where the field stands in relation to its past, and what the appropriate balance should be between descriptive, causal, or implementation questions. Having some knowledge about the intellectual history of different concepts (“risk factor epidemiology”, “consequential epidemiology”) will help you to figure out where your own work stands in relation to the discipline as a whole.
Thursday 2025-01-09: Reflections on epidemiology’s past and present
What:
Why:
The Davey Smith paper provides a bit of historical orientation to the ‘modern’ epidemiology training you are getting, as well as a critique of it in relation to epidemiology’s past. I chose the Lesko et al. paper because it focuses on the relationship between the tools of epidemiology (about which you are learning a lot in this first year) and the kinds of questions that can be answered with those tools. In particular, they focus on differences between purely descriptive questions, questions about synthesized evidence on causal relationships, and questions about specific interventions for which we want to estimate a causal effect. I also like that it was written by early career researchers whose training is in many ways similar to your own.
6.2 Week 2: What kinds of questions should we be asking?
Asking good questions is central to advancing epidemiologic knowledge, but what makes a question ‘good’? Is ‘novelty’ more important than making incremental progress? Should it matter whether a given question will produce ‘actionable’ evidence? And is it problematic if a study’s methods are not well aligned with the question it seeks to answer? Is it wasteful (or, even unethical) to produce such work?
Tuesday 2025-01-14: Does it matter whether questions and methods align?
What:
Fox MP, Murray EJ, Lesko CR, Sealy-Jefferson S. On the Need to Revitalize Descriptive Epidemiology. Am J Epidemiol 2022;191(7):1174–9. [link] [4263 words]
Hernan MA, Hsu J, Healy B. A Second Chance to Get Causal Inference Right: A Classification of Data Science Tasks. CHANCE 2019;32(1):42–49. [link] [5650 words]
Why:
What is the relationship between research questions and methods? Should we just use multivariable regression for everything? Does it actually matter? Fox et al. argue for the importance of reinvigorating descriptive epidemiology and make a case that this has particular consequences for both theory and methods. The paper by Hernan and colleagues tries to lay out how questions and methods should be aligned in empirical data science research. It was written as enthusiasm for machine learning and other empirical data science algorithms was achieving a high degree of influence, and it aims to clarify the utility of being clear about the question being asked, the methods used to answer it, and the role of expert knowledge in generating the result.
Thursday 2025-01-16: What makes a ‘good’ question?
What:
Banack H, Fox M. Questioning the Questions with Maria Glymour. SERiousEPI podcast. 2020-10-01. [link] [44 mins]
Fox MP, Edwards JK, Platt R, Balzer LB. The Critical Importance of Asking Good Questions: The Role of Epidemiology Doctoral Training Programs. Am J Epidemiol 2020;189(4):261–264. [link] [1876 words]
Why:
The paper by Fox et al. and the podcast by Banack and Fox (with apologies in advance for the volume of terribly corny epidemiology jokes contained therein; you can easily skip the intro and start at 4:30) are meant to get you thinking about how to orient your own research around choosing questions to answer, and how to think critically about the inevitable tradeoffs that come with doing research. How will you decide what questions to answer with your work, and how will you evaluate the questions others are asking?
6.3 Week 3: How important is formal causal inference?
Much of modern epidemiologic training now starts with the introduction of the potential outcomes framework, as well as of DAGs as a way to draw and consider the assumptions needed for doing causal inference. What are the implications of using these frameworks for the kinds of questions that can be asked and answered in epidemiology? Do we risk restricting ourselves to ‘formal’ methods when it comes to causal questions, or are other alternatives possible? These are fundamental questions about the nature of epidemiologic inquiry, and it is useful for you to consider the benefits and drawbacks that may come with adopting this epistemological stance. As you progress in your training, you’ll need to decide how you will approach questions of causal inference, both in your own work and in your evaluations of the wider epidemiologic literature.
Tuesday 2025-01-21: Are potential outcomes and DAGs necessary?
What:
Krieger N, Davey Smith G. The tale wagged by the DAG: broadening the scope of causal inference and explanation for epidemiology. Int J Epidemiol 2016;45(6):1787–1808. [link] [15714 words]
Daniel RM, De Stavola BL, Vansteelandt S. Commentary: The formal approach to quantitative causal inference in epidemiology: misguided or misrepresented? Int J Epidemiol 2016;45(6):1817–1829. [link] [10561 words]
Why:
The first set of papers for this session comes from an older, but still relevant, “debate” about the utility and consequences of the “formal” approach to causal inference in epidemiology, which you likely now take for granted since that is what most ‘modern’ epi programs teach (including ours). Krieger and Davey Smith ask critical questions of potential outcomes and DAGs (the latter was also the editor at IJE at the time, hence the ‘relaxed’ approach to word limits), and Daniel et al. defend, more or less, the modern approach. I look forward to hearing your thoughts.
Thursday 2025-01-23: Are well-defined interventions needed for causal questions?
What:
Schwartz S, Gatto NM, Campbell UB. Causal identification: a charge of epidemiology in danger of marginalization. Ann Epidemiol 2016;26(10):669-673. [link]. [4069 words]
Hernan MA. Does water kill? A call for less casual causal inferences. Ann Epidemiol 2016;26(10):674-680. [link]. [5734 words]
Schwartz S, Gatto NM, Campbell UB. Heeding the call for less casual causal inferences: the utility of realized (quantitative) causal effects. Ann Epidemiol 2017;27(6):402-405. [link]. [2756 words]
Why:
The second set of readings for this week is meant to engage with a more specific critique of ‘modern’ causal inference methods, namely, the notion that ‘good’ causal questions are based on well-defined interventions. This is a more recent argument, largely associated with Miguel Hernan and linked to the idea of using ‘target trials’ to formulate questions for observational studies.
As academics are wont to do, there has been some pushback against this idea, notably by Sharon Schwartz, another long-term and thoughtful critic of epidemiology (see her nice 1999 pre-causal-inference-revolution paper in Am J Public Health on the consequences of what she called ‘Type III error’, which is about asking the wrong question). She and other critics are trying to understand its implications (again) for the kinds of causal questions we can answer (and note that I think this is a different sort of critique than the one made by Krieger and Davey Smith).
6.4 Week 4: How should we study non-manipulable exposures?
Exposures that are not directly (or perhaps even theoretically) manipulable, such as race, ethnicity, sex, or country of birth, have been the subject of longstanding debate and present important challenges for the counterfactual models of causal inference you have been learning about. This set of readings aims to clarify some of these conceptual questions and derive potential paths forward that respect the “rules” of causal inference but can also provide evidence that may be useful for reducing differences in health across non-manipulable factors.
Tuesday 2025-01-28: Are non-manipulable exposures causes?
What:
VanderWeele TJ, Robinson WR. On the causal interpretation of race in regressions adjusting for confounding and mediating variables. Epidemiology 2014 Jul;25(4):473–84. [link] [11470 words]
Glymour MM, Spiegelman D. Evaluating Public Health Interventions: 5. Causal Inference in Public Health Research-Do Sex, Race, and Biological Factors Cause Health Outcomes? Am J Public Health 2017 Jan;107(1):81–85. [link] [3446 words]
Boyd RW et al. On Racism: A New Standard For Publishing On Racial Health Inequities. Health Affairs Blog, July 2, 2020. [link] [2378 words]
Why:
The paper by VanderWeele and Robinson tries to address the issue of non-manipulable exposures (specifically race) in a way that respects the complexity of this kind of exposure, but moves forward by asking whether it is helpful to reframe the question as how interventions on factors plausibly affected by race may affect racial differences in health. Meanwhile, Glymour and Spiegelman offer more of a defense of the idea that non-manipulable factors are causes and deserve the same kind of consideration and treatment as other exposures we routinely study in epidemiology. Finally, the blog post by Boyd et al. is a relatively new piece making strong arguments for how race should be reported and interpreted in epidemiologic and clinical studies.
Thursday 2025-01-30: Case study: race in clinical treatment.
What:
Vyas DA, Eisenstein LG, Jones DS. Hidden in Plain Sight — Reconsidering the Use of Race Correction in Clinical Algorithms. New Engl J Med 2020;383(9):874–82. [link] [6251 words]
Manski CF. Patient-centered appraisal of race-free clinical risk assessment. Health Econ 2022;31(10):2109–14. [link] [4133 words]
Why:
Taking a bit of a break from conceptual readings on questions and causal inference, in the second session we delve into a case study of the challenges in the use of race in clinical medicine. Should we use race as a factor in recommending treatments to patients? Does it matter whether or not race is a cause of the outcome, or whether including race might affect inequalities? The two papers from Vyas et al. and Manski arrive at different arguments regarding whether or not race should be included, and I think they provide a rich set of issues to discuss that complement the arguments about how we should study non-manipulable exposures in epidemiology.
6.5 Week 5: Should we try to randomize interventions?
Okay, we’ve talked about the role of asking good questions, whether non-manipulable factors are causes and ways to investigate them, and whether we need well-specified interventions for causal questions, but let’s now turn toward more practical concerns. You want to study intervention X, which is not known to be either harmful or beneficial, can be ethically and feasibly delivered, and has plausible reasons why it should affect outcome Y. What kind of design should you use? Should you try to design a randomized evaluation? What would be the benefits? What drawbacks? What implications for external generalizability or for understanding the mechanisms through which it may affect Y?
Tuesday 2025-02-04: Are RCTs special?
What:
Why:
Deaton and Cartwright’s paper provides an overview of core philosophical, conceptual, and statistical concepts of randomized trials, and a lot of comments on their benefits and drawbacks. Haushofer and Metcalf apply questions about feasibility to our pandemic circumstances (though it seems a long time ago that they published this in May of 2020, before viable vaccines, before new strains, before the second, third, and omicron waves). As epidemiologists, I think you will benefit from getting into the weeds a bit about randomized designs and grappling with questions about when and where they might be appropriate.
Thursday 2025-02-06: What if we can’t randomize?
What:
Lodi S, Phillips A, Lundgren J, Logan R, Sharma S, Cole SR, et al. Effect Estimates in Randomized Trials and Observational Studies: Comparing Apples With Apples. Am J Epidemiol 2019;188(8):1569–1577. [link] [3044 words]
Bärnighausen T, Tugwell P, Røttingen JA, Shemilt I, Rockers P, Geldsetzer P, et al. Quasi-experimental study designs series—paper 4: uses and value. J Clin Epidemiol 2017;89:21–9. [link] [6932 words]
Why:
Despite some important strengths of randomized designs, especially for exchangeability, in some cases it just won’t be feasible or ethical to pursue a randomized design. What then? Anything goes? Just plug all the confounders into your regression and hope for the best? There are still good reasons to consider thinking conceptually about the trial you would design if feasibility and ethical issues were irrelevant, and then attempting to pursue an observational design that corresponds as closely as possible to your hypothetical “target trial”.
Lodi et al. focus on comparing results from an observational study and a trial that investigate the same question, and their paper shows a practical example of how you might approach this comparison (I would say even if you don’t have trial data). Nevertheless, this design still relies almost entirely on regression adjustment as a way of achieving exchangeability–a rather strong assumption, and one that may be difficult to sell to critics.
Finally, Bärnighausen et al. provide an overview of the benefits and drawbacks of quasi-experimental strategies that, by design, control for at least some sources of unmeasured confounding. These designs are becoming more common in epidemiology and are worth considering when deciding how you might approach a causal question when randomization is not feasible.
6.6 Week 6: Do we need representative samples?
Okay, so now suppose that we’ve decided on a question and a (randomized or non-randomized) design. Who should be in our sample? Is it important that we obtain a random sample of our target population, or can we use a convenience sample? And how does this change depending on the goal of our study, i.e., to estimate a prevalence versus develop a prediction model versus estimate a causal effect?
Tuesday 2025-02-11: Should our studies be representative?
What:
Rothman KJ, Gallacher JEJ, Hatch EE. Why representativeness should be avoided. Int J Epidemiol 2013;42(4):1012-4. [link]. [2338 words]
Ebrahim S, Davey Smith G. Should we always deliberately be non-representative? Int J Epidemiol 2013;42:1022–26. [link] [3823 words]
Stamatakis E, et al. Is Cohort Representativeness Passé? Poststratified Associations of Lifestyle Risk Factors with Mortality in the UK Biobank. Epidemiology 2021;32:179–188. [link] [6921 words]
Why:
Debates about whether studies designed to estimate causal effects need to be representative, or alternatively should purposefully be designed not to be representative, have been persistent in epidemiology, but have also become more pressing given increasing concerns about generalizability. The paper by Rothman et al. arguing that causal studies should avoid representative samples created a bit of a stir a few years ago, and has prompted some additional empirical work on how much this matters. These issues are important when you are both producing and consuming research, and it will be worthwhile to struggle a bit with them. I’ve also included a recent empirical piece that attempts to assess the quantitative consequences for a specific set of exposures and outcomes.
Thursday 2025-02-13: Case study: COVID-19 vaccine uptake
What:
- Bradley VC et al. Unrepresentative Big Surveys Significantly Overestimated US Vaccine Uptake. Nature 2021;600:695–700. [link] [11400 words]
Why:
Once again, let’s take a break and delve into a case study regarding the importance (or non-importance) of representative samples. Do the consequences actually matter? The paper by Bradley et al. on vaccine uptake surveys provides an example to discuss in this second session.
6.7 Week 7: How should we make statistical inferences?
This question probably won’t go away over the course of your PhD training, or in the near future, so it is important to grapple with it. What are the consequences of the way we currently teach and use p-values (or 95% confidence intervals), and how does this affect the way you read and interpret evidence? Should we banish the term “statistically significant” and, if so, why? How will you argue against peer reviewers and journal editors (or even your supervisors) who demand you include p-values (or, even worse, stars for levels of ‘significance’) in your revisions? This is a core issue in moving from sampled data and analysis to the kind of inferences you will make about causal effects. How will you approach it?
Tuesday 2025-02-18: How should we use p-values?
What:
- Wasserstein RL, Schirm AL, Lazar NA. Moving to a world beyond “p<0.05”. The American Statistician 2019;73(sup1):1–19. [link]. [10923 words]
Why:
Wasserstein et al. provide a series of arguments for abandoning p-value thresholds, some of which you are likely to have heard before, but there is much more in this paper. They make a number of positive arguments for how we should conduct inference and interpret the results of research, and they aim to provide solid foundations for being more thoughtful about how to interpret evidence.
Thursday 2025-02-20: What are alternatives to p-values?
What:
Greenland S, Mansournia MA, Joffe M. To curb research misreporting, replace significance and confidence by compatibility. Prev Med 2022;164:107127. [link] [4493 words]
Heuts S, Kawczynski MJ, Sayed A, Urbut SM, Albuquerque AM, Mandrola JM, Kaul S, Harrell FE Jr, Gabrio A, Brophy JM. Bayesian Analytical Methods in Cardiovascular Clinical Trials: Why, When, and How. Can J Cardiol 2024 (epub ahead of print); doi:10.1016/j.cjca.2024.11.002. [Link]
Why:
Given the Wasserstein paper’s suggestion to abandon statistical significance, questions naturally arise about how we should do inference instead of using p-values. Most epidemiologists are trained to use confidence intervals rather than p-values, but this does not appear to have changed the basic problem of scientists dichotomizing evidence using arbitrary statistical rules. These papers attempt to provide some alternative avenues to explore. Greenland et al. continue their quest to promote notions of ‘compatibility’ rather than significance. I included the Heuts et al. paper because it shows how to provide a more nuanced interpretation of a ‘non-significant’ RCT using straightforward Bayesian inference, and moreover all of the statistical code to reproduce its findings is provided. This should get you thinking about how you will manage your inferences in your own research.
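To make that idea concrete before you reach the paper, here is a minimal sketch of a normal-normal Bayesian update for a hypothetical ‘non-significant’ trial result (the numbers and the prior are my own illustrative assumptions, not the Heuts et al. analysis or its code):

```python
# A minimal sketch of a conjugate normal-normal Bayesian update for a
# hypothetical trial result; all numbers are illustrative assumptions.
import math
from statistics import NormalDist

log_hr_hat, se_hat = -0.15, 0.10   # hypothetical estimated log hazard ratio (two-sided p ~ 0.13)
prior_mean, prior_sd = 0.0, 0.50   # weakly informative prior centred on no effect

# Precision-weighted combination of the prior and the trial estimate.
post_prec = 1 / se_hat**2 + 1 / prior_sd**2
post_mean = (log_hr_hat / se_hat**2 + prior_mean / prior_sd**2) / post_prec
post_sd = math.sqrt(1 / post_prec)

# Posterior probability of benefit, i.e. that the hazard ratio is below 1.
p_benefit = NormalDist(post_mean, post_sd).cdf(0.0)
print(f"Posterior median HR = {math.exp(post_mean):.2f}, P(HR < 1) = {p_benefit:.2f}")
```

Under these assumptions the posterior probability of benefit is roughly 0.93 even though the conventional p-value sits above 0.05, which is the kind of more nuanced summary the paper advocates.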
6.8 Week 8: How bad can it be?
This week’s readings move beyond inference on the main quantitative question or hypothesis of interest and are focused on concepts and methods relevant to testing and probing the assumptions needed to interpret quantitative evidence. These ideas are crucial for thinking about testing alternative explanations for observed findings, and quantifying the assumptions needed to do so.
Tuesday 2025-02-25: How can we quantify our assumptions?
What:
- Lash TL, Fox MP, MacLehose RF, Maldonado G, McCandless LC, Greenland S. Good practices for quantitative bias analysis. Int J Epidemiol 2014;43(6):1969–85. [link] [12518 words]
Why:
Lash et al. provide a long overview of good practices for such bias analysis, discussing both the motivation for why one would want to conduct bias analysis and the mechanics of how to do so (choosing parameters, considering uncertainty). These methods are valuable for most study designs, but may be especially so for “garden variety” observational studies that must rely on basic regression adjustment to have any hope of making causal inferences.
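As a small taste of those mechanics, here is a minimal, deterministic sketch of the simplest kind of bias analysis for an unmeasured binary confounder; the bias parameters are hypothetical and the example is mine, not one taken from Lash et al.:

```python
# A minimal deterministic bias analysis for an unmeasured binary confounder U.
# All parameter values below are hypothetical illustrations.
def rr_adjusted_for_unmeasured_confounder(rr_obs, rr_ud, p_u_exposed, p_u_unexposed):
    """Adjust an observed risk ratio for a binary unmeasured confounder U.

    rr_obs        -- observed exposure-outcome risk ratio
    rr_ud         -- assumed confounder-outcome risk ratio
    p_u_exposed   -- assumed prevalence of U among the exposed
    p_u_unexposed -- assumed prevalence of U among the unexposed
    """
    bias_factor = (p_u_exposed * (rr_ud - 1) + 1) / (p_u_unexposed * (rr_ud - 1) + 1)
    return rr_obs / bias_factor

# Example: an observed RR of 1.8, and a confounder that doubles the risk of the
# outcome and is twice as common among the exposed (60% vs 30%).
print(round(rr_adjusted_for_unmeasured_confounder(1.8, 2.0, 0.6, 0.3), 2))  # ~1.46
```

The good practices in the reading go well beyond this single calculation, for example by placing distributions on the bias parameters and propagating that uncertainty (probabilistic bias analysis) rather than plugging in one fixed set of values.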
Thursday 2025-02-27: Can bias analysis be biased?
What:
Why:
Although sensitivity analysis has been around a long time, quantitative bias analysis continues to be rare, and perhaps one explanation is that it asks a lot from researchers. Other options include using simple bounds to answer questions about how bad things would have to be to ‘nullify’ an effect, but replacing ignored assumptions with implausibly extreme ones leaves much to be desired. Lash et al. demonstrate how to do bias analysis ‘better’ and what can go wrong when it is not done with care. Gustafson, in a comment on some other papers, provides an interesting set of hypothetical scenarios for different ways of combining study designs and strategies for accounting for bias. I hope you come away with an appreciation and understanding of why such analyses are valuable.
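For concreteness, one widely used bound of this kind is the E-value of VanderWeele and Ding. It is not part of the assigned readings, but it illustrates what such a bound reports: the minimum strength of association (on the risk-ratio scale, with both exposure and outcome) that an unmeasured confounder would need in order to fully explain away an observed risk ratio. A minimal sketch:

```python
# Illustrative only: the E-value bound for unmeasured confounding
# (VanderWeele & Ding); not drawn from the assigned readings for this session.
import math

def e_value(rr_obs):
    """Minimum confounder-exposure and confounder-outcome risk ratio needed
    to fully explain away an observed risk ratio rr_obs."""
    rr = rr_obs if rr_obs >= 1 else 1 / rr_obs  # flip protective effects
    return rr + math.sqrt(rr * (rr - 1))

print(round(e_value(1.8), 2))  # an observed RR of 1.8 gives an E-value of 3.0
```

The critique above applies here as well: a single worst-case number is easy to compute, but it substitutes one blunt assumption for the careful specification of bias parameters that a full bias analysis requires.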
6.9 Week 9: Winter Break
Tuesday 2025-03-04: No class
Thursday 2025-03-06: No class
6.10 Week 10: To whom do epidemiologic results apply?
We have spent time now thinking about developing questions, considering whether to randomize treatments, making statistical inferences about population or causal parameters, and thinking about how to address biases. This week, we are moving on to thinking about the question of to whom our study results should apply. These are core questions that come up in the context of evaluating strengths and weaknesses of studies in peer review (or perhaps grant review), and whether the results of studies may provide actionable evidence.
Tuesday 2025-03-11: How should we think about generalizing to other populations?
What:
Lesko CR, Buchanan AL, Westreich D, Edwards JK, Hudgens MG, Cole SR. Generalizing Study Results: A Potential Outcomes Perspective. Epidemiology 2017;28:553–561. [link] [6631 words]
Westreich D, Edwards JK, Lesko CR, Cole SR, Stuart EA. Target Validity and the Hierarchy of Study Designs. Am J Epidemiol 2019;188:438–443. [link] [5308 words]
Why:
Despite the clear importance of considering to whom study results should apply, there has been little formal work on what assumptions are needed to derive quantitative estimates of effects in different external populations, whether those refer to the target population or a population that is external to the target. These two papers argue that we should consider these formal approaches, and provide some guidance as to what assumptions (and potentially data) would be needed to do so.
Thursday 2025-03-13: Should we generalize to specific individuals?
What:
Khoury, MJ et al. The Intersection of Genomics and Big Data with Public Health: Opportunities for Precision Public Health. PLoS Medicine 2020;17: e1003373. [link] [7677 words]
Cooper R, Paneth N. Will precision medicine lead to a healthier population? Issues in Science and Technology 2020;36(2):64-71. [link] [6793 words]
Why:
The second part of our readings on extending results to other populations reframes the question, not as one of transporting results to a different sample, but as whether (and how) we can derive reliable predictions about treatment effects at the individual level. This is related to ongoing discussions about the concepts of “precision” epidemiology or precision public health, and whether these ideas are really novel or just ways of re-branding what we have always considered in public health, namely the targeting of interventions. Different ‘camps’ have emerged of those more and less enthusiastic about this idea, and these two papers are meant to provide an overview of some of these issues.
6.11 Week 11: Is research (including epidemiology) reliable?
This week we are stepping away a bit from epidemiology alone and focusing on larger questions related to potential problems that may be widespread across scientific research (obviously, including epidemiology). Can or should we trust most published research? Is it reliable? Is it replicable and, if not, is that really a problem? Many of these issues have come up in the context of what has been called the “replication crisis” in science, much of which started when some high-profile lab and social science projects were found not to replicate using similar designs and methods.
Tuesday 2025-03-18: Is science broken?
What:
Oliver, J. Scientific studies. Last Week Tonight with John Oliver, Season 3, Episode 11, May 5, 2016 [link] [20 mins] Note: contains explicit and crude language
Baker M. 1,500 scientists lift the lid on reproducibility. Nature 2016:533(7604):452–4. [link] [2002 words]
Pearson H. How COVID Broke the Evidence Pipeline. Nature 2021;593:182-5 [link] [3884 words]
Why:
The video by Oliver discusses wild claims, the propensity of such claims to be blown up by the media, as well as problems with incentives. Additional evidence on these issues comes from a survey of scientists reported in Nature across a broad number of fields, as well as a report by Pearson on how this has played out in research on the pandemic.
Thursday 2025-03-20: What are some potential solutions?
What:
- Munafo MR et al. A manifesto for reproducible science. Nature Human Behaviour 2017;1:1–9. [link] [9180 words]
Why:
The second session features a long paper by Munafo et al. that is more focused on outlining and describing some potential solutions, including study pre-registration and pre-analysis plans, registered reports or ‘results-blind’ peer review, sharing of research materials (including data and code), and changing incentives around publication and grants - all core processes for modern working scientists.
6.12 Week 12: How should we put together all of the evidence?
This week we are going to talk more about how to put together and think about diverse lines of evidence to come to some sort of judgement about causal effects. Most of you will have heard (and I agree) that a single study is unlikely to be sufficient to generate certainty about a given exposure-outcome effect. There may be special circumstances (e.g., vaccine trials for COVID-19), but generally we are starting from a place where we have to consider various lines of argument and evidence to inform our thinking.
Tuesday 2025-03-25: Can “triangulation” help?
What:
- Lawlor DA, Tilling K, Davey Smith G. Triangulation in aetiological epidemiology. Int J Epidemiol 2016;45(6):1866–86. [link] [14610 words]
Why:
You may have seen various papers talk about the concept of ‘triangulation’ in thinking about evidence. The Lawlor et al. paper focuses on trying to pull together (sometimes by design) data sources that may trade off different kinds of biases in order to see whether results are consistent across various settings, but in a formal and methodological way. I encourage you to consider whether you think it is a useful approach.
Thursday 2025-03-27: Does meta-analysis help or hurt?
What:
Hilton Boon M, Burns J, Craig P, Griebler U, Heise TL, Vittal Katikireddi S, et al. Value and Challenges of Using Observational Studies in Systematic Reviews of Public Health Interventions. Am J Public Health 2022;112(4):548–52. [link] [3899 words]
Savitz DA, Forastiere F. Do pooled estimates from meta-analyses of observational epidemiology studies contribute to causal inference? Occup Environ Med 2021;78(9):621–2. [link] [2029 words]
Why:
Meta-analysis within a systematic review has become the dominant method for summarizing epidemiologic evidence, but it comes with a large set of challenges, particularly for observational studies. These two papers provide an overview of some of the challenges of how such syntheses contribute to evidence, and whether they are ultimately useful for informing causal inference about the effects of exposures.
6.13 Week 13: How should we communicate epidemiologic evidence?
This week we will be reading more about the (possible) tension between our duties as epidemiologic scientists to report evidence in a dispassionate way, and the sometimes urgent need for action to tackle pressing health problems when evidence is uncertain. The readings are from two seasoned epidemiologists and provide some overarching guidance for thinking about how far to ‘push’ with your results, and what the benefits and drawbacks of different levels of engagement with the broader scientific community might be.
Tuesday 2025-04-01: How should we discuss epidemiologic results?
What:
Savitz DA, Wellenius GA. Characterization and Communication of Conclusions. In: Interpreting epidemiologic evidence: connecting research to applications. Oxford University Press; 2016. p. 200–10. [link] [5958 words]
Blastland M et al. Five rules for evidence communication. Nature 2020;587:362–364 [link] [2664 words]
Why:
This session’s readings are focused more on how to grapple with and communicate uncertainty in epidemiologic findings, both to other scientists (e.g., in peer-reviewed papers) and to the public or other stakeholders. The chapter by Savitz and Wellenius provides some high-level guidance and advice to epidemiologists about their duties to describe evidence in a dispassionate way, but also to make good-faith efforts to distill findings in ways suitably tailored for different audiences. Some additional discussion of how to consider biases across different studies when synthesizing evidence is also included. The piece by Blastland et al. is more direct in providing advice to scientists on how to communicate about evidence, largely arguing that our duty is to be transparent in reporting, especially about uncertainty, rather than attempting to persuade or change opinions.
Thursday 2025-04-03: How much does “evidence” even matter?
What:
- Greenhalgh T. Miasmas, Mental Models and Preventive Public Health: Some Philosophical Reflections on Science in the COVID-19 Pandemic. Interface Focus 2021;11: 1-8. [link] [7602 words]
Why:
Greenhalgh uses the twin examples of the 19th century cholera epidemics in England and the present pandemic to talk about the role of ‘extra-scientific’ values in shaping how we view and decide on what kind of ‘evidence’ is compelling, and ties this to current (and ongoing) debates regarding the use of evidence in the COVID-19 pandemic.
6.14 Week 14: What is to be done?
Tuesday 2025-04-08: Can we change scientific culture?
We’ve discussed multiple theoretical and methodological issues thus far, many of which stem from, and cannot be separated from, the fact that science is conducted by humans. This creates a kind of cultural inertia in science that has produced important advances, but also creates problems. Can we change the culture to improve the way epidemiology is working today? This last session asks you to consider this question.
What:
Why:
These two papers argue for adopting larger-scale innovations in the social sciences (including epidemiology). Mathur and Fox make a strong case for greater openness and transparency, specifically in epidemiology. Lakens (a psychologist) makes a case that ethical review boards have an obligation to consider whether proposed research is capable of producing reliable evidence, and suggests they should have the power to intervene to stop work that has a low probability of providing it.
Thursday 2025-04-10: A few ancillary topics and final wrap up
- Possible topics to discuss: cognitive biases, researcher degrees of freedom, combatting medical misinformation.
- Final discussion.
- Course review and feedback.
Additional optional readings:
Greenland S. Invited Commentary: The Need for Cognitive Science in Methodology. Am J Epidemiol 2017;186(6):639–45. https://doi.org/10.1093/aje/kwx259 [Link]
Silberzahn R, Uhlmann EL, Martin DP, Anselmi P, Aust F, Awtrey E, Bahník Š, et al. Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results. Advances in Methods and Practices in Psychological Science 2018;1(3):337–56. [Link]
7 Academic Integrity
The Department of Epidemiology and Biostatistics has asked instructors to remind students of McGill University regulations regarding academic integrity and plagiarism. These are excerpted below.
7.1 Academic offences
The integrity of University academic life and of the degrees the University confers is dependent upon the honesty and soundness of the teacher- student learning relationship and, as well, that of the evaluation process. Conduct by any member of the University community that adversely affects this relationship or this process must, therefore, be considered a serious offence. McGill University values academic integrity. Therefore all students must understand the meaning and consequences of cheating, plagiarism and other academic offences under the Code of Student Conduct and Disciplinary Procedures (see http://www.mcgill.ca/integrity for more information).
L’université McGill attache une haute importance à l’honnêteté académique. Il incombe par conséquent à tous les étudiants de comprendre ce que l’on entend par tricherie, plagiat et autres infractions académiques, ainsi que les conséquences que peuvent avoir de telles actions, selon le Code de conduite de l’étudiant et des procédures disciplinaires (pour de plus amples renseignements, veuillez consulter le site http://www.mcgill.ca/integrity).
7.2 Plagiarism
- No student shall, with intent to deceive, represent the work of another person as his or her own in any academic writing, essay, thesis, research report, project or assignment submitted in a course or program of study or represent as his or her own an entire essay or work of another, whether the material so represented constitutes a part or the entirety of the work submitted.
- Upon demonstration that the student has represented and submitted another person’s work as his or her own, it shall be presumed that the student intended to deceive; the student shall bear the burden of rebutting this presumption by evidence satisfying the person or body hearing the case that no such intent existed, notwithstanding Article 22 of the Charter of Student Rights.
- No student shall contribute any work to another student with the knowledge that the latter may submit the work in part or whole as his or her own. Receipt of payment for work contributed shall be cause for presumption that the student had such knowledge; the student shall bear the burden of rebutting this presumption by evidence satisfying the person or body hearing the case that no such intent existed (notwithstanding Article 22 of the Charter of Students’ Rights).
7.3 Cheating
No student shall:
- In the course of an examination obtain or attempt to obtain information from another student or unauthorized source or give or attempt to give information to another student or possess, use or attempt to use any unauthorized material;
- Represent or attempt to represent oneself as another or have or attempt to have oneself represented by another in the taking of an examination, preparation of a paper or other similar activity;
- Submit in any course or program of study, without both the knowledge and approval of the person to whom it is submitted, all or a substantial portion of any academic writing, essay, thesis, research report, project or assignment for which credit has previously been obtained or which has been or is being submitted in another course or program of study in the University or elsewhere;
- Submit in any course or program of study any academic writing, essay, thesis, research report, project or assignment containing a statement of fact known by the student to be false or a reference to a source which reference or source has been fabricated.
Downloaded and excerpted from A Handbook on Student Rights and Responsibilities, 2010. Available on-line at http://www.mcgill.ca/students/srr/academicrights/integrity/cheating
8 Language Rights
“In accord with McGill University’s Charter of Students’ Rights, students in this course have the right to submit in English or in French any written work that is to be graded. This does not apply to courses in which acquiring proficiency in a language is one of the objectives.”
« Conformément à la Charte des droits de l’étudiant de l’Université McGill, chaque étudiant a le droit de soumettre en français ou en anglais tout travail écrit devant être noté (sauf dans le cas des cours dont l’un des objets est la maîtrise d’une langue). »