At October’s discussion meetup, we will dig into John Ioannidis’s 2005 paper “Why Most Published Research Findings Are False.” In addition to outright fraud (Ariely, Wansink, Francesca Gino (lawsuit!), etc.), there are quite a few subtler and more pernicious ways in which seemingly compelling results can be indistinguishable from, well, nothing.
Bad incentives and bad statistics can leave a lot of researchers fooled. So, how do we avoid fooling ourselves? Can we trust any science, or should we throw it all away and get into essential oils instead? Let’s discuss!
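If you want a taste of the arithmetic before the meetup, here is a small sketch of the positive predictive value idea at the heart of Ioannidis’s argument. The variable names and the probability (rather than odds) formulation are mine, and the paper also models bias, which this omits:

```python
# Positive predictive value (PPV): of the "significant" findings a field
# produces, what fraction reflect a true effect?
def ppv(prior, power, alpha):
    """prior: fraction of tested hypotheses that are actually true;
    power: chance a real effect yields p < alpha;
    alpha: chance a null effect yields p < alpha (false-positive rate)."""
    true_hits = power * prior          # real effects correctly detected
    false_hits = alpha * (1 - prior)   # null effects "detected" by chance
    return true_hits / (true_hits + false_hits)

# A long-shot field where only 1 in 100 tested hypotheses is true:
print(ppv(prior=0.01, power=0.80, alpha=0.05))  # ~0.14: most findings false

# p-hacking effectively inflates alpha: trying k independent analyses and
# reporting whichever "works" raises the false-positive rate to 1-(1-alpha)^k.
alpha_hacked = 1 - (1 - 0.05) ** 5
print(ppv(prior=0.01, power=0.80, alpha=alpha_hacked))  # drops to ~0.03
```

The takeaway: with unlikely hypotheses and a little flexibility in analysis, a field can be mostly wrong even when every individual study looks perfectly respectable.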
PS: EA Chicago will be hosting a discussion on the new California AI safety bill on Wednesday, 10/2 at 6pm. Learn more here.
PPS: We have a one-on-one meetup form! Review the spreadsheet and then reach out to people via email! Meet up for coffee, set up family playdates if you have kids, etc!
READINGS/MEDIA
- “Why Most Published Research Findings Are False” by John Ioannidis (this is a technical but reasonably accessible paper): https://journals.plos.org/plosmedicine/article/file?type=printable&id=10.1371/journal.pmed.0020124
- John Ioannidis on Peter Attia’s podcast: https://peterattiamd.com/johnioannidis/
- “Specification Curve Analysis” by Uri Simonsohn, Joseph Simmons & Leif Nelson (a more technical paper): https://urisohn.com/sohn_files/wp/wordpress/wp-content/uploads/Paper-Specification-curve-2018-11-02.pdf
- “p-hacked Hypotheses are Deceptively Robust” by Uri Simonsohn (blog post — the archives here are great): https://datacolada.org/48