Joe Bak-Coleman
Research Scientist at the University of Washington based in Brooklyn. Also: SFI External Applied Fellow, Harvard BKC affiliate. Collective Behavior, Statistics, etc.
- It's interesting to see so many academics take the position that they "only learned later" what Epstein had done. Do people not google funders? As early as August 4th, 2006, his Wikipedia page included discussion of the charges, and they've been a fixture since. en.wikipedia.org/w/index.php?...
- "Steven hawking was in a wheelchair and we thought he was pretty smart" is a wild argument to make in a scientific article. www.nature.com/articles/d41...
- There's something truly absurd about a paper blaming science itself for gaps in trust, led by an author whose work was retracted for failing to follow the exact recommendations in the piece. www.pnas.org/doi/10.1073/...
- How do you say "get his ass" in French? www.bbc.com/news/article...
- It’s somehow a breath of fresh air to see the obvious stated… peer review requires peers!
- New draft: "Decline effects, statistical artifacts, and a meta-analytic paradox". In this manuscript I show how a common practice in meta-analysis (e.g., the 2015 Open Science Collaboration) creates artifactual signatures of poor scientific behavior. PDF: raw.githubusercontent.com/richarddmore... 1/x
- I think we find you're pretty sunk in the aggregate as well, depending a bit on how you're measuring replicability. We find a pretty smooth (and predicted) function of replicability that depends solely on replication sample size. In effect, if they'd run OSC 2015 at N=2000, they'd have found high replicability.
- In a sense they did this with ML5, which recovers significance for a few of the studies (roughly what our model would expect, though noisy).
- It's distinct, but I think our model here effectively finds the same phenomenon. When we calculate magnitude error, we code effects with their typical signing, so declines are expected even in the absence of poor scientific behavior. osf.io/preprints/so...
- It really gets gnarly because a lot of these projects determine sample size from the original (signed) effect size and wind up with minuscule samples. The crux of our paper is that low replicability is unsurprising in these conditions, and is anticipated even absent QRPs and the like (a minimal simulation of the intuition is sketched below).
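To make the intuition concrete, here is a minimal simulation sketch in Python. It is not the model from the draft or the OSF preprint: it swaps in simple one-tailed selection on the originals rather than the paper's sign-coded magnitude error, and the parameters (true d = 0.2, 40 per group in the originals, 80% target power, N = 2000 comparison) are arbitrary choices for illustration.

```python
# Sketch: a fixed, modest true effect, originals selected for significance,
# replications powered on the published (inflated) estimate. No questionable
# research practices anywhere in the pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d = 0.20            # one fixed, modest true standardized effect
n_orig = 40              # per-group n in the original studies
n_sims = 50_000
alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)

def se(n_per_group):
    # Normal approximation: sampling SE of a standardized mean difference
    # with n per group is roughly sqrt(2 / n).
    return np.sqrt(2 / n_per_group)

# 1. Original studies: small n, and only significant positive results get "published"
#    (one-tailed selection keeps the signs positive for simplicity).
d_orig = rng.normal(true_d, se(n_orig), n_sims)
published = d_orig / se(n_orig) > z_crit
d_pub = d_orig[published]                      # inflated by selection (winner's curse)

# 2. Replications powered at 80% on the published (inflated) effect sizes.
n_rep = np.ceil(2 * ((z_crit + stats.norm.ppf(0.80)) / d_pub) ** 2).astype(int)
d_rep = rng.normal(true_d, se(n_rep))

# 3. The same replications run at a fixed large n instead.
n_big = 2000
d_big = rng.normal(true_d, se(n_big), d_pub.size)

rep_rate_small = np.mean(d_rep / se(n_rep) > z_crit)
rep_rate_big = np.mean(d_big / se(n_big) > z_crit)

print(f"mean published original d: {d_pub.mean():.2f} (true d = {true_d})")
print(f"mean replication d:        {d_rep.mean():.2f}  -> apparent 'decline'")
print(f"replication rate, n chosen from original effect: {rep_rate_small:.2f}")
print(f"replication rate, n = {n_big} per group:          {rep_rate_big:.2f}")
```

Under these assumptions, the published originals come out inflated, the replication rate at the effect-size-derived sample sizes is low, the replication estimates look like a decline relative to the originals, and fixing the replications at a large per-group N recovers near-perfect replicability, even though nothing untoward happened anywhere.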
- 🧪 It's been 10 years since Dorothy Bishop and I published a commentary in Nature about the risks of transparency. doi.org/10.1038/529459a 1/10
- Good thread. The norms around data sharing took hold so fast that I wonder which fields will have to learn about the risks the hard way.