Hello subscribers, I have just released Hackademics II, which is essentially a report on and analysis of the reproducibility/replication crisis in the social sciences. Hopefully you will learn some things even if you follow this topic closely; I know I did. There is a backstory to the creation of this piece that I wanted to share with the readers of this blog. In this first season, as you may know, I’ve covered a lot of topics: war, death, God, music, and gender. Many of these, in my mind, were far more controversial than something as geeky as replication in science, which is really about the epistemology of the statistical sciences.
But I was wrong. Of all the stories I have done this season, including what is still to come, this is the piece that has created the most turmoil, and it did so before I even finished collecting the interviews. As a result, I’ve tried to be really careful with the episode. Even so, I know I am going to make mistakes in it, and if I do, I will correct them and re-upload the new audio. I was really surprised by all this blowback; I didn’t expect it or understand it. I just want to say that if you listen to it, it is a good-faith attempt to uncover the epistemological issues I found most interesting about the current state of the statistical sciences. That was the only goal. I also came away from the story hopeful that, once the dust settles, the whole of the social and medical sciences will be stronger.
Awesome! You continue to impress and entertain, Barry 🙂 I took a few courses during my undergrad with Michael Ruse, who wrote “But Is It Science?”, and I remember taking his phil sci course that used that book and thinking, “Whoa! I always thought psych etc. were legitimate sciences (viz. the results they reported were “facts” that were just as “real” as those of the physical sciences).” It was a shock (honestly the best word for it) that still reverberates with me today, 25 yrs later… and bubbles closer to the surface when I teach psych, soc, and anthro, as I do on occasion.
Keep up the good work!!
I believe you made a slight error when describing the implications of the p-value in the “50 Shades of Grey” study. Given the p-value of 0.01, you said that “they could infer that they had a 99% chance that they didn’t get their results by chance.”
Let H = the hypothesis that your politics affects your color perception, and ~H = there is no connection, and any perceived connection is the result of chance. A p-value of 0.01 means P(data|~H) = 0.01. This means that either a) ~H is false, or b) ~H is true and a low-probability event occurred. When you say “they could infer that they had a 99% chance that they didn’t get their results by chance,” it sounds like you are saying that P(H|data) = 0.99. But in order to move from P(data|~H) = 0.01 to P(H|data) = 0.99, you need to factor in other information via Bayes’ Theorem. I think you have assumed that P(data|~H) = P(~H|data). This is a common mistake, even among professionals.
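To make the gap between P(data|~H) and P(H|data) concrete, here is a small worked sketch of Bayes’ Theorem. The prior and the power in it are made-up numbers for illustration only, not anything from the study:

```python
# Illustration: P(data | ~H) = 0.01 does NOT imply P(H | data) = 0.99.
# The prior P(H) and the power P(data | H) below are assumptions, not study values.

p_data_given_not_h = 0.01   # the reported p-value, read as P(data | ~H)
p_data_given_h = 0.50       # assumed power: chance of this data if H is true
prior_h = 0.10              # assumed prior probability that H is true

# Bayes' Theorem: P(H | data) = P(data | H) * P(H) / P(data)
p_data = p_data_given_h * prior_h + p_data_given_not_h * (1 - prior_h)
posterior_h = p_data_given_h * prior_h / p_data

print(f"P(H | data) = {posterior_h:.2f}")  # about 0.85, not 0.99
```

With a different prior or power the posterior moves around quite a bit, which is exactly why the p-value alone cannot tell you P(H|data).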
You’re right about that, Ron. I’m going to go back and change that. This mistake is at the heart of the issue. In fact, I need to be a little clearer in that part about what I intended. I was trying to say that this is the mistaken idea people have about what p<.01 means. The episode is about the mistaken interpretation of p-values, but you're right, the line as recorded comes across incorrectly. I'll change that tomorrow morning.
Cool! Great episode, by the way. I think the first Hackademics episode actually did a good job of explaining what a p-value is and is not.