
The 20% Statistician

A blog on statistics, methods, philosophy of science, and open science. Understanding 20% of statistics will improve 80% of your inferences.

[This is a re-post from my old blog, where it appeared on March 8, 2014] Several people have been reminding us that we need to perform well-powered studies. This is a real problem, because low power reduces the informational value of studies (a paper Ellen Evers and I wrote about this has now appeared in Perspectives on Psychological Science, and is available here). If you happen to have a very large sample, good for you.
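As a rough illustration of what an a-priori power analysis looks like in R (the effect size of d = 0.5 and the 90% power target below are placeholder assumptions, not values from the paper):

# Sample size per group needed to detect d = 0.5 with 90% power
# at alpha = .05, using base R. With sd = 1, delta equals Cohen's d.
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.90,
             type = "two.sample", alternative = "two.sided")
# The output lists n per group (about 86 after rounding up).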


This blog post is now included in the paper "Sample size justification" available at PsyArXiv. Observed power (or post-hoc power) is the statistical power of the test you have performed, based on the effect size estimate from your data. Statistical power is the probability of finding a statistically significant difference from 0 in your test (a 'significant effect'), if there is a true difference to be found.
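As a minimal sketch of this definition in R (all numbers below are made up for illustration, not from any real study):

# Observed (post-hoc) power: plug the effect size estimate from your
# own data back into a power calculation.
n_per_group <- 50
d_observed  <- 0.4   # effect size estimate from the data
power.t.test(n = n_per_group, delta = d_observed, sd = 1,
             sig.level = 0.05, type = "two.sample")$power
# Because the observed d is a direct function of the observed t-value,
# observed power is a direct function of the p-value and adds no
# information beyond the test result itself.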


Psychology journals should require, as a condition for publication, that data supporting the results in the paper are accessible in an appropriate public archive. I hope that in the near future the 'should' in the previous sentence will disappear, and data sharing will simply be a requirement. Many journals already require authors to share data, but often not in a public database.


I like p-curve analyses. They are a great tool to evaluate whether sets of studies have evidential value or not. In a recent paper, Simonsohn, Nelson, and Simmons (2014) show how p-curve analyses can correct for publication bias using only significant results. That's pretty cool, especially because trim-and-fill, the traditional correction, is notoriously bad at unbiased effect size estimation.
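A small simulation sketch in R (my own illustration, not from the paper) of why some correction is needed at all when only significant results are available:

# When only significant studies are 'published', the naive average of
# published effect sizes overestimates the true effect. The true effect,
# sample size, and number of simulations are illustrative assumptions.
set.seed(42)
true_d <- 0.2; n <- 20; nsim <- 10000
sims <- replicate(nsim, {
  t <- t.test(rnorm(n, true_d), rnorm(n, 0))
  c(d = unname(t$statistic) * sqrt(2 / n), p = t$p.value)
})
mean(sims["d", ])                   # all studies: close to the true 0.2
mean(sims["d", sims["p", ] < .05])  # significant studies only: inflated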


I think it is important that people report confidence intervals to provide an indication of the uncertainty in the point estimates they report. However, I am not too enthusiastic about the current practice of reporting 95% confidence intervals. I think there are good reasons to consider alternatives, such as reporting 99.9% confidence intervals instead.
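A quick R illustration of the trade-off, on simulated data:

# The same data summarized with a 95% and a 99.9% confidence interval.
set.seed(1)
x <- rnorm(100, mean = 0.3)
t.test(x, conf.level = 0.95)$conf.int    # conventional 95% CI
t.test(x, conf.level = 0.999)$conf.int   # much wider 99.9% CI
# Higher confidence levels buy stronger coverage guarantees at the cost
# of wider, less precise intervals.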


Excellent news this morning: The negotiations between Dutch universities and Elsevier about new contracts for access to the scientific literature have broken down (reported in Dutch by the VSNU and De Volkskrant). This means universities are finally taking a stand: We want scientific knowledge to be freely available as open access to anyone who wants to read it, and we believe publishers are making more than enough money as it is.


Yesterday Mike McCullough posted an interesting question on Twitter. He had collected some data and observed a p = 0.026 for his hypothesis, but he wasn't happy. Being aware that higher p-values do not always provide strong support for H1, he wanted to set a new goal and collect more data.
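One way to see why a p-value like .026 can be weak evidence is to simulate it; the sketch below uses assumed numbers (d = 0.5, n = 50 per group), not Mike's actual design:

# Compare how often p lands in a narrow bin around .026 under H0
# versus under one assumed alternative.
set.seed(123)
sim_p <- function(d, n, nsim = 2e4) {
  replicate(nsim, t.test(rnorm(n, d), rnorm(n))$p.value)
}
p_h0 <- sim_p(d = 0,   n = 50)
p_h1 <- sim_p(d = 0.5, n = 50)
in_bin <- function(p) mean(p > .02 & p < .03)
in_bin(p_h1) / in_bin(p_h0)
# Under these assumptions the ratio is modest (roughly 6 to 1):
# p-values just below .05 are often far weaker evidence for H1 than
# 'significant' suggests.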


[A slightly adapted version of this blog post is now in press at QJEP: see https://osf.io/ycag9/ for the manuscript and R scripts] In this blog post, I'll explain how p-hacking will not lead to a peculiar prevalence of p-values just below .05 (e.g., in the 0.045-0.05 range) in the literature at large, but will instead lead to a difficult-to-identify increase in the Type 1 error rate across the 0.00-0.05 range.
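A minimal optional-stopping simulation in R (one common form of p-hacking; the parameters are my own assumptions, not those from the manuscript) that shows the inflated error rate and lets you inspect the resulting p-value distribution:

# Test after 50 observations per group; if p >= .05, add 50 more per
# group and test again. There is no true effect.
set.seed(2015)
p_hacked <- replicate(2e4, {
  x <- rnorm(50); y <- rnorm(50)
  p <- t.test(x, y)$p.value
  if (p >= .05) p <- t.test(c(x, rnorm(50)), c(y, rnorm(50)))$p.value
  p
})
mean(p_hacked < .05)  # Type 1 error rate, inflated above the nominal .05
hist(p_hacked[p_hacked < .05], breaks = 20)  # where significant p-values fall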


I'd like to gratefully acknowledge the extremely helpful comments and suggestions on an earlier draft of this blog post by Anton Kühberger and his co-authors, who patiently answered my questions and shared additional analyses. I'd also like to thank Marcel van Assen, Christina Bergmann, JP de Ruiter, and Uri Simonsohn for comments and suggestions.