
The 20% Statistician

A blog on statistics, methods, philosophy of science, and open science. Understanding 20% of statistics will improve 80% of your inferences.
Error Control, Power, Statistics, Psychology
Published
Author Daniel Lakens

TL;DR: Don’t like one-sided tests? Distribute your alpha level unequally (e.g., 0.04 vs 0.01) across the two tails to still benefit from an increase in power.

[Figure: My two unequal tails in a 0.04/0.01 ratio (picture by my wife).]

This is a follow-up to my previous post, where I explained how you can easily become 20% more efficient when you aim for 80% power by using a one-sided test.
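
As a rough illustration of the power gain (my own sketch, not code from the post), the R snippet below computes the power of a two-sample z-test when alpha is split unequally across the two tails; the effect size and sample size are assumed purely for the example.

# Minimal sketch: power of a two-sample z-test with an unequal alpha split.
power_split <- function(d, n, a_upper, a_lower) {
  ncp <- d * sqrt(n / 2)                            # noncentrality for a two-sample z-test
  pnorm(qnorm(1 - a_upper), mean = ncp, lower.tail = FALSE) +
    pnorm(qnorm(a_lower), mean = ncp)               # tiny contribution from the "wrong" tail
}
d <- 0.5; n <- 64                                   # assumed effect size and n per group
power_split(d, n, 0.025, 0.025)                     # conventional two-sided test
power_split(d, n, 0.04,  0.01)                      # unequal 0.04/0.01 split: higher power
power_split(d, n, 0.05,  0.00)                      # one-sided test as the limiting case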

Bayesian Statistics, Confidence Intervals, NHST, R, Statistics, Psychology
Published
Author Daniel Lakens

I've created an easy-to-use R script that imports your data, then performs and writes up a state-of-the-art dependent or independent t-test.
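
The sketch below is not the script itself, just a minimal illustration of the idea in base R, assuming a hypothetical CSV file with columns group and score:

# Minimal sketch: import data and report an independent (Welch's) t-test.
dat <- read.csv("mydata.csv")                       # hypothetical file and column names
res <- t.test(score ~ group, data = dat, var.equal = FALSE)
cat(sprintf("t(%.2f) = %.2f, p = %.3f, 95%% CI [%.2f, %.2f]\n",
            res$parameter, res$statistic, res$p.value,
            res$conf.int[1], res$conf.int[2]))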

Psychology
Published
Author Daniel Lakens

Because the true size of effects is uncertain, determining the sample size for a study is a challenge. A-priori power analysis is often recommended, but practically impossible when effect sizes are very uncertain. One situation in which effect sizes are by definition uncertain is a replication study where the goal is to establish whether a previously observed effect can be reproduced.
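
When an effect size can be assumed, an a-priori power analysis is a one-liner in base R; the sketch below uses an assumed standardized effect of 0.5 purely for illustration (the difficulty the post addresses is that in replications such an assumption is exactly what is uncertain).

# A-priori power analysis for an independent t-test, with an assumed effect size.
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80,
             type = "two.sample")                   # returns n per group (about 64)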

Effect Sizes, Meta-analysis, P-curve, Statistics, Psychology
Published
Author Daniel Lakens

A meta-analysis of 90 studies on precognition by Bem, Tressoldi, Rabeyron, & Duggan has been circulating recently. I looked at this meta-analysis for an earlier blog post and had a cordial, professional exchange with the authors, which led them to correct the mistakes I pointed out and answer some questions I had.

Psychology
Published
Author Daniel Lakens

This blog post is presented in collaboration with a new interactive visualization of the distribution of p-values, created by Kristoffer Magnusson (@RPsychologist) based on code by JP de Ruiter (@JPdeRuiter). Question 1: Would you be inclined to interpret a p-value between 0.16 and 0.17 as support for the presence of an effect, assuming the power of the study was 50%? Write down your answer – we will come back to this question.
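
Before answering, it may help to simulate the p-value distribution at 50% power. The sketch below is my own assumed setup in R (not the code behind the visualization): it draws p-values from a two-sample t-test with roughly 50% power and counts how often they land between 0.16 and 0.17, both when an effect is present and when it is not.

# Minimal simulation sketch: p-values from a two-sample t-test at about 50% power.
set.seed(1)
nsim <- 5e4
n <- 50; d <- 0.4                                   # assumed n per group and effect size
p_h1 <- replicate(nsim, t.test(rnorm(n, d), rnorm(n))$p.value)  # effect present
p_h0 <- replicate(nsim, t.test(rnorm(n),    rnorm(n))$p.value)  # no effect
mean(p_h1 >= 0.16 & p_h1 <= 0.17)                   # proportion in 0.16-0.17 under H1
mean(p_h0 >= 0.16 & p_h0 <= 0.17)                   # about 0.01 under H0; nearly the same under H1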

Psychology
Published
Author Daniel Lakens

Throughout the history of psychological science, there has been a continuing debate about which statistics are used and how these statistics are reported. I distinguish between reporting statistics and interpreting statistics. This distinction is important, because much of the criticism of the statistics researchers use concerns how statistics are interpreted, not how they are reported.

Psychology
Published
Author Daniel Lakens

[This is a re-post from my old blog, where it appeared March 8, 2014.] Several people have been reminding us that we need to perform well-powered studies. This is a real problem, because low power reduces the informational value of studies (a paper Ellen Evers and I wrote about this has now appeared in Perspectives on Psychological Science and is available here). If you happen to have a very large sample, good for you.