A critical discussion of the role of case-to-condition ratios in Qualitative Comparative Analysis (QCA).
The post critiques the paper “Uncertainty limits the use of power analysis”. The paper highlights problems with power analysis caused by uncertainty stemming from sampling variability and fluctuating population effect sizes. While I acknowledge its valid points, I believe the paper’s conclusions are overly dismissive and argue for a refined approach to power and sample size estimation, if one accepts that uncertainty is a problem.
The debate between Yann LeCun and Elon Musk, reported in Nature, raises the question of whether science necessitates publishing results. My position is that science depends on how you produce your knowledge and does not necessarily require publication. Only when you face the public do you need to publish your work.
Is Open Science passé? is the question Xenia Schmalz asks in this blogpost. I recommend reading it before turning to my brief thoughts on some of the points it raises. I wish an open science movement were no longer needed, but I agree that this is most likely not the answer to the leading question. Nor has the open science movement failed; progress toward more transparent and credible science is simply slow.
The article “Resolving empirical controversies with mechanistic evidence” discusses the potential of using evidence about mechanisms to resolve statistical disagreements and aid in choosing the correct quantitative model. While there are challenges and uncertainties in this approach, it emphasizes the value of theorizing about mechanisms and collecting evidence about them, especially in disciplines like economics.
Sourcely, an AI company, promises to streamline research by finding, summarizing, and adding credible sources in minutes. While this sounds appealing, I am skeptical: such a tool may prioritize citing over genuine research. My initial tests revealed limited functionality, leaving doubts about its practical value in the research process.
This post summarizes some (late) thoughts on the short article The data revolution in social science needs qualitative research by Grigoropoulou and Small, published in Nature Human Behaviour. It is an excellent article that systematizes the ways in which qualitative research should complement big data/computational social science (CSS) and gives examples of work that has already done this (I understand big data/CSS to be the focus here).
The LSE Impact blog has a post from May 2021 raising some reservations about the idea of ‘Slow Science’. As far as I can tell, the ‘Slow Science’ idea hasn’t really caught on in academia. The post presents some good arguments for why the “slowness idea” is problematic in general. I agree that slowness is not a value in itself; sometimes, developments and events like a pandemic demand that research be done faster than one otherwise would.
In place of a regular blog post, I am reposting a short Twitter thread here. The thread is a response to an opinion piece on the Times Higher Education website titled Pay researchers for results, not plans. (Reading posts on the THE website requires registering an account, which includes a couple of free reads.) I copy-paste the thread into this post; if you prefer to read it on Threadreader, you can find it here.
Some ideas about how the peer review process in academic publishing can be used as a device for preregistering parts of an empirical analysis.
The current issue of the bi-annual publication of the APSA Section on Qualitative Methods and Multi-Method Research features a highly interesting and controversial symposium on confirmation bias in process tracing.