I think we’ve all had enough of the Impact Factor as a way of measuring the quality of journals.
[Note: Mike asked me to scrape a couple of comments on his last post – this one and this one – and turn them into a post of their own. I’ve edited them lightly to hopefully improve the flow, but I’ve tried not to tinker with the guts.]

This is the fourth in a series of posts on how researchers might better be evaluated and compared. In the first post, Mike introduced his new paper and described the scope and importance of the problem.
You’ll remember that in the last installment (before Matt got distracted and wrote about archosaur urine), I proposed a general schema for aggregating scores across several metrics, terming the result an LWM or Less Wrong Metric.
As I said then, my new paper, Better ways to evaluate research and researchers, proposes a family of such LWMs, which I think would at least be an improvement on the present ubiquitous use of impact factors and h-indexes. So what exactly is an LWM?
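To make the schema concrete, here’s a toy sketch in Python of what an LWM-style aggregate might look like. A caveat: the metric names, the weights, and the log-scaling below are all placeholder assumptions of mine for the sake of illustration, not the formula from the paper.

```python
import math

# Toy sketch of an LWM-style aggregate: a weighted sum of log-scaled
# metric scores. The metric names, weights, and log transform are
# illustrative assumptions, not the formula from the paper.

def lwm_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Combine several raw metric scores into a single number.

    Citation-like counts are heavy-tailed, so each raw score is
    log-scaled before weighting; the weights express how much the
    evaluator trusts each metric.
    """
    total = 0.0
    for name, weight in weights.items():
        raw = metrics.get(name, 0.0)       # missing metrics count as zero
        total += weight * math.log1p(raw)  # log1p copes with zero counts
    return total

# Hypothetical researcher with raw scores on three metrics.
researcher = {"citations": 1200, "downloads": 9500, "tweets": 40}
weights = {"citations": 0.6, "downloads": 0.3, "tweets": 0.1}
print(f"LWM = {lwm_score(researcher, weights):.2f}")
```

The appeal of writing it out like this is that the weights are in the open: if you think tweets shouldn’t count, set that weight to zero and make your case, which is more than a single opaque number like the impact factor ever lets you do.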
Like Stephen Curry, we at SV-POW! are sick of impact factors. That’s not news. Everyone now knows what a total disaster they are: how they are significantly correlated with retraction rate but not with citation count; how they are higher for journals whose studies are less statistically powerful; and how they incentivise bad behaviour, including p-hacking and over-hyping.
As has now been widely reported, NISO have a $200K grant from the Alfred P. Sloan Foundation to develop standards for altmetrics. Why does that worry me? If there’s one consistent lesson from standardisation processes, it’s that standards which codify existing practice do well, while those that try to invent new practice in the form of a standard do badly.