The Chronicle of Higher Education’s piece on Sci-Hub contains a disturbing claim that I’ve not seen elsewhere. I’ll quote: If this is true, then it certainly undermines the narrative of Sci-Hub as hero.
So, Sci-Hub is the talk of the town. I spent Friday afternoon at Manchester University library, giving a couple of talks about open access and hearing several others about copyright. It was fascinating being in a room full of librarians, all of them aware that Sci-Hub is out there, all of them torn between disapproval and excitement. As Martin Eve said on Twitter: Me, I’m not so sure whether I can condone it or not.
In the world of novel-writing, people spend their own time creating art — writing. Creative works come into being, and their copyright is (at least initially) owned by the creators.
As a long-standing proponent of preprints, I’m bothered that, of all PeerJ’s preprints, the one that has had by far the most attention is Terrell et al. (2016), “Gender bias in open source: Pull request acceptance of women versus men”.
New paper out in Biology Letters: Hone, D.W.E., Farke, A.A., and Wedel, M.J. 2016. Ontogeny and the fossil record: what, if anything, is an adult dinosaur? Biology Letters 12: 20150947. doi: 10.1098/rsbl.2015.0947. The idea that dinosaurs had unusual life histories is not new.
Thirteen years ago, Kenneth Adelman photographed part of the California coastline from the air. His images were published as part of a set of 12,000 in the California Coastal Records Project. One of those photos showed the Malibu home of the singer Barbra Streisand. In one of the most ill-considered moves in history, Streisand sued Adelman for violation of privacy.
[Note: Mike asked me to scrape a couple of comments on his last post – this one and this one – and turn them into a post of their own. I’ve edited them lightly in the hope of improving the flow, but I’ve tried not to tinker with the guts.] This is the fourth in a series of posts on how researchers might better be evaluated and compared. In the first post, Mike introduced his new paper and described the scope and importance of the problem.
You’ll remember that in the last installment (before Matt got distracted and wrote about archosaur urine), I proposed a general schema for aggregating scores in several metrics, terming the result an LWM or Less Wrong Metric.
[Image: an ostrich peeing (1280×720), from “Yes, folks: birds and crocs can pee”, http://svpow.com/2016/01/28/yes-folks-birds-and-crocs-can-pee/ostrich-peeing/]
I said last time that my new paper on Better ways to evaluate research and researchers proposes a family of Less Wrong Metrics, or LWMs for short, which I think would at least be an improvement on the present ubiquitous use of impact factors and H-indexes. What is an LWM?
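The post goes on to answer that question in detail. Purely to illustrate the general shape of such a scheme, here is a minimal sketch in Python, assuming an LWM combines several per-researcher metric scores into a single weighted total; the specific metrics, weights, and log-scaling below are placeholder assumptions of mine, not the formula from the paper.

```python
import math

def lwm(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine several per-researcher metric scores into one number.

    Each raw score is log-scaled (so one enormous value cannot swamp
    the rest), multiplied by its weight, and the results are summed.
    The log-scaling and weighting here are illustrative assumptions.
    """
    total = 0.0
    for name, weight in weights.items():
        raw = scores.get(name, 0.0)        # missing metrics count as zero
        total += weight * math.log1p(raw)  # log1p(x) = log(1 + x)
    return total

# Hypothetical weights: whoever runs the evaluation chooses these.
weights = {"citations": 0.5, "downloads": 0.3, "datasets": 0.2}

print(lwm({"citations": 1200, "downloads": 40000, "datasets": 3}, weights))
print(lwm({"citations": 90, "downloads": 150000, "datasets": 12}, weights))
```

The attraction of a scheme shaped like this is that the machinery stays fixed while the choice of metrics and weights is left open, so different evaluators can tune the same aggregation to reflect what they actually value.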