A shorter version of this blog post in Dutch has been published earlier on the website of Nederlandse UNESCO Commissie.
Today CWTS released the Leiden Ranking 2025. In fact, we released two editions of the ranking: the Traditional Edition and the Open Edition. The Leiden Ranking Traditional Edition is the name we use for the Leiden Ranking based on data from the Web of Science database. The Leiden Ranking Open Edition is the name of a new edition of the Leiden Ranking, launched last year, based on open data from the OpenAlex database.
International research mobility is widely seen as a strategic tool to strengthen national research capacity and global competitiveness. Programmes like the Fulbright Program (US), the Sandwich Doctorate Programme (Brazil), Erasmus Programmes (EU), and China Scholarship Council (China) illustrate this global commitment.
In most scientific disciplines, the standard way to share new research findings is to publish a research article in a peer-reviewed journal. Increasingly, however, the standard approach to scientific publishing is complemented by alternative approaches, the most significant one being the publication of non-peer-reviewed research articles on preprint servers.
Generative AI (genAI) is now ubiquitous in research. The field of academic publishing is struggling with an overwhelming mass of genAI slop and where to draw the line between acceptable and unacceptable genAI use. GenAI not only offers new opportunities, as is widely touted, but also creates numerous new problems and challenges, not least for responsible research. What does it mean to use genAI responsibly? Is it even possible?
Generative Artificial Intelligence (GAI) tools like ChatGPT are increasingly finding their way into research and scholarly publishing. This trend brings a pressing challenge: how do academics clearly disclose the use of AI in their research workflows? Right now, many disclosures are either too vague (e.g. "We used ChatGPT to improve clarity") or missing entirely.
Academia today finds itself in a paradox. The ‘publish or perish’ mantra has spiralled into an unchecked race in which the finish line is quantity, not quality. In this obsession with stacking CVs with publication credits, research quality and integrity often suffer. Between 2018 and 2022, the number of research articles grew by 22.78 per cent, to 5.14 million. Yet concerns over research integrity persist.
Leiden Madtrics readers might already be familiar with what some have called a “great project” but a “terrible acronym”: GLOBAL, the Guidance List for the repOrting of Bibliometric AnaLyses. Last summer, we invited bibliometricians to join the GLOBAL Delphi study to co-develop a reporting guideline for bibliometric analyses.
(The Spanish version of this blog post is available here). The project, funded by Canada’s International Development Research Centre (IDRC), is led by the Latin American Forum for Scientific Evaluation (FOLEC) of the Latin American Council of Social Sciences (CLACSO), in collaboration with the Centre for Science and Technology Studies (CWTS) at Leiden University and SIRIS Academic. International funding circuits are not neutral;
Why rethinking PhD trajectories matters now
In the last few years, we have witnessed a growing interplay between the Open Science (OS) movement and the reform of research assessment through global initiatives such as CoARA. In the Netherlands, national efforts like Recognition &
A preprint is a research article that is made openly available on a preprint server, typically before submission to a peer-reviewed journal. Preprinting enables new scientific knowledge to be shared in a timely and open way. It often takes many months or even years for an article to pass through the peer review process of a journal.