Google Scholar turned 20 last month, and Nature marked the occasion with a piece titled "Can Google Scholar survive the AI revolution?" (in which I was quoted). This led me to think: how and when do I use Google Scholar versus other "AI search tools"?
I have a controversial and, to some, perhaps surprising view.
Summary: Classification of Academic Search Tools by Skill and Performance. This post explores a framework for categorizing academic search tools based on their skill cap (the expertise needed to use them effectively) and performance cap (the potential quality of results they can yield), drawing parallels to gaming strategies. Trade-off: tools like Google Scholar and Web Scale Discovery services (e.g., Primo, Summon) are seen as…
Audio Overviews in Google NotebookLM are making waves online. When I first tried the feature, it was a "wow" moment for me. The last time I felt that way was trying Perplexity.ai in late 2022 and realizing that search engines could now return answers (with citations) instead of just potentially relevant links; I knew this would be a huge paradigm shift.
Ex Libris surprised us by suddenly releasing Primo Research Assistant to production on September 9, 2024, when the earlier timeline was 4Q 2024 and some believed it might even be delayed. Even though there are so many RAG (retrieval augmented generation) academic search systems today that generate answers from search, this is still a significant enough event to be worth covering on my blog. Why?
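Since several of the posts here turn on this retrieve-then-generate pattern, here is a minimal sketch of it in Python. It is purely illustrative: a toy in-memory corpus, a crude term-overlap retriever, and a prompt builder that asks a model to answer only from numbered sources. It is not how Primo Research Assistant or any particular vendor actually implements RAG.

```python
# Minimal sketch of the RAG pattern behind answer-generating academic search:
# retrieve top-k passages for a query, then build a grounded prompt that asks
# a language model to answer using only those passages, citing them by number.
# Corpus, scoring and prompt wording are illustrative assumptions.

from collections import Counter

CORPUS = [
    {"id": 1, "title": "Paper A", "text": "Retrieval augmented generation grounds answers in retrieved passages."},
    {"id": 2, "title": "Paper B", "text": "Web scale discovery services index library collections at scale."},
    {"id": 3, "title": "Paper C", "text": "Generated answers should cite the passages they draw from."},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Toy lexical retriever: rank passages by query-term overlap."""
    q_terms = Counter(query.lower().split())
    def score(doc):
        return sum(q_terms[t] for t in doc["text"].lower().split() if t in q_terms)
    return sorted(CORPUS, key=score, reverse=True)[:k]

def build_prompt(query: str, passages: list[dict]) -> str:
    """Compose a grounded prompt: numbered sources first, then the question."""
    sources = "\n".join(f"[{i+1}] {p['title']}: {p['text']}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the sources below, citing them as [1], [2], ...\n\n"
        f"{sources}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    query = "How do RAG systems cite sources?"
    print(build_prompt(query, retrieve(query)))  # in a real system this prompt goes to an LLM
```

In a real system the retriever would be a search engine or vector index and the prompt would be sent to an LLM, but the grounding-plus-citation structure is the same idea.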
IP and ethical issues surrounding the use of content in Large Language Models (LLMs) have sparked significant debate, but I've mostly stayed out of it as this isn't my area of expertise. While there's much to discuss and many legal opinions to consider, ultimately the courts will decide what's legal. For those interested in exploring this topic further, I recommend Peter Schoppert's AI & Copyright substack.
I recently watched a librarian give a talk about their experiments teaching prompt engineering. The librarian, drawing from the academic literature on the subject (there is a lot of it!), tried to leverage "prompt engineering principles" from one such paper to craft a prompt and used it in a Retrieval Augmented Generation (RAG) system, more specifically Statista's brand-new "research AI" feature.
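For readers unfamiliar with what "crafting a prompt" for a RAG feature involves, the sketch below shows how common prompt engineering principles (assign a role, state the task explicitly, add a guardrail, constrain the output format) might be baked into a template. It is a generic illustration of the idea, not Statista's actual research AI prompt or API.

```python
# Hedged sketch: folding typical prompt engineering principles into a template
# for a RAG-style search feature. The wording, persona and format rules are
# illustrative assumptions, not any vendor's real prompt.

def craft_rag_prompt(question: str, retrieved_context: str) -> str:
    return (
        "You are a research assistant for statistics and market data.\n"   # role / persona
        "Task: answer the user's question using only the context below.\n" # explicit task
        "If the context is insufficient, say so instead of guessing.\n"    # guardrail
        "Format: 2-3 sentences followed by a bulleted list of key figures.\n"  # output format
        "--- CONTEXT ---\n"
        f"{retrieved_context}\n"
        "--- QUESTION ---\n"
        f"{question}"
    )

print(craft_rag_prompt(
    "How large is the global streaming market?",
    "[retrieved passages would be inserted here by the RAG pipeline]",
))
```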
I have been writing about what I call citation-based literature mapping tools for over five years now, starting with Citation Gecko in 2018. The trend really intensified in 2020, when the overall victory of the open citation movement made citation linking data effectively "free", and tools like Connected Papers, ResearchRabbit, Litmaps and more really started to gain traction.
Can Semantic Search be more interpretable? ColBERT and SPLADE might be the answer, but are they enough?
Warning: I am not an information retrieval researcher, so take my blog post with a pinch of salt. In my last blog post, I described a simplified framework for information retrieval from the paper -
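As a taste of why ColBERT-style models are often described as more interpretable than single-vector dense retrieval, here is a toy sketch of late-interaction "MaxSim" scoring: each query token contributes a visible score through its best-matching document token, so you can see what matched what. The token embeddings below are tiny made-up vectors, not outputs of an actual ColBERT model.

```python
# Illustrative sketch (not the actual ColBERT code) of late-interaction MaxSim
# scoring: for each query token, take the maximum cosine similarity against all
# document tokens, then sum those maxima. The per-token matches are what make
# the score inspectable.

import numpy as np

def maxsim_score(query_tok_embs, doc_tok_embs, query_tokens, doc_tokens):
    """Sum over query tokens of the max similarity against any document token."""
    q = np.asarray(query_tok_embs, dtype=float)
    d = np.asarray(doc_tok_embs, dtype=float)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)  # normalize for cosine sim
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    sim = q @ d.T  # similarity matrix: query tokens x document tokens
    total = 0.0
    for i, q_tok in enumerate(query_tokens):
        j = int(sim[i].argmax())
        print(f"query token '{q_tok}' best matches doc token '{doc_tokens[j]}' (sim={sim[i, j]:.2f})")
        total += sim[i, j]
    return total

# toy example: 2 query tokens, 3 document tokens, 4-dim made-up embeddings
query_tokens = ["semantic", "search"]
doc_tokens = ["neural", "retrieval", "search"]
q_embs = [[0.9, 0.1, 0.0, 0.2], [0.1, 0.8, 0.3, 0.0]]
d_embs = [[0.8, 0.2, 0.1, 0.1], [0.7, 0.3, 0.2, 0.3], [0.0, 0.9, 0.2, 0.1]]

print("MaxSim score:", round(maxsim_score(q_embs, d_embs, query_tokens, doc_tokens), 2))
```

SPLADE takes a different route to the same interpretability goal: instead of per-token embeddings, it produces sparse weights over actual vocabulary terms, so you can read off which (expanded) words drive the match.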