
Research Graph

Artificial Intelligence

Introduction Vision-enabled AI models have rapidly evolved to become essential tools across numerous applications, from content moderation to image analysis and multimodal reasoning. Cohere's recent entry into this space with their Aya Vision model promises to deliver competitive capabilities in the increasingly crowded market of multimodal AI systems.

Artificial Intelligence
Author: Yao Chen

Introduction In April 2025, Meta AI released Llama 4, the latest iteration of its family of open-weight large language models (LLMs). Building on the success of its predecessors, Llama 4 introduced groundbreaking features like native multimodal capabilities, an innovative Mixture of Experts (MoE) architecture, and an unprecedented context window of up to 10 million tokens.
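The Mixture of Experts idea can be illustrated with a toy routing sketch. This is purely illustrative, not Llama 4's actual implementation: the dimensions, expert count, and top-k value here are made up, and real MoE layers use learned weights inside a transformer, not random matrices.

```python
import math
import random

random.seed(0)

D, N_EXPERTS, TOP_K = 4, 8, 2  # toy sizes; production models use far larger values

# Each "expert" is a small feed-forward map; here just a random linear layer.
experts = [[[random.uniform(-1, 1) for _ in range(D)] for _ in range(D)]
           for _ in range(N_EXPERTS)]
# The router scores every expert for a given token embedding.
router = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(N_EXPERTS)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def moe_layer(x):
    # 1. Router produces one logit per expert, softmax-normalised into gates.
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in router]
    mx = max(logits)
    exp = [math.exp(l - mx) for l in logits]
    total = sum(exp)
    probs = [e / total for e in exp]
    # 2. Keep only the top-k experts (sparse activation keeps inference cheap
    #    even though total parameter count is large).
    top = sorted(range(N_EXPERTS), key=lambda i: -probs[i])[:TOP_K]
    # 3. Output is the gate-weighted sum of the chosen experts' outputs.
    out = [0.0] * D
    for i in top:
        y = matvec(experts[i], x)
        out = [o + probs[i] * yi for o, yi in zip(out, y)]
    return out, top

token = [0.5, -0.2, 0.1, 0.9]
y, chosen = moe_layer(token)
print(f"routed to experts {chosen}")
```

The key property the sketch shows: every token activates only `TOP_K` of the `N_EXPERTS` expert networks, which is how MoE models decouple parameter count from per-token compute.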

Artificial Intelligence

Introduction Large Language Models (LLMs) have transformed how we build intelligent applications, but implementing production-ready AI systems often requires navigating complex infrastructure, managing model deployments, and building custom interfaces. Many developers face a challenging choice: use simple but limiting no-code platforms that hide the complexity, or build everything from scratch with code-heavy frameworks.

Persistent Identifiers

Introduction In today’s digital research landscape, finding, accessing, and connecting scholarly information has become increasingly complex. How do we reliably identify research outputs, the people who create them, and the organisations where this work takes place? This is where persistent identifiers (PIDs) come into play.
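One practical property of PIDs is that many are machine-checkable: for example, the final character of an ORCID iD (the PID for researchers) is an ISO 7064 MOD 11-2 check digit, so malformed identifiers can be caught before lookup. A minimal validator sketch:

```python
import re

def orcid_check_digit(base_digits: str) -> str:
    """Compute the ISO 7064 MOD 11-2 check character for an ORCID iD."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Validate the 0000-0000-0000-000X format and checksum of an ORCID iD."""
    if not re.fullmatch(r"\d{4}-\d{4}-\d{4}-\d{3}[\dX]", orcid):
        return False
    compact = orcid.replace("-", "")
    return orcid_check_digit(compact[:-1]) == compact[-1]

# 0000-0002-1825-0097 is ORCID's well-known example iD (Josiah Carberry).
print(is_valid_orcid("0000-0002-1825-0097"))  # True
```

Other PID schemes (DOIs for outputs, ROR IDs for organisations) have their own syntax rules, but the same principle applies: the identifier itself, not its current landing page, is the stable reference.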

Artificial Intelligence

Introduction The AI landscape is evolving at breakneck speed, and Google is firmly in the race with its Gemini family of models. The latest iteration, Gemini 2.5 Pro, recently became available in preview, promising significant advancements in reasoning, coding, multimodality, and an enormous context window.

Artificial Intelligence

Introduction Generated Knowledge Prompting is a prompt engineering technique designed to enhance the performance of large language models (LLMs) by leveraging their ability to generate relevant knowledge dynamically. By first generating useful knowledge related to the task, the model can better understand the context and provide more accurate, adaptable, and contextually rich answers.
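The two-stage flow, generate knowledge first, then answer with that knowledge in context, can be sketched as follows. The `call_llm` stub stands in for any chat-completion API, and the prompt wording is illustrative rather than a prescribed template:

```python
def call_llm(prompt: str) -> str:
    # Stub: replace with a real chat-completion call (e.g. an OpenAI or
    # local-model client). Returns a canned string so the sketch runs offline.
    return "[model output for: " + prompt.splitlines()[0] + "]"

def generated_knowledge_answer(question: str, n_facts: int = 3) -> str:
    # Stage 1: ask the model to surface background knowledge relevant
    # to the question, before attempting an answer.
    knowledge_prompt = (
        f"Generate {n_facts} short facts relevant to answering the question.\n"
        f"Question: {question}\nFacts:"
    )
    knowledge = call_llm(knowledge_prompt)
    # Stage 2: answer the question with the generated knowledge in context.
    answer_prompt = (
        f"Knowledge:\n{knowledge}\n\n"
        f"Using the knowledge above, answer the question.\n"
        f"Question: {question}\nAnswer:"
    )
    return call_llm(answer_prompt)

print(generated_knowledge_answer("Do penguins fly?"))
```

The point of the two calls is that the second prompt conditions the model on explicitly stated facts instead of relying on it to recall them implicitly while answering.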

Artificial Intelligence

Introduction Large language models like GPT-4 have become remarkably capable at solving a wide range of problems, but they still face a fundamental limitation: they generate text token by token, making decisions linearly without the ability to explore multiple paths or backtrack when needed.
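One family of techniques that addresses this limitation, popularised as Tree of Thoughts, replaces the single left-to-right decode with a search: branch into several candidate reasoning steps, score the partial paths, and keep only the most promising. A sketch with stubbed proposal and scoring functions (in practice both would be LLM calls; the heuristic here is a placeholder):

```python
import itertools

def propose(thought: str, k: int = 2) -> list[str]:
    # Stub: in practice, ask the LLM for k candidate next reasoning steps.
    return [f"{thought} -> step{i}" for i in range(k)]

def score(thought: str) -> float:
    # Stub: in practice, ask the LLM (or a task heuristic) to rate the path.
    return -len(thought)  # toy heuristic so the sketch runs offline

def tree_search(root: str, depth: int = 3, beam: int = 2) -> str:
    """Breadth-first beam search over reasoning paths: expand every frontier
    node, score all candidates, and keep the best `beam` of them. Unlike
    plain token-by-token decoding, weak branches are pruned and alternatives
    are explored in parallel."""
    frontier = [root]
    for _ in range(depth):
        candidates = list(itertools.chain.from_iterable(
            propose(t) for t in frontier))
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)

best = tree_search("problem")
print(best)
```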

Persistent Identifiers

Introduction In today’s digital research landscape, connecting the dots between various research outputs has become increasingly challenging. Each day, thousands of new papers are published, datasets are created, and research grants are awarded. Yet despite this increase in information, finding meaningful connections between these elements can feel like searching for specific stars in a vast galaxy without a map.

Persistent Identifiers

Introduction Imagine a world where every piece of research—every article, dataset, researcher, and institution—is seamlessly connected, no matter where it resides or how the digital landscape shifts. This isn’t a distant dream; it’s the reality being forged by persistent identifiers (PIDs). These unassuming strings of characters are revolutionizing how we create, share, and build upon knowledge.

Artificial Intelligence

Introduction Large language models (LLMs) like ChatGPT have transformed how we use artificial intelligence, excelling at tasks like writing essays, answering questions, and even holding conversations. But when it comes to complex reasoning—think solving math problems, tackling commonsense puzzles, or working with symbols—these models often hit a wall.

Artificial Intelligence
Author: Yao Chen

Introduction The rapid advancement of Large Language Models (LLMs) has revolutionised artificial intelligence, enabling powerful text understanding and generation capabilities. However, despite their strengths, LLMs often struggle with complex reasoning tasks requiring deep abstraction. A new paradigm known as Meta Prompting (MP) has emerged as a promising technique for enhancing LLM efficiency and cognitive depth.
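Meta Prompting's emphasis on the abstract structure of a solution, rather than its concrete content, can be illustrated with a template that first asks for the shape of the answer and only then instantiates it. The wording of the template is illustrative, and `call_llm` is a stand-in for any chat-completion API:

```python
META_TEMPLATE = """You are an expert problem solver.
First, describe the general solution structure for this *type* of problem
as numbered abstract steps, without using the specific values given.
Then instantiate each step for the concrete problem below.

Problem: {problem}
"""

def call_llm(prompt: str) -> str:
    # Stub: replace with a real chat-completion call. Returns a canned
    # string so the sketch runs offline.
    return "1. Identify the unknowns ... 2. Set up the equations ..."

def meta_prompt_solve(problem: str) -> str:
    # The scaffold is content-agnostic; only the final line carries
    # the specific problem, so the same template transfers across tasks.
    return call_llm(META_TEMPLATE.format(problem=problem))

print(meta_prompt_solve(
    "A train leaves at 9:00 at 80 km/h; when has it covered 240 km?"))
```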