
Research Graph

Tensorflow, Keras, Recurrent-neural-network, Computer Science, English
Published
Author Wenyi Pi

Understanding Sequential Data Modelling with Keras for Time Series Prediction Author Wenyi Pi (ORCID: 0009–0002–2884–2771) Introduction Recurrent Neural Networks (RNNs) are a special type of neural network suited to learning representations of sequential data, such as text in Natural Language Processing (NLP). We will walk through a complete example of using RNNs for time series prediction, covering …
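
Since this teaser describes a Keras walkthrough of RNNs for time series prediction, here is a minimal sketch of the kind of model involved. The sine-wave data, window size of 20, and layer sizes are illustrative assumptions, not the article's actual code.

```python
import numpy as np
import tensorflow as tf

# Synthetic univariate series: a noisy sine wave (stand-in for real data).
t = np.arange(0, 100, 0.1)
series = np.sin(t) + np.random.normal(0, 0.1, t.shape)

# Slice the series into sliding windows of `window` past steps (X)
# and the single next value to predict (y).
window = 20
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # shape: (samples, timesteps, features=1)

# A small RNN that reads each window and emits a one-step-ahead forecast.
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Forecast the next value from the last observed window.
next_value = model.predict(X[-1:])
```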

Artificial-intelligence, Large-language-models, Natural-language-processing, Computer Science, English
Published

Understanding the Power and Applications of Natural Language Processing Author Dhruv Gupta (ORCID: 0009–0004–7109–5403) Introduction We are living in the era of generative AI, an era in which you can ask AI models almost anything and they will almost certainly have an answer. With growing computational power and ever-larger amounts of textual data, these models are bound to keep improving.

Prompt-engineering, Large-language-models, Artificial-intelligence, Computer Science, English
Published

Prompt Engineering — Part 2 Using intelligence to use artificial intelligence: a deep dive into prompt engineering Author Dhruv Gupta (ORCID: 0009–0004–7109–5403) Introduction In the previous article we discussed what prompt engineering is and some of the techniques used for it.

Artificial-intelligence, Recurrent-neural-network, Deep-learning, Computer Science, English
Published
Author Wenyi Pi

Understanding how RNNs work and their applications Author Wenyi Pi (ORCID: 0009–0002–2884–2771) Introduction In the ever-evolving landscape of artificial intelligence (AI), bridging the gap between humans and machines has seen remarkable progress. Researchers and enthusiasts alike have worked tirelessly across numerous aspects of this field, bringing about amazing advancements.

Large-language-models, Long-texts, Information-processing, Computer Science, English
Published

Solutions to Enhance LLM Performance in Long Contexts Author Qingqin Fang (ORCID: 0009–0003–5348–4264) Introduction In the era of AI breakthroughs, large language models (LLMs) are not just advancements; they are revolutions, transforming how we interact with technology, from casual conversations with chatbots to the intricate mechanisms behind sophisticated data analysis tools.