In this paper, we present a summary report of Kapa.ai's recent exploration of OpenAI's o3-mini and other reasoning models in Retrieval-Augmented Generation (RAG) systems. Kapa.ai is an AI assistant powered by a large language model (LLM) that...
Abstract: Information retrieval systems are critical for efficient access to large document collections. Recent approaches utilize Large Language Models (LLMs) to improve retrieval performance through query augmentation, but typically rely on expensive supervised learning or distillation techniques that require significant computational resources and manually labeled data. In ...
Large reasoning models exploit vulnerabilities when given the opportunity. Research has shown that these exploits can be detected by using large language models (LLMs) to monitor their chains-of-thought (CoT). Punishing models for "bad thoughts" does not prevent most misbehavior, but rather allows them to hide their intentions. ...
Enable Builder Smart Programming Mode for unlimited use of DeepSeek-R1 and DeepSeek-V3, with a smoother experience than the overseas version. Just enter commands in Chinese, and even a novice programmer can build their own apps with no barrier to entry.
Background: Recently, a paper entitled Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning (arxiv.org/pdf/2503.09516) has attracted much attention. The paper proposes a way to use reinforcement learning to train large language...
The GraphRAG project aims to extend the range of questions that AI systems can answer on private datasets by exploiting implicit relationships in unstructured text. A key advantage of GraphRAG over traditional vector RAG (or "semantic search") is its ability to answer global queries over entire datasets, such as...
If you have read Jina's previous classic article "Design and Implementation of DeepSearch/DeepResearch", you may want to dig deeper into some details that can significantly improve the quality of answers. This time, we will focus on two details: extracting optimal text segments from long web pages, i.e. how to utilize late-chun...
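As a rough illustration of the late-chunking idea mentioned above (a minimal sketch only; the model name, span handling, and mean pooling are my assumptions, not Jina's implementation), the document is embedded once in full and token vectors are pooled per chunk afterwards, so each chunk embedding retains document-level context:

```python
# Minimal late-chunking sketch: encode the whole document once, then pool
# token embeddings per chunk so every chunk vector sees document-level context.
# Model name and pooling strategy are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "sentence-transformers/all-MiniLM-L6-v2"  # assumed; a long-context embedder works better
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

def late_chunk_embeddings(text: str, chunk_char_spans: list[tuple[int, int]]) -> list[torch.Tensor]:
    """Return one embedding per (start, end) character span, pooled *after* encoding the full text."""
    enc = tokenizer(text, return_tensors="pt", return_offsets_mapping=True, truncation=True)
    offsets = enc["offset_mapping"][0]                       # (seq_len, 2) character span of each token
    with torch.no_grad():
        out = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
    token_vecs = out.last_hidden_state[0]                    # (seq_len, hidden) contextual token embeddings

    chunk_vecs = []
    for start, end in chunk_char_spans:
        mask = (offsets[:, 0] < end) & (offsets[:, 1] > start)   # tokens overlapping this chunk
        chunk_vecs.append(token_vecs[mask].mean(dim=0))          # mean-pool within the chunk
    return chunk_vecs

doc = "Berlin is the capital of Germany. It has about 3.7 million inhabitants."
split = doc.index(". ") + 1                                   # end of the first sentence
vectors = late_chunk_embeddings(doc, [(0, split), (split + 1, len(doc))])
print(len(vectors), vectors[0].shape)
```

The contrast with naive chunking is that the second chunk's vector still "knows" that "It" refers to Berlin, because pooling happens only after the full-document forward pass.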
Gemma 3 Key Information Summary. I. Key Metrics: Model size: 1 billion to 27 billion parameters in four versions (1B, 4B, 12B, 27B); Architecture: Transformer-based decoder-only architecture inherited from Gemma 2, with several improvements; Multimodal capabilities: support for text and image...
1. Background and Issues: With the rapid development of Artificial Intelligence (AI) technologies, especially the advancement of diffusion models, AI has been able to generate very realistic portrait images. For example, technologies like InstantID require only one photo to generate multiple new images with the same identity features. This kind of technology though...
NoLiMa, released in February 2025, is a benchmark for assessing long-context comprehension in Large Language Models (LLMs). Unlike traditional Needle-in-a-Haystack (NIAH) tests, which rely on keyword matching, NoLiMa is characterized by carefully designed questions and key messages that force...
The field of generative AI is currently evolving rapidly, with new frameworks and technologies emerging. Therefore, readers need to be aware that the content presented in this paper may be time-sensitive. In this paper, we will take an in-depth look at the two dominant frameworks for building LLM applications, LangChain and LangGraph, and analyze their strengths and weaknesses,...
Understanding the three key concepts of MCP Server, Function Call, and Agent is essential in the burgeoning field of Artificial Intelligence (AI), especially Large Language Models (LLMs). They are cornerstones of an AI system, and each plays a unique yet interrelated role. A deeper understanding of them...
Introduction: Have you ever wondered how the chatbots we use today, such as OpenAI's models, determine whether a question is safe and should be answered? In fact, these Large Reasoning Models (LRMs) already have the ability to perform safety checks, which...
I recently found an open source project that offers a good approach to RAG: it combines DeepSeek-R1's reasoning ability with an agentic workflow and applies them to RAG retrieval. Project address: https://github.com/deansaco/r1-reasoning-rag.git. The project works by combining the DeepSeek...
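For orientation, here is a rough sketch of the kind of retrieve-and-reason loop such a project implies (a sketch only: the function names, result fields, and stopping criterion are my assumptions, not the actual API of r1-reasoning-rag):

```python
# Illustrative agentic RAG loop: a reasoning model inspects retrieved evidence,
# decides whether it is sufficient, and issues follow-up queries if not.
# All names and signatures are assumptions, not the repository's real API.
from typing import Callable

Retriever = Callable[[str], list[str]]         # query -> retrieved passages
Reasoner = Callable[[str, list[str]], dict]    # (question, passages) -> {"answer": ..., "missing_info": ...}

def agentic_rag(question: str, retrieve: Retriever, reason: Reasoner, max_rounds: int = 3) -> str:
    """Iteratively retrieve evidence and let a reasoning model (e.g. DeepSeek-R1)
    judge whether it is sufficient or name what is still missing."""
    evidence: list[str] = []
    query = question
    result: dict = {"answer": "", "missing_info": None}
    for _ in range(max_rounds):
        evidence.extend(retrieve(query))        # gather candidate passages for the current query
        result = reason(question, evidence)     # reasoning step over all evidence collected so far
        if not result.get("missing_info"):      # the model judged the evidence sufficient
            break
        query = result["missing_info"]          # otherwise, search next for the missing piece
    return result["answer"]
```

The key design point is that the reasoning model, rather than a fixed pipeline, decides when retrieval is sufficient and what to query next.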
In recent years, the field of Artificial Intelligence has made significant progress in reasoning capabilities. After OpenAI demonstrated the powerful reasoning potential of large language models (LLMs) last year, organizations such as Google DeepMind, Alibaba, DeepSeek, and Anthropic have been quick to follow suit, using reinforcement learning (RL) techniques to train...
In recent years, with the rapid development of large language models (LLMs), the capabilities of Multi-Agent Systems (MAS) have improved significantly. These systems can not only automate tasks but also exhibit near-human reasoning capabilities. However, traditional MAS architectures are often accompanied by ...
Large language models (LLMs) are playing an increasingly important role in the field of artificial intelligence. To better understand and apply LLMs, we need a deeper grasp of their core concepts. In this paper, we will focus on three key concepts, namely Token, Maximum Output Length, and Context Length, to help readers remove barriers to understanding so as to...
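To make the token concept concrete (tiktoken and the cl100k_base encoding below are illustrative choices, not something the article prescribes), counting tokens versus characters looks like this:

```python
# Count tokens vs. characters; context length and maximum output length are
# both measured in tokens, not characters. tiktoken/cl100k_base are assumed
# here purely for illustration.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Large language models read tokens, not characters."
tokens = enc.encode(text)

print(f"{len(text)} characters -> {len(tokens)} tokens")
print(tokens[:5])   # the integer token IDs the model actually consumes
```

Both the context length and the maximum output length are budgets counted in these token IDs, which is why the same text can "cost" a different amount depending on the tokenizer.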
Recently, the terms Autonomous AI, AI Agents, and Agents have been popping up a lot. Frankly, as data analysts and data scientists, many industry practitioners have been a bit resistant to these AI-related trends and buzzwords in the past...
In recent years, Artificial Intelligence (AI) technologies have triggered a profound change in the field of programming. From v0 and bolt.new to programming tools that integrate Agent technology such as Cursor and Windsurf, AI Coding shows great potential to play a key role in the software development process, especially in rapid proto...
In the age of AI-assisted programming, we want AI to generate code that is not just static text, but can be parsed, edited, previewed, and even executed. This demand has given rise to a new interaction paradigm - Artifact. In this article, we will analyze Artifact from theoretical concepts to practical implementation....