Articles by Yang Fan


Gemini opens Deep Research tool for free

Yesterday Google released the Gemini 2.0 Flash native image generation and editing capabilities, and today the Deep Research tool in Gemini, previously a paid feature, has been made free to use. Many people still don't know what Deep Research is, or have only heard scattered introductions...

AI News

PM Agent: AI product manager tool that automatically records meetings and generates requirements documents

General Introduction: PM Agent is an artificial intelligence tool from OpenGig designed for product managers and development teams. It automatically records meetings and turns them into detailed requirements documents, saving users the time spent organizing notes by hand. The tool is suitable for anyone who needs to plan product development quickly,...


Implementing a Local Agent with Ollama+LangChain

Introduction: ReAct (Reasoning and Acting) is a framework that combines reasoning with action to improve how agents perform on complex tasks. By tightly integrating logical reasoning with concrete actions, the framework lets agents accomplish tasks more effectively in dynamic environments. Source: ReAct: ...
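
To make the ReAct idea concrete, here is a minimal sketch of a ReAct agent running on a local Ollama model through LangChain. It assumes the langchain, langchain-ollama, and langchainhub packages are installed and that llama3.1 has been pulled locally; the word_length tool and the question are purely illustrative.

```python
# Minimal ReAct-style agent sketch with a locally served Ollama model.
# Assumes `ollama pull llama3.1` has been run and langchain, langchain-ollama,
# and langchainhub are installed. The tool below is a toy example.
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)

llm = ChatOllama(model="llama3.1", temperature=0)
prompt = hub.pull("hwchase17/react")  # a standard ReAct prompt template
agent = create_react_agent(llm, [word_length], prompt)
executor = AgentExecutor(
    agent=agent,
    tools=[word_length],
    verbose=True,
    handle_parsing_errors=True,  # local models often need lenient parsing
)

print(executor.invoke({"input": "How many characters are in 'Ollama'?"}))
```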


Building a Local RAG Application with Ollama+LlamaIndex

Introduction: This document explains in detail how to use the LlamaIndex framework to build a local RAG (Retrieval-Augmented Generation) application. With LlamaIndex you can build, entirely in a local environment, a RAG system that combines retrieval and generation to improve the efficiency of information retrieval...
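
As a preview of the approach, here is a minimal local RAG sketch with LlamaIndex and Ollama. It assumes the llama-index, llama-index-llms-ollama, and llama-index-embeddings-ollama packages are installed, that llama3.1 and nomic-embed-text have been pulled with Ollama, and that ./data is a placeholder folder holding your own documents.

```python
# Minimal local RAG sketch with LlamaIndex + Ollama.
# Assumes llama3.1 and nomic-embed-text are available via a local Ollama server
# and that ./data (illustrative path) contains the documents to index.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

Settings.llm = Ollama(model="llama3.1", request_timeout=120.0)
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

documents = SimpleDirectoryReader("./data").load_data()  # load local files
index = VectorStoreIndex.from_documents(documents)       # embed and index them

query_engine = index.as_query_engine()
print(query_engine.query("What does this document say about RAG?"))
```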

Building a Local RAG Application with Ollama+LangChain

This tutorial assumes that you are already familiar with the following concepts: chat models, chaining runnables, embeddings, vector stores, and retrieval-augmented generation. Many popular projects such as llama.cpp, Ollama, and llamafile have shown that running a large language model in a local environment is a good idea. A local environment for running large language models...
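
The same local setup can be expressed as a small LangChain retrieval chain. The sketch below assumes langchain and langchain-ollama are installed and that llama3.1 and nomic-embed-text are available through a local Ollama server; the sample texts stand in for your own documents.

```python
# Minimal local RAG chain sketch with LangChain + Ollama.
# The texts below are placeholders; replace them with your own content.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_ollama import ChatOllama, OllamaEmbeddings

texts = [
    "Ollama runs large language models entirely on your own machine.",
    "LangChain chains models, prompts, and retrievers into applications.",
]
vectorstore = InMemoryVectorStore.from_texts(
    texts, embedding=OllamaEmbeddings(model="nomic-embed-text")
)
retriever = vectorstore.as_retriever()

def format_docs(docs):
    # Join retrieved documents into a single context string.
    return "\n\n".join(doc.page_content for doc in docs)

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOllama(model="llama3.1")

# Retrieval-augmented generation chain: retrieve -> prompt -> model -> text.
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
print(rag_chain.invoke("What does Ollama do?"))
```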


Connecting a Locally Deployed Ollama Model to Dify

Dify supports connecting to the large language model inference and embedding capabilities deployed with Ollama. Quick access: download Ollama, complete the Ollama installation and configuration (see the Ollama local deployment tutorial), then run Ollama and chat with Llama: ollama run llama3.1. Launch into ...
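
Before registering Ollama as a model provider in Dify, it helps to confirm the local Ollama server is reachable. The snippet below is a simple sanity check against Ollama's default endpoint (http://localhost:11434); note that when Dify itself runs in Docker, the base URL you enter in Dify's model settings is typically http://host.docker.internal:11434 rather than localhost.

```python
# Sanity check that a local Ollama server answers chat requests.
# Assumes Ollama is listening on its default port 11434 and llama3.1 is pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "Say hello in one word."}],
        "stream": False,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```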


Ollama in LangChain - JavaScript Integration

Introduction: This document describes how to use Ollama in a JavaScript environment, integrating it with LangChain to create powerful AI applications. Ollama is an open-source deployment tool for large language models, while LangChain is a framework for building applications on top of language models. By combining...


Ollama in LangChain - Python Integration

Introduction: This document describes how to use Ollama in a Python environment, integrating it with LangChain to create powerful AI applications. Ollama is an open-source deployment tool for large language models, while LangChain is a framework for building applications on top of language models. By combining these two...
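
A minimal sketch of what the Python integration looks like, assuming the langchain-ollama package is installed and llama3.1 has been pulled locally; the system prompt and question are illustrative.

```python
# Minimal sketch of calling a local Ollama model through LangChain's Python integration.
# Assumes `pip install langchain langchain-ollama` and `ollama pull llama3.1`.
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1", temperature=0.7)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise technical assistant."),
    ("human", "{question}"),
])

chain = prompt | llm  # compose prompt and model with LCEL
print(chain.invoke({"question": "What is Ollama?"}).content)
```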
