
Local Deep Research: a locally run tool for generating in-depth research reports

General Introduction

Local Deep Research is an open source AI research assistant designed to help users conduct deep research and generate detailed reports on complex questions. It runs entirely locally, so users can accomplish research tasks without relying on cloud services. The tool combines local large language models (LLMs) with a variety of search capabilities covering sources such as academic databases, Wikipedia, and general web content. After a simple installation and configuration, users can quickly generate comprehensive reports with citations. The project emphasizes privacy protection and flexibility, making it suitable for academic research, technology exploration, and personal knowledge management.


Feature List

  • Runs local large language models to protect data privacy.
  • Automatically selects appropriate search tools, such as Wikipedia, arXiv, and PubMed.
  • Generates detailed reports with structured sections and citations.
  • Provides a quick summary function that returns brief answers in seconds.
  • Searches local documents and combines them with web search for comprehensive analysis.
  • Provides a web interface and a command-line interface for flexible operation.
  • Supports multilingual search for global users.

 

Usage Guide

Installation Process

Local Deep Research requires a Python environment and related dependencies. The detailed installation steps are as follows:

  1. Clone the repository
    Run the following command in the terminal to clone the project locally:

    git clone https://github.com/LearningCircuit/local-deep-research.git
    cd local-deep-research
  2. Install dependencies
    Use Python's package manager to install the required libraries:

    pip install -e .
    

    If you need browser automation features, install Playwright:

    playwright install
    
  3. Install a local model (Ollama)
    Local Deep Research supports running local large language models through Ollama. Visit https://ollama.ai to download and install Ollama, then pull the recommended model:

    ollama pull gemma3:12b
    

    Make sure the Ollama service is running in the background (see the verification sketch after these steps).

  4. Configure SearXNG (optional)
    For best search results, it is recommended to self-host a SearXNG search service. Run the following commands to start SearXNG:

    docker pull searxng/searxng
    docker run -d -p 8080:8080 --name searxng searxng/searxng
    

    Then configure the SearXNG address in a .env file in the project root directory:

    SEARXNG_INSTANCE=http://localhost:8080
    SEARXNG_DELAY=2.0
    
  5. Start the tool
    • Web interface: run the following command to launch the web version, then visit http://127.0.0.1:5000:
      ldr-web
      
    • Command-line interface: run the following command to start the command-line version:
      ldr
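
Before launching, you can optionally verify that the background services are reachable. The snippet below is an illustrative Python sketch, not part of the project; it assumes Ollama's default API port 11434 and the SearXNG address configured above:

import urllib.request

# Illustrative health check; assumes Ollama listens on its default port 11434
# and SearXNG on the port mapped in the docker run command above.
for name, url in [("Ollama", "http://localhost:11434"), ("SearXNG", "http://localhost:8080")]:
    try:
        urllib.request.urlopen(url, timeout=5)
        print(f"{name} is reachable at {url}")
    except OSError as exc:
        print(f"{name} is NOT reachable at {url}: {exc}")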
      

Using the Main Features

1. Generating quick summaries

The Quick Summary feature is for users who need a fast answer. Open the web interface and enter a research question, such as "Recent advances in fusion energy". Click the "Quick Summary" button, and the tool will return a brief answer with key information and sources within seconds. If you prefer the Python API, run:

from local_deep_research import quick_summary

results = quick_summary(query="Recent advances in fusion energy", search_tool="auto", iterations=1, questions_per_iteration=2, max_results=30)
print(results["summary"])

The results are returned as text containing a short summary and reference links.

2. Generating detailed reports

The detailed report feature is suitable for users who need a comprehensive analysis. After entering a question in the web interface, select the "Generate Report" option. The tool performs multiple rounds of search and analysis and generates a report in Markdown format with a table of contents, sections, and citations. Report generation time depends on the complexity of the question and the number of search rounds (2 by default). Python API example:

from local_deep_research import generate_report
report = generate_report(query="Recent advances in fusion energy", search_tool="auto", iterations=2)
print(report)

The generated report is saved locally, usually in the examples folder under the project root directory.
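
If you prefer to write the report to a file yourself, the following minimal sketch assumes (as the print call above suggests) that generate_report returns the Markdown text as a string:

from local_deep_research import generate_report

# Assumption: generate_report returns the Markdown report as a string
report = generate_report(query="Recent advances in fusion energy", search_tool="auto", iterations=2)
with open("fusion_energy_report.md", "w", encoding="utf-8") as f:
    f.write(report)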

3. Local document retrieval

Users can place private documents (e.g. PDF, TXT) in a specified folder to be analyzed by the retrieval-augmented generation (RAG) function. Configure the document path in the .env file:

DOC_PATH=/path/to/your/documents

Select "Local Documents" as the report source in the web interface and the tool will generate a report combining local documents and web search. Command Line Operation:

results = quick_summary(query="Analyze the AI trends in my documents", report_source="local")
print(results["summary"])

4. Multilingual search

The tool supports multilingual search for non-English questions. Enter a question in Chinese or another language in the web interface, and the tool will automatically adapt its search tools to return relevant results. For example, a Chinese query about the latest breakthroughs in quantum computing will search both Chinese and English sources.
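
The same behavior is available through the Python API. The sketch below simply passes a Chinese query ("latest breakthroughs in quantum computing") to quick_summary, reusing parameters from the earlier examples; search_tool="auto" lets the tool choose suitable sources:

from local_deep_research import quick_summary

# Chinese query: "latest breakthroughs in quantum computing"
results = quick_summary(query="量子计算的最新突破", search_tool="auto", iterations=1)
print(results["summary"])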

Configuration Parameters

Users can adjust the parameters by modifying config.py or the .env file (a per-call example follows the list):

  • search_tool: selects the search tool (default: auto).
  • iterations: sets the number of research rounds (default: 2).
  • max_results: maximum number of results per search round (default: 50).
  • max_filtered_results: number of results kept after filtering (default: 5).
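
The same parameters can also be passed per call through the Python API, as in the sketch below. The first three keywords appear in the earlier examples; passing max_filtered_results as a keyword is an assumption based on the parameter list above:

from local_deep_research import quick_summary

results = quick_summary(
    query="Recent advances in fusion energy",
    search_tool="wikipedia",     # instead of the default "auto"
    iterations=3,                # more research rounds than the default 2
    max_results=50,
    max_filtered_results=5,      # assumed keyword, mirroring the config option above
)
print(results["summary"])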

Notes

  • Ensure a stable internet connection when searching external resources.
  • Local model performance depends on your hardware; GPU acceleration is recommended.
  • Update Ollama and the project code regularly to get the latest features.

 

Application Scenarios

  1. Academic research
    Students and researchers can use Local Deep Research to quickly gather information from academic papers and web pages and generate cited reports. For example, when researching "Breakthroughs in Quantum Computing", the tool retrieves the latest papers from arXiv and PubMed and produces a structured report.
  2. Technology exploration
    Technology enthusiasts can explore emerging technology trends such as "Blockchain in the Supply Chain". The tool combines local documents and web searches to provide a comprehensive analysis.
  3. Personal knowledge management
    Users can upload private notes or documents and organize their knowledge base together with external information, for example compiling a personal report on "AI Development Predictions for 2025".

 

FAQ

  1. Does Local Deep Research require an internet connection?
    The local model runs without an internet connection, but the search functions (e.g. Wikipedia, arXiv) require one. Users can choose to use only local documents.
  2. What large language models are supported?
    Ollama-hosted models such as gemma3:12b are supported by default. Other backends, such as vLLM or LM Studio, can be configured through config.py.
  3. How can report quality be improved?
    Increase the number of search rounds (iterations) and results (max_results), or use a more capable model. Describing the question clearly also improves accuracy.
  4. Are Windows systems supported?
    Yes, the project provides a Windows one-click installer that simplifies configuration. Visit the GitHub repository to download it.