⛓️ Serving LangChain LLM apps and agents automagically with FastAPI. LLMOps
940 stars · 71 forks · 7 watchers · 21 issues
LangCorn is an API server that lets you serve LangChain models and pipelines with the power of FastAPI, for a robust and efficient LLMOps experience.
Install LangCorn with pip:
pip install langcorn
LLM chain example, ex1.py:
import os
from langchain import LLMMathChain, OpenAI
# Read the OpenAI key from the environment (the fallback shown is a placeholder)
os.environ["OPENAI_API_KEY"] = os.environ.get("OPENAI_API_KEY", "sk-********")
llm = OpenAI(temperature=0)
chain = LLMMathChain(llm=llm, verbose=True)
Start your LangCorn FastAPI server:
langcorn server examples.ex1:chain
[INFO] 2023-04-18 14:34:56.32 | api:create_service:75 | Creating service
[INFO] 2023-04-18 14:34:57.51 | api:create_service:85 | lang_app='examples.ex1:chain':LLMChain(['product'])
[INFO] 2023-04-18 14:34:57.51 | api:create_service:104 | Serving
[INFO] 2023-04-18 14:34:57.51 | api:create_service:106 | Endpoint: /docs
[INFO] 2023-04-18 14:34:57.51 | api:create_service:106 | Endpoint: /examples.ex1/run
INFO: Started server process [27843]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8718 (Press CTRL+C to quit)
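Each chain is exposed at /{module.path}/run, as the endpoint log above shows, and its input keys become the JSON body fields; for ex1 the log reports LLMChain(['product']). A minimal client sketch using only the standard library (the input value is illustrative):

```python
import json
import urllib.request

# Endpoint pattern: /{module.path}/run (see the startup log).
URL = "http://127.0.0.1:8718/examples.ex1/run"

def build_run_request(inputs: dict) -> urllib.request.Request:
    """Build (but do not send) a POST request for a LangCorn chain endpoint."""
    return urllib.request.Request(
        URL,
        data=json.dumps(inputs).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "product" is the chain's input key, per the log line:
#   lang_app='examples.ex1:chain':LLMChain(['product'])
req = build_run_request({"product": "colorful socks"})
# To actually send it (requires the server above to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```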
Or alternatively:
python -m langcorn server examples.ex1:chain
Run multiple chains:
python -m langcorn server examples.ex1:chain examples.ex2:chain
[INFO] 2023-04-18 14:35:21.11 | api:create_service:75 | Creating service
[INFO] 2023-04-18 14:35:21.82 | api:create_service:85 | lang_app='examples.ex1:chain':LLMChain(['product'])
[INFO] 2023-04-18 14:35:21.82 | api:create_service:85 | lang_app='examples.ex2:chain':SimpleSequentialChain(['input'])
[INFO] 2023-04-18 14:35:21.82 | api:create_service:104 | Serving
[INFO] 2023-04-18 14:35:21.82 | api:create_service:106 | Endpoint: /docs
[INFO] 2023-04-18 14:35:21.82 | api:create_service:106 | Endpoint: /examples.ex1/run
[INFO] 2023-04-18 14:35:21.82 | api:create_service:106 | Endpoint: /examples.ex2/run
INFO: Started server process [27863]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8718 (Press CTRL+C to quit)
Import the necessary packages and create your FastAPI app:
from fastapi import FastAPI
from langcorn import create_service
app: FastAPI = create_service("examples.ex1:chain")
Configuring multiple chains:
from fastapi import FastAPI
from langcorn import create_service
app: FastAPI = create_service("examples.ex2:chain", "examples.ex1:chain")
Or:
from fastapi import FastAPI
from langcorn import create_service
app: FastAPI = create_service(
"examples.ex1:chain",
"examples.ex2:chain",
"examples.ex3:chain",
"examples.ex4:sequential_chain",
"examples.ex5:conversation",
"examples.ex6:conversation_with_summary",
"examples.ex7_agent:agent",
)
Run your LangCorn FastAPI server:
uvicorn main:app --host 0.0.0.0 --port 8000
Now your LangChain models and pipelines are accessible through the LangCorn API server.
FastAPI documentation is generated automatically. A live example is hosted on Vercel.

Static API token authentication can be added by specifying auth_token:
python -m langcorn server examples.ex1:chain examples.ex2:chain --auth_token=api-secret-value
Or:
app: FastAPI = create_service("examples.ex1:chain", auth_token="api-secret-value")
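A protected endpoint then expects the token with each request. The header name below is an assumption for illustration; the exact scheme is visible in the server's auto-generated /docs page:

```python
import json
import urllib.request

AUTH_TOKEN = "api-secret-value"  # must match the server's --auth_token

# Hypothetical client request to a token-protected chain endpoint.
req = urllib.request.Request(
    "http://127.0.0.1:8718/examples.ex1/run",
    data=json.dumps({"product": "colorful socks"}).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Assumed header name -- confirm against the /docs page.
        "Authorization": AUTH_TOKEN,
    },
    method="POST",
)
# with urllib.request.urlopen(req) as resp:   # requires a running server
#     print(json.loads(resp.read()))
```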
POST http://0.0.0.0:3000/examples.ex6/run
X-LLM-API-KEY: sk-******
Content-Type: application/json
{
"history": "string",
"input": "What is brain?",
"memory": [
{
"type": "human",
"data": {
"content": "What is memory?",
"additional_kwargs": {}
}
},
{
"type": "ai",
"data": {
"content": " Memory is the ability of the brain to store, retain, and recall information. It is the capacity to remember past experiences, facts, and events. It is also the ability to learn and remember new information.",
"additional_kwargs": {}
}
}
]
}
Response:
{
"output": " The brain is an organ in the human body that is responsible for controlling thought, memory, emotion, and behavior. It is composed of billions of neurons that communicate with
LangCorn is an open-source API serving framework built on FastAPI that automatically turns LangChain models and pipelines into efficient web services. The project streamlines the path from LLM app development to production deployment, providing a robust engineering solution for large language model applications.
One-command deployment of LangChain models and workflows, with standardized RESTful API endpoints generated automatically.
Built on the high-performance FastAPI framework, using asynchronous request handling for high concurrency and low-latency responses.
Out-of-the-box authentication, with support for custom API key management and per-request parameter overrides.
Good extensibility, allowing developers to customize handling logic and manage conversation memory flexibly.
The project suits developers who need to quickly build LLM prototypes and production-grade API services, and is a particularly good fit for teams that want to deploy LangChain apps to serverless environments such as Vercel.