First, install Ollama and a local large model. Ollama installation instructions are here:
https://blog.csdn.net/qq_28171389/article/details/140068915
Much of the sample code online is out of date because of LangChain versioning: the latest v0.3 differs significantly from v0.2, and some classes have been removed or reorganized into other packages. For example, from langchain.llms import Ollama no longer works; the chat interface now lives in langchain_community.chat_models as ChatOllama (or, with the newest packages, in langchain_ollama). Check the documentation before use.
Migration notes: https://python.langchain.com/docs/versions/v0_3/
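A minimal before/after sketch of the import change (assuming the langchain-ollama package from the list below is installed):

# LangChain v0.2 style, now deprecated:
# from langchain.llms import Ollama

# LangChain v0.3 style:
from langchain_ollama import ChatOllama, OllamaLLM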
langchain-ollama: integrates Ollama models into the LangChain framework
langchain: the LangChain core library, providing tools and abstractions for building AI applications
langchain-community: community-contributed integrations and tools
Pillow: image processing, used in the multimodal example
faiss-cpu: used to build a simple RAG retriever
langchain_chroma: an AI-native open-source vector database focused on developer productivity and happiness (listed for reference; not used in the examples below)
Install the latest versions with the following command:
pip install langchain-ollama langchain langchain-community Pillow faiss-cpu
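To confirm what was installed, a quick version check (a small sketch using only the standard library):

from importlib.metadata import version

# Print the installed version of each package from the pip command above
for pkg in ("langchain", "langchain-ollama", "langchain-community", "faiss-cpu", "Pillow"):
    print(pkg, version(pkg))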
Simple test
from langchain_ollama import ChatOllama
model = ChatOllama(model="llama3.1", temperature=0.7)
messages = [("human", "Hello there")]
for chunk in model.stream(messages):
    print(chunk.content, end='', flush=True)
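If streaming output isn't needed, invoke() returns the whole reply at once (continuing with the model object above):

reply = model.invoke(messages)  # returns an AIMessage
print(reply.content)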
Calling through a prompt template
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms import Ollama

prompt_template = "Please write a poem about {product}; I would like a seven-character regulated verse"
ollama_llm = Ollama(model="qwen2:latest")
llm_chain = LLMChain(
    llm=ollama_llm,
    prompt=PromptTemplate.from_template(prompt_template),
)
result = llm_chain.invoke("spring")
print(result)
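LLMChain is deprecated in recent LangChain releases; the same call can be written as an LCEL pipeline instead, reusing prompt_template and ollama_llm from above:

chain = PromptTemplate.from_template(prompt_template) | ollama_llm
print(chain.invoke({"product": "spring"}))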
Describing the elements in an image
import base64
from io import BytesIO

from IPython.display import HTML, display
from PIL import Image
from langchain_core.messages import HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama

# Convert a PIL image to a Base64-encoded string
def convert_to_base64(pil_image):
    buffered = BytesIO()
    # Convert to RGB first so PNGs with an alpha channel can be saved as JPEG
    pil_image.convert("RGB").save(buffered, format="JPEG")
    img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
    return img_str

# Display a Base64-encoded string as an image (in Jupyter)
def plt_img_base64(img_base64):
    image_html = f'<img src="data:image/jpeg;base64,{img_base64}" />'
    display(HTML(image_html))

file_path = r"E:\ComfyUI\img\22.png"
pil_image = Image.open(file_path)
image_b64 = convert_to_base64(pil_image)

llm = ChatOllama(model="llava", temperature=0)

# Build a multimodal HumanMessage containing both the text and the image
def prompt_func(data):
    text = data["text"]
    image = data["image"]
    image_part = {"type": "image_url", "image_url": f"data:image/jpeg;base64,{image}"}
    content_parts = [{"type": "text", "text": text}, image_part]
    return [HumanMessage(content=content_parts)]

chain = prompt_func | llm | StrOutputParser()
query_chain = chain.invoke({"text": "What elements are in this image?", "image": image_b64})
print(query_chain)
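The plt_img_base64 helper defined above is never called; in a Jupyter notebook you can use it to preview the image that is sent to the model:

plt_img_base64(image_b64)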
Role setup: assigning the model an identity
from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage, SystemMessage

ollama = ChatOllama(base_url='http://127.0.0.1:11434', model="qwen2")
content = "Hello, who are you?"
print("Q: " + content)
messages = [
    SystemMessage(content="You are an emotional-support assistant, and your name is TT"),
    HumanMessage(content=content),
]
ai_msg = ollama.invoke(messages)
print(ai_msg.content)
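To carry the conversation forward with history, append the model's reply and the next question to the same message list (the follow-up question below is made up for illustration):

messages.append(ai_msg)  # ai_msg is already an AIMessage
messages.append(HumanMessage(content="What can you help me with?"))
print(ollama.invoke(messages).content)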
Using the ollama package directly
import ollama
response = ollama.chat(model='llama3.1', messages=[
    {'role': 'user', 'content': 'Why is the sky blue?'},
])
print(response['message']['content'])

# The same request with streaming output
stream = ollama.chat(
    model='llama3.1',
    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
    stream=True,
)
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)
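If Ollama is not listening on the default local address, the package also provides a Client; the host below is an assumption, adjust it to your deployment:

from ollama import Client

client = Client(host='http://127.0.0.1:11434')
response = client.chat(model='llama3.1', messages=[
    {'role': 'user', 'content': 'Why is the sky blue?'},
])
print(response['message']['content'])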
Analyzing a local file
from langchain_community.chat_models import ChatOllama
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS
from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Load the PDF file with PyPDFLoader
loader = PyPDFLoader("D:/11.pdf")
# Load and split; the default splitter is RecursiveCharacterTextSplitter
pages = loader.load_and_split()
# Encode with OllamaEmbeddings; a qwen2 model is deployed in the local Ollama
ollama_embeddings = OllamaEmbeddings(model="qwen2")
# Use FAISS as the vector store; documents are embedded with the qwen2 model and stored
faiss_index = FAISS.from_documents(pages, ollama_embeddings)
# Retrieve the 4 most relevant chunks using Maximum Marginal Relevance (MMR) search
retriever = faiss_index.as_retriever(search_type="mmr", search_kwargs={"k": 4})

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# Build the chain with LCEL syntax and call it to get the answer
prompt = hub.pull("rlm/rag-prompt")
# Optional: inspect what the pulled RAG prompt looks like
example_messages = prompt.invoke(
    {"context": "filler context", "question": "filler question"}
).to_messages()
llm = ChatOllama(model="qwen2")
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
for chunk in rag_chain.stream("What is this file about?"):
    print(chunk, end="", flush=True)
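Re-embedding the PDF on every run is slow; the FAISS index can be persisted and reloaded. A sketch (the directory name is arbitrary; recent versions require explicitly opting in to pickle deserialization when loading):

faiss_index.save_local("faiss_index_dir")
faiss_index = FAISS.load_local(
    "faiss_index_dir",
    ollama_embeddings,
    allow_dangerous_deserialization=True,
)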