Generative AI has transformed technology with its ability to create contextually relevant content, opening a new era of AI possibilities. At its core is Retrieval-Augmented Generation (RAG), which combines information retrieval with LLMs to produce intelligent, well-grounded responses from external documents.
This article takes a deep dive into building a RAG-powered LLM application with ChromaDB, a vector database known for its efficient handling of large datasets.
1. Environment Setup
To build a RAG-based LLM application, you will need the following:
- Python (download: https://www.python.org/downloads/)
- An OpenAI API key (sign up: https://platform.openai.com/signup)
as well as a basic understanding of Python and web APIs.
2. Code Implementation
2.1 Create and navigate to the project directory
In a terminal, create a new directory and change into it:
mkdir rag_lmm_application
cd rag_lmm_application
2.2 Create a virtual environment
A virtual environment isolates the project's Python dependencies from the rest of your system. Create one with:
python -m venv venv
Then activate it. On Mac/Linux:
source venv/bin/activate
On Windows:
venv\Scripts\activate
2.3 Install the required packages
Install the base libraries:
pip install -r requirements.txt
Note: make sure the requirements.txt file lists all the necessary dependencies.
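For reference, a minimal requirements.txt that covers the imports used later in this article might look like the following (the package list is inferred from the code below, not taken from the original source; pin versions as needed):

langchain
openai
chromadb
tiktoken
pypdf
docx2txt
python-dotenv
streamlit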
With these steps complete, the environment is ready, and we can start building a state-of-the-art RAG chat application with ChromaDB.
2.4 Loading and processing documents
We use LangChain to load documents in a variety of formats, such as PDF, DOCX, and TXT. This is essential for accessing external data, processing it efficiently, and keeping the data uniformly prepared for the later stages. The code is as follows:
# loading PDF, DOCX and TXT files as LangChain Documents
def load_document(file):
    import os
    name, extension = os.path.splitext(file)

    if extension == '.pdf':
        from langchain.document_loaders import PyPDFLoader
        print(f'Loading {file}')
        loader = PyPDFLoader(file)
    elif extension == '.docx':
        from langchain.document_loaders import Docx2txtLoader
        print(f'Loading {file}')
        loader = Docx2txtLoader(file)
    elif extension == '.txt':
        from langchain.document_loaders import TextLoader
        loader = TextLoader(file)
    else:
        print('Document format is not supported!')
        return None

    data = loader.load()
    return data
Chunking is critical in a RAG system: the data is split into chunks before embedding, which preserves context and enables efficient information retrieval. The code is as follows:
# splitting data in chunks
def chunk_data(data, chunk_size=256, chunk_overlap=20):
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
    chunks = text_splitter.split_documents(data)
    return chunks
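As a quick sanity check, the two helpers can be chained together. A minimal sketch, assuming a local file at the placeholder path ./sample.pdf:

# hypothetical usage: load a placeholder PDF and split it into chunks
data = load_document('./sample.pdf')
chunks = chunk_data(data, chunk_size=256, chunk_overlap=20)
print(f'{len(chunks)} chunks created')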
2.5 Creating embeddings with OpenAI and ChromaDB
We use OpenAI's embedding model to create embeddings and store them efficiently in ChromaDB, enabling fast information retrieval. The code is as follows:
# create embeddings using OpenAIEmbeddings() and save them in a Chroma vector store
def create_embeddings(chunks):
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import Chroma

    embeddings = OpenAIEmbeddings()
    vector_store = Chroma.from_documents(chunks, embeddings)

    # if you want to use a specific directory for chromadb
    # vector_store = Chroma.from_documents(chunks, embeddings, persist_directory='./mychroma_db')
    return vector_store
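Before wiring the store into a chat interface, retrieval can be verified directly. A minimal sketch, assuming the chunks variable from the previous step and an OPENAI_API_KEY set in the environment (the query string is only an example):

# hypothetical smoke test: embed the chunks and run a similarity search
vector_store = create_embeddings(chunks)
results = vector_store.similarity_search('What is this document about?', k=3)
for doc in results:
    print(doc.page_content[:100])  # first 100 characters of each matching chunk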
2.6 Building the chat interface with Streamlit
Streamlit's simplicity shines in our RAG LLM application, effortlessly wiring user input to backend processing. With Streamlit's initialization and layout, users can upload documents and manage their data; the backend processes that input and returns responses directly in the Streamlit interface, a seamless integration of frontend and backend. The following code illustrates the setup:
import streamlit as st

# answer a question over the vector store with a RetrievalQA chain
def ask_and_get_answer(vector_store, q, k=3):
    from langchain.chains import RetrievalQA
    from langchain.chat_models import ChatOpenAI

    llm = ChatOpenAI(model='gpt-3.5-turbo', temperature=1)
    retriever = vector_store.as_retriever(search_type='similarity', search_kwargs={'k': k})
    chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)

    answer = chain.run(q)
    return answer

# calculate embedding cost using tiktoken
def calculate_embedding_cost(texts):
    import tiktoken
    enc = tiktoken.encoding_for_model('text-embedding-ada-002')
    total_tokens = sum([len(enc.encode(page.page_content)) for page in texts])
    # print(f'Total Tokens: {total_tokens}')
    # print(f'Embedding Cost in USD: {total_tokens / 1000 * 0.0004:.6f}')
    return total_tokens, total_tokens / 1000 * 0.0004

# clear the chat history from streamlit session state
def clear_history():
    if 'history' in st.session_state:
        del st.session_state['history']


if __name__ == "__main__":
    import os

    # loading the OpenAI api key from .env
    from dotenv import load_dotenv, find_dotenv
    load_dotenv(find_dotenv(), override=True)

    st.image('img.png')
    st.subheader('LLM Question-Answering Application')
    with st.sidebar:
        # text_input for the OpenAI API key (alternative to python-dotenv and .env)
        api_key = st.text_input('OpenAI API Key:', type='password')
        if api_key:
            os.environ['OPENAI_API_KEY'] = api_key

        # file uploader widget
        uploaded_file = st.file_uploader('Upload a file:', type=['pdf', 'docx', 'txt'])

        # chunk size number widget
        chunk_size = st.number_input('Chunk size:', min_value=100, max_value=2048, value=512, on_change=clear_history)

        # k number input widget
        k = st.number_input('k', min_value=1, max_value=20, value=3, on_change=clear_history)

        # add data button widget
        add_data = st.button('Add Data', on_click=clear_history)

        if uploaded_file and add_data:  # if the user browsed a file
            with st.spinner('Reading, chunking and embedding file ...'):

                # writing the file from RAM to the current directory on disk
                bytes_data = uploaded_file.read()
                file_name = os.path.join('./', uploaded_file.name)
                with open(file_name, 'wb') as f:
                    f.write(bytes_data)

                data = load_document(file_name)
                chunks = chunk_data(data, chunk_size=chunk_size)
                st.write(f'Chunk size: {chunk_size}, Chunks: {len(chunks)}')

                tokens, embedding_cost = calculate_embedding_cost(chunks)
                st.write(f'Embedding cost: ${embedding_cost:.4f}')

                # creating the embeddings and returning the Chroma vector store
                vector_store = create_embeddings(chunks)

                # saving the vector store in the streamlit session state (to be persistent between reruns)
                st.session_state.vs = vector_store
                st.success('File uploaded, chunked and embedded successfully.')
The code above shows how to create input widgets in Streamlit and handle user input, letting users interact with the application seamlessly and intuitively.
2.7 Retrieving answers and enhancing user interaction
Our RAG chat application combines LangChain's RetrievalQA with ChromaDB: user queries are answered with relevant, accurate information drawn from the data embedded in ChromaDB, showcasing advanced generative AI capabilities.
The following snippet shows the Streamlit side of the implementation:
# user's question text input widget
q = st.text_input('Ask a question about the content of your file:')
if q:  # if the user entered a question and hit enter
    if 'vs' in st.session_state:  # if there's the vector store (user uploaded, split and embedded a file)
        vector_store = st.session_state.vs
        st.write(f'k: {k}')
        answer = ask_and_get_answer(vector_store, q, k)

        # text area widget for the LLM answer
        st.text_area('LLM Answer: ', value=answer)

        st.divider()

        # if there's no chat history in the session state, create it
        if 'history' not in st.session_state:
            st.session_state.history = ''

        # the current question and answer
        value = f'Q: {q} \nA: {answer}'

        st.session_state.history = f'{value} \n{"-" * 100} \n{st.session_state.history}'
        h = st.session_state.history

        # text area widget for the chat history
        st.text_area(label='Chat History', value=h, key='history', height=400)
This code ties user input and response generation together in Streamlit. Backed by ChromaDB's vector data, it returns accurate answers, makes the chat application more interactive, and delivers informative AI conversations.
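For debugging outside of Streamlit, ask_and_get_answer can also be called directly from a Python shell. A minimal sketch, assuming a vector store already populated as in section 2.5 (the question text is only an example):

# hypothetical direct call to the QA chain, bypassing the UI
answer = ask_and_get_answer(vector_store, 'Summarize the document in one sentence.', k=3)
print(answer)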
3. Conclusion
In this article, we worked through the details of building an LLM application with OpenAI, ChromaDB, and Streamlit: setting up the environment, processing documents, creating and storing embeddings, and building a user-friendly chat interface, demonstrating the powerful combination of RAG and ChromaDB.
To run the application, execute the following command in a terminal:
streamlit run ./chat_with_documents.py
You can now test the application by navigating to http://localhost:8501.
References:
[1] https://medium.com/@oladimejisamuel/unlocking-the-power-of-generativeai-building-a-cutting-edge-rag-chat-application-with-chromadb-c5c994ccc584