Introduction to AutoLLM.
As we all know, there are many ways to use LLM technology to chat with our data; the most popular approaches are LangChain and low-code tools like Flowise or Langflow. They are great, but they are a little complicated.
AutoLLM, created by SafeVideo.ai, makes it easy to build a chatbot on private data. Just add a .env file containing your OpenAI API key, write 5 lines of code, and you’re done. We will look deeper into it later.
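The .env file is just a plain text file in the project root; a minimal sketch, assuming AutoLLM picks up the standard OPENAI_API_KEY variable that the OpenAI client reads:
# .env
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxx  # your actual OpenAI API key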
The crazy part is that we can also use local LLMs deployed with Ollama, and support for 100+ other third-party LLMs is built in.
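As a rough sketch of what that looks like (the llm_model and llm_api_base keyword names here are assumptions based on AutoLLM’s LiteLLM-style model strings; double-check them against the docs for your installed version):
# Sketch: pointing AutoQueryEngine at a local Ollama model instead of OpenAI.
# The llm_model / llm_api_base keyword names are assumptions; verify in the AutoLLM docs.
from autollm import AutoQueryEngine, read_files_as_documents

documents = read_files_as_documents(input_dir="./data")
query_engine = AutoQueryEngine.from_defaults(
    documents=documents,
    llm_model="ollama/llama2",              # LiteLLM-style model string for a local Ollama model
    llm_api_base="http://localhost:11434",  # default address of a local Ollama server
)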
AutoLLM also supports multiple vector databases (LanceDB is used by default to store the embeddings of our docs) and gives us an easy way to serve inference through an API using FastAPI.
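If you want to be explicit about the vector store, the same call takes vector-store parameters; the keyword names below (vector_store_type, lancedb_uri) are assumptions to verify against the AutoLLM docs for your version:
# Sketch: spelling out the default LanceDB vector store settings.
# Keyword names are assumptions; check the AutoLLM docs for your version.
from autollm import AutoQueryEngine, read_files_as_documents

documents = read_files_as_documents(input_dir="./data")
query_engine = AutoQueryEngine.from_defaults(
    documents=documents,
    vector_store_type="LanceDBVectorStore",  # the default store
    lancedb_uri="./lancedb",                 # local folder where the embeddings are persisted
)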
Let’s create a simple chatbot using AutoLLM by following the Quick Start.
- Use Colab or create a new conda/pipenv environment with Python 3.8 or above.
- Install the AutoLLM package with pip:
pip install autollm
- Open a folder in VS Code or any editor of your choice.
- Create a folder in the working directory, e.g. data, and copy in the data you want to chat with.
- Create a Python file, e.g. main.py, and add the following lines:
# Import things from the AutoLLM package
from autollm import AutoQueryEngine, read_files_as_documents
# Import data:
documents = read_files_as_documents(input_dir="./data")
# Create Query Engine:
query_engine = AutoQueryEngine.from_defaults(documents=documents)
# Ask your questions and store response:
response = query_engine.query(
"Who are tom and jerry in this story?"
)
# Show output:
print(response.response)
Let’s test the chatbot by running the Python file:
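python main.py
Here is the output: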
Tom and Jerry are one of most popular cartoon, Tom is a cat and Jerry is a crazy mouse.
Great output! Now let’s expose the chatbot for inference using FastAPI:
# Import things from AutoLLM Package
from autollm import AutoQueryEngine, read_files_as_documents, AutoFastAPI
import uvicorn
# Import data:
documents = read_files_as_documents(input_dir="./data")
# Create Query Engine:
query_engine = AutoQueryEngine.from_defaults(documents=documents)
# Create API inference:
app = AutoFastAPI.from_query_engine(query_engine)
# Running the inference engine:
uvicorn.run(app, host="0.0.0.0", port=8000)
# Output:
# INFO: Started server process [12345]
# INFO: Waiting for application startup.
# INFO: Application startup complete.
# INFO: Uvicorn running on http://0.0.0.0:8000
You can now connect your web app, or test the API with Postman by sending requests to localhost:8000 (or 0.0.0.0:8000).
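Exactly which route and request schema AutoFastAPI registers can vary by AutoLLM version, so the easiest way to check is FastAPI’s built-in interactive docs at http://localhost:8000/docs. As a hedged sketch (the /query path and the user_query payload field below are assumptions; confirm them in /docs), a request from Python could look like this:
# Hedged example request; the "/query" route and the payload key are assumptions,
# confirm the real schema at http://localhost:8000/docs before relying on them.
import requests

resp = requests.post(
    "http://localhost:8000/query",
    json={"user_query": "Who are Tom and Jerry in this story?"},
)
print(resp.status_code, resp.json())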
That’s it (:
To learn more, check out AutoLLM’s GitHub repository and documentation from SafeVideo.ai.
If you need any assistance, you can ping me through any of the links found in my bio. Ibrahim Zaman