https://github.com/k3nnethfrancis/leo
Overview
Leo is a Discord bot built on top of OpenAI's GPT series, designed to give DAOs an edge in their operations.
Core Features
Leo has three main features (Discord commands):
- /chat: opens a chat thread with the GPT-3.5-turbo model (ChatGPT). More or less the same experience as chat.openai.com, but inside the Discord server.
- /ask: takes a question as input and responds based on documents that have been loaded into the system. It only responds with answers to queries. This differs from /chat in two key ways:
  - It answers the question directly from the loaded documents. If it cannot find the answer in the docs, it will say "i don't know".
  - It does not open a thread, so context from previous interactions is not saved or used in subsequent interactions (in contrast, /chat uses everything in the thread as context, as long as it has not exceeded the model's context length).
- /onboard: responds to introductions in the target channel with project recommendations. Takes a limit as input to dictate the number of messages to reply to. It replies to each introduction only once and never replies to itself. A second GPT model determines which messages are intros and should be replied to.
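As a rough illustration of how /chat's thread context could work, the sketch below assembles prior thread messages into a chat-completion request and drops the oldest messages once a token budget is exceeded. The function name and the token-counting heuristic are assumptions for illustration, not Leo's actual implementation.

```python
# Hypothetical sketch of thread-context assembly for /chat.
# Message dicts use the OpenAI chat format: {"role": ..., "content": ...}.

def build_context(history, new_message, max_tokens=4096):
    """Combine prior thread messages with the new one, dropping the
    oldest messages when a rough token budget would be exceeded."""
    def rough_tokens(msg):
        # Crude heuristic: roughly one token per four characters.
        return max(1, len(msg["content"]) // 4)

    messages = history + [{"role": "user", "content": new_message}]
    total = sum(rough_tokens(m) for m in messages)
    # Trim from the front (oldest first), always keeping the newest message.
    while total > max_tokens and len(messages) > 1:
        dropped = messages.pop(0)
        total -= rough_tokens(dropped)
    return messages
```

The returned list would then be passed as the `messages` parameter of a chat-completion API call; a production bot would use the model's real tokenizer rather than a character heuristic.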
Training
Leo currently uses OpenAI's GPT series as base models. To date, no further fine-tuning has been applied to Leo, for several reasons:
- Fine-tuning is not available for GPT-3.5/4 models
- Fine-tuning is not needed to get the performance required for Leo's current features
- For the vast majority of use cases where we want the model to draw on information it was not trained on, we can use document embeddings, which are a much cheaper and more efficient way of doing this (/ask uses embeddings and is not fine-tuned).
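The embeddings approach behind /ask can be sketched as a nearest-neighbor lookup: embed the documents once, embed each incoming question, and answer from the closest document only when it is similar enough, otherwise say "i don't know". The vectors, threshold, and helper names below are illustrative assumptions; in practice the vectors would come from an embeddings API such as OpenAI's text-embedding models.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, doc_vecs, docs, threshold=0.5):
    """Return the most similar document, or None when nothing in the
    corpus is close enough -- the bot would then answer "i don't know"."""
    scored = [(cosine(query_vec, v), d) for v, d in zip(doc_vecs, docs)]
    best_score, best_doc = max(scored)
    return best_doc if best_score >= threshold else None
```

The retrieved document text would then be placed into the model's prompt alongside the question, so the answer comes from the docs rather than from whatever the base model happens to remember.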
That said, there may be good reasons to fine-tune Leo in the future, such as:
- We want Leo to respond in a particular style or voice
- We want to swap to open-source models, which would likely need some fine-tuning to match the performance of GPT models
- We want to improve Leo's capabilities in some other way
More on this is discussed in the future plans section.
Vision
I view Leo as an early attempt at creating digital assistants for organizations. State-of-the-art AI models now make this possible, and the list of potential use cases keeps growing as we discover what these models can do. For example, could we design a bot that assists us in conducting research? Could we design a bot that helps us manage the DAO?