OpenAI Assistants: Limited, but Incredible

12.22.2023 | AI, LLM, OpenAI Assistants, RAG | Kaylynn Watson

Introduction

In the ever-evolving realm of AI development, OpenAI has ventured into Retrieval Augmented Generation (RAG) with its latest offering, Assistants. Assistants with Retrieval is OpenAI's attempt to harness the power of RAG, a technique widely embraced and refined in open-source libraries for augmenting an AI's knowledge with custom information. Think ChatGPT, but able to answer questions about your own data. While it doesn't yet lead the pack in creating AI-driven knowledge bases, its inception marks a significant step toward more sophisticated and nuanced AI applications. As developers, our dive into this beta tool is not just about evaluating its current capabilities, but also about understanding its place in the broader context of RAG's evolution and its potential to reshape our approaches to LLMs.

Incredible Accuracy for Incredible Ease of Use

OpenAI's RAG tool is an awesome prospect for developers eager to explore the nuances of retrieval-augmented models. It serves as a canvas for experimentation, offering a glimpse into the future of sophisticated AI applications. Here's why it's a valuable experimental platform:

  • Ease of Use: Its user-friendly nature invites developers to explore and experiment with minimal setup. This allows more people to try out more use cases to learn where LLM retrieval could improve their workflows.
  • Respectable Accuracy: We swapped the gpt-3.5-turbo model in our custom chatbot for an OpenAI Assistant and saw a similar, if slightly lower, level of accuracy. This is so cool! Being able to spin up a custom chatbot in a couple of hours with a ~75% accuracy rate is incredible (a minimal setup sketch follows this list). For more information about our custom chatbot, visit our website: focusedlabs.io/ai.
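
To give a sense of how little setup is involved, here is a minimal sketch of standing up an Assistant with Retrieval using the OpenAI Python SDK (the beta Assistants API as of this writing). The file name, instructions, model choice, and question are placeholders, not the exact configuration we used for our chatbot.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a document for the assistant to retrieve from.
knowledge_file = client.files.create(
    file=open("company_docs.txt", "rb"),
    purpose="assistants",
)

# Create an assistant with the Retrieval tool enabled and the file attached.
assistant = client.beta.assistants.create(
    name="Docs Assistant",
    instructions="Answer questions using only the attached documents.",
    model="gpt-3.5-turbo-1106",
    tools=[{"type": "retrieval"}],
    file_ids=[knowledge_file.id],
)

# Ask a question on a fresh thread; the answer is produced by a "run".
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is our PTO policy?",
)
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
```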

Understanding the Limitations: A Beta Analysis

In its beta stage, this tool has its limitations. The current landscape of RAG in open-source libraries such as Langchain sets a high benchmark, and OpenAI's iteration is an ambitious stride into this territory. However, with ambition come the teething problems of any beta technology. Here's what developers should be aware of:

  1. Source Citation Feature: Source citation, a vital feature for earning user trust, is not yet functional. That signals improvements to come, but it is a gap today.
  2. Document Limitations: An assistant supports only 20 documents, each up to 512 MB, which does not scale to most enterprise datasets. For smaller datasets the file-size cap is rarely the problem; instead, combining many smaller files into one larger file to stay under the file count is an anti-pattern that loses structure and context.
  3. Lack of Customization: While the simple interface facilitates quick setup times, high abstraction levels mean less control for developers to adjust and optimize the tool for specific use cases.
  4. Polling Over Streaming: Without streaming capabilities, developers are left polling for run status, which hampers real-time responsiveness (see the sketch after this list).
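
For illustration, this is roughly what the polling loop looks like, continuing from the client, thread, and run created in the earlier sketch. The one-second sleep and the way the reply is printed are our own choices, not a prescribed pattern.

```python
import time

# Poll the run started in the earlier sketch until it leaves the queue;
# there is no streaming channel, so the client has to keep asking.
while run.status in ("queued", "in_progress"):
    time.sleep(1)  # arbitrary back-off; tune it for your latency budget
    run = client.beta.threads.runs.retrieve(
        thread_id=thread.id,
        run_id=run.id,
    )

if run.status == "completed":
    # Messages come back newest-first; the first entry is the assistant's reply.
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)
else:
    print(f"Run ended with status: {run.status}")
```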

Tips for Maximizing Your Experience

Navigating a beta tool requires a mix of patience and strategy. To make the most of your journey with OpenAI's RAG tool, keep these tips in mind:

  • Data Management: Make the most of your limited document slots by converting and concatenating files wherever possible. We recommend using *.txt formats for ease and compression (a small concatenation helper is sketched after this list).
  • File Type Performance: Experiment with different file types to discover which yields the best results for your specific case. For us, *.txt files returned more accurate results than PDFs.
  • Use with Other Libraries: Langchain supports OpenAI Assistants (see the sketch below). While Assistants may not be the leader on their own, they are still powerful when combined with other techniques. I also recommend using LlamaHub's loaders to help integrate with various data sources.
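
On the data-management tip, here is a hypothetical helper (the function name and the per-file header format are our own invention) that flattens a folder of text documents into a single .txt upload while keeping a little provenance per source file:

```python
from pathlib import Path

def concatenate_docs(source_dir: str, output_path: str = "combined_docs.txt") -> Path:
    """Flatten every .txt file in source_dir into one upload-ready file."""
    output = Path(output_path)
    with output.open("w", encoding="utf-8") as combined:
        for doc in sorted(Path(source_dir).glob("*.txt")):
            # A lightweight header preserves at least the source file name.
            combined.write(f"\n\n===== Source: {doc.name} =====\n\n")
            combined.write(doc.read_text(encoding="utf-8"))
    return output

combined_file = concatenate_docs("docs/")
```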
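
And for the Langchain route, here is a sketch of wrapping an existing Assistant with LangChain's OpenAIAssistantRunnable, assuming the assistant created in the first sketch. The exact import path and return shape can vary by LangChain version, so treat this as a starting point rather than a definitive recipe.

```python
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

# Wrap an existing assistant (assistant.id from the first sketch) so it can
# slot into LangChain chains and agents like any other runnable.
assistant_runnable = OpenAIAssistantRunnable(assistant_id=assistant.id, as_agent=False)

# The input dict carries the user message; the output is the thread's messages.
response = assistant_runnable.invoke({"content": "Summarize our onboarding docs."})
print(response)
```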

Conclusion

OpenAI's entry into the RAG space is a testament to the ongoing evolution of AI and machine learning technologies. While this particular tool may not be ready for production, it offers a valuable learning opportunity for developers keen on the future of retrieval-augmented models. By engaging with it critically and creatively, we can contribute to its growth and simultaneously expand our own understanding of where RAG can take us in the realm of AI development.

 
