
The one (or two, or …) AI to rule them all

Whether monolithic AGI or systems-style AI built from components will ultimately dominate is still very much an open question, and one we're watching closely. Berkeley AI Research recently wrote about a concept they term "Compound AI". The premise is simple and, in our view, not at all surprising: to get the most out of the latest language, code and image models, you need to lego-brick various components together into a performant system that actually does what you need it to do. We're really not at the point yet of having a single model, or even a single interface, that can solve complex business, enterprise or consumer problems. "Agent RAG" is another term we've seen that captures a subset of these ideas. Put another way, few if any of the existing off-the-shelf models or tools will be sufficient on their own to solve the problem you want to address with an AI system. By bringing together a variety of purpose-built tools and models, you form a system that does what you really want it to do.


There is an incredible variety of pre-trained models, tools and frameworks available today, as well as hosted models and services from cloud providers. Many are trained for specific tasks such as general language modeling, instruction following or text-to-image generation. Even within the language space, a variety of tools and components make up some of the most performant systems, the retrieval-augmented generation ("RAG") paradigm being the best known: it combines a neural retrieval system with a generative LLM. But RAG is only one such compound system, and there are many configuration and optimization variants within RAG solutions alone. Other possible compound systems include dynamically generated prompts, iterative prompt optimization, fact-checking filters and more.
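To make the RAG composition concrete, here is a minimal sketch of the two-stage shape: a retriever selects relevant passages, and a generator produces an answer grounded in them. The retriever here is a toy word-overlap ranker and the generator is a placeholder string function; in a real system each would be a neural component or a model API call. All names and the tiny corpus are hypothetical, chosen only to illustrate the composition.

```python
import re

def tokens(text):
    """Lowercase word tokens, punctuation stripped (toy tokenizer)."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, corpus, k=2):
    """Toy lexical retriever: rank documents by word overlap with the query.
    A real RAG system would use a neural/vector retriever here."""
    q = tokens(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokens(doc)), reverse=True)
    return ranked[:k]

def generate(query, context):
    """Stand-in for the generative LLM call, which would be prompted
    with the query plus the retrieved context."""
    return f"Answer to {query!r} grounded in {len(context)} retrieved passage(s)."

def rag_answer(query, corpus):
    """The compound system: retrieval feeding generation."""
    context = retrieve(query, corpus)
    return generate(query, context)

corpus = [
    "RAG combines retrieval with generation.",
    "Compound AI systems chain multiple components.",
    "Image models turn text into pictures.",
]
print(rag_answer("What is RAG?", corpus))
```

The point of the sketch is that each stage is swappable: you can change the retriever, the index, or the generator independently, which is exactly the kind of configuration variation mentioned above.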


Optimization is always problem-specific…


Optimization is almost always an ongoing process, and it is very specific to the problem and goals of the system. Yes, many pretrained models are optimized for particular tasks, and some even for better general performance, but the highest-value applications are usually very specific and can only be measured end to end. If you're building a financial analyst agent, you can really only evaluate its quality against the end goals you care about: is it analyst accuracy? Investment ROI? Prediction error?
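"Measured end to end" means scoring the final output of the whole pipeline against your own goal metric, rather than scoring components in isolation. A minimal sketch, with a hypothetical stand-in system and a made-up evaluation set, might look like this:

```python
def system(query):
    """Stand-in for the full compound pipeline under test.
    In practice this would run retrieval, prompting, generation, etc."""
    canned = {"what is 2+2": "4", "capital of france": "paris"}
    return canned.get(query, "unknown")

def end_to_end_accuracy(eval_set):
    """Score the system on the metric you actually care about:
    here, the fraction of final answers matching the expected answer."""
    correct = sum(1 for query, expected in eval_set if system(query) == expected)
    return correct / len(eval_set)

# Hypothetical "golden set" of (query, expected answer) pairs.
eval_set = [
    ("what is 2+2", "4"),
    ("capital of france", "paris"),
    ("capital of peru", "lima"),
]
print(end_to_end_accuracy(eval_set))  # 2 of 3 answers match
```

Swapping in a different metric function (ROI, prediction error, analyst agreement) is the only change needed to re-target the evaluation at whatever end goal your application actually has.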


You may think that having to determine which components to use and chain together sounds like a lot of work, and that can be true. Conversely, the compositional nature of today's AI tools is an incredible development that offers a high degree of control over cost, performance and the specifics of the compound solution. Luckily, there are also a number of tools and frameworks looking to join the sub-components and make it easier to fit them together. The BAIR blog post names several of these, including Databricks AI Gateway as a 'model router', LlamaIndex for RAG and other tool-use composition, and DSPy for prompt and model optimization against a 'golden' evaluation set if you have one (DSPy can also work without one).
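The "fitting together" these frameworks do can be sketched in a few lines of plain Python: treat each component as a function and a pipeline as their composition. The stage names below (prompt rewriting, a model call, a fact-checking filter) are hypothetical stand-ins echoing the compound-system examples above, not the API of any of the frameworks mentioned.

```python
def build_pipeline(*stages):
    """Compose stages into one callable: the output of each stage
    is the input of the next."""
    def run(payload):
        for stage in stages:
            payload = stage(payload)
        return payload
    return run

def rewrite_prompt(payload):
    """Example stage: dynamic prompt construction."""
    payload["prompt"] = f"Answer concisely: {payload['query']}"
    return payload

def call_model(payload):
    """Example stage: stand-in for an LLM call."""
    payload["draft"] = f"DRAFT({payload['prompt']})"
    return payload

def fact_check(payload):
    """Example stage: a filter that would verify claims before output."""
    payload["answer"] = payload["draft"] + " [checked]"
    return payload

pipeline = build_pipeline(rewrite_prompt, call_model, fact_check)
result = pipeline({"query": "What is Compound AI?"})
print(result["answer"])
```

The value the real frameworks add on top of this basic pattern is routing between models, caching, observability and optimization, but the underlying idea is the same: independent stages you can rearrange, swap and tune.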


What does 'Compound AI' mean for you and your business? Compound AI re-emphasizes the big unresolved question of how the AI shift will play out: in favor of Big-Tech consolidators, or with truly democratized, à-la-carte open source solutions? Compound AI essentially says "we still don't know", and also that no one has built the one-model-to-rule-them-all. Until that's been achieved, the incredible compositionality of commercial and open source/open weight models and tools means this is a great time to be building AI solutions, for yourself or for others. If you'd like to talk about how Compound AI ideas may play into your AI strategy or planning, please reach out to us at info@theblue.earth - we love to brainstorm AI solutions against the backdrop of all the tools and models available now.

