Andrew Ng, the co-founder of Google Brain and a renowned figure in the field of artificial intelligence, recently declared that AI agents are the biggest development in AI this year, surpassing even the rumored groundbreaking GPT-5 language model in significance. We at Kortical couldn't agree more!
Take a look at our latest video, in which Andy Gray, CEO & Co-Founder of Kortical, explains what an AI agent is, how to build one that can automate a job, and how that's likely to change the world of business.
An AI agent is a digital worker that knows how to perform specific tasks and is connected to the necessary tools to execute those tasks effectively.
One of the most popular examples of an AI agent is ChatGPT. Contrary to popular opinion, ChatGPT is more than just a large language model (LLM). It is capable of using various tools to accomplish tasks, such as browsing the web, generating images, and solving mathematical problems using a math solver. ChatGPT achieves this through an AI agent implementation, which allows it to interact with different tools and systems. However, ChatGPT is a generalist AI agent, meaning it can handle a wide range of tasks. By narrowing down the scope of an AI agent's capabilities to a specific job or industry, it becomes possible to automate entire job categories efficiently.
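To make the idea concrete, here is a minimal sketch of the tool-use loop at the heart of an agent. The tools are stubs standing in for real integrations (web search, a math solver), and a keyword match stands in for the LLM's own tool selection, so this is an illustration of the pattern rather than how ChatGPT is actually implemented.

```python
# Minimal sketch of an AI agent's tool-use loop. The tools are stubs for real
# integrations; a production agent would let the LLM choose the tool itself.

def search_web(query: str) -> str:
    return f"[web results for: {query}]"  # stub web-search integration

def solve_math(expression: str) -> str:
    # Toy math solver; builtins are stripped to keep eval contained.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {
    "search": search_web,
    "calculate": solve_math,
}

def agent_reply(user_message: str) -> str:
    # Route the request to the right tool, then wrap the result in a reply.
    if any(op in user_message for op in "+-*/"):
        expr = user_message.split(":")[-1].strip()
        return f"The answer is {TOOLS['calculate'](expr)}"
    return TOOLS["search"](user_message)

print(agent_reply("calculate: 2 + 3"))  # -> The answer is 5
```

The key point is the separation between reasoning (deciding which tool fits the request) and execution (calling the tool): that division is what distinguishes an agent from a bare LLM.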
Consider our Shopify AI agent designed for customer support with Shopify. This agent can understand the conversation and handle order tracking queries and other customer enquiries by combining an LLM with seamless integration into the company's backend systems, such as order management and knowledge bases.
When a customer asks about the status of their order, the AI agent can access real-time information and provide a precise answer, such as the current location of the package and the delivery service being used. This demonstrates how the AI agent not only responds to the customer's question but also solves their problem by retrieving relevant information from the real world. By automating 80% of customer queries, this AI agent is highly effective and brings significant value to the business.
When using large language models (LLMs) like ChatGPT for customer support, they can sometimes provide responses that sound helpful but are actually incorrect or made up.
This happens because LLMs don't have specific knowledge about a particular business or context. Instead, they use the most likely words to create a response that seems reasonable, even if it's not accurate. This is known as "hallucinating."
In the example below, ChatGPT is asked to role-play as a customer support agent. Its answer sounds helpful, but the “NEW10” discount doesn't actually exist.
One way to solve this problem is by using a technique called prompt injection. This means adding relevant information, like frequently asked questions (FAQs) and their answers, directly into the instructions given to the LLM. By providing the correct answer within the instructions, the LLM is more likely to give an accurate response when a user asks the same question.
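Here is a sketch of that technique: verified FAQ answers are pasted straight into the instructions so the model answers from them instead of guessing. The FAQ entries and prompt wording are illustrative, and the resulting string would be sent to a real chat-completion API as the system prompt.

```python
# Sketch of injecting known FAQ answers into the LLM's instructions so it
# quotes verified facts instead of hallucinating. FAQ content is illustrative.

FAQS = {
    "What discount codes are available?": "WELCOME5 gives 5% off a first order.",
    "What is the returns window?": "Items can be returned within 30 days.",
}

def build_support_prompt(question: str) -> str:
    faq_block = "\n".join(f"Q: {q}\nA: {a}" for q, a in FAQS.items())
    return (
        "You are a customer support agent. Answer ONLY from the FAQs below; "
        "if the answer is not there, say you don't know.\n\n"
        f"{faq_block}\n\nCustomer question: {question}"
    )

prompt = build_support_prompt("What discount codes are available?")
# The prompt now contains the verified answer, so the model can quote it
# rather than inventing a code like "NEW10".
```

Note the explicit instruction to refuse when the answer isn't present; that refusal clause does a lot of the work in suppressing hallucinations.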
However, in the real world, this approach is often not feasible. Knowledge bases can be incredibly large, containing thousands of documents, pages, and products. It's simply not possible to include all of this information in a single set of instructions for the LLM. Attempting to do so would quickly become unmanageable, and the LLM would likely struggle to work reliably with such a vast amount of information.
A more advanced technique called retrieval augmented generation can help overcome the challenges of dealing with large amounts of information. When a user asks a question, the AI agent runs a retrieval step, typically an embedding-based similarity search, to find the most relevant information from a large collection of documents. It then includes this information in the instructions given to the LLM, along with the user's question. This helps the LLM generate a response that is not only reasonable but also accurate and specific to the business or context.
While there are many details involved in making this technique work well, it is a powerful way to improve the performance of AI agents by providing them with the right information at the right time.
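The retrieve-then-prompt pattern can be sketched in a few lines. Real systems score documents with embedding similarity over a vector index; simple word overlap stands in here, and the knowledge-base documents are made up for the example.

```python
# Toy retrieval augmented generation: score each document against the
# question (word overlap here; real systems use embedding similarity over a
# vector index), then put only the best match into the prompt.

KNOWLEDGE_BASE = [
    "Orders are dispatched within 2 working days of purchase.",
    "Our loyalty scheme gives one point per pound spent.",
    "Refunds are processed within 5 working days of receiving the return.",
]

def retrieve(question: str, docs: list[str]) -> str:
    # Pick the document sharing the most words with the question.
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_rag_prompt(question: str) -> str:
    context = retrieve(question, KNOWLEDGE_BASE)
    return f"Context: {context}\n\nUsing only the context, answer: {question}"

print(build_rag_prompt("How long do refunds take to process?"))
```

Because only the most relevant snippet enters the prompt, the approach scales to knowledge bases far too large to fit into the model's context window.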
While retrieval augmented generation is a powerful technique for providing AI agents with relevant information, it's important to recognise that real-world jobs often involve more than just information retrieval.
Consider the example of a train station attendant. Their responsibilities might include selling tickets, helping people navigate the station, locating amenities, and managing lost and found items. These tasks require interaction with dynamic databases, different backend systems, and a variety of other tools and processes that go beyond simply retrieving information.
To create an AI agent capable of handling the complexity of a real-world job, we need to combine the capabilities of LLMs with specialised knowledge, processes, and tools specific to each task.
LLMs provide a foundation of common sense, reasoning, communication, creativity, and general knowledge. However, to enable the AI agent to perform the various tasks required for a job, we must equip it with the specialised knowledge, defined processes, and integrated tools that each task demands.
By integrating these specialised skills with the LLM's base capabilities, we can create an AI agent that can perform real work.
While it might seem like a good idea to simply include all the necessary information and instructions in a single prompt, this approach has its limitations. Even with the increased capacity of newer models like Gemini 1.5, which can handle up to a million tokens, cramming everything into a single prompt doesn't necessarily lead to better performance.
The biggest challenge in outsourcing a real job to an AI agent is maintaining task coherence. When presented with too many instructions, LLMs can lose track of the task at hand and fail to respond appropriately. This limitation in the LLM's "train of thought" is currently the most significant barrier to creating AI agents capable of handling real-world jobs.
Consider a simple example of finding the fastest route between two stations on a train network. If we have a straightforward network with just two lines and a few stations, the AI agent can easily determine the quickest path. For instance, if we have:
Line 1: Stop F - Stop A - Stop E
Line 2: Stop S - Stop A - Stop L
And it takes 1 minute to travel between each station, the AI can correctly identify that the fastest way from Stop F to Stop L is to go from Stop F to Stop A (1 minute) and then from Stop A to Stop L (1 minute), for a total of 2 minutes.
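The two-minute answer above can be verified (and, in a real agent, delegated) with ordinary breadth-first search. Handing route-finding to a deterministic tool like this, rather than asking the LLM to reason through the network in-prompt, is exactly the kind of delegation discussed below.

```python
# The two-line network above, checked with breadth-first search.
# Stop A is the interchange; each adjacent hop takes 1 minute.
from collections import deque

EDGES = {
    "F": ["A"], "E": ["A"], "S": ["A"], "L": ["A"],
    "A": ["F", "E", "S", "L"],
}

def fastest_minutes(start: str, goal: str) -> int:
    # BFS gives the shortest path when every edge costs the same (1 minute).
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        stop, minutes = queue.popleft()
        if stop == goal:
            return minutes
        for nxt in EDGES[stop]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, minutes + 1))
    raise ValueError("no route between stops")

print(fastest_minutes("F", "L"))  # -> 2
```

Unlike the LLM, this tool stays correct no matter how many lines and stations are added, which is why complex sub-problems belong in tools rather than prompts.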
However, if we increase the complexity of the network by adding more lines and stations, the AI may struggle to provide the correct answer, even though the prompt is still well within the token limit. This is because the AI loses track of the task when faced with too many instructions and variables.
To overcome this challenge, we need to break down the more complex tasks into smaller, more manageable steps. By providing the AI with bite-sized tasks and the necessary tools, specialised knowledge, and processes for each step, we can ensure that it remains focused and provides accurate responses.
In the train network example, we could break down the task by asking the AI to find the fastest route between two lines at a time. By focusing on a smaller subset of the network, the AI can correctly determine the quickest path and total travel time for each pair of lines. We can then combine these results to find the overall fastest route.
When creating an AI task, it's essential to keep the task small and self-contained, give it clear instructions, and connect it to the tools, knowledge, and processes it needs.
For a train timetable task, this might involve providing access to a REST API for real-time train schedules, a knowledge base of passenger alerts and station information, and a process that guides the AI through each step of interpreting the request and composing the answer.
By creating simple, self-contained AI tasks with clear instructions and the necessary tools, we can ensure that AI agents provide accurate and helpful responses, even when dealing with complex, real-world scenarios.
To create a more robust and reliable AI agent, we can chain multiple prompts together and implement guardrails to ensure the accuracy and validity of the information being processed. This approach allows us to break down a complex task into smaller, more manageable steps, while also incorporating validation checks along the way.
In the train timetable example, we can start by creating separate prompts to extract specific pieces of information from the user's request, such as the departure location and arrival location. After each extraction step, we can implement a guardrail to validate the information against a fixed list of supported train stations. This ensures that the AI agent is working with valid input and prevents it from processing requests for unsupported locations.
Similarly, we can create a prompt to extract any time-related information from the user's request, such as "I need to travel in a week's time" or "I want to leave in two weeks." We can then validate this information to ensure that the specified time is in the future and within a reasonable timeframe.
Once we have extracted and validated the necessary information, we can make an API call to determine the fastest route between the departure and arrival locations. The response from the API, along with the previously extracted information, can then be fed into a final prompt that generates a personalised and informative reply for the customer.
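The chained-prompt pipeline described above can be sketched as follows. The extraction functions stand in for individual LLM prompts, the guardrails are plain code checks against a fixed station list, and the route API is a stub; the station names are hypothetical.

```python
# Sketch of a prompt chain with guardrails. extract_stations stands in for an
# LLM extraction prompt; the guardrail is a plain check; the API is a stub.

SUPPORTED_STATIONS = {"Kings Cross", "York", "Leeds", "Edinburgh"}

def extract_stations(message: str) -> tuple[str, str]:
    # In the real chain this is an LLM prompt; a naive scan suffices here.
    found = sorted(
        (s for s in SUPPORTED_STATIONS if s in message),
        key=message.index,  # preserve the order they appear in the message
    )
    if len(found) != 2:
        raise ValueError("could not identify two supported stations")
    return found[0], found[1]

def guardrail_station(name: str) -> None:
    # Reject anything outside the fixed list before it reaches later steps.
    if name not in SUPPORTED_STATIONS:
        raise ValueError(f"unsupported station: {name}")

def fastest_route_api(dep: str, arr: str) -> dict:
    # Stand-in for the real REST call returning route details.
    return {"route": f"{dep} -> {arr}", "minutes": 135}

def handle_request(message: str) -> str:
    dep, arr = extract_stations(message)
    guardrail_station(dep)
    guardrail_station(arr)
    route = fastest_route_api(dep, arr)
    # A final LLM prompt would personalise this; a template stands in.
    return f"Fastest route {route['route']}, about {route['minutes']} minutes."

print(handle_request("I need to get from Kings Cross to York"))
```

Because each guardrail raises immediately on bad input, a failure surfaces at the step that caused it instead of producing a confident but wrong final reply.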
This reply might include details such as the recommended route, estimated travel time, and any relevant passenger alerts or station information. By combining the outputs of multiple prompts and incorporating external data sources, we can create a more comprehensive and useful response for the user.
As a final step, we can include a reflection prompt at the end of the chain. This prompt encourages the AI agent to evaluate its own performance and consider whether it has made the correct use of the available information. This self-reflection step can help identify areas for improvement and ensure that the AI agent is providing accurate and relevant responses.
Some examples of reflection prompts might include asking the agent whether its reply actually answers the user's question, whether it made correct use of the retrieved information, and whether any of the details it stated are unsupported.
By incorporating reflection prompts, we can create a feedback loop that allows the AI agent to continuously learn and refine its responses over time.
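A reflection step of this kind can be as simple as feeding the draft reply and the validated facts back to the model with a checking prompt. The draft and facts below are illustrative placeholders, and the resulting string would go to the LLM as one more prompt in the chain.

```python
# Sketch of a reflection prompt: the draft reply and the facts extracted
# earlier in the chain are handed back to the model for a consistency check.

def build_reflection_prompt(draft_reply: str, facts: dict) -> str:
    fact_lines = "\n".join(f"- {k}: {v}" for k, v in facts.items())
    return (
        "You wrote this reply to a customer:\n"
        f"{draft_reply}\n\n"
        "Verified facts from our systems:\n"
        f"{fact_lines}\n\n"
        "Does the reply contradict any fact, miss any fact, or invent "
        "details? If so, rewrite it; otherwise answer APPROVED."
    )

prompt = build_reflection_prompt(
    "Your train leaves at 09:15 from platform 4.",
    {"departure_time": "09:15", "platform": "2"},
)
# Here the draft contradicts the verified platform, so a capable model
# should rewrite the reply rather than answer APPROVED.
```

Asking for a fixed token like APPROVED also makes the check easy to act on in code: anything else triggers a rewrite before the reply reaches the customer.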
Chaining prompts and implementing guardrails offers several benefits when creating AI agents for complex, real-world tasks: each prompt stays small enough for the model to handle reliably, invalid inputs are caught before they propagate through the chain, and the reflection step gives the agent a chance to catch and correct its own mistakes.
By combining these techniques, we can create more robust, reliable, and effective AI agents that are better equipped to handle the challenges of real-world applications.
The KorticalChat AI Agent Framework is a comprehensive solution that combines the power of individual AI tasks with an intelligent routing layer to create a seamless and efficient conversational experience. At first glance, interacting with an AI agent built using this framework may feel just like chatting with any other conversational AI or chatbot through a standard chat interface. However, beneath this simple surface lies a complex network of layers, tasks, and tools, much like an iceberg hiding its true depth and intricacy beneath the water's surface.
At the core of the KorticalChat AI Agent Framework are AI tasks. Each task is designed to handle a specific aspect of a larger job, such as providing train timetable information, managing lost and found inquiries, or handling ticket sales. These tasks are constructed from a combination of prompts, specialised knowledge, defined processes, integrated tools, and guardrails.
By carefully crafting these tasks and equipping them with the necessary tools and guardrails, we can create AI agents that are capable of handling complex queries and providing accurate, context-specific responses.
Sitting above the individual AI tasks is the AI routing layer. This layer is responsible for analysing incoming user queries and directing them to the most appropriate task for processing.
For example, if a user asks about train timetables, the routing layer will send the query to the task specifically designed to handle train timetable inquiries.
This routing layer ensures that each user query is handled by the most relevant and capable AI task, improving the overall efficiency and effectiveness of the AI agent.
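A routing layer can be sketched as follows. In the framework the router is itself an LLM classification prompt; simple keyword rules stand in here, and the task names are illustrative stubs.

```python
# Sketch of an AI routing layer: pick the task best suited to a query.
# In practice the router is an LLM classification prompt; keyword rules
# stand in here, and each task is a stub for a full prompt chain.

def timetable_task(query: str) -> str:
    return "Checking train timetables..."

def lost_and_found_task(query: str) -> str:
    return "Searching lost property records..."

def ticket_sales_task(query: str) -> str:
    return "Starting a ticket purchase..."

ROUTES = [
    (("timetable", "next train", "departure"), timetable_task),
    (("lost", "left my", "found"), lost_and_found_task),
    (("ticket", "buy", "fare"), ticket_sales_task),
]

def route(query: str):
    q = query.lower()
    for keywords, task in ROUTES:
        if any(k in q for k in keywords):
            return task
    return timetable_task  # fall back to a sensible default task

print(route("When is the next train to Leeds?")("next train to Leeds"))
```

The important property is that each task only ever sees queries it was designed for, which keeps every prompt chain small and focused.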
One of the most powerful features of the AI Agent Framework is that it allows developers to create complex AI agents by combining simpler ones. This is called composability.
Think of it like building with Lego bricks. Each Lego brick is a simple piece, but when you put them together, you can create complex structures. Similarly, with AI Agents, each AI task is like a Lego brick. Once a task has been created and tested, it can be easily added to other tasks to make them more powerful agents capable of handling new situations.
For instance, let's say you have an AI agent that helps customers with product support. You could create a separate AI agent that specialises in handling user feedback and improving the system based on that feedback. With composability, you can easily plug this feedback agent into your product support agent, making it better at its job over time.
By combining simpler AI tasks and agents in this way, developers can create AI agents that are "turtles all the way down." This means that even the most complex AI agents are ultimately made up of smaller, simpler components, just like how even the largest Lego structures are built from individual bricks.
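Composability falls out naturally when every agent shares the same interface, so one agent can wrap another. The two agents below are illustrative stubs of the product-support and feedback agents described above.

```python
# Sketch of composability: agents share one interface (text in, text out),
# so a feedback agent can be plugged into a support agent like a Lego brick.
from typing import Callable

Agent = Callable[[str], str]

def product_support_agent(message: str) -> str:
    return f"Support answer for: {message}"  # stub for a full prompt chain

def feedback_agent(message: str) -> str:
    return "Thanks for the feedback, we've logged it."  # stub

def compose(primary: Agent, feedback: Agent) -> Agent:
    # The combined agent is itself an Agent, so it can be composed further.
    def combined(message: str) -> str:
        if "feedback" in message.lower():
            return feedback(message)
        return primary(message)
    return combined

support_with_feedback = compose(product_support_agent, feedback_agent)
print(support_with_feedback("Some feedback: the search is slow"))
```

Because `compose` returns another `Agent`, the result can be wrapped again, which is the "turtles all the way down" structure in code.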
The KorticalChat AI Agent Framework demonstrates the remarkable potential of AI agents to perform real work and automate complex tasks that currently require human intervention. By leveraging the capabilities of current large language models (LLMs) and the composability of AI tasks and agents, this framework enables the creation of AI agents that can reason about real jobs, use tools in different contexts, and ultimately automate entire segments of labour.
The effectiveness of the KorticalChat AI Agent Framework is not merely theoretical; it is already being applied across a range of industries to automate complex, real-world work.
These real-world applications demonstrate the immense potential of the KorticalChat AI Agent Framework to transform the way we work, by enabling AI agents to take on difficult, cognitively demanding tasks that were previously the exclusive domain of humans.
One of the key challenges in implementing an AI Agent framework is managing costs. As AI agents become more complex, with multiple chained prompts and interactions with large language models (LLMs) like GPT-4, the cost per reply can quickly escalate, potentially rendering many use cases economically unviable.
A potential solution to this problem might be to self-host an LLM, but this approach comes with its own set of challenges. For example, running a high-performance model like Falcon 180B, whose 180 billion parameters give it performance comparable to Google's PaLM 2 (Bard) and not far behind GPT-4, would require a substantial investment in hardware, such as eight A100 80GB GPUs, which could cost around $20,000/month. Additionally, as technology advances and new models emerge, this hardware may need to be upgraded or replaced within a few years, further increasing the overall cost of self-hosting.
To address these cost challenges, the KorticalChat AI Agent Framework takes a more nuanced approach. While creating a great solution that is cheaper than human labour can be difficult, the key is to use the right model for the right task. This involves carefully optimising each AI task and prompt, ensuring that the most cost-effective model is used while still maintaining an acceptable level of performance.
By reserving the more expensive, high-performance LLMs for complex tasks and utilising cheaper, fine-tuned models for simpler tasks, the overall cost of the AI agent can be significantly reduced. This approach also involves leveraging custom-built models where appropriate, which can provide substantial cost savings compared to using general-purpose LLMs.
To illustrate this approach, consider the task of sentiment analysis, which involves determining whether a customer is satisfied or dissatisfied based on their interactions with the AI agent. Rather than relying on expensive GPT-4 calls for this task, the KorticalChat team was able to build a custom model using the Kortical platform, which is approximately 300 times cheaper to run.
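The shape of that substitution looks something like this. The actual sentiment model was built on the Kortical platform; a tiny lexicon scorer stands in here purely to show the frequent, simple task being routed away from an expensive LLM call, and the word lists are illustrative.

```python
# Illustration of swapping an expensive LLM call for a cheap task-specific
# model. A tiny lexicon scorer stands in for the real custom sentiment model.

POSITIVE = {"great", "thanks", "perfect", "helpful", "love"}
NEGATIVE = {"refund", "broken", "terrible", "useless", "angry"}

def cheap_sentiment(message: str) -> str:
    # Roughly 300x cheaper than an LLM call for this narrow, frequent task.
    words = set(message.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "satisfied" if score >= 0 else "dissatisfied"

def classify(message: str) -> str:
    # Sentiment never needs the expensive LLM, so it always takes the
    # cheap path; harder tasks elsewhere in the agent still use the LLM.
    return cheap_sentiment(message)

print(classify("thanks that was really helpful"))  # -> satisfied
```

The saving compounds because sentiment is checked on nearly every exchange, so it is exactly the kind of high-volume, low-difficulty task worth moving off the expensive model first.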
By being smart about model selection, fine-tuning, and custom model development, the KorticalChat AI Agent Framework can significantly reduce costs while still delivering high-quality results. This optimisation process may require additional effort compared to simply creating the initial solution, but the potential cost savings make it well worth the investment.
As AI agents become more sophisticated and cost-effective, they are poised to revolutionise the way businesses operate. Whilst there are concerns about the impact on jobs, it's important to recognise that AI agents are not sentient beings with wants and dreams of their own. Instead, they are powerful tools that can automate a wide range of thought labour tasks, such as pattern matching, natural language interpretation, summarisation, and annotation.
Businesses that fail to adopt AI agent technology risk being left behind by competitors who can offer services at a fraction of the cost. The key to success lies in understanding how to commercialise this new intelligent agent technology effectively and figuring out the killer applications for AI agents in various industries.
To stay ahead of the curve, businesses must embrace AI agents and explore how they can be leveraged to improve efficiency, reduce costs, and create new opportunities. As with any disruptive technology, it may take time to fully realise the potential of intelligent agents, but those who invest early and adapt quickly will be well-positioned to thrive in the new AI-driven landscape.
The world's experts in AI agents are only about a year ahead of the general public, so there's still time for businesses to catch up and become leaders in this space. By understanding the capabilities and limitations of the AI models, breaking down complex jobs into manageable tasks, and optimising performance whilst managing costs, businesses can harness the power of AI agents to drive innovation and growth.
At KorticalChat, we specialise in making it easy for companies to leverage the power of large language models (LLMs) and build AI agents that can transform their operations. Our AI Agent Framework simplifies the process of creating, deploying, and managing AI agents, allowing businesses to focus on their core competencies while reaping the benefits of AI-driven automation.
Whether you're looking to streamline customer support, optimise internal processes, or gain a competitive edge in your industry, KorticalChat can help you unlock the full potential of AI agents. Our team of experienced AI experts will work closely with you to understand your unique needs and develop a customised solution that delivers measurable results.
Ready to take your business to the next level with AI agents? Try Build Your Own AI Agent, or contact us today via the form below to schedule a consultation and discover how KorticalChat can help you harness the power of AI agents.