Getting started with the Griptape Framework

In this post, we’ll walk you through the Griptape Framework and show you how to get started building your first LLM-powered applications with Griptape.

Getting set up

Griptape is a Python framework, so we’ll assume that you already have Python set up on your machine. If not, we recommend you take a look at the longer version of this getting started guide in the Griptape Trade School, which includes instructions for getting Python set up.

Once you have Python installed on the machine that you are going to be using for your first experiments with Griptape, the next step is to create a Python Virtual Environment. A Python virtual environment allows you to create an isolated environment for your project, which can help prevent conflicts with other projects or system dependencies. It also makes it easier to manage and track the specific dependencies required for your project.

Let’s create a Python virtual environment and install Griptape into it. 

Using pip

mkdir griptape-getting-started
cd griptape-getting-started
virtualenv .
source bin/activate
pip install 'griptape[all]'

Using Poetry (https://python-poetry.org/) - we recommend using this approach

mkdir griptape-getting-started
cd griptape-getting-started
poetry init
poetry add 'griptape[all]'
poetry shell

The Griptape Framework is modular, giving you fine-grained control over which parts of the framework you install. This lets you optimize the installation size, or load only the specific functionality that you need. To keep things simple here, we installed all of the Griptape Framework extras, but you can install only the core of the framework by installing or adding griptape rather than griptape[all]. To install specific extras, first add the core dependencies with install or add griptape, then add the extras you need with a command like:

poetry add "griptape[drivers-prompt-anthropic,drivers-vector-mongodb]"

For now, you have everything installed, so let’s move on to test the installation by setting up a simple agent that will let us interact with an LLM from our code.

Setting up credentials for an LLM Provider

To make it easy to get started, the Griptape Framework has some defaults set. The default LLM provider is OpenAI and the default model is OpenAI's GPT-4o model. We’ll show you how to change these defaults later.

To use the default provider and model, you’ll need an OpenAI API key. If you don’t have one of these and don’t know how to create one, you can head over to the longer version of this guide in the Griptape Trade School where there is a guide to creating an OpenAI API Key.

Once you have your OpenAI API key, you need to set the OPENAI_API_KEY environment variable as this is where the Griptape Framework will look for the API key. You can set this environment variable using whatever approach you like.
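Before running any Griptape code, it can be useful to confirm that the variable is actually visible to Python. This quick check uses only the standard library's os module; the variable name is the one Griptape expects, and the check itself is plain Python:

```python
import os

# Griptape reads the key from the OPENAI_API_KEY environment variable,
# so confirm it is visible to Python before going any further.
if os.environ.get("OPENAI_API_KEY"):
    print("OPENAI_API_KEY is set")
else:
    print("OPENAI_API_KEY is missing - set it before running your agent")
```

If the second message prints, the variable was not exported into the environment that Python is running in, which is the most common cause of authentication errors on a first run.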

python-dotenv offers a simple way to manage environment variables in Python that unifies the approach across different operating systems. This package allows you to store environment variables in a file within your project and load them into your code at runtime.

There are three steps to set up python-dotenv.

1. Install the python-dotenv package with pip install or poetry add python-dotenv

2. Create a file named .env in your project folder with the following contents. If you are using git, you'll want to add this file to your .gitignore file to prevent it, and the secrets that it contains, from being checked into your repository. 

OPENAI_API_KEY=your_openai_api_key_here

3. Import the library and load the environment variables by adding the following code to your application:

from dotenv import load_dotenv

load_dotenv() # Load the environment variables

Once your OPENAI_API_KEY environment variable is set, you’re ready to move on to creating your first agent with the Griptape Framework.

Creating your first agent with the Griptape Framework

The simplest way to get started with Griptape is to create an agent that interacts with an LLM on your behalf. Let’s try the following Python code to get started:

from griptape.structures import Agent

agent = Agent()

agent.run("What is a good question to ask a large language model")

You’ll see that information is provided to you as the agent operates, but in this simple example, we don’t do anything with the output. If you want to interact with the output, you can assign the value of agent.run() to a variable and then work with the data stored within that variable.

To try that, let’s run this code instead:

from griptape.structures import Agent

agent = Agent()

agent.run("What's a good thing to ask a large language model?")
result = agent.run("pick one of those examples and provide a response")

print(result.output.value)

In this example, you store the result of the second agent.run() in the variable result and then print the value of its output.

Conversation Memory

There is something else going on here as well. We kept the first call to the run method, which generated a list of good questions to ask a large language model. We then added a second call with the input "pick one of those examples and provide a response", and the agent did exactly that.

This might lead you to ask ‘how did this happen?’ Each of the runs was a separate call to the API provided by the LLM, OpenAI in this case, so how did state get maintained across the two calls? The answer is that Griptape Agents have memory. This Conversation Memory, to give it its full name, allows Griptape to keep track of the conversation across runs. 

Agents are one of the types of Structures that Griptape supports, and in common with all other types of Griptape Structure, they are created with Conversation Memory by default. We will explore other types of Structure, including Workflows and Pipelines, in future getting started posts.

Generalizing Agents with Parameters 

Let’s try this code:

from griptape.structures import Agent

agent = Agent(
    input="Write me a {{ args[0] }} about {{ args[1] }} and {{ args[2] }}",
)

agent.run("Sonnet", "Skateboards", "Artificial Intelligence")

This approach allows us to generalize a prompt and use it with multiple different input parameters, simplifying prompt reuse.
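Those placeholder strings are Jinja2 templates, with the positional arguments you pass to run() exposed to the template as args. To see the rendering step in isolation, you can use the jinja2 package directly; this sketch doesn't involve Griptape at all, it just shows what the template produces:

```python
from jinja2 import Template

# Griptape-style input templates are Jinja2 templates; run() arguments
# are exposed to the template as the `args` list.
template = Template("Write me a {{ args[0] }} about {{ args[1] }} and {{ args[2] }}")
prompt = template.render(args=["Sonnet", "Skateboards", "Artificial Intelligence"])
print(prompt)  # -> Write me a Sonnet about Skateboards and Artificial Intelligence
```

Because the template is ordinary Jinja2, you can also use filters, conditionals, and loops in an agent's input if you need more elaborate prompt construction.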

Using different model providers

Now let’s experiment with different model providers. 

Under the hood, Griptape uses Prompt Drivers to make API calls to LLM providers. So earlier when we said that the default LLM provider is OpenAI and the default model is the OpenAI GPT-4o model, it would have been more accurate for us to say that the default Prompt Driver is the OpenAiChatPromptDriver with the model parameter set to gpt-4o.

Manually configuring an agent with these defaults would look something like this:

from griptape.drivers import OpenAiChatPromptDriver
from griptape.structures import Agent

agent = Agent(
    prompt_driver=OpenAiChatPromptDriver(model="gpt-4o"),
)

agent.run("What is a good question to ask a large language model")

Here you can see that we import the OpenAiChatPromptDriver and set the model parameter to gpt-4o. From here, you can probably guess how to switch the model. Let’s say you wanted to use gpt-4o-mini instead. To do this, you would change the model parameter when you create the prompt driver.

from griptape.drivers import OpenAiChatPromptDriver
from griptape.structures import Agent

agent = Agent(
    prompt_driver=OpenAiChatPromptDriver(model="gpt-4o-mini"),
)

agent.run("What is a good question to ask a large language model")

If you do this, you should notice the improvement in response speed with the lower latency model.

To change to a different model provider there are a few extra steps.

  1. Import the prompt driver for the LLM provider that you wish to use
  2. Update the prompt_driver attribute when you create an instance of Agent
  3. Set the model and any other required parameters for the driver
  4. If applicable, set an environment variable with the credentials for the provider

So, to use Anthropic Claude 3 Opus rather than OpenAI GPT-4o, you would add this import statement to your code:

from griptape.drivers import AnthropicPromptDriver

Then you would create your agent with:

prompt_driver=AnthropicPromptDriver(model="claude-3-opus-20240229")

And lastly, set the ANTHROPIC_API_KEY in your environment to your Anthropic API key, or if you're using python-dotenv, add ANTHROPIC_API_KEY=your_anthropic_api_key_here to the .env file in your project directory.

Your code would look like this:

from griptape.drivers import AnthropicPromptDriver
from griptape.structures import Agent

agent = Agent(
    prompt_driver=AnthropicPromptDriver(model="claude-3-opus-20240229"),
)

agent.run("What is a good question to ask a large language model")

Using Ollama doesn’t require an API key to be set in an environment variable, but you do need to have access to Ollama. Running Ollama locally on your machine is perfect for testing. For Ollama with Llama 3.2, your code would look like this:

from griptape.drivers import OllamaPromptDriver
from griptape.structures import Agent

agent = Agent(
    prompt_driver=OllamaPromptDriver(model="llama3.2"),
)

agent.run("What is a good question to ask a large language model")

Using Tools

Let’s wrap up with a quick look at tools. Tools give large language models the capacity to invoke external APIs, access data sets, or otherwise expand their capabilities. Tools can offer a dramatic boost to the abilities of LLMs, making it possible for them to complete tasks that can be difficult or even impossible for LLMs to achieve without them. 

The Tools section of the Griptape Framework documentation provides a variety of ‘official’ tools together with guidance on how to get started in building your own custom tools. For now, let’s get started with some of the official tools to give you an idea of how they can extend an LLM’s capabilities.

The Wikipedia page on the Roman Empire is one of the site's largest, at over 230,000 words in length. Let’s extract some specific facts, a list of notable cities in the Roman Empire, from that page using Griptape in combination with the Web Scraper tool. For this example, we’re going to use Anthropic Claude 3 Opus. Note that we have also turned on DEBUG level logging so that you can observe how the LLM interacts with the tool. Setting logging to ERROR instead removes the INFO messages that you saw earlier, which is useful once you move your application into production.

import logging
from griptape.configs import Defaults
from griptape.structures import Agent
from griptape.tools import WebScraperTool, QueryTool
from griptape.drivers import AnthropicPromptDriver
from griptape.configs.logging import JsonFormatter


logger = logging.getLogger(Defaults.logging_config.logger_name)
logger.setLevel(logging.DEBUG)
logger.handlers[0].setFormatter(JsonFormatter())

agent = Agent(
    prompt_driver=AnthropicPromptDriver(model="claude-3-opus-20240229"),
    tools=[WebScraperTool(off_prompt=True), QueryTool()],
)

agent.run(
    "Tell me about some notable cities mentioned in this article https://en.wikipedia.org/wiki/Roman_Empire"
)

If you run this code, you should find that the model correctly identifies several notable cities in the Roman Empire including Rome, Constantinople, Alexandria, Antioch, and more. It’s important to note that, as you can see in the DEBUG logging, these results come from the Wikipedia article, not from general knowledge encoded in the model during training.

There’s lots more that you can do with tools, but this should give you a taste of their power and potential in extending the capabilities of LLMs.

How can you get started?

That wraps up this introduction to the Griptape Framework. If you want to learn more, we recommend you check out the following resources:

The Griptape Framework on Github

Griptape Docs, including the Docs for the Griptape Framework

Find more long-form getting started content on the Griptape Trade School, and if you have any questions, please join us on the Griptape Discord.