Griptape Framework v0.22.0

We’ve been hard at work over the New Year and are excited to announce the release of Griptape v0.22.0, available now! This is a big update with a ton of new features, so let’s dive in.

Breaking Changes

There are a few changes that aren’t compatible with code written for earlier versions of Griptape. Let’s discuss how you’ll need to update existing projects to work with v0.22.0.

First: we’ve updated the naming convention for Drivers, Engines, Tasks, and Tools to support the new functionality surrounding image generation. If you’re using any of the following, be sure to update your code:

  • AmazonBedrockStableDiffusionImageGenerationModelDriver is now BedrockStableDiffusionImageGenerationModelDriver.
  • AmazonBedrockTitanImageGenerationModelDriver is now BedrockTitanImageGenerationModelDriver.
  • ImageGenerationTask is now PromptImageGenerationTask.
  • ImageGenerationEngine is now PromptImageGenerationEngine.
  • ImageGenerationClient is now PromptImageGenerationClient.

Additionally, we renamed BedrockTitanEmbeddingDriver to AmazonBedrockTitanEmbeddingDriver to match the naming convention used by other Embedding Drivers.
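
If your code references any of these classes, the update is a mechanical rename. For example:

# Before v0.22.0:
# from griptape.drivers import AmazonBedrockStableDiffusionImageGenerationModelDriver

# In v0.22.0:
from griptape.drivers import BedrockStableDiffusionImageGenerationModelDriver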

Finally, there’s been an update to the way we provide input to Tasks. For Tasks that take text, the field has been renamed from input_template to input.
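
Here’s a minimal sketch of that change using a PromptTask (the template string is just an illustration):

from griptape.tasks import PromptTask

# Before v0.22.0:
# task = PromptTask(input_template="Summarize this text: {{ args[0] }}")

# In v0.22.0:
task = PromptTask(input="Summarize this text: {{ args[0] }}")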

New Features

BedrockLlamaPromptModelDriver

With the new BedrockLlamaPromptModelDriver, you can now use a Meta Llama 2 model hosted by Amazon Bedrock as the underlying LLM powering a Griptape Structure. Here’s an example showing an Agent configured to use this Model Driver:

from griptape.structures import Agent
from griptape.drivers import AmazonBedrockPromptDriver, BedrockLlamaPromptModelDriver

agent = Agent(
    prompt_driver=AmazonBedrockPromptDriver(
        model="meta.llama2-13b-chat-v1",
        prompt_model_driver=BedrockLlamaPromptModelDriver(),
    ),
    task_memory=None,
)

agent.run(
    "Write an article about the impact of high inflation on a country's GDP"
)

Claude 2.1 Support

Griptape now supports Anthropic Claude 2.1 in both the AnthropicPromptDriver (for models hosted by Anthropic) and the BedrockClaudePromptModelDriver (for models hosted by Amazon Bedrock). When using the AnthropicPromptDriver, specify claude-2.1 in the model field to use this version of the model:

import os

from griptape.structures import Agent
from griptape.drivers import AnthropicPromptDriver

agent = Agent(
    prompt_driver=AnthropicPromptDriver(
        model="claude-2.1",
        api_key=os.environ["ANTHROPIC_API_KEY"],
    ),
    task_memory=None,
)

agent.run("Where is the best place to see cherry blossoms in Japan?")

Multi-Tokenizer Support in OpenAiChatPromptDriver

Griptape now supports overriding the tokenizer field in the OpenAiChatPromptDriver, enabling the use of drop-in third-party services compatible with OpenAI’s SDK. In this example, we point the OpenAiChatPromptDriver at a TogetherAI credential and API endpoint, and configure it with a HuggingFaceTokenizer that matches the target model:

import os

from griptape.structures import Agent
from griptape.drivers import OpenAiChatPromptDriver
from griptape.tokenizers import HuggingFaceTokenizer
from transformers import AutoTokenizer

agent = Agent(
    prompt_driver=OpenAiChatPromptDriver(
        api_key=os.environ["TOGETHERAI_API_KEY"],
        base_url=os.environ["TOGETHERAI_BASE_URL"],
        model="mistralai/Mixtral-8x7B-Instruct-v0.1",
        tokenizer=HuggingFaceTokenizer(
            tokenizer=AutoTokenizer.from_pretrained(
                "mistralai/Mixtral-8x7B-Instruct-v0.1"
            )
        ),
    ),
    task_memory=None,
)

agent.run(
    "What is the trunk circumference of a mature cedar tree?"
)

AmazonSageMakerEmbeddingDriver

The new AmazonSageMakerEmbeddingDriver and two new Embedding Model Drivers, SageMakerHuggingFaceEmbeddingModelDriver and SageMakerTensorFlowHubEmbeddingModelDriver, allow developers to generate embeddings using Hugging Face and TensorFlow Hub endpoints hosted on Amazon SageMaker. This example shows how to configure an Embedding Driver to use a Hugging Face model deployed via SageMaker:

import os

from griptape.drivers import AmazonSageMakerEmbeddingDriver, SageMakerHuggingFaceEmbeddingModelDriver

driver = AmazonSageMakerEmbeddingDriver(
    model=os.environ["SAGEMAKER_HUGGINGFACE_MODEL"],
    embedding_model_driver=SageMakerHuggingFaceEmbeddingModelDriver(),
)

embeddings = driver.embed_string("Hello world!")

# display the first 3 components of the embedding vector
print(embeddings[:3])
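
An Embedding Driver like this one can also back a Vector Store. Here’s a sketch that pairs it with the existing LocalVectorStoreDriver; the upserted text is just an illustration:

from griptape.drivers import LocalVectorStoreDriver

vector_store = LocalVectorStoreDriver(embedding_driver=driver)

vector_store.upsert_text("Griptape is a framework for building LLM-powered applications.")
results = vector_store.query("What is Griptape?", count=1)
print(results)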

Image-to-Image Functionality

In v0.21.0, we added the ability to generate images from text prompts. Griptape v0.22.0 builds on that with a suite of image-to-image generation features, providing three ways to generate images from reference images: variation, inpainting, and outpainting. These capabilities are exposed through new Tasks and Tools and are supported by several new Engines, Drivers, and Model Drivers.

Image Variation

Use the new VariationImageGenerationTask or VariationImageGenerationClient to create new images using a prompt and a reference image. In these examples, we’ll use the following image of a mountain at sunrise, generated previously by Stable Diffusion, as a reference:

Variation image input

In the code below, we build a BedrockStableDiffusionImageGenerationModelDriver that uses Stable Diffusion’s origami style preset, one of several supported by the model. For a complete list of supported style presets, see the Stable Diffusion API documentation.

import os

import boto3

from griptape.artifacts import TextArtifact
from griptape.loaders import ImageLoader
from griptape.drivers import AmazonBedrockImageGenerationDriver, BedrockStableDiffusionImageGenerationModelDriver
from griptape.engines import VariationImageGenerationEngine
from griptape.tasks import VariationImageGenerationTask
from griptape.structures import Agent


driver = AmazonBedrockImageGenerationDriver(
    image_generation_model_driver=BedrockStableDiffusionImageGenerationModelDriver(
        style_preset="origami",
    ),
    session=boto3.Session(
        profile_name=os.environ["AWS_PROFILE_NAME"],
        region_name=os.environ["AWS_REGION_NAME"],
    ),
    model="stability.stable-diffusion-xl-v0",
)

engine = VariationImageGenerationEngine(
    image_generation_driver=driver
)

prompt = TextArtifact("A mountain scene at sunrise")
image = ImageLoader().load("path/to/mountain.png")

task = VariationImageGenerationTask(
    input=(prompt, image),
    image_generation_engine=engine,
    output_dir="images/",
)

Agent(tasks=[task]).run()

Note how inputs to the VariationImageGenerationTask are specified: unlike most Tasks, image-to-image generation Tasks require multiple input Artifacts, provided as a tuple. For the VariationImageGenerationTask, the first element of the tuple must be the prompt and the second must be the reference image.

Here’s the result, an origami-style interpretation of our mountain scene:

Image generated through variation

Image Inpainting

The InpaintingImageGenerationTask and InpaintingImageGenerationClient allow developers to create new images by using a mask image to specify which areas of a reference image should be regenerated.

In this example, we’ll use the familiar mountain image as a reference. The mask image contains a black rectangle in the area that corresponds to the top half of the large peak in the center of the image — this will limit the model to only updating that region of the image.

Inpainting mask input

Here’s the code: we create an AmazonBedrockImageGenerationDriver configured for Stable Diffusion via a BedrockStableDiffusionImageGenerationModelDriver, pass the Driver to an InpaintingImageGenerationEngine, then use this Engine to create an InpaintingImageGenerationTask. The Task accepts an input tuple containing three elements: (prompt, image, mask).

import os

import boto3

from griptape.artifacts import TextArtifact
from griptape.loaders import ImageLoader
from griptape.drivers import AmazonBedrockImageGenerationDriver, \
    BedrockStableDiffusionImageGenerationModelDriver
from griptape.engines import InpaintingImageGenerationEngine
from griptape.tasks import InpaintingImageGenerationTask
from griptape.structures import Agent


driver = AmazonBedrockImageGenerationDriver(
    image_generation_model_driver=BedrockStableDiffusionImageGenerationModelDriver(),
    session=boto3.Session(
        profile_name=os.environ["AWS_PROFILE_NAME"],
        region_name=os.environ["AWS_REGION_NAME"],
    ),
    model="stability.stable-diffusion-xl-v0",
)

engine = InpaintingImageGenerationEngine(
    image_generation_driver=driver
)

prompt = TextArtifact("A majestic castle perched atop a mountain")
image = ImageLoader().load("path/to/mountain.png")
mask = ImageLoader().load("path/to/mountain-mask.png")

task = InpaintingImageGenerationTask(
    input=(prompt, image, mask),
    image_generation_engine=engine,
    output_dir="images/",
)

Agent(tasks=[task]).run()

As with the Variation Task, the Inpainting Task takes a tuple of input Artifacts, this time with three elements: the first is the prompt, the second is the reference image, and the third is the mask image.

After running the Task, we can take a look at our result. A castle was added to the top of the mountain in the area specified by the mask, but the rest of the image remains the same.

Image generated through inpainting

Image Outpainting

Finally, use the OutpaintingImageGenerationTask and OutpaintingImageGenerationClient to generate new image content outside the bounds of a mask. A common use of outpainting is extending an existing image beyond its original borders.

We’ll begin with a scaled-down version of the same input image, with whitespace added to the margins. This whitespace is where new image data will be added by the model.

Outpainting image input

Here’s our mask input. Note that in this case, we’ve specified that the generation should happen in the margins, leaving the initial image largely untouched.

Outpainting mask input

Now, the code. We’re still using Stable Diffusion via Bedrock, so we’ll only need to update the Engine (to an OutpaintingImageGenerationEngine) and the Task (to an OutpaintingImageGenerationTask).

import os 

import boto3

from griptape.artifacts import TextArtifact
from griptape.loaders import ImageLoader
from griptape.drivers import AmazonBedrockImageGenerationDriver, BedrockStableDiffusionImageGenerationModelDriver
from griptape.engines import OutpaintingImageGenerationEngine
from griptape.tasks import OutpaintingImageGenerationTask
from griptape.structures import Agent


driver = AmazonBedrockImageGenerationDriver(
    image_generation_model_driver=BedrockStableDiffusionImageGenerationModelDriver(),
    session=boto3.Session(
        profile_name=os.environ["AWS_PROFILE_NAME"],
        region_name=os.environ["AWS_REGION_NAME"],
    ),
    model="stability.stable-diffusion-xl-v0",
)

engine = OutpaintingImageGenerationEngine(
    image_generation_driver=driver
)

prompt = TextArtifact("a photograph of a mountain scene, outdoors, taken from a nearby peak")
image = ImageLoader().load("/path/to/mountain-outpainting-input.png")
mask = ImageLoader().load("/path/to/mountain-outpainting-mask.png")

task = OutpaintingImageGenerationTask(
    input=(prompt, image, mask),
    image_generation_engine=engine,
    output_dir="images/",
)

Agent(tasks=[task]).run()

Just as with the Inpainting Task, the Outpainting Task requires an input tuple with three elements: the prompt, the reference image, and the mask.

After running the agent, success! Stable Diffusion has imagined the rest of the image, creating believable continuations of the mountain and cloud patterns provided in the input image. Take a look:

Image generated through outpainting

Other Changes and Improvements

  • Fixed an issue where the MongoDbAtlasVectorStore didn’t properly use the namespace input when executing a query.
  • Fixed many type errors throughout the Framework.
  • Removed an unused section of the ToolTask system prompt template, saving tokens!
  • Fixed an issue where Structure execution args were being cleared after a run, which prevented inspection of the Structure’s input.
  • Handled a previously unhandled exception in SqlClient.

For an exhaustive list of new features, changes, and improvements, take a look at our GitHub release page.

Wrapping Up

As always, many thanks to the contributors who have volunteered their time to suggest, test, and even implement the changes we’re releasing. There’s a lot more on the way, so stay tuned here, on GitHub, and in Discord for news on what’s next. We can’t wait to see what you build with Griptape!