Griptape v0.21.0
Hey Griptapers! We’re super excited to share the latest update of Griptape, v0.21.0! This update packs a punch with new features, significant improvements, and some important changes. Let’s jump into what’s new and what you can expect from this release.
Breaking Changes
First up, let’s address an important breaking change. We’ve relocated the ProxycurlClient Tool to its own dedicated repository, aligning with our new Tool Contributing Guidelines. We hope this Tool can serve as a useful reference for anyone looking to build their own Tool; we’re excited to see what the community creates!
We’ve also renamed certain Hugging Face files for improved consistency across the codebase. If you’re importing from any of these files directly, please update them accordingly:
hugging_face_hub_prompt_driver.py -> huggingface_hub_prompt_driver.py
hugging_face_pipeline_prompt_driver.py -> huggingface_pipeline_prompt_driver.py
hugging_face_tokenizer.py -> huggingface_tokenizer.py
New Features
Image Generation Drivers
The star feature of v0.21.0 is the introduction of Image Generation Drivers. These Drivers let you generate images from text prompts using various image generation backends. Currently, we support Drivers for OpenAI DALL-E, Azure OpenAI DALL-E, Leonardo AI, Amazon Bedrock Stable Diffusion, and Amazon Bedrock Titan Generator.
Check out this amazing example from team member shhlife, demonstrating how to use OpenAI DALL-E with a Griptape Workflow for creating diverse image variations in parallel:
Hugging Face Hub Embedding Driver
Another exciting addition in v0.21.0 is the HuggingFaceHubEmbeddingDriver. This Driver enables you to create text embeddings with the plethora of models available on Hugging Face Hub under the “Feature Extraction” Task. Here’s an example showing how you can use this Embedding Driver:
Hugging Face Hub Streaming
We’ve also upgraded our HuggingFaceHubPromptDriver to utilize the new Inference Client, enabling additional Hugging Face endpoints and a straightforward implementation of streaming. Enable it as you would with any other Driver!
Chat Streaming
The Chat utility has also been upgraded to include streaming functionality! Now the responses from the LLM are displayed in incremental chunks as opposed to all at once, giving a more fluid chat feel. If you’re using a Prompt Driver with stream=True, the Chat utility will automatically stream the results back. A big shoutout to mattma1970 for this contribution! Here’s how you can try it:
If you’ve overridden the Prompt Driver, you will need to pass stream=True to the Driver instead.
Simple Tokenizer
This release also introduces the SimpleTokenizer, a new Tokenizer designed for compatibility with LLM providers lacking dedicated tokenization APIs. We’ve integrated SimpleTokenizer into both BedrockTitanTokenizer and BedrockJurassicTokenizer, assigning a characters_per_token ratio of 6.
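The idea is easy to picture: token counts are estimated from character counts rather than computed against a real vocabulary. Here’s an illustrative plain-Python sketch of that estimation (not the actual SimpleTokenizer implementation, just the concept):

```python
class SimpleTokenizerSketch:
    """Illustrative only: estimate tokens as characters / ratio."""

    def __init__(self, max_tokens: int, characters_per_token: int):
        self.max_tokens = max_tokens
        self.characters_per_token = characters_per_token

    def count_tokens(self, text: str) -> int:
        # Ceiling division, so short non-empty strings still count
        # as at least one token.
        return -(-len(text) // self.characters_per_token)

    def count_tokens_left(self, text: str) -> int:
        return max(self.max_tokens - self.count_tokens(text), 0)


# With the 6-characters-per-token ratio used for the Bedrock Tokenizers:
tokenizer = SimpleTokenizerSketch(max_tokens=4096, characters_per_token=6)
print(tokenizer.count_tokens("Hello, Griptape!"))  # 16 chars -> 3 tokens
```

It’s a rough heuristic, of course, but it’s good enough to keep prompts within a provider’s context window when no official tokenizer exists.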
Prompt Inspection
A frequently requested feature was the ability to inspect the prompt built by Griptape before it’s sent to the LLM. You can now utilize the StartPromptEvent’s fields prompt and prompt_stack to see the fully rendered prompt string and the list of pre-render messages, respectively.
Tool Creation Course
Are you interested in developing your own Griptape Tool? If so, check out our new Griptape Trade School course! And when you’re ready, the Tool Template and Proxycurl Client are great jumping-off points.
Other Improvements
Besides the standout features, we’ve made several fixes and enhancements:
- Added support for Python 3.12, making Griptape compatible with Python versions >=3.9.
- Fixed an issue with deserializing Summary Conversation Memory, making it ideal for long-running conversations.
- Introduced a dedicated documentation page for Tokenizers.
For a full rundown of changes, take a look at our GitHub releases.
Wrapping Up
It’s been an exhilarating month at Griptape, from launching new framework modalities to preparing for the Griptape Cloud private preview and even receiving a mention at the re:Invent keynote! We can’t wait to see what the community creates with these new features, and we’ve got plenty more in store. To stay updated or just drop by for a chat, join us on Discord!