
models - correct tool result content #154


Merged: 6 commits merged from models-recurse-tool-results into strands-agents:main on Jun 2, 2025

Conversation

@pgrayy (Member) commented May 30, 2025

Description

For the OpenAI, LiteLLM, LlamaAPI, and Ollama model providers, we are currently json.dumps'ing the entire tool result contents. This PR updates the formatting to more closely match each provider's allowed content specifications. This should improve response quality and also avoid the confusing json.dumps serialization errors that are raised when tool results contain binary data.
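
To illustrate the intent (a rough sketch only: the function name, the exact block shapes, and the OpenAI-style part format are assumptions, not the PR's actual code), the change amounts to mapping each tool result content block to the provider's own content-part format instead of json.dumps'ing the whole result:

import base64
import json


def format_tool_result_content(content: dict) -> dict:
    """Map one tool result content block to an OpenAI-style message part.

    Illustrative only: the block shapes follow the content blocks used in
    the test script below; the real PR handles each provider's spec.
    """
    if "text" in content:
        return {"type": "text", "text": content["text"]}
    if "json" in content:
        # Serialize only the JSON payload, not the entire tool result.
        return {"type": "text", "text": json.dumps(content["json"])}
    if "image" in content:
        # Base64-encode binary image bytes instead of handing them to
        # json.dumps, which would raise a confusing serialization error.
        image = content["image"]
        encoded = base64.b64encode(image["source"]["bytes"]).decode("utf-8")
        return {
            "type": "image_url",
            "image_url": {"url": f"data:image/{image['format']};base64,{encoded}"},
        }
    raise TypeError(f"unsupported tool result content block: {list(content)}")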

Type of Change

  • Bug fix
  • New feature
  • Breaking change
  • Documentation update
  • Other (please describe): Enhancing the formatting of the tool result contents for each model provider.

Testing

  • hatch fmt --linter
  • hatch fmt --formatter
  • hatch test --all
  • Verify that the changes do not break functionality or introduce warnings in consuming repositories: agents-docs, agents-tools, agents-cli
  • hatch run test-lint
  • python test_models.py: Ran the following script against each of the affected model providers:
# Conversation fixture: a user prompt, the assistant's tool use requests,
# and the corresponding tool results for the model to summarize.
messages = [
    {
        "role": "user",
        "content": [
            {
                "text": "What is the current time and weather in New York."
            },
        ],
    },
    {
        "role": "assistant",
        "content": [
            {
                "toolUse": {
                    "input": {
                        "city": "New York",
                    },
                    "name": "current_time",
                    "toolUseId": "t1",
                },
            },
            {
                "toolUse": {
                    "input": {
                        "city": "New York",
                    },
                    "name": "current_weather",
                    "toolUseId": "w1",
                },
            },
        ],
    },
    {
        "role": "user",
        "content": [
            {
                "toolResult": {
                    "content": [
                        {
                            "text": "12:31",
                        },
                    ],
                    "status": "success",
                    "toolUseId": "t1",
                },
            },
            {
                "toolResult": {
                    "content": [
                        {
                            "text": "Sunny",
                        },
                    ],
                    "status": "success",
                    "toolUseId": "w1",
                },
            },
        ],
    },
]
# Stream the response and print each event chunk.
for chunk in model.converse(messages):
    print(chunk)

Everything is working as expected.
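
For context, the model object in the script above was constructed separately for each provider. A hypothetical Ollama setup (constructor arguments are assumptions, not taken from the PR) looks roughly like:

# Hypothetical provider setup; host and model_id values are assumptions.
# The same script was repeated for the other affected providers
# (OpenAI, LiteLLM, LlamaAPI).
from strands.models.ollama import OllamaModel

model = OllamaModel("http://localhost:11434", model_id="llama3")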

Checklist

  • I have read the CONTRIBUTING document
  • I have added tests that prove my fix is effective or my feature works
  • I have updated the documentation accordingly
  • I have added an appropriate example to the documentation to outline the feature
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@pgrayy requested a review from a team as a code owner May 30, 2025 19:55
Unshure previously approved these changes Jun 2, 2025
@pgrayy merged commit 76cd7ba into strands-agents:main on Jun 2, 2025
10 checks passed
jer96 pushed a commit to lukehau/sdk-python that referenced this pull request Jun 3, 2025
@pgrayy deleted the models-recurse-tool-results branch June 10, 2025 14:40