release: 3.0.0-beta.1 #30

Merged
merged 12 commits on Jul 31, 2025
6 changes: 3 additions & 3 deletions .github/workflows/ci.yml
@@ -16,7 +16,7 @@ jobs:
lint:
timeout-minutes: 10
name: lint
-runs-on: ${{ github.repository == 'stainless-sdks/gradientai-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
+runs-on: ${{ github.repository == 'stainless-sdks/gradient-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
if: github.event_name == 'push' || github.event.pull_request.head.repo.fork
steps:
- uses: actions/checkout@v4
@@ -36,7 +36,7 @@ jobs:
run: ./scripts/lint

build:
-if: github.repository == 'stainless-sdks/gradientai-python' && (github.event_name == 'push' || github.event.pull_request.head.repo.fork)
+if: github.repository == 'stainless-sdks/gradient-python' && (github.event_name == 'push' || github.event.pull_request.head.repo.fork)
timeout-minutes: 10
name: build
permissions:
@@ -76,7 +76,7 @@ jobs:
test:
timeout-minutes: 10
name: test
-runs-on: ${{ github.repository == 'stainless-sdks/gradientai-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
+runs-on: ${{ github.repository == 'stainless-sdks/gradient-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
if: github.event_name == 'push' || github.event.pull_request.head.repo.fork
steps:
- uses: actions/checkout@v4
4 changes: 2 additions & 2 deletions .github/workflows/publish-pypi.yml
@@ -1,6 +1,6 @@
# This workflow is triggered when a GitHub release is created.
# It can also be run manually to re-publish to PyPI in case it failed for some reason.
-# You can run this workflow by navigating to https://www.github.com/digitalocean/gradientai-python/actions/workflows/publish-pypi.yml
+# You can run this workflow by navigating to https://www.github.com/digitalocean/gradient-python/actions/workflows/publish-pypi.yml
name: Publish PyPI
on:
workflow_dispatch:
@@ -28,4 +28,4 @@ jobs:
run: |
bash ./bin/publish-pypi
env:
-PYPI_TOKEN: ${{ secrets.GRADIENT_AI_PYPI_TOKEN || secrets.PYPI_TOKEN }}
+PYPI_TOKEN: ${{ secrets.GRADIENT_PYPI_TOKEN || secrets.PYPI_TOKEN }}
4 changes: 2 additions & 2 deletions .github/workflows/release-doctor.yml
@@ -9,7 +9,7 @@ jobs:
release_doctor:
name: release doctor
runs-on: ubuntu-latest
-if: github.repository == 'digitalocean/gradientai-python' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch' || startsWith(github.head_ref, 'release-please') || github.head_ref == 'next')
+if: github.repository == 'digitalocean/gradient-python' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch' || startsWith(github.head_ref, 'release-please') || github.head_ref == 'next')

steps:
- uses: actions/checkout@v4
@@ -18,4 +18,4 @@ jobs:
run: |
bash ./bin/check-release-environment
env:
-PYPI_TOKEN: ${{ secrets.GRADIENT_AI_PYPI_TOKEN || secrets.PYPI_TOKEN }}
+PYPI_TOKEN: ${{ secrets.GRADIENT_PYPI_TOKEN || secrets.PYPI_TOKEN }}
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
-".": "0.1.0-beta.4"
+".": "3.0.0-beta.1"
}
4 changes: 2 additions & 2 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 170
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fgradientai-015417b36365dfcb32166e67379c38de8bf5127c33dff646097a819a7b4dc588.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fgradient-015417b36365dfcb32166e67379c38de8bf5127c33dff646097a819a7b4dc588.yml
openapi_spec_hash: d7d811c13cc79f15d82fe680cf425859
-config_hash: 3ad1734779befb065101197f2f35568c
+config_hash: 77ddef130940a6ad8ea6c6f66aee8757
318 changes: 169 additions & 149 deletions CHANGELOG.md

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions CONTRIBUTING.md
@@ -36,7 +36,7 @@ $ pip install -r requirements-dev.lock

Most of the SDK is generated code. Modifications to code will be persisted between generations, but may
result in merge conflicts between manual patches and changes from the generator. The generator will never
-modify the contents of the `src/do_gradientai/lib/` and `examples/` directories.
+modify the contents of the `src/gradient/lib/` and `examples/` directories.

## Adding and running examples

@@ -62,7 +62,7 @@ If you’d like to use the repository from source, you can either install from g
To install via git:

```sh
-$ pip install git+ssh://[email protected]/digitalocean/gradientai-python.git
+$ pip install git+ssh://[email protected]/digitalocean/gradient-python.git
```

Alternatively, you can build from source and install the wheel file:
@@ -120,7 +120,7 @@ the changes aren't made through the automated pipeline, you may want to make rel

### Publish with a GitHub workflow

-You can release to package managers by using [the `Publish PyPI` GitHub action](https://www.github.com/digitalocean/gradientai-python/actions/workflows/publish-pypi.yml). This requires a setup organization or repository secret to be set up.
+You can release to package managers by using [the `Publish PyPI` GitHub action](https://www.github.com/digitalocean/gradient-python/actions/workflows/publish-pypi.yml). This requires a setup organization or repository secret to be set up.

### Publish manually

2 changes: 1 addition & 1 deletion LICENSE
@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.

-Copyright 2025 Gradient AI
+Copyright 2025 Gradient

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
104 changes: 52 additions & 52 deletions README.md
@@ -6,7 +6,7 @@
> Use with care in production environments and keep an eye on releases for updates or breaking changes.

<!-- prettier-ignore -->
-[![PyPI version](https://img.shields.io/pypi/v/do_gradientai.svg?label=pypi%20(stable))](https://pypi.org/project/do_gradientai/)
+[![PyPI version](https://img.shields.io/pypi/v/gradient.svg?label=pypi%20(stable))](https://pypi.org/project/gradient/)
[![Docs](https://img.shields.io/badge/Docs-8A2BE2)](https://gradientai.digitalocean.com/getting-started/overview/)

The Gradient Python library provides convenient access to the Gradient REST API from any Python 3.8+
@@ -25,7 +25,7 @@ The full API of this library can be found in [api.md](api.md).

```sh
# install from PyPI
-pip install --pre do_gradientai
+pip install --pre gradient
```

## Usage
@@ -39,18 +39,18 @@ The full API of this library can be found in [api.md](api.md).

```python
import os
-from do_gradientai import GradientAI
+from gradient import Gradient

-api_client = GradientAI(
-api_key=os.environ.get("GRADIENTAI_API_KEY"), # This is the default and can be omitted
+client = Gradient(
+api_key=os.environ.get("GRADIENT_API_KEY"), # This is the default and can be omitted
)
-inference_client = GradientAI(
+inference_client = Gradient(
inference_key=os.environ.get(
-"GRADIENTAI_INFERENCE_KEY"
+"GRADIENT_INFERENCE_KEY"
), # This is the default and can be omitted
)
-agent_client = GradientAI(
-agent_key=os.environ.get("GRADIENTAI_AGENT_KEY"), # This is the default and can be omitted
+agent_client = Gradient(
+agent_key=os.environ.get("GRADIENT_AGENT_KEY"), # This is the default and can be omitted
agent_endpoint="https://my-agent.agents.do-ai.run",
)

@@ -92,20 +92,20 @@ print(agent_response.choices[0].message.content)

While you can provide an `api_key`, `inference_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
-to add `GRADIENTAI_API_KEY="My API Key"`, `GRADIENTAI_INFERENCE_KEY="My INFERENCE Key"` to your `.env` file
+to add `GRADIENT_API_KEY="My API Key"`, `GRADIENT_INFERENCE_KEY="My INFERENCE Key"` to your `.env` file
so that your keys are not stored in source control.
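The lookup order described above — an explicit keyword argument first, then the environment variable — can be sketched in plain Python. `resolve_api_key` is a hypothetical helper for illustration only, not part of the SDK:

```python
import os

# Simulate a key loaded from a .env file for the sake of the example.
os.environ["GRADIENT_API_KEY"] = "key-from-env"


def resolve_api_key(explicit=None):
    # An explicitly passed key wins; otherwise fall back to the env var.
    if explicit is not None:
        return explicit
    return os.environ.get("GRADIENT_API_KEY")


print(resolve_api_key())  # -> key-from-env
print(resolve_api_key("explicit-key"))  # -> explicit-key
```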

## Async usage

-Simply import `AsyncGradientAI` instead of `GradientAI` and use `await` with each API call:
+Simply import `AsyncGradient` instead of `Gradient` and use `await` with each API call:

```python
import os
import asyncio
-from do_gradientai import AsyncGradientAI
+from gradient import AsyncGradient

-client = AsyncGradientAI(
-api_key=os.environ.get("GRADIENTAI_API_KEY"), # This is the default and can be omitted
+client = AsyncGradient(
+api_key=os.environ.get("GRADIENT_API_KEY"), # This is the default and can be omitted
)


@@ -135,19 +135,19 @@ You can enable this by installing `aiohttp`:

```sh
# install from PyPI
-pip install --pre do_gradientai[aiohttp]
+pip install --pre gradient[aiohttp]
```

Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:

```python
import asyncio
-from do_gradientai import DefaultAioHttpClient
-from do_gradientai import AsyncGradientAI
+from gradient import DefaultAioHttpClient
+from gradient import AsyncGradient


async def main() -> None:
-async with AsyncGradientAI(
+async with AsyncGradient(
api_key="My API Key",
http_client=DefaultAioHttpClient(),
) as client:
@@ -171,9 +171,9 @@ asyncio.run(main())
We provide support for streaming responses using Server Side Events (SSE).

```python
-from do_gradientai import GradientAI
+from gradient import Gradient

-client = GradientAI()
+client = Gradient()

stream = client.chat.completions.create(
messages=[
@@ -192,9 +192,9 @@ for completion in stream:
The async client uses the exact same interface.

```python
-from do_gradientai import AsyncGradientAI
+from gradient import AsyncGradient

-client = AsyncGradientAI()
+client = AsyncGradient()

stream = await client.chat.completions.create(
messages=[
@@ -224,9 +224,9 @@ Typed requests and responses provide autocomplete and documentation within your
Nested parameters are dictionaries, typed using `TypedDict`, for example:

```python
-from do_gradientai import GradientAI
+from gradient import Gradient

-client = GradientAI()
+client = Gradient()

completion = client.chat.completions.create(
messages=[
@@ -243,18 +243,18 @@ print(completion.stream_options)

## Handling errors

-When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `do_gradientai.APIConnectionError` is raised.
+When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `gradient.APIConnectionError` is raised.

When the API returns a non-success status code (that is, 4xx or 5xx
-response), a subclass of `do_gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
+response), a subclass of `gradient.APIStatusError` is raised, containing `status_code` and `response` properties.

-All errors inherit from `do_gradientai.APIError`.
+All errors inherit from `gradient.APIError`.

```python
-import do_gradientai
-from do_gradientai import GradientAI
+import gradient
+from gradient import Gradient

-client = GradientAI()
+client = Gradient()

try:
client.chat.completions.create(
@@ -266,12 +266,12 @@ try:
],
model="llama3.3-70b-instruct",
)
-except do_gradientai.APIConnectionError as e:
+except gradient.APIConnectionError as e:
print("The server could not be reached")
print(e.__cause__) # an underlying Exception, likely raised within httpx.
-except do_gradientai.RateLimitError as e:
+except gradient.RateLimitError as e:
print("A 429 status code was received; we should back off a bit.")
-except do_gradientai.APIStatusError as e:
+except gradient.APIStatusError as e:
print("Another non-200-range status code was received")
print(e.status_code)
print(e.response)
@@ -299,10 +299,10 @@ Connection errors (for example, due to a network connectivity problem), 408 Requ
You can use the `max_retries` option to configure or disable retry settings:

```python
-from do_gradientai import GradientAI
+from gradient import Gradient

# Configure the default for all requests:
-client = GradientAI(
+client = Gradient(
# default is 2
max_retries=0,
)
@@ -325,16 +325,16 @@ By default requests time out after 1 minute. You can configure this with a `time
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:

```python
-from do_gradientai import GradientAI
+from gradient import Gradient

# Configure the default for all requests:
-client = GradientAI(
+client = Gradient(
# 20 seconds (default is 1 minute)
timeout=20.0,
)

# More granular control:
-client = GradientAI(
+client = Gradient(
timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

Expand All @@ -360,10 +360,10 @@ Note that requests that time out are [retried twice by default](#retries).

We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.

-You can enable logging by setting the environment variable `GRADIENT_AI_LOG` to `info`.
+You can enable logging by setting the environment variable `GRADIENT_LOG` to `info`.

```shell
-$ export GRADIENT_AI_LOG=info
+$ export GRADIENT_LOG=info
```

Or to `debug` for more verbose logging.
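Because the standard `logging` module is used, log output can also be enabled programmatically rather than via the environment variable. A minimal sketch, assuming the SDK emits records under a `gradient` logger namespace (an assumption, not confirmed by the README):

```python
import logging

# Send all log records at INFO and above to stderr.
logging.basicConfig(level=logging.INFO)

# Assumption: the SDK's loggers live under the "gradient" namespace;
# lower just that namespace to DEBUG for verbose output.
logging.getLogger("gradient").setLevel(logging.DEBUG)

# Child loggers (e.g. "gradient.example") inherit the DEBUG level.
log = logging.getLogger("gradient.example")
log.debug("debug logging is enabled")
```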
@@ -385,9 +385,9 @@ if response.my_field is None:
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,

```py
-from do_gradientai import GradientAI
+from gradient import Gradient

-client = GradientAI()
+client = Gradient()
response = client.chat.completions.with_raw_response.create(
messages=[{
"role": "user",
@@ -401,9 +401,9 @@ completion = response.parse() # get the object that `chat.completions.create()`
print(completion.choices)
```

-These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) object.
+These methods return an [`APIResponse`](https://github.com/digitalocean/gradient-python/tree/main/src/gradient/_response.py) object.

-The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
+The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradient-python/tree/main/src/gradient/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.

#### `.with_streaming_response`

@@ -473,10 +473,10 @@ You can directly override the [httpx client](https://www.python-httpx.org/api/#c

```python
import httpx
-from do_gradientai import GradientAI, DefaultHttpxClient
+from gradient import Gradient, DefaultHttpxClient

-client = GradientAI(
-# Or use the `GRADIENT_AI_BASE_URL` env var
+client = Gradient(
+# Or use the `GRADIENT_BASE_URL` env var
base_url="http://my.test.server.example.com:8083",
http_client=DefaultHttpxClient(
proxy="http://my.test.proxy.example.com",
@@ -496,9 +496,9 @@ client.with_options(http_client=DefaultHttpxClient(...))
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.

```py
-from do_gradientai import GradientAI
+from gradient import Gradient

-with GradientAI() as client:
+with Gradient() as client:
# make requests here
...

@@ -515,7 +515,7 @@ This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) con

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

-We are keen for your feedback; please open an [issue](https://www.github.com/digitalocean/gradientai-python/issues) with questions, bugs, or suggestions.
+We are keen for your feedback; please open an [issue](https://www.github.com/digitalocean/gradient-python/issues) with questions, bugs, or suggestions.

### Determining the installed version

@@ -524,8 +524,8 @@ If you've upgraded to the latest version but aren't seeing any new features you
You can determine the version that is being used at runtime with:

```py
-import do_gradientai
-print(do_gradientai.__version__)
+import gradient
+print(gradient.__version__)
```

## Requirements