
Commit 984df72

release: 0.1.0-alpha.13 (#19)
* feat(api): manual updates - Add GRADIENTAI_AGENT_KEY
* feat(api): share chat completion chunk model between chat and agent.chat
* codegen metadata
* feat(api): manual updates
* chore(internal): codegen related update
* chore(internal): bump pinned h11 dep
* chore(package): mark python 3.13 as supported
* fix(parsing): correctly handle nested discriminated unions
* codegen metadata
* chore(readme): fix version rendering on pypi
* fix(client): don't send Content-Type header on GET requests
* feat: clean up environment call outs
* release: 0.1.0-alpha.13

---------

Co-authored-by: stainless-app[bot] <142633134+stainless-app[bot]@users.noreply.github.com>
1 parent 6d1a924 commit 984df72

23 files changed (+309, -180 lines)

.release-please-manifest.json

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 {
-  ".": "0.1.0-alpha.12"
+  ".": "0.1.0-alpha.13"
 }

.stats.yml

Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
-configured_endpoints: 76
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fgradientai-e8b3cbc80e18e4f7f277010349f25e1319156704f359911dc464cc21a0d077a6.yml
+configured_endpoints: 77
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/digitalocean%2Fgradientai-391afaae764eb758523b67805cb47ae3bc319dc119d83414afdd66f123ceaf5c.yml
 openapi_spec_hash: c773d792724f5647ae25a5ae4ccec208
-config_hash: 1c936b3bd798c3fcb25479b19efa999a
+config_hash: 0bd094d86a010f7cbd5eb22ef548a29f

CHANGELOG.md

Lines changed: 25 additions & 0 deletions
@@ -1,5 +1,30 @@
 # Changelog
 
+## 0.1.0-alpha.13 (2025-07-15)
+
+Full Changelog: [v0.1.0-alpha.12...v0.1.0-alpha.13](https://github.com/digitalocean/gradientai-python/compare/v0.1.0-alpha.12...v0.1.0-alpha.13)
+
+### Features
+
+* **api:** manual updates ([bd6fecc](https://github.com/digitalocean/gradientai-python/commit/bd6feccf97fa5877085783419f11dad04c57d700))
+* **api:** manual updates ([c2b96ce](https://github.com/digitalocean/gradientai-python/commit/c2b96ce3d95cc9b74bffd8d6a499927eefd23b14))
+* **api:** share chat completion chunk model between chat and agent.chat ([d67371f](https://github.com/digitalocean/gradientai-python/commit/d67371f9f4d0761ea03097820bc3e77654b4d2bf))
+* clean up environment call outs ([64ee5b4](https://github.com/digitalocean/gradientai-python/commit/64ee5b449c0195288d0a1dc55d2725e8cdd6afcf))
+
+
+### Bug Fixes
+
+* **client:** don't send Content-Type header on GET requests ([507a342](https://github.com/digitalocean/gradientai-python/commit/507a342fbcc7c801ba36708e56ea2d2a28a1a392))
+* **parsing:** correctly handle nested discriminated unions ([569e473](https://github.com/digitalocean/gradientai-python/commit/569e473d422928597ccf762133d5e52ac9a8665a))
+
+
+### Chores
+
+* **internal:** bump pinned h11 dep ([6f4e960](https://github.com/digitalocean/gradientai-python/commit/6f4e960b6cb838cbf5e50301375fcb4b60a2cfb3))
+* **internal:** codegen related update ([1df657d](https://github.com/digitalocean/gradientai-python/commit/1df657d9b384cb85d27fe839c0dab212a7773f8f))
+* **package:** mark python 3.13 as supported ([1a899b6](https://github.com/digitalocean/gradientai-python/commit/1a899b66a484986672a380e405f09b1ae94b6310))
+* **readme:** fix version rendering on pypi ([6fbe83b](https://github.com/digitalocean/gradientai-python/commit/6fbe83b11a9e3dbb40cf7f9f627abbbd086ee24a))
+
 ## 0.1.0-alpha.12 (2025-07-02)
 
 Full Changelog: [v0.1.0-alpha.11...v0.1.0-alpha.12](https://github.com/digitalocean/gradientai-python/compare/v0.1.0-alpha.11...v0.1.0-alpha.12)

README.md

Lines changed: 14 additions & 14 deletions
@@ -1,6 +1,7 @@
 # GradientAI Python API library
 
-[![PyPI version](<https://img.shields.io/pypi/v/c63a5cfe-b235-4fbe-8bbb-82a9e02a482a-python.svg?label=pypi%20(stable)>)](https://pypi.org/project/c63a5cfe-b235-4fbe-8bbb-82a9e02a482a-python/)
+<!-- prettier-ignore -->
+[![PyPI version](https://img.shields.io/pypi/v/c63a5cfe-b235-4fbe-8bbb-82a9e02a482a-python.svg?label=pypi%20(stable))](https://pypi.org/project/c63a5cfe-b235-4fbe-8bbb-82a9e02a482a-python/)
 
 The GradientAI Python library provides convenient access to the GradientAI REST API from any Python 3.8+
 application. The library includes type definitions for all request params and response fields,
@@ -73,7 +74,7 @@ client = AsyncGradientAI(
 
 
 async def main() -> None:
-    completion = await client.agents.chat.completions.create(
+    completion = await client.chat.completions.create(
         messages=[
             {
                 "role": "user",
@@ -104,18 +105,17 @@ pip install --pre c63a5cfe-b235-4fbe-8bbb-82a9e02a482a-python[aiohttp]
 Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
 
 ```python
-import os
 import asyncio
 from gradientai import DefaultAioHttpClient
 from gradientai import AsyncGradientAI
 
 
 async def main() -> None:
     async with AsyncGradientAI(
-        api_key=os.environ.get("GRADIENTAI_API_KEY"),  # This is the default and can be omitted
+        api_key="My API Key",
         http_client=DefaultAioHttpClient(),
     ) as client:
-        completion = await client.agents.chat.completions.create(
+        completion = await client.chat.completions.create(
             messages=[
                 {
                     "role": "user",
@@ -139,7 +139,7 @@ from gradientai import GradientAI
 
 client = GradientAI()
 
-stream = client.agents.chat.completions.create(
+stream = client.chat.completions.create(
     messages=[
         {
             "role": "user",
@@ -160,7 +160,7 @@ from gradientai import AsyncGradientAI
 
 client = AsyncGradientAI()
 
-stream = await client.agents.chat.completions.create(
+stream = await client.chat.completions.create(
     messages=[
         {
             "role": "user",
@@ -192,7 +192,7 @@ from gradientai import GradientAI
 
 client = GradientAI()
 
-completion = client.agents.chat.completions.create(
+completion = client.chat.completions.create(
     messages=[
         {
             "content": "string",
@@ -221,7 +221,7 @@ from gradientai import GradientAI
 client = GradientAI()
 
 try:
-    client.agents.chat.completions.create(
+    client.chat.completions.create(
         messages=[
             {
                 "role": "user",
@@ -272,7 +272,7 @@ client = GradientAI(
 )
 
 # Or, configure per-request:
-client.with_options(max_retries=5).agents.chat.completions.create(
+client.with_options(max_retries=5).chat.completions.create(
     messages=[
         {
             "role": "user",
@@ -303,7 +303,7 @@ client = GradientAI(
 )
 
 # Override per-request:
-client.with_options(timeout=5.0).agents.chat.completions.create(
+client.with_options(timeout=5.0).chat.completions.create(
     messages=[
         {
             "role": "user",
@@ -352,7 +352,7 @@ The "raw" Response object can be accessed by prefixing `.with_raw_response.` to
 from gradientai import GradientAI
 
 client = GradientAI()
-response = client.agents.chat.completions.with_raw_response.create(
+response = client.chat.completions.with_raw_response.create(
     messages=[{
         "role": "user",
         "content": "What is the capital of France?",
@@ -361,7 +361,7 @@ response = client.agents.chat.completions.with_raw_response.create(
 )
 print(response.headers.get('X-My-Header'))
 
-completion = response.parse()  # get the object that `agents.chat.completions.create()` would have returned
+completion = response.parse()  # get the object that `chat.completions.create()` would have returned
 print(completion.choices)
 ```
 
@@ -376,7 +376,7 @@ The above interface eagerly reads the full response body when you make the reque
 To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
 
 ```python
-with client.agents.chat.completions.with_streaming_response.create(
+with client.chat.completions.with_streaming_response.create(
     messages=[
         {
             "role": "user",

api.md

Lines changed: 4 additions & 4 deletions
@@ -1,7 +1,7 @@
 # Shared Types
 
 ```python
-from gradientai.types import APILinks, APIMeta, ChatCompletionTokenLogprob
+from gradientai.types import APILinks, APIMeta, ChatCompletionChunk, ChatCompletionTokenLogprob
 ```
 
 # Agents
@@ -65,12 +65,12 @@ Methods:
 Types:
 
 ```python
-from gradientai.types.agents.chat import AgentChatCompletionChunk, CompletionCreateResponse
+from gradientai.types.agents.chat import CompletionCreateResponse
 ```
 
 Methods:
 
-- <code title="post /chat/completions">client.agents.chat.completions.<a href="./src/gradientai/resources/agents/chat/completions.py">create</a>(\*\*<a href="src/gradientai/types/agents/chat/completion_create_params.py">params</a>) -> <a href="./src/gradientai/types/agents/chat/completion_create_response.py">CompletionCreateResponse</a></code>
+- <code title="post /chat/completions?agent=true">client.agents.chat.completions.<a href="./src/gradientai/resources/agents/chat/completions.py">create</a>(\*\*<a href="src/gradientai/types/agents/chat/completion_create_params.py">params</a>) -> <a href="./src/gradientai/types/agents/chat/completion_create_response.py">CompletionCreateResponse</a></code>
 
 ## EvaluationMetrics
 
@@ -260,7 +260,7 @@ Methods:
 Types:
 
 ```python
-from gradientai.types.chat import ChatCompletionChunk, CompletionCreateResponse
+from gradientai.types.chat import CompletionCreateResponse
 ```
 
 Methods:
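
With the per-surface chunk models consolidated, streaming code for both the agent and top-level chat surfaces can annotate against the one shared type. A minimal sketch, assuming only the shared import shown in the updated api.md:

```python
# Both chat surfaces now document the same shared streaming chunk type,
# importable from the shared types module instead of per-surface modules.
from gradientai.types import ChatCompletionChunk


def handle_chunk(chunk: ChatCompletionChunk) -> None:
    # Illustrative handler; the chunk carries the streamed choices.
    print(chunk)
```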

pyproject.toml

Lines changed: 3 additions & 2 deletions
@@ -1,6 +1,6 @@
 [project]
 name = "c63a5cfe-b235-4fbe-8bbb-82a9e02a482a-python"
-version = "0.1.0-alpha.12"
+version = "0.1.0-alpha.13"
 description = "The official Python library for GradientAI"
 dynamic = ["readme"]
 license = "Apache-2.0"
@@ -24,6 +24,7 @@ classifiers = [
     "Programming Language :: Python :: 3.10",
     "Programming Language :: Python :: 3.11",
     "Programming Language :: Python :: 3.12",
+    "Programming Language :: Python :: 3.13",
     "Operating System :: OS Independent",
     "Operating System :: POSIX",
     "Operating System :: MacOS",
@@ -38,7 +39,7 @@ Homepage = "https://github.com/digitalocean/gradientai-python"
 Repository = "https://github.com/digitalocean/gradientai-python"
 
 [project.optional-dependencies]
-aiohttp = ["aiohttp", "httpx_aiohttp>=0.1.6"]
+aiohttp = ["aiohttp", "httpx_aiohttp>=0.1.8"]
 
 [tool.rye]
 managed = true

requirements-dev.lock

Lines changed: 3 additions & 3 deletions
@@ -48,15 +48,15 @@ filelock==3.12.4
 frozenlist==1.6.2
     # via aiohttp
     # via aiosignal
-h11==0.14.0
+h11==0.16.0
     # via httpcore
-httpcore==1.0.2
+httpcore==1.0.9
     # via httpx
 httpx==0.28.1
     # via c63a5cfe-b235-4fbe-8bbb-82a9e02a482a-python
     # via httpx-aiohttp
     # via respx
-httpx-aiohttp==0.1.6
+httpx-aiohttp==0.1.8
     # via c63a5cfe-b235-4fbe-8bbb-82a9e02a482a-python
 idna==3.4
     # via anyio

requirements.lock

Lines changed: 3 additions & 3 deletions
@@ -36,14 +36,14 @@ exceptiongroup==1.2.2
 frozenlist==1.6.2
     # via aiohttp
     # via aiosignal
-h11==0.14.0
+h11==0.16.0
     # via httpcore
-httpcore==1.0.2
+httpcore==1.0.9
     # via httpx
 httpx==0.28.1
     # via c63a5cfe-b235-4fbe-8bbb-82a9e02a482a-python
     # via httpx-aiohttp
-httpx-aiohttp==0.1.6
+httpx-aiohttp==0.1.8
     # via c63a5cfe-b235-4fbe-8bbb-82a9e02a482a-python
 idna==3.4
     # via anyio

src/gradientai/_base_client.py

Lines changed: 9 additions & 2 deletions
@@ -529,6 +529,15 @@ def _build_request(
             # work around https://github.com/encode/httpx/discussions/2880
             kwargs["extensions"] = {"sni_hostname": prepared_url.host.replace("_", "-")}
 
+        is_body_allowed = options.method.lower() != "get"
+
+        if is_body_allowed:
+            kwargs["json"] = json_data if is_given(json_data) else None
+            kwargs["files"] = files
+        else:
+            headers.pop("Content-Type", None)
+            kwargs.pop("data", None)
+
         # TODO: report this error to httpx
         return self._client.build_request(  # pyright: ignore[reportUnknownMemberType]
             headers=headers,
@@ -540,8 +549,6 @@
             # so that passing a `TypedDict` doesn't cause an error.
             # https://github.com/microsoft/pyright/issues/3526#event-6715453066
             params=self.qs.stringify(cast(Mapping[str, Any], params)) if params else None,
-            json=json_data if is_given(json_data) else None,
-            files=files,
             **kwargs,
         )
 
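
The effect of this `_build_request` change is that GET requests no longer pick up a `Content-Type` header (or body kwargs) they would never use, while other methods keep the previous JSON/file handling. A standalone sketch of the same guard, using illustrative helper names rather than the SDK's internals:

```python
from typing import Any, Dict, Optional


def request_body_kwargs(
    method: str,
    headers: Dict[str, str],
    json_data: Optional[Any] = None,
    files: Optional[Any] = None,
) -> Dict[str, Any]:
    """Illustrative stand-in for the body handling added to _build_request."""
    kwargs: Dict[str, Any] = {}
    if method.lower() != "get":
        # Non-GET requests may carry a JSON body and/or files.
        kwargs["json"] = json_data
        kwargs["files"] = files
    else:
        # GET requests: drop Content-Type so no header advertises a body
        # that will never be sent.
        headers.pop("Content-Type", None)
    return kwargs


get_headers = {"Content-Type": "application/json"}
print(request_body_kwargs("GET", get_headers), get_headers)   # {} {}
print(request_body_kwargs("POST", {}, json_data={"k": "v"}))  # {'json': {'k': 'v'}, 'files': None}
```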

src/gradientai/_client.py

Lines changed: 18 additions & 0 deletions
@@ -56,13 +56,15 @@ class GradientAI(SyncAPIClient):
     # client options
     api_key: str | None
     inference_key: str | None
+    agent_key: str | None
     agent_domain: str | None
 
     def __init__(
         self,
         *,
         api_key: str | None = None,
         inference_key: str | None = None,
+        agent_key: str | None = None,
         agent_domain: str | None = None,
         base_url: str | httpx.URL | None = None,
         timeout: Union[float, Timeout, None, NotGiven] = NOT_GIVEN,
@@ -88,6 +90,7 @@ def __init__(
         This automatically infers the following arguments from their corresponding environment variables if they are not provided:
         - `api_key` from `GRADIENTAI_API_KEY`
         - `inference_key` from `GRADIENTAI_INFERENCE_KEY`
+        - `agent_key` from `GRADIENTAI_AGENT_KEY`
         """
         if api_key is None:
             api_key = os.environ.get("GRADIENTAI_API_KEY")
@@ -97,6 +100,10 @@ def __init__(
             inference_key = os.environ.get("GRADIENTAI_INFERENCE_KEY")
         self.inference_key = inference_key
 
+        if agent_key is None:
+            agent_key = os.environ.get("GRADIENTAI_AGENT_KEY")
+        self.agent_key = agent_key
+
         self.agent_domain = agent_domain
 
         if base_url is None:
@@ -200,6 +207,7 @@ def copy(
         *,
         api_key: str | None = None,
         inference_key: str | None = None,
+        agent_key: str | None = None,
         agent_domain: str | None = None,
         base_url: str | httpx.URL | None = None,
         timeout: float | Timeout | None | NotGiven = NOT_GIVEN,
@@ -236,6 +244,7 @@ def copy(
         client = self.__class__(
             api_key=api_key or self.api_key,
             inference_key=inference_key or self.inference_key,
+            agent_key=agent_key or self.agent_key,
             agent_domain=agent_domain or self.agent_domain,
             base_url=base_url or self.base_url,
             timeout=self.timeout if isinstance(timeout, NotGiven) else timeout,
@@ -290,13 +299,15 @@ class AsyncGradientAI(AsyncAPIClient):
     # client options
     api_key: str | None
     inference_key: str | None
+    agent_key: str | None
     agent_domain: str | None
 
     def __init__(
         self,
         *,
         api_key: str | None = None,
         inference_key: str | None = None,
+        agent_key: str | None = None,
         agent_domain: str | None = None,
         base_url: str | httpx.URL | None = None,
         timeout: Union[float, Timeout, None, NotGiven] = NOT_GIVEN,
@@ -322,6 +333,7 @@ def __init__(
         This automatically infers the following arguments from their corresponding environment variables if they are not provided:
         - `api_key` from `GRADIENTAI_API_KEY`
         - `inference_key` from `GRADIENTAI_INFERENCE_KEY`
+        - `agent_key` from `GRADIENTAI_AGENT_KEY`
         """
         if api_key is None:
             api_key = os.environ.get("GRADIENTAI_API_KEY")
@@ -331,6 +343,10 @@ def __init__(
             inference_key = os.environ.get("GRADIENTAI_INFERENCE_KEY")
         self.inference_key = inference_key
 
+        if agent_key is None:
+            agent_key = os.environ.get("GRADIENTAI_AGENT_KEY")
+        self.agent_key = agent_key
+
         self.agent_domain = agent_domain
 
         if base_url is None:
@@ -434,6 +450,7 @@ def copy(
         *,
         api_key: str | None = None,
         inference_key: str | None = None,
+        agent_key: str | None = None,
         agent_domain: str | None = None,
         base_url: str | httpx.URL | None = None,
         timeout: float | Timeout | None | NotGiven = NOT_GIVEN,
@@ -470,6 +487,7 @@ def copy(
         client = self.__class__(
             api_key=api_key or self.api_key,
             inference_key=inference_key or self.inference_key,
+            agent_key=agent_key or self.agent_key,
             agent_domain=agent_domain or self.agent_domain,
             base_url=base_url or self.base_url,
             timeout=self.timeout if isinstance(timeout, NotGiven) else timeout,
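
For client users, the new option mirrors the existing `api_key`/`inference_key` handling: pass `agent_key` explicitly, or let the constructor fall back to the `GRADIENTAI_AGENT_KEY` environment variable. A minimal usage sketch based on the constructor shown above:

```python
import os

from gradientai import GradientAI

# Explicitly supply the new option alongside the existing ones...
client = GradientAI(
    api_key="My API Key",
    agent_key="My Agent Key",
)
print(client.agent_key)

# ...or omit agent_key and let __init__ fall back to GRADIENTAI_AGENT_KEY.
os.environ["GRADIENTAI_AGENT_KEY"] = "My Agent Key"
client = GradientAI(api_key="My API Key")
print(client.agent_key)
```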
