Alternatively, you can build from source and install the wheel file:
@@ -120,7 +120,7 @@ the changes aren't made through the automated pipeline, you may want to make rel

### Publish with a GitHub workflow

-You can release to package managers by using [the `Publish PyPI` GitHub action](https://www.github.com/digitalocean/gradientai-python/actions/workflows/publish-pypi.yml). This requires a setup organization or repository secret to be set up.
+You can release to package managers by using [the `Publish PyPI` GitHub action](https://www.github.com/digitalocean/gradient-python/actions/workflows/publish-pypi.yml). This requires a setup organization or repository secret to be set up.
-When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `do_gradientai.APIConnectionError` is raised.
+When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `gradient.APIConnectionError` is raised.

When the API returns a non-success status code (that is, 4xx or 5xx
-response), a subclass of `do_gradientai.APIStatusError` is raised, containing `status_code` and `response` properties.
+response), a subclass of `gradient.APIStatusError` is raised, containing `status_code` and `response` properties.

-All errors inherit from `do_gradientai.APIError`.
+All errors inherit from `gradient.APIError`.

```python
-import do_gradientai
-from do_gradientai import GradientAI
+import gradient
+from gradient import Gradient

-client = GradientAI()
+client = Gradient()

try:
    client.chat.completions.create(
@@ -266,12 +266,12 @@ try:
        ],
        model="llama3.3-70b-instruct",
    )
-except do_gradientai.APIConnectionError as e:
+except gradient.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
-except do_gradientai.RateLimitError as e:
+except gradient.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
-except do_gradientai.APIStatusError as e:
+except gradient.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
@@ -299,10 +299,10 @@ Connection errors (for example, due to a network connectivity problem), 408 Requ

You can use the `max_retries` option to configure or disable retry settings:

```python
-from do_gradientai import GradientAI
+from gradient import Gradient

# Configure the default for all requests:
-client = GradientAI(
+client = Gradient(
    # default is 2
    max_retries=0,
)
@@ -325,16 +325,16 @@ By default requests time out after 1 minute. You can configure this with a `time

which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
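The timeout example itself falls outside the lines shown in this hunk. As a rough sketch of the new-name form, assuming the `Gradient` constructor accepts `timeout` the same way it accepts `max_retries` above:

```python
import httpx

from gradient import Gradient

# Configure the default timeout for all requests (20 seconds):
client = Gradient(
    timeout=20.0,
)

# Or pass a fine-grained httpx.Timeout for per-phase control:
client = Gradient(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
```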
@@ -401,9 +401,9 @@ completion = response.parse() # get the object that `chat.completions.create()`
print(completion.choices)
```

-These methods return an [`APIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) object.
+These methods return an [`APIResponse`](https://github.com/digitalocean/gradient-python/tree/main/src/gradient/_response.py) object.

-The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradientai-python/tree/main/src/do_gradientai/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
+The async client returns an [`AsyncAPIResponse`](https://github.com/digitalocean/gradient-python/tree/main/src/gradient/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
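The raw-response call that precedes `response.parse()` is not visible in this hunk. A minimal sketch of the full pattern under the new names, assuming the generated client exposes the usual `.with_raw_response` accessor (the header name is purely illustrative):

```python
from gradient import Gradient

client = Gradient()

# .with_raw_response returns the raw httpx response alongside the parsed data
response = client.chat.completions.with_raw_response.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="llama3.3-70b-instruct",
)
print(response.headers.get("X-My-Header"))

completion = response.parse()  # get the object that `chat.completions.create()` would have returned
print(completion.choices)
```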

#### `.with_streaming_response`
@@ -473,10 +473,10 @@ You can directly override the [httpx client](https://www.python-httpx.org/api/#c

By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.

```py
-from do_gradientai import GradientAI
+from gradient import Gradient

-with GradientAI() as client:
+with Gradient() as client:
    # make requests here
    ...
@@ -515,7 +515,7 @@ This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) con

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

-We are keen for your feedback; please open an [issue](https://www.github.com/digitalocean/gradientai-python/issues) with questions, bugs, or suggestions.
+We are keen for your feedback; please open an [issue](https://www.github.com/digitalocean/gradient-python/issues) with questions, bugs, or suggestions.

### Determining the installed version
@@ -524,8 +524,8 @@ If you've upgraded to the latest version but aren't seeing any new features you

You can determine the version that is being used at runtime with:
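The snippet is cut off at this point. The usual pattern, assuming the renamed package keeps a top-level `__version__` attribute, is:

```python
import gradient

print(gradient.__version__)
```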