Commit 332386e

Remove the "AI" namespace
Closes #9. Also abstract out a few more steps for availability and creation, so that we don't have to repeat them for every spec.
1 parent d46993d commit 332386e

2 files changed: 281 additions & 291 deletions

README.md

Lines changed: 14 additions & 20 deletions
@@ -81,7 +81,7 @@ Both of these potential goals could pose challenges to interoperability, so we w
All three APIs share the same format: create a summarizer/writer/rewriter object customized as necessary, and call its appropriate method:

```js
-const summarizer = await ai.summarizer.create({
+const summarizer = await Summarizer.create({
  sharedContext: "An article from the Daily Economic News magazine",
  type: "headline",
  length: "short"
@@ -93,7 +93,7 @@ const summary = await summarizer.summarize(articleEl.textContent, {
```

```js
-const writer = await ai.writer.create({
+const writer = await Writer.create({
  tone: "formal"
});

@@ -103,7 +103,7 @@ const result = await writer.write(
```

```js
-const rewriter = await ai.rewriter.create({
+const rewriter = await Rewriter.create({
  sharedContext: "A review for the Flux Capacitor 3000 from TimeMachines Inc."
});

@@ -117,7 +117,7 @@ const result = await rewriter.rewrite(reviewEl.textContent, {
All three of the APIs support streaming output, via counterpart methods `summarizeStreaming()` / `writeStreaming()` / `rewriteStreaming()` that return `ReadableStream`s of strings. A sample usage would be:

```js
-const writer = await ai.writer.create({ tone: "formal", length: "long" });
+const writer = await Writer.create({ tone: "formal", length: "long" });

const stream = await writer.writeStreaming(
  "A draft for an inquiry to my bank about how to enable wire transfers on my account"
@@ -133,7 +133,7 @@ for (const chunk of stream) {
A created summarizer/writer/rewriter object can be used multiple times. **The only shared state is the initial configuration options**; the inputs do not build on each other. (See more discussion [below](#one-shot-functions-instead-of-summarizer--writer--rewriter-objects).)

```js
-const summarizer = await ai.summarize.create({ type: "tl;dr" });
+const summarizer = await Summarizer.create({ type: "tl;dr" });

const reviewSummaries = await Promise.all(
  Array.from(
@@ -150,7 +150,7 @@ The default behavior for the summarizer/writer/rewriter objects assumes that the
It's better practice, if possible, to supply the `create()` method with information about the expected languages in use. This allows the implementation to download any necessary supporting material, such as fine-tunings or safety-checking models, and to immediately reject the promise returned by `create()` if the web developer needs to use languages that the browser is not capable of supporting:

```js
-const summarizer = await ai.summarize.create({
+const summarizer = await Summarizer.create({
  type: "key-points",
  expectedInputLanguages: ["ja", "ko"],
  expectedContextLanguages: ["en", "ja", "ko"],
@@ -189,7 +189,7 @@ Whenever any API call fails due to too-large input, it is rejected with a `Quota
This allows detecting failures due to overlarge inputs and giving clear feedback to the user, with code such as the following:

```js
-const summarizer = await ai.summarizer.create();
+const summarizer = await Summarizer.create();

try {
  console.log(await summarizer.summarize(potentiallyLargeInput));
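The `catch` clause of this example falls outside the hunk above. A minimal sketch of how the failure might be surfaced, assuming the rejection is the `QuotaExceededError` mentioned (truncated) in the hunk header:

```js
// Sketch only: catch the too-large-input rejection and give the user clear feedback.
// Assumes the error name is "QuotaExceededError"; anything else is rethrown.
try {
  console.log(await summarizer.summarize(potentiallyLargeInput));
} catch (e) {
  if (e.name === "QuotaExceededError") {
    console.error("The input was too large for the summarizer; please shorten it.");
  } else {
    throw e;
  }
}
```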
@@ -205,16 +205,16 @@ try {
Note that all of the following methods can reject (or error the relevant stream) with this type of exception:

-* `ai.summarizer.create()`, if `sharedContext` is too large;
+* `Summarizer.create()`, if `sharedContext` is too large;

-* `ai.summarizer.summarize()`/`summarizeStreaming()`, if the combination of the creation-time `sharedContext`, the current method call's `input` argument, and the current method call's `context` is too large;
+* `summarize()`/`summarizeStreaming()`, if the combination of the creation-time `sharedContext`, the current method call's `input` argument, and the current method call's `context` is too large;

* Similarly for writer creation / writing, and rewriter creation / rewriting.

In some cases, instead of providing errors after the fact, the developer needs to be able to communicate to the user how close they are to the limit. For this, they can use the `inputQuota` property and the `measureInputUsage()` method on the summarizer/writer/rewriter objects:

```js
-const rewriter = await ai.rewriter.create();
+const rewriter = await Rewriter.create();
meterEl.max = rewriter.inputQuota;

textbox.addEventListener("input", () => {
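The listener body falls outside this hunk's context. A minimal sketch of how the metering might continue, assuming `measureInputUsage()` resolves to a number in the same units as `inputQuota`:

```js
// Sketch only: update the meter as the user types, measuring the current textbox contents.
textbox.addEventListener("input", async () => {
  meterEl.value = await rewriter.measureInputUsage(textbox.value);
});
```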
@@ -264,15 +264,15 @@ An example usage is the following:
```js
const options = { type: "teaser", expectedInputLanguages: ["ja"] };

-const availability = await ai.summarizer.availability(options);
+const availability = await Summarizer.availability(options);

if (availability !== "unavailable") {
  // We're good! Let's do the summarization using the built-in API.
  if (availability !== "available") {
    console.log("Sit tight, we need to do some downloading...");
  }

-  const summarizer = await ai.summarizer.create(options);
+  const summarizer = await Summarizer.create(options);
  console.log(await summarizer.summarize(articleEl.textContent));
} else {
  // Either the API overall, or the combination of teaser + Japanese input, is not available.
@@ -286,7 +286,7 @@ if (availability !== "unavailable") {
For cases where using the API is only possible after a download, you can monitor the download progress (e.g. in order to show your users a progress bar) using code such as the following:

```js
-const writer = await ai.writer.create({
+const writer = await Writer.create({
  ...otherOptions,
  monitor(m) {
    m.addEventListener("downloadprogress", e => {
@@ -322,7 +322,7 @@ Each API comes equipped with a couple of `signal` options that accept `AbortSign
const controller = new AbortController();
stopButton.onclick = () => controller.abort();

-const rewriter = await ai.rewriter.create({ signal: controller.signal });
+const rewriter = await Rewriter.create({ signal: controller.signal });
await rewriter.rewrite(document.body.textContent, { signal: controller.signal });
```

@@ -393,12 +393,6 @@ Although the APIs contain support for streaming output, they don't support strea
However, we believe that streaming input would not be a good fit for these APIs. Attempting to summarize or rewrite input as more input streams in will likely result in multiple wasteful rounds of revision. The underlying language model technology does not support streaming input, so the implementation would be buffering the input stream anyway, then repeatedly feeding new versions of the buffered text to the language model. If a developer wants to achieve such results, they can do so themselves, at the cost of writing code which makes the wastefulness of the operation more obvious. Developers can also customize such code, e.g. by only asking for new summaries every 5 seconds (or whatever interval makes the most sense for their use case).

-### Alternative API spellings
-
-In [the TAG review of the translation and language detection APIs](https://github.com/w3ctag/design-reviews/issues/948), some TAG members suggested slightly different patterns than the `ai.something.create()` pattern, such as `AISomething.create()` or `Something.create()`.
-
-We are open to such surface-level tweaks to the API entry points, and intend to gather more data from web developers on what they find more understandable and clear.
-
### Directly exposing a "prompt API"

The same team that is working on these APIs is also prototyping an experimental [prompt API](https://github.com/webmachinelearning/prompt-api/). A natural question is how these efforts related. Couldn't one easily accomplish summarization/writing/rewriting by directly prompting a language model, thus making these higher-level APIs redundant?
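To make the do-it-yourself approach from the streaming-input discussion above concrete, here is a minimal sketch (not part of the README diff). It assumes a `Summarizer` created as shown earlier, a hypothetical `summaryEl` output element, and the example 5-second cadence:

```js
// Sketch only: buffer streaming input and re-summarize it on a fixed interval.
const summarizer = await Summarizer.create({ type: "tl;dr" });

let buffered = "";
let summarizing = false;

// The caller appends each newly arrived chunk of input here.
function onInputChunk(chunk) {
  buffered += chunk;
}

setInterval(async () => {
  if (buffered.length === 0 || summarizing) return;
  summarizing = true;
  try {
    // Each pass re-feeds the entire buffer, which makes the wastefulness explicit.
    summaryEl.textContent = await summarizer.summarize(buffered);
  } finally {
    summarizing = false;
  }
}, 5000);
```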
