[Improvement]: OpenAI Function Calling in Chat completion API #14561


Merged

Conversation

jcortes
Collaborator

@jcortes jcortes commented Nov 5, 2024

WHY

Resolves #14488

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for defining multiple functions within the Chat component, enhancing interaction capabilities.
    • Introduced a new property, runThread, allowing immediate execution of threads upon creation.
  • Version Updates

    • Incremented version numbers across multiple components to reflect updates and improvements.
  • Bug Fixes

    • Ensured consistent handling of existing properties and methods across various components, maintaining functionality.

These updates provide users with enhanced capabilities and improved performance in their interactions with the OpenAI API.

@jcortes jcortes added the actions label Nov 5, 2024
@jcortes jcortes self-assigned this Nov 5, 2024

vercel bot commented Nov 5, 2024

The latest updates on your projects. Learn more about Vercel for Git ↗︎

1 Skipped Deployment

  • pipedream-docs-redirect-do-not-edit: Status ⬜️ Ignored; Updated (UTC) Nov 5, 2024 8:39pm


vercel bot commented Nov 5, 2024

@jcortes is attempting to deploy a commit to the Pipedreamers Team on Vercel.

A member of the Team first needs to authorize it.

Contributor

coderabbitai bot commented Nov 5, 2024

Walkthrough

The changes in this pull request involve updates to several components within the OpenAI actions module. Key modifications include version increments and updates to model specifications, particularly transitioning from "gpt-4-vision-preview" to "gpt-4o" in the analyze-image-content component. Additionally, the chat component has introduced a new property, toolTypes, allowing for enhanced functionality by specifying tools for function calling. Other components have seen minor version updates with no significant changes to their existing logic or structure.
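For readers less familiar with OpenAI function calling, the tools parameter that these changes ultimately build up follows the shape documented for the Chat Completions API. The snippet below is only an illustration of that shape; the get_current_weather function is a made-up example, not code from this PR:

// Illustrative shape of the `tools` parameter for function calling;
// the get_current_weather function is a hypothetical example.
const tools = [
  {
    type: "function",
    function: {
      name: "get_current_weather",
      description: "Get the current weather for a given city",
      parameters: {
        type: "object",
        properties: {
          city: {
            type: "string",
            description: "City name, e.g. Berlin",
          },
        },
        required: [ "city" ],
      },
    },
  },
];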

Changes

File Path / Change Summary

  • components/openai/actions/analyze-image-content/analyze-image-content.mjs: Version updated to "0.1.2"; model changed from "gpt-4-vision-preview" to "gpt-4o".
  • components/openai/actions/chat-with-assistant/chat-with-assistant.mjs: Version updated to "0.0.7"; no changes to logic or structure.
  • components/openai/actions/chat/chat.mjs: Version updated to "0.2.3"; added toolTypes property and updated methods to handle tools dynamically.
  • components/openai/actions/common/common-assistants.mjs: Added numberOfFunctions property; updated methods to handle dynamic function properties.
  • components/openai/actions/create-assistant/create-assistant.mjs: Version updated to "0.1.9"; no changes to logic or structure.
  • components/openai/actions/create-thread/create-thread.mjs: Version updated to "0.0.11"; added runThread property and updated methods to handle immediate execution.
  • components/openai/actions/modify-assistant/modify-assistant.mjs: Version updated to "0.1.9"; no changes to logic or structure.
  • components/openai/package.json: Version updated to "0.7.1"; no other modifications.

Assessment against linked issues

Objective / Addressed / Explanation

  • Add OpenAI Functionality to Chat action (#14488)
  • Enable ability to specify array of tools for function calling (#14488)

Possibly related PRs

Suggested labels

action, ai-assisted

Suggested reviewers

  • michelle0927

Poem

🐇 In the meadow where ideas bloom,
A chat with tools dispels the gloom.
With functions ready, we take flight,
In the world of code, we find our light!
Hopping through updates, we cheer and play,
For innovation grows with each new day! 🌼



Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
components/openai/actions/analyze-image-content/analyze-image-content.mjs (1)

Line range hint 42-46: Critical: Invalid model name "gpt-4o"

The model name "gpt-4o" appears to be incorrect. According to OpenAI's documentation, the current vision-capable models are:

  • "gpt-4-vision-preview"
  • "gpt-4v-1106-preview"

Please update to use one of these valid model names to ensure the image analysis functionality works correctly.

Apply this diff to fix the model name:

    const { id: assistantId } = await this.openai.createAssistant({
      $,
      data: {
-        model: "gpt-4o", // replaced from "gpt-4-vision-preview" - see https://platform.openai.com/docs/deprecations
+        model: "gpt-4v-1106-preview", // Latest vision model as of November 2023
      },
    });
components/openai/actions/common/common-assistants.mjs (1)

Line range hint 1-146: Consider adding validation for the numberOfFunctions property.

To ensure the numberOfFunctions property is within a reasonable range, consider adding validation to restrict its value. For example, you could set a minimum of 1 and a maximum of 10 functions.

Add the min and max attributes to the numberOfFunctions property definition:

props.numberOfFunctions = {
  type: "integer",
  label: "Number of Functions",
  description: "The number of functions to define.",
  optional: true,
  reloadProps: true,
  default: 1,
+ min: 1,
+ max: 10,
};
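If the min and max attributes turn out not to be enforced for this dynamically added prop, a runtime guard inside additionalProps is an alternative. The following is only a sketch under that assumption; it uses ConfigurationError from @pipedream/platform, which is commonly imported in Pipedream components:

import { ConfigurationError } from "@pipedream/platform"; // at the top of the component file

// Inside additionalProps(), before generating the per-function props:
const numberOfFunctions = this.numberOfFunctions ?? 1;
if (numberOfFunctions < 1 || numberOfFunctions > 10) {
  throw new ConfigurationError("Number of Functions must be between 1 and 10");
}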
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 65d3179 and 3e6d9b5.

📒 Files selected for processing (8)
  • components/openai/actions/analyze-image-content/analyze-image-content.mjs (1 hunks)
  • components/openai/actions/chat-with-assistant/chat-with-assistant.mjs (1 hunks)
  • components/openai/actions/chat/chat.mjs (3 hunks)
  • components/openai/actions/common/common-assistants.mjs (3 hunks)
  • components/openai/actions/create-assistant/create-assistant.mjs (1 hunks)
  • components/openai/actions/create-thread/create-thread.mjs (1 hunks)
  • components/openai/actions/modify-assistant/modify-assistant.mjs (1 hunks)
  • components/openai/package.json (1 hunks)
✅ Files skipped from review due to trivial changes (5)
  • components/openai/actions/chat-with-assistant/chat-with-assistant.mjs
  • components/openai/actions/create-assistant/create-assistant.mjs
  • components/openai/actions/create-thread/create-thread.mjs
  • components/openai/actions/modify-assistant/modify-assistant.mjs
  • components/openai/package.json
🔇 Additional comments (11)
components/openai/actions/analyze-image-content/analyze-image-content.mjs (2)

11-11: LGTM: Version bump is appropriate.

The version increment from 0.1.1 to 0.1.2 aligns with semantic versioning for a patch-level update.


Line range hint 42-108: Verify: Ensure function calling compatibility with vision models

Since this PR aims to enhance function calling capabilities, we should verify if the vision models support function calling as intended.

Let's check the OpenAI documentation and codebase for any examples or restrictions:

components/openai/actions/common/common-assistants.mjs (4)

19-28: LGTM!

The new property numberOfFunctions is correctly added to the props object when toolTypes includes "function". The property definition is accurate and includes the necessary attributes such as type, label, description, and default value.


29-32: LGTM!

The additionalProps method correctly returns the merged props object along with the properties obtained from getToolProps().


102-112: LGTM!

The buildTools method has been correctly updated to handle the dynamic function properties. It iterates through numberOfFunctions and constructs the tools array by pushing each function's details using the new dynamic property names.


74-93: Verify the dynamic generation of function properties in the codebase.

The changes in the getToolProps method to dynamically generate function properties based on numberOfFunctions look good. However, ensure that all the places where these properties are used have been updated to handle the dynamic naming convention (functionName_${i}, etc.).

Run the following script to verify the usage of dynamic function properties:

✅ Verification successful

Based on the comprehensive code analysis, I can now provide the final response:

Dynamic function properties are properly handled throughout the codebase

The verification shows that the dynamic function properties (functionName_${i}, functionDescription_${i}, functionParameters_${i}) are consistently used across the codebase:

  • In common-assistants.mjs, the getToolProps() method generates the dynamic properties
  • The buildTools() method correctly consumes these properties when constructing the tools array
  • All components that use these tools (create-assistant, modify-assistant, chat-with-assistant, chat, create-thread) properly invoke the buildTools() method to handle the dynamic function properties
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of dynamic function properties in the codebase.

# Test 1: Search for usage of the old static function property names. 
# Expect: No occurrences found.
rg -w $'functionName|functionDescription|functionParameters'

# Test 2: Search for usage of the new dynamic function property names.
# Expect: Occurrences found only in the `getToolProps` and `buildTools` methods.
rg -w $'functionName_\d+|functionDescription_\d+|functionParameters_\d+'

Length of output: 1696


Script:

#!/bin/bash
# Let's search for the usage of these dynamic properties in the specific file and its related files

# First, let's see the full implementation of getToolProps and buildTools
rg -l "getToolProps|buildTools" --type js --type mjs

# Then, let's examine the content of these files
echo "=== Examining files containing getToolProps or buildTools ==="
rg -A 10 -B 10 "getToolProps|buildTools" --type js --type mjs

# Let's also check for any array-based function properties that might be related
rg "function.*\[.*\]|tools.*\[.*\]" --type js --type mjs

Length of output: 394


Script:

#!/bin/bash
# Let's try without file type restrictions since .mjs files aren't recognized
# First, let's find the relevant files
rg -l "getToolProps|buildTools"

# Then examine their content
echo "=== Content of files with getToolProps or buildTools ==="
rg -A 10 -B 10 "getToolProps|buildTools"

# Also check the specific file we're reviewing to see the complete context
echo "=== Content of the file under review ==="
cat components/openai/actions/common/common-assistants.mjs

Length of output: 27109

components/openai/actions/chat/chat.mjs (5)

9-9: LGTM!

The version number has been incremented correctly.


70-117: LGTM!

The changes in the additionalProps method look good. The method now dynamically generates properties for defining functions based on the presence of "function" in toolTypes and the specified numberOfFunctions. This allows for flexibility in defining multiple functions when needed.

The generated properties, such as functionName, functionDescription, and functionParameters, are appropriately labeled and described, providing clear guidance to users on how to configure the functions.

The use of a loop to generate the properties based on numberOfFunctions is an efficient approach to handle multiple functions.
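The loop described above presumably looks roughly like the following sketch; the exact labels and descriptions are assumptions inferred from this review, not copied from the diff:

// Hypothetical reconstruction of the per-function props generated in additionalProps()
const props = {};
if (this.toolTypes?.includes("function")) {
  for (let i = 0; i < this.numberOfFunctions; i++) {
    props[`functionName_${i}`] = {
      type: "string",
      label: `Function Name ${i + 1}`,
      description: "The name of the function to be called.",
    };
    props[`functionDescription_${i}`] = {
      type: "string",
      label: `Function Description ${i + 1}`,
      description: "A description of what the function does.",
      optional: true,
    };
    props[`functionParameters_${i}`] = {
      type: "object",
      label: `Function Parameters ${i + 1}`,
      description: "The parameters the function accepts, described as a JSON Schema object.",
      optional: true,
    };
  }
}
return props;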


120-140: LGTM!

The _buildTools method is implemented correctly. It constructs an array of tool objects based on the specified toolTypes and the defined functions.

The method first filters out the "function" tool type and maps the remaining tool types to objects with the corresponding type property. This ensures that non-function tools are included in the array.

If "function" is included in toolTypes, the method iterates over the specified numberOfFunctions and constructs function tool objects. It retrieves the function properties (functionName, functionDescription, functionParameters) using dynamic property names based on the index. This allows for multiple functions to be defined and included in the tools array.

Finally, the method returns the tools array if it contains any elements, or undefined if no tools are specified. This conditional return ensures that the tools property is only included in the API request when applicable.
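Taken together, the behaviour described above corresponds to something like this sketch, reconstructed from the review text and the ast-grep pattern used later in the verification; it is not a verbatim copy of the component:

// Hypothetical reconstruction of _buildTools()
_buildTools() {
  // Map any non-function tool types directly to { type } objects
  const tools = (this.toolTypes ?? [])
    .filter((type) => type !== "function")
    .map((type) => ({ type }));
  // Add one function tool per configured function, using the indexed props
  if (this.toolTypes?.includes("function")) {
    for (let i = 0; i < this.numberOfFunctions; i++) {
      tools.push({
        type: "function",
        function: {
          name: this[`functionName_${i}`],
          description: this[`functionDescription_${i}`],
          parameters: this[`functionParameters_${i}`],
        },
      });
    }
  }
  // Only include tools in the request when at least one is defined
  return tools.length
    ? tools
    : undefined;
}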


151-154: LGTM!

The changes in the run method look good. The tools property is now included in the data object sent to the OpenAI API by calling the _buildTools method. This ensures that the specified tools, including any defined functions, are included in the API request.

The rest of the run method remains unchanged, maintaining the existing functionality of handling the API response and returning the appropriate data.
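Based on this description (and the ast-grep pattern used in the verification further down), the wiring in run is presumably along these lines; fields other than tools are elided:

// Sketch of the relevant call in run(); other request fields omitted
const response = await this.openai.createChatCompletion({
  $,
  data: {
    // ...existing chat fields such as model and messages...
    tools: this._buildTools(),
  },
});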


60-67: Verify the usage of toolTypes in the codebase.

The addition of the toolTypes property looks good. It allows users to specify the types of tools to enable on the assistant, which is a useful feature. The filtering of options to include only the "function" type is also appropriate.

However, it's important to ensure that the toolTypes property is being utilized correctly throughout the codebase, especially in the additionalProps and run methods.

Run the following script to verify the usage of toolTypes:

✅ Verification successful

toolTypes property is correctly implemented and utilized

The verification shows that toolTypes is properly integrated throughout the component:

  • In additionalProps(): It correctly checks for function tools and dynamically adds the necessary properties for function configuration
  • In _buildTools(): It properly processes the selected tool types, handling both function and non-function tools
  • The implementation correctly supports the OpenAI Chat API's tools parameter structure
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of `toolTypes` in the codebase.

# Test 1: Search for the usage of `toolTypes` in `additionalProps`. Expect: Occurrences of `toolTypes` being checked and used to generate function-related properties.
ast-grep --pattern $'additionalProps() {
  $$$
  if (toolTypes?.includes("function")) {
    $$$
  }
  $$$
}'

# Test 2: Search for the usage of `toolTypes` in `run`. Expect: Occurrences of `toolTypes` being passed to `_buildTools`.
ast-grep --pattern $'run($$$) {
  $$$
  data: {
    $$$
    tools: this._buildTools(),
    $$$
  },
  $$$
}'

Length of output: 219


Script:

#!/bin/bash
# Let's try a different approach using ripgrep to find usages of toolTypes

# Search for toolTypes in the file
rg -A 10 "toolTypes" components/openai/actions/chat/chat.mjs

# Search for _buildTools method implementation
rg -A 10 "_buildTools" components/openai/actions/chat/chat.mjs

# Search for additionalProps method implementation
rg -A 10 "additionalProps" components/openai/actions/chat/chat.mjs

Length of output: 2832

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (6)
components/openai/actions/analyze-image-content/analyze-image-content.mjs (2)

Line range hint 1-124: Note: Changes unrelated to function calling objectives

While the model and version updates are valid, this file's changes don't contribute to the PR's primary objective of integrating OpenAI's function calling capabilities. Consider:

  1. Moving these changes to a separate PR focused on model updates
  2. Adding function calling support to this component if applicable

Model replacement requires verification - gpt-4o is not a documented OpenAI model

The change from gpt-4-vision-preview to gpt-4o appears to be incorrect. Based on the search results:

  1. gpt-4o is not a documented replacement for gpt-4-vision-preview in OpenAI's model lineup
  2. The comment references OpenAI's deprecation docs but the model name doesn't match any known OpenAI model
  3. The codebase shows inconsistent model references - gpt-4o-mini, gpt-4o-audio-preview, but these appear to be custom/incorrect model names

Please verify the correct model name for vision capabilities according to OpenAI's current documentation. The current change could break vision analysis functionality.

🔗 Analysis chain

Line range hint 45-49: Verify the model replacement for vision capabilities

The change from gpt-4-vision-preview to gpt-4o is noted as a deprecation-related update. However, we should verify that gpt-4o supports the same vision analysis capabilities required by this component.

Let's check the component's dependencies and usage:

Please ensure:

  1. The new model gpt-4o is officially documented as the replacement for vision capabilities
  2. All vision-related features (image URL and file processing) are compatible with the new model
  3. The deprecation timeline allows for this change
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the model usage and vision capabilities
# Look for other instances of vision-related models and potential inconsistencies

# Check for any other vision model references
echo "Checking for vision model references..."
rg -l "gpt-4-vision|gpt-4o" --type js --type mjs

# Check for vision-related functionality
echo "Checking for vision-related functionality..."
rg "type.*image_(url|file)" --type js --type mjs -A 2

Length of output: 350


Script:

#!/bin/bash
# Let's modify the search to handle .mjs files correctly and expand our search patterns

# Check for vision model references
echo "Checking for vision model references..."
rg -l "gpt-4-vision|gpt-4o" 

# Check for vision-related functionality and content
echo "Checking for vision-related functionality..."
rg "type.*image|content.*image|vision" -A 2

# Look at the specific file content
echo "Checking the specific file content..."
cat components/openai/actions/analyze-image-content/analyze-image-content.mjs

Length of output: 188144


Script:

#!/bin/bash
# Let's check OpenAI's model references and vision capabilities in the codebase

# Search for gpt-4o model references
echo "Checking for gpt-4o model references..."
rg "gpt-4o"

# Search for vision-related model configurations and imports
echo "Checking for vision model configurations..."
rg -A 3 "model.*vision|purpose.*vision"

# Check for any documentation or comments about model changes
echo "Checking for model deprecation comments..."
rg -i "deprecat.*gpt-4|vision.*preview"

Length of output: 5759

components/openai/actions/create-thread/create-thread.mjs (3)

Line range hint 40-71: Consider refactoring additionalProps for better maintainability.

While the implementation is functionally correct, consider restructuring it to improve readability and maintainability.

Here's a suggested refactor:

  async additionalProps() {
-   const props = {};
-   if (this.runThread) {
-     props.assistantId = {
-       type: "string",
-       label: "Assistant ID",
-       description: "The unique identifier for the assistant.",
-       options: async () => { return this.getAssistantPropOptions(); },
-     };
-     // ... other props
-   }
-   const toolProps = this.toolTypes?.length
-     ? await this.getToolProps()
-     : {};
-   return {
-     ...props,
-     ...toolProps,
-   };
+   const getRunProps = () => this.runThread
+     ? {
+         assistantId: {
+           type: "string",
+           label: "Assistant ID",
+           description: "The unique identifier for the assistant.",
+           options: async () => this.getAssistantPropOptions(),
+         },
+         model: {
+           type: "string",
+           label: "Model",
+           description: "The ID of the model to use for the assistant",
+           options: async () => this.getAssistantModelPropOptions(),
+         },
+         instructions: {
+           type: "string",
+           label: "Instructions",
+           description: "The system instructions that the assistant uses.",
+           optional: true,
+         },
+         waitForCompletion: {
+           type: "boolean",
+           label: "Wait For Completion",
+           description: "Set to `true` to poll the API in 3-second intervals until the run is completed",
+           optional: true,
+         },
+       }
+     : {};
+
+   const getToolProps = async () => this.toolTypes?.length
+     ? await this.getToolProps()
+     : {};
+
+   return {
+     ...await getRunProps(),
+     ...await getToolProps(),
+   };
  }

Line range hint 72-89: Consider making model filtering more maintainable.

The model filtering logic could be made more maintainable by extracting the conditions into constants.

Here's a suggested improvement:

+ const ALLOWED_MODEL_PREFIXES = ['gpt-3.5-turbo', 'gpt-4-turbo'];
+ const EXCLUDED_MODELS = ['gpt-3.5-turbo-0301'];

  async getAssistantModelPropOptions() {
-   const models = (await this.openai.models({})).filter(({ id }) => 
-     (id.includes("gpt-3.5-turbo") || id.includes("gpt-4-turbo")) && 
-     (id !== "gpt-3.5-turbo-0301"));
+   const models = (await this.openai.models({})).filter(({ id }) => 
+     ALLOWED_MODEL_PREFIXES.some(prefix => id.includes(prefix)) && 
+     !EXCLUDED_MODELS.includes(id));
    return models.map(({ id }) => id);
  }

Line range hint 90-134: Fix duplicate metadata and add error handling.

There are several issues that need attention:

  1. The metadata property is duplicated in the createThreadAndRun call
  2. There's no timeout handling for the polling mechanism
  3. Missing validation for required fields when runThread is true

Here's how to fix these issues:

  async run({ $ }) {
+   if (this.runThread && !this.assistantId) {
+     throw new Error('Assistant ID is required when runThread is true');
+   }

    const messages = this.messages?.length
      ? this.messages.map((message) => ({
        role: "user",
        content: message,
      }))
      : undefined;

+   const threadData = {
+     messages,
+     metadata: this.metadata,
+     tool_resources: this.buildToolResources(),
+   };

    let response = !this.runThread
      ? await this.openai.createThread({
        $,
-       data: {
-         messages,
-         metadata: this.metadata,
-         tool_resources: this.buildToolResources(),
-       },
+       data: threadData,
      })
      : await this.openai.createThreadAndRun({
        $,
        data: {
          assistant_id: this.assistantId,
          thread: {
            messages,
            metadata: this.metadata,
          },
          model: this.model,
          instructions: this.instructions,
          tools: this.buildTools(),
          tool_resources: this.buildToolResources(),
-         metadata: this.metadata, // Remove duplicate
        },
      });

    if (this.waitForCompletion) {
      const runId = response.id;
      const threadId = response.thread_id;
-     response = await this.pollRunUntilCompleted(response, threadId, runId, $);
+     try {
+       // Consider enforcing a maximum number of polling attempts inside
+       // pollRunUntilCompleted (e.g. ~100 attempts at 3-second intervals)
+       // so a stuck run eventually fails instead of polling indefinitely.
+       response = await this.pollRunUntilCompleted(response, threadId, runId, $);
+     } catch (error) {
+       throw new Error(`Error while polling run status: ${error.message}`);
+     }
    }

    $.export("$summary", `Successfully created a thread ${this.runThread
      ? "and run"
      : ""} with ID: ${response.id}`);
    return response;
  },
components/openai/actions/common/common-assistants.mjs (1)

87-92: Fix grammatical error in 'Function Parameters' description

Change "The parameters the functions accepts" to "The parameters the function accepts" to correct the grammatical error.

Apply this diff:

props[`functionParameters_${i}`] = {
  type: "object",
  label: `Function Parameters ${i + 1}`,
- description: "The parameters the functions accepts, described as a JSON Schema object. See the [guide](https://platform.openai.com/docs/guides/text-generation/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.",
+ description: "The parameters the function accepts, described as a JSON Schema object. See the [guide](https://platform.openai.com/docs/guides/text-generation/function-calling) for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.",
  optional: true,
};
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 65d3179 and 3e6d9b5.

📒 Files selected for processing (8)
  • components/openai/actions/analyze-image-content/analyze-image-content.mjs (1 hunks)
  • components/openai/actions/chat-with-assistant/chat-with-assistant.mjs (1 hunks)
  • components/openai/actions/chat/chat.mjs (3 hunks)
  • components/openai/actions/common/common-assistants.mjs (3 hunks)
  • components/openai/actions/create-assistant/create-assistant.mjs (1 hunks)
  • components/openai/actions/create-thread/create-thread.mjs (1 hunks)
  • components/openai/actions/modify-assistant/modify-assistant.mjs (1 hunks)
  • components/openai/package.json (1 hunks)
✅ Files skipped from review due to trivial changes (4)
  • components/openai/actions/chat-with-assistant/chat-with-assistant.mjs
  • components/openai/actions/create-assistant/create-assistant.mjs
  • components/openai/actions/modify-assistant/modify-assistant.mjs
  • components/openai/package.json
🔇 Additional comments (8)
components/openai/actions/analyze-image-content/analyze-image-content.mjs (1)

11-11: LGTM: Version bump is appropriate

The version increment from 0.1.1 to 0.1.2 follows semantic versioning and correctly reflects the minor update to the model specification.

components/openai/actions/create-thread/create-thread.mjs (2)

9-9: LGTM: Version increment is appropriate.

The version bump from 0.0.10 to 0.0.11 aligns with the addition of new functionality.


Line range hint 26-39: LGTM: Props are well-structured and documented.

The new properties runThread and toolTypes are:

  • Well-documented with clear labels and descriptions
  • Properly configured with reloadProps for dynamic property loading (see the sketch after this list)
  • Correctly implemented with appropriate types and options
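As a rough illustration of the pattern (the label and description wording is assumed, not copied from the diff), a boolean prop with reloadProps causes Pipedream to call additionalProps again when the value changes, which is what lets the run-related props appear only when needed:

// Hypothetical sketch of the runThread prop; reloadProps: true makes the
// component re-evaluate additionalProps() after the user toggles it, so
// assistantId, model, instructions, etc. can be added conditionally.
runThread: {
  type: "boolean",
  label: "Run Thread",
  description: "Set to `true` to create and immediately run the thread.",
  optional: true,
  reloadProps: true,
},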
components/openai/actions/chat/chat.mjs (5)

9-9: LGTM!

The version number has been incremented correctly.


118-140: LGTM!

The _buildTools method looks good. It correctly constructs the tools array based on the specified toolTypes and the number of functions.


151-154: LGTM!

The run method has been updated to include the tools generated by the _buildTools method in the data sent to the OpenAI API. This looks good.


70-117: Ensure the generated function properties are being used correctly.

The additionalProps method has been updated to dynamically generate properties for each function based on the numberOfFunctions value. This looks good.

To verify that the generated function properties are being used correctly, please run the following script:

#!/bin/bash 
# Description: Verify the usage of the generated function properties.

# Test: Check if the generated function properties are being used in the `_buildTools` method.
# Expect: The `functionName_*`, `functionDescription_*`, and `functionParameters_*` properties are being used to construct the function objects in the tools array.
ast-grep --pattern $'_buildTools() {
  $$$
  for (let i = 0; i < numberOfFunctions; i++) {
    $$$
    tools.push({
      $$$
      function: {
        name: this[`functionName_\${i}`],
        description: this[`functionDescription_\${i}`],
        parameters: this[`functionParameters_\${i}`],
      },
      $$$
    })
    $$$
  }
  $$$
}'

60-67: Verify the usage of the new toolTypes property.

The addition of the toolTypes property looks good. It allows users to specify the types of tools to enable on the assistant, with the current implementation supporting only the "function" tool type.

To ensure the toolTypes property is being used correctly, please run the following verification script:

✅ Verification successful

The toolTypes property is correctly implemented and used

The verification shows that:

  1. The toolTypes property is properly defined with correct type, label, and filtering to only allow "function" type
  2. The _buildTools() method correctly uses toolTypes to:
    • Filter and map non-function tool types
    • Handle function tools when "function" type is included
  3. The tools are properly passed to the OpenAI API via this._buildTools() in the chat implementation
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of the `toolTypes` property.

# Test 1: Check if the `toolTypes` property is being used in the `_buildTools` method.
# Expect: The `toolTypes` property is being used to filter and construct the tools array.
ast-grep --pattern $'_buildTools() {
  $$$
  this.toolTypes
  $$$
}'

# Test 2: Check if the `toolTypes` property is being passed to the OpenAI API in the `run` method.
# Expect: The `tools` property in the API call should be set to the result of `this._buildTools()`.
ast-grep --pattern $'async run({ $ }) {
  $$$
  this.openai.createChatCompletion({
    $,
    data: {
      $$$
      tools: this._buildTools(),
      $$$
    },
  })
  $$$
}'

Length of output: 522


Script:

#!/bin/bash
# Let's try a different approach to verify the usage of toolTypes

# Check for the implementation of _buildTools method
rg "_buildTools" -A 10

# Check for the usage of toolTypes in the file
rg "toolTypes" -A 5

# Check how tools are being used in the run method
rg "createChatCompletion.*tools" -A 5

Length of output: 10699

Collaborator

@michelle0927 michelle0927 left a comment

LGTM!


Successfully merging this pull request may close these issues.

OpenAI Function Calling in Chat completion API