
exp: Research difference metrics to measure performance for LLM code-to-documentation generation. #30

@anujsinha3

Description


To date, the AutoDoc code-to-documentation LLM has very few code examples available for testing the model's performance.

This task involves researching and comparing candidate metrics that can be used to quantify the performance of the code-to-documentation LLM.

A few example metrics currently in use are collected here:
https://uwnetid.sharepoint.com/sites/og_ssec_escience/_layouts/15/Doc.aspx?sourcedoc={80d21348-b124-4e69-a17d-542a8cfca496}&action=edit&wd=target%28Untitled%20Section.one%7C4a55f2ee-6dcd-42d4-aaac-82c7bcc49788%2FLiterature%20Survey%7C18d2f5b8-af8b-475d-b4ca-f8731dda9d11%2F%29&wdorigin=NavigationUrl
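As a hedged illustration of what evaluating one candidate metric might look like (the metrics, docstrings, and libraries below are assumptions for this sketch, not taken from the linked survey), the snippet scores a generated docstring against a reference docstring with BLEU and ROUGE-L, two metrics commonly used for text-generation evaluation:

```python
# Minimal sketch: score one model-generated docstring against a
# human-written reference. The docstring pair below is hypothetical;
# a real evaluation would aggregate scores over the training pairs
# provided by the AutoRA team.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "Return the mean of the input array along the given axis."
generated = "Compute the average of an array over a specified axis."

# BLEU operates on token lists; smoothing avoids zero scores on short
# strings that share no higher-order n-grams with the reference.
bleu = sentence_bleu(
    [reference.split()],
    generated.split(),
    smoothing_function=SmoothingFunction().method1,
)

# ROUGE-L measures longest-common-subsequence overlap, which is less
# sensitive to exact phrasing than n-gram precision.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(reference, generated)["rougeL"].fmeasure

print(f"BLEU:    {bleu:.3f}")
print(f"ROUGE-L: {rouge_l:.3f}")
```

Comparing how such scores correlate with human judgments of documentation quality would be one way to decide between the candidate metrics.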

Completion of this task depends on the AutoRA team providing the training data.
