
Conversation

@shyamnamboodiripad (Contributor) commented Apr 17, 2025

This allows evaluators to record the contextual information that was used in an evaluation, which can then be displayed in the evaluation report as depicted in the screenshot below. This PR also updates all evaluators that rely on contextual information and that ship as part of the Quality and Safety packages (i.e., GroundednessEvaluator, EquivalenceEvaluator, GroundednessProEvaluator, and UngroundedAttributesEvaluator) to include that contextual information as part of the EvaluationMetrics they produce.
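To illustrate the idea, here is a hypothetical, self-contained sketch; the type and member names below (EvaluationMetricSketch, Context, GroundednessSketch) are simplified stand-ins and may not match the actual Microsoft.Extensions.AI.Evaluation API surface:

```csharp
// Hypothetical sketch: an evaluation metric carries the contextual
// information that was used to produce it, so an evaluation report can
// display that context next to the score. Names are illustrative only.
using System.Collections.Generic;

public sealed class EvaluationMetricSketch
{
    public string Name { get; }
    public double Value { get; }

    // Context entries recorded by the evaluator (e.g. grounding passages).
    public IList<string> Context { get; } = new List<string>();

    public EvaluationMetricSketch(string name, double value) =>
        (Name, Value) = (name, value);
}

public static class GroundednessSketch
{
    public static EvaluationMetricSketch Evaluate(
        string response, string groundingContext)
    {
        var metric = new EvaluationMetricSketch("Groundedness", 4.0);

        // Record the context that informed the evaluation so it can be
        // surfaced in the report alongside the metric itself.
        metric.Context.Add(groundingContext);
        return metric;
    }
}
```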

[Screenshot: evaluation report displaying the contextual information recorded by the evaluators]

Also includes some cleanup for extension methods as part of public API stabilization.

  • Removes the extension methods for adding a single diagnostic in favor of overloads that take a `params` array, which can be called in the same way.
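The cleanup above relies on a standard C# pattern: a single overload taking a `params` array accepts both one argument and many, so a dedicated single-item extension method is redundant. A minimal sketch (types simplified, not the library's actual API):

```csharp
// Illustrative sketch: one params overload covers both the
// single-diagnostic and multi-diagnostic call shapes.
using System.Collections.Generic;

public sealed class MetricSketch
{
    public List<string> Diagnostics { get; } = new();

    // Callers can write AddDiagnostics("one") or
    // AddDiagnostics("first", "second") with this single overload.
    public void AddDiagnostics(params string[] diagnostics) =>
        Diagnostics.AddRange(diagnostics);
}
```

Because `params` allows a bare single argument at the call site, removing the single-item extension method is source-compatible for existing callers.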

Fixes #6033

@shyamnamboodiripad requested a review from a team as a code owner April 17, 2025 07:43
@github-actions github-actions bot added the area-ai-eval Microsoft.Extensions.AI.Evaluation and related label Apr 17, 2025
@peterwald (Member) left a comment:
:shipit:

@shyamnamboodiripad merged commit 6abae3c into dotnet:main Apr 17, 2025
6 checks passed
@shyamnamboodiripad deleted the context branch April 17, 2025 18:30
@github-actions github-actions bot locked and limited conversation to collaborators May 18, 2025
Successfully merging this pull request may close these issues.

[AI Evaluation] Add EvaluationMetric property for contexts used in evaluation