Evaluations

Why Are Evaluations Important?

Evaluating large language models is intrinsically difficult because of the subjective nature of responses to open-ended requests.
Using LLMs as evaluators is known to suffer from issues such as verbosity bias and self-enhancement bias (favoring answers they generated themselves). Many prompts require domain expertise to evaluate accurately. Finally, evaluations are only as good as their evaluation sets: diversity and comprehensiveness are key.

However, getting Evaluations right is essential for enterprises. Companies need a way to:

  • Reliably test their custom models for regressions and compare different model versions
  • Identify new weak spots / blind spots in their models
  • Test and verify a model is ready before it is deployed

Running Evaluations using the SGP Python SDK

To perform an evaluation, the user begins by setting up the foundational elements: the dataset, the Studio project, and the application spec registration. Subsequently, the evaluation tasks, including data generation, annotation, monitoring, analysis, and visualization, are carried out iteratively for each evaluation.

One-Time Setup

  • Dataset Creation: To initiate the evaluation process, the user creates a dataset containing various test cases. These test cases serve as input data for the generative AI project and provide diverse scenarios for evaluation.
  • Studio Project Setup: In preparation for human annotation, the user establishes a Studio project. This project acts as the platform where human annotators assess the AI-generated responses. Setting up the Studio project is a crucial one-time step to enable evaluations involving human input.
  • Application Spec Registration: The user registers some metadata about their generative AI project, providing necessary details such as its name, description, and version. This registration associates the project with the evaluation process, ensuring that its performance can be systematically assessed.

Recurring Evaluation Tasks

  • Data Generation and Annotation: For each test case within the dataset, the generative AI project generates responses. Simultaneously, metadata and additional information chunks are collected. This process is repeated for each test case, ensuring a comprehensive evaluation of the project's capabilities across various inputs.
  • Evaluation Monitoring: The system continuously monitors the progress of ongoing evaluations. It checks the status of each evaluation, ensuring that the assessments are being conducted as expected. This monitoring step is repeated for every evaluation initiated by the user.
  • Analysis and Visualization: Post-evaluation, the user retrieves the results and conducts an in-depth analysis. Performance metrics are processed and organized, allowing for comparisons across different evaluations or specific test cases. This analysis provides valuable insights into the project's strengths and areas for improvement, contributing to informed decision-making.

The SGP Python SDK provides a simple yet powerful set of tools to create these evaluations.

End to End Evaluation Example Workflow

Basic Setup

First, however, we need to run a small amount of boilerplate:

import json
import os
from datetime import datetime
from typing import List, Union

import dotenv
import questionary as q

from scale_egp.sdk.client import EGPClient
from scale_egp.sdk.types.evaluation_test_case_results import GenerationTestCaseResultData
from scale_egp.sdk.types.evaluation_configs import (
    CategoricalChoice, CategoricalQuestion,
    StudioEvaluationConfig
)
from scale_egp.sdk.enums import TestCaseSchemaType, EvaluationType, ExtraInfoSchema
from scale_egp.utils.model_utils import BaseModel

ENV_FILE = ".env.local"
dotenv.load_dotenv(ENV_FILE, override=True)

DATASET_ID = None
APP_SPEC_ID = None
STUDIO_PROJECT_ID = None


def timestamp():
    return datetime.now().strftime('%Y-%m-%d %H:%M:%S')


def dump_model(model: Union[BaseModel, List[BaseModel]]):
    if isinstance(model, list):
        return json.dumps([m.dict() for m in model], indent=2, sort_keys=True, default=str)
    return json.dumps(model.dict(), indent=2, sort_keys=True, default=str)


# The snippets below assume an instantiated client and a timestamp string.
# The API key is read from the environment loaded from .env.local above
# (the exact variable name may differ in your setup).
client = EGPClient(api_key=os.environ.get("EGP_API_KEY"))
current_timestamp = timestamp()
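
The later snippets also call into a gen_ai_app object, which stands in for your own generative AI application and is not part of the SDK. A minimal, hypothetical stand-in exposing the attributes used in this guide (description, tags(), and generate()) might look like the following; replace it with your real application:

class MyGenAIApp:
    """Hypothetical stand-in for the user's generative AI application."""

    description = "Question answering over internal documents"

    def tags(self):
        # Arbitrary metadata describing the current application state/version,
        # attached to evaluations so results can be traced back to this state.
        return {"model": "my-model", "version": "0.1.0"}

    def generate(self, input: str):
        # Produce an answer for a single test-case input. The second return value
        # carries whatever supporting context the application used; its exact shape
        # should match what GenerationTestCaseResultData expects in your setup.
        output = f"Placeholder answer for: {input}"
        extra_info = "retrieved context goes here"
        return output, extra_info


gen_ai_app = MyGenAIApp()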

Evaluation Dataset

An evaluation dataset is a list of test cases that users want to benchmark their project’s performance on.
A user creates an evaluation dataset by uploading a data file (for example CSV or JSONL) and specifying the schema of their dataset.
The dataset schema is specified at creation time. Initially, the only supported schema consists of a required input field and optional expected_output and expected_extra_info fields, where input is the end-user prompt, expected_output is the expected output of the AI project, and expected_extra_info is any additional information that the AI project should have used to generate the output. This schema definition is designated the GENERATION schema.
Because the columns are not flattened in the underlying Postgres database, this design allows for flexible schema definitions, and additional schema types can be supported in the future.
Defining the schema also tells users how to read the dataset and what to expect in each row, making it easy to pull datasets they did not create and use them in standardized processing scripts.
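
For illustration, a single row of a GENERATION-schema dataset file (such as the data/golden_dataset.jsonl used below) might look like the hypothetical example that follows; the exact shape of expected_extra_info depends on how your application represents supporting context:

{"input": "What is the warranty period for product X?", "expected_output": "The warranty period is two years.", "expected_extra_info": "Warranty policy: all products in line X carry a two-year limited warranty."}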

evaluation_dataset_name = f"My AI Application Dataset {current_timestamp}"
if DATASET_ID:
    evaluation_dataset_id = DATASET_ID
else:
    evaluation_dataset_id = q.text(
        f"ID of existing dataset (Leave blank to create a new one with name "
        f"'{evaluation_dataset_name}'):"
    ).ask()
if evaluation_dataset_id:
    evaluation_dataset = client.evaluation_datasets().get(id=evaluation_dataset_id)
else:
    evaluation_dataset = client.evaluation_datasets().create_from_file(
        name=evaluation_dataset_name,
        schema_type=TestCaseSchemaType.GENERATION,
        filepath=os.path.join(os.path.dirname(__file__), "data/golden_dataset.jsonl"),
    )
    print(
        f"Created evaluation dataset:\n{dump_model(evaluation_dataset)}"
    )

Application

This is simply a metadata entry that describes an end-user application. It is useful as a grouping mechanism that lets users relate multiple evaluations to a single user application. It also allows the user to specify the detailed state and components of the current application, e.g. which retrieval components, if any, were used in conjunction with the LLM.

application_spec_name = f"My AI Application Spec {current_timestamp}"
if APP_SPEC_ID:
    application_spec_id = APP_SPEC_ID
else:
    application_spec_id = q.text(
        f"ID of existing application spec (Leave blank to create a new one with name "
        f"'{application_spec_name}'):"
    ).ask()
if application_spec_id:
    application_spec = client.application_specs().get(id=application_spec_id)
else:
    application_spec = client.application_specs().create(
        name=application_spec_name,
        description=gen_ai_app.description
    )
    print(f"Created application spec:\n{dump_model(application_spec)}")

Annotation Project

Registers an annotation project using Scale Studio. This is only used if Scale Studio is the platform used for annotations. Learn more about Scale Studio.

Sometimes evaluations require that external resources be created. Because these platforms do not need to share any properties, it makes the most sense to regard them as entirely separate components.
For example, since the primary annotation platform we will use is Studio, we will allow users to create Studio projects using SGP APIs and store references to these projects in our data tables.
This way, once an admin user creates a Studio project and adds taskers to it in the Studio UI, developers can create evaluations to send evaluation tasks to centralized projects.

studio_project_name = f"{current_timestamp}"
if STUDIO_PROJECT_ID:
    studio_project_id = STUDIO_PROJECT_ID
else:
    studio_project_id = q.text(
        f"ID of existing studio project (Leave blank to create a new one with name "
        f"'{studio_project_name}'):"
    ).ask()
if studio_project_id:
    studio_project = client.studio_projects().get(id=studio_project_id)
else:
    studio_project = client.studio_projects().create(
        name=studio_project_name,
        description="Annotation project for the project",
        studio_api_key=os.environ.get("STUDIO_API_KEY"),
    )
    studio_project_id = studio_project.id
    print(f"Created studio project:\n{dump_model(studio_project)}")

Evaluation

This refers to the action of sending off a batch of tasks to evaluate a specific iteration of a user application.
It contains references to the application’s current state, the id of the application spec this evaluation is for, the status of the evaluation, and any configuration needed for the annotation mechanism (e.g. Studio task project id and questions, Auto LLM model name and questions, client-side function name, etc.).
Each evaluation references a dataset. The customer application iterates through each row of the dataset, reads its input value, and generates an output for it.

evaluation = client.evaluations().create(
    application_spec=application_spec,
    name=f"Regression Test - {current_timestamp}",
    description="Evaluation of the project against the regression test dataset",
    tags=gen_ai_app.tags(),

    evaluation_config=StudioEvaluationConfig(
        evaluation_type=EvaluationType.STUDIO,
        studio_project_id=studio_project.id,
        questions=[
            CategoricalQuestion(
                question_id="based_on_content",
                title="Was the answer based on the content provided?",
                choices=[
                    CategoricalChoice(label="No", value=0),
                    CategoricalChoice(label="Yes", value=1),
                ],
            ),
            CategoricalQuestion(
                question_id="accurate",
                title="Was the answer accurate?",
                choices=[
                    CategoricalChoice(label="No", value=0),
                    CategoricalChoice(label="Yes", value=1),
                ],
            ),
        ]
    )
)
print(f"Created evaluation:\n{dump_model(evaluation)}")

Get Test Case Results

Each evaluation consists of test case results, which store the outcome of every evaluated test case.
By looking at a single test case result, you can see which test case the result is for, the dataset that test case belonged to, and which evaluation the result was a part of. Because the application state can be defined in the tags of the evaluation definition, the user can also refer to the state of the application that was used to generate the outputs for this result.
Because the test-case information is stored on a per-result basis, the history of all annotation results for a given dataset row item can be constructed.

print(f"Submitting test case results for evaluation dataset:\n{evaluation_dataset.name}")
test_case_results = []
for test_case in client.evaluation_datasets().test_cases().iter(
    evaluation_dataset=evaluation_dataset
):
    output, extra_info = gen_ai_app.generate(input=test_case.test_case_data.input)
    test_case_result = client.evaluations().test_case_results().create(
        evaluation=evaluation,
        evaluation_dataset=evaluation_dataset,
        test_case=test_case,
        test_case_evaluation_data=GenerationTestCaseResultData(
            generation_output=output,
            generation_extra_info=extra_info,
        ),
    )
    test_case_results.append(test_case_result)

print(f"Created {len(test_case_results)} test case results:\n{dump_model(test_case_results)}")