This repository was archived by the owner on Mar 21, 2024. It is now read-only.

Documentation for the end-to-end workflow of evaluating a segmentation model #540

@ant0nsc

Description


If anybody wants to evaluate a pre-trained segmentation model, what steps do they need to follow? Ensure that this workflow is documented and has been tried out end-to-end. Steps to be taken:

  • Move a pre-trained model to the workspace (we have a script for that)
  • If people are starting with a dataset in DICOM format, show how to use the createDataset tools to convert it to NIfTI. They need to ensure that the DICOM files are re-scaled to the voxel spacing that the model expects (a conversion sketch follows this list).
  • Upload the converted dataset to blob storage, into the storage account that holds the datasets.
  • Run the InnerEye tools in inference mode on that dataset: This will be based on Run inference using checkpoints from registered models #509, where we can run inference off the checkpoints in a registered model. The invocation would look like runner.py --model Prostate --model_id=Prostate:123 --no-train --azure_dataset_id=new_dataset --allow_incomplete_labels
  • Look at reports
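
For the DICOM-to-NIfTI step, here is a minimal sketch of what the conversion needs to do, assuming SimpleITK; the target spacing, paths, and function name are illustrative placeholders, not what the createDataset tools actually implement:

```python
import SimpleITK as sitk

def dicom_series_to_nifti(dicom_dir: str, output_file: str,
                          target_spacing=(1.0, 1.0, 3.0)) -> None:
    """Read a DICOM series and write it as NIfTI, resampled to target_spacing."""
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    image = reader.Execute()
    # Choose the output size so that the physical extent of the image is preserved.
    new_size = [int(round(size * spacing / target))
                for size, spacing, target
                in zip(image.GetSize(), image.GetSpacing(), target_spacing)]
    resampled = sitk.Resample(image, new_size, sitk.Transform(), sitk.sitkLinear,
                              image.GetOrigin(), target_spacing, image.GetDirection(),
                              0, image.GetPixelID())
    sitk.WriteImage(resampled, output_file)

# Example: convert one series to the spacing the model was trained on.
dicom_series_to_nifti("scans/patient_01", "dataset/patient_01_ct.nii.gz")
```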

Alternative solution:

  • We have the submit_for_inference script that can take a DICOM .zip file and run a model on it. We could suggest a shell-script way of looping over a set of folders, submitting a job for each of them (see the loop sketch after this list).
  • Once the resulting DICOM-RT files have been downloaded, users will have to run their own tools to compare them against the ground truth segmentation (a minimal Dice sketch follows the loop sketch).
  • This would only work if the DICOM series all have the same voxel spacing that the model expects!
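
For the looping, a minimal sketch, assuming one zipped DICOM series per file under a scans/ folder; the script path and argument names are assumptions and should be checked against the submit_for_inference script's actual command line:

```python
import subprocess
from pathlib import Path

# Hypothetical layout: one zipped DICOM series per file under scans/.
for zip_file in sorted(Path("scans").glob("*.zip")):
    # Submit one inference job per series; the script path and flag names
    # are assumed here, verify against submit_for_inference's --help output.
    subprocess.run(["python", "InnerEye/Scripts/submit_for_inference.py",
                    "--image_file", str(zip_file),
                    "--model_id", "Prostate:123"],
                   check=True)
```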
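
For the comparison against ground truth, users could start from something like the following Dice computation, assuming both segmentations have already been converted to binary masks on the same voxel grid (converting the downloaded DICOM-RT files to such masks is the part users need their own tooling for):

```python
import numpy as np
import SimpleITK as sitk

def dice(prediction_file: str, ground_truth_file: str) -> float:
    """Dice overlap of two binary masks; assumes both are on identical grids."""
    pred = sitk.GetArrayFromImage(sitk.ReadImage(prediction_file)) > 0
    truth = sitk.GetArrayFromImage(sitk.ReadImage(ground_truth_file)) > 0
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

# Hypothetical file names for one subject.
print(dice("downloaded/patient_01_prostate.nii.gz",
           "ground_truth/patient_01_prostate.nii.gz"))
```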

AB#4253
