Challenge Dataset Annotation Protocol


The dataset annotation protocol can be described in three stages:

  1. Annotation definition
  2. Labeling and mediation
  3. Label formatting


1. Annotation definition

This stage was carried out by expert endocrine surgeons under the CONDOR project. An annotation dictionary of cholecystectomy ontology was developed, describing the following:

  • procedures: videos of laparoscopic cholecystectomy.
  • phases: list of standard names for the surgical phases, a description of each phase including the markers of its beginning and end points, indicators for phase transitions, and conditions for reporting exceptions.
  • stages: list and description of the situations within each phase.
  • actors: indicators for the operators of each instrument.
  • effectors: instrument names, visibility flags, unified names for varying designs of the same instrument, a list of the instruments possible in each phase/stage, and conditions for reporting exceptions.
  • verbs: names for the surgical actions, common names for similar actions, determinants for identifying each action, conditions for annotating the continuity of an action, a list of the actions possible in each phase/stage, and conditions for reporting exceptions.
  • recipients: list of target elements (mostly tissues and foreign bodies), their surgical names and characteristics, differentiating factors for similar targets, conditions for annotating a subtly involved target, a list of the targets possible in each phase/stage, conditions for reporting exceptions, and the possible actors on each target.
  • triplets: the combinations of instrument, verb, and target; the surgical definition of each triplet, their possible phases/stages, starting and ending points, conditions for reporting exceptions, and visibility flags.
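The dictionary entries above can be pictured as structured records. The following is a minimal sketch, assuming hypothetical field and class names (e.g. "grasper", "retract", "gallbladder" are illustrative; the actual ontology is the one fixed by the CONDOR surgeons):

```python
from dataclasses import dataclass, field

@dataclass
class TripletEntry:
    """One illustrative entry of the annotation dictionary."""
    instrument: str                       # effector (unified name)
    verb: str                             # surgical action
    target: str                           # recipient (tissue or foreign body)
    allowed_phases: list = field(default_factory=list)  # phases/stages where the triplet is possible
    visible: bool = True                  # visibility flag

entry = TripletEntry("grasper", "retract", "gallbladder",
                     allowed_phases=["calot-triangle-dissection"])
print(entry.instrument, entry.verb, entry.target)
```

The triplet itself is just the ⟨instrument, verb, target⟩ combination; the remaining fields carry the dictionary's side conditions.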

The surgeons reviewed the annotation dictionary over several successive meetings until a consensus was reached.


2. Labeling and Mediation

The cholecystectomy recordings were annotated by two surgeons using the Surgery Workflow Toolbox-Annotate software from the B-com institute. Annotators set the beginning and end of each identified action on a timeline, then assigned the corresponding instrument, verb, and target class labels. An action ends when the corresponding instrument exits the frame or when the verb or target changes.
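The end-of-action rule can be sketched as a small segmentation routine. This is not the annotation software's logic, only a hedged illustration in which each timestep is an assumed (instrument_visible, verb, target) record:

```python
def segment_actions(frames):
    """Group per-timestep records into action spans.

    A running span is closed when the instrument leaves the frame
    or the verb/target changes, mirroring the rule stated above.
    frames: list of (instrument_visible, verb, target) tuples.
    Returns (start, end, verb, target) spans with inclusive indices.
    """
    spans, start, current = [], None, None
    for t, (visible, verb, target) in enumerate(frames):
        state = (verb, target) if visible else None
        if state != current:
            if current is not None:
                spans.append((start, t - 1, *current))  # close previous span
            start, current = (t, state) if state else (None, None)
    if current is not None:
        spans.append((start, len(frames) - 1, *current))  # close trailing span
    return spans

# Toy timeline: three frames of retracting the gallbladder, then the
# instrument exits the frame for two frames.
frames = [(True, "retract", "gallbladder")] * 3 + [(False, None, None)] * 2
print(segment_actions(frames))   # [(0, 2, 'retract', 'gallbladder')]
```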

Mediation was carried out by introducing an additional medical doctor, who led label validation and mediation. To obtain a reasonable number of classes with maximum clinical utility, a team of clinical experts selected the most relevant labels for the triplet dataset. This was achieved in two steps: (a) surgical relevance rating of triplet compositions based on their possibility and usefulness in cholecystectomy; the average scores, together with each triplet's number of occurrences, were used to rank the triplet classes, after which the most relevant classes were selected; (b) grouping of semantically equivalent triplets into super-class triplets. This led to the 100 triplet classes used in the dataset.
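Step (a) amounts to ranking candidates by mean expert rating and occurrence count, then keeping the top classes. A minimal sketch, with invented ratings and counts (the real scores and the exact tie-breaking scheme are not specified here):

```python
# Hypothetical candidates: triplet -> (expert ratings, occurrence count).
candidates = {
    ("grasper", "retract", "gallbladder"): ([5, 4, 5], 1200),
    ("hook", "dissect", "cystic-duct"):    ([4, 4, 5],  300),
    ("scissors", "cut", "liver"):          ([1, 2, 1],    3),
}

def rank_key(item):
    ratings, count = item[1]
    avg = sum(ratings) / len(ratings)
    return (avg, count)          # mean relevance first, frequency as tie-breaker

ranked = sorted(candidates.items(), key=rank_key, reverse=True)
top_k = [triplet for triplet, _ in ranked[:2]]   # keep the top-k classes
print(top_k)
```

Step (b) would then merge any of the surviving classes that are semantically the same into a single super-class before fixing the final 100-class label set.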


3. Label formatting

Computer engineers down-sampled the video annotations to 1 fps and generated a single .txt file for each video, with rows corresponding to frames and columns corresponding to categories. Annotations are in the form of binary presence labels for the triplets and their components. Test cases for CholecTriplet2022 were annotated with bounding-box spatial labels using the Mosaic web annotation tool of IHU Strasbourg by PhD students with no conflict of interest with the challenge participants. The region boundaries cover the tool tips for each triplet. Out-of-frame actions are not reported, and video frames recorded outside the patient's body are zeroed out. A label mapping from text names to integer IDs was also created.
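A consumer of these files would read one row of binary flags per 1-fps frame. The following sketch assumes a comma-separated layout; the actual delimiter and column order are defined by the released files, not here:

```python
def load_labels(lines):
    """Parse rows of 0/1 flags into a {frame_id: presence_vector} dict.

    Each line is one frame; each column is one class (triplet or
    component), encoded as a binary presence label.
    """
    labels = {}
    for frame_id, line in enumerate(lines):
        labels[frame_id] = [int(v) for v in line.strip().split(",")]
    return labels

rows = ["0,1,0", "0,0,0", "1,0,1"]   # toy 3-frame, 3-class example
labels = load_labels(rows)
# Frames recorded outside the patient's body are zeroed out, as in frame 1 here.
print(labels[1])   # [0, 0, 0]
```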

Only the training videos are released to the participants for the challenge. Evaluation will be done by the organizers on the test set.