Task

Surgical action triplet detection
Challenge Objectives
To detect surgical activities as triplets of {instrument, verb, target},
where:
  1. instrument is the tool used to perform the action
  2. verb is the action performed
  3. target is the underlying anatomy/object acted on
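As a sketch, a triplet label for one frame can be encoded as a binary vector over all (instrument, verb, target) combinations. The vocabularies below are made-up placeholders for illustration, not the actual CholecT50 class lists:

```python
# Hypothetical component vocabularies (NOT the real CholecT50 classes).
INSTRUMENTS = ["grasper", "hook", "scissors"]
VERBS = ["grasp", "dissect", "cut"]
TARGETS = ["gallbladder", "cystic_duct", "liver"]

# Every (instrument, verb, target) combination gets one triplet class index.
TRIPLETS = [(i, v, t) for i in INSTRUMENTS for v in VERBS for t in TARGETS]

def encode_triplets(active):
    """Return a binary label vector with 1 at each active triplet's index."""
    labels = [0] * len(TRIPLETS)
    for triplet in active:
        labels[TRIPLETS.index(triplet)] = 1
    return labels

# One frame in which the hook dissects the gallbladder.
frame_labels = encode_triplets([("hook", "dissect", "gallbladder")])
```

In the real dataset only a subset of combinations are valid triplet classes, but the per-frame label remains a binary vector of this kind.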




Challenge Tasks
The challenge task is divided into three (3) sub-tasks:
  1. recognize all action triplets in every frame of a video
  2. localize all used instruments with bounding boxes
  3. associate each localized bounding box with its corresponding action triplet
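To make the relationship between the three sub-tasks concrete, a per-frame result could be organized as below. The field names and schema are illustrative assumptions only; the official submission format is defined by the organizers:

```python
# Hypothetical per-frame output tying the three sub-tasks together.
frame_output = {
    # sub-task 1: one confidence score per triplet class (3 classes shown)
    "triplet_scores": [0.02, 0.91, 0.10],
    # sub-tasks 2 and 3: localized instruments, each linked to a triplet
    "instances": [
        {
            "box": [0.42, 0.31, 0.18, 0.12],  # normalized [x, y, w, h]
            "instrument_id": 1,               # localized instrument class
            "triplet_id": 1,                  # associated triplet class
            "score": 0.88,                    # detection confidence
        }
    ],
}
```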

Challenge Method
New machine learning models, or customizations of existing/state-of-the-art models.
Challenge Data
CholecT50: an endoscopic video dataset that has been annotated with action triplet labels.
Method Supervision Labels
  1. The challenge dataset contains binary triplet labels for full supervision of triplet recognition.
  2. Bounding box labels are not provided for localization training; in this regard, the challenge focuses on weak supervision.
N.B.: A sample of bounding box and box-triplet association labels will be provided during the validation phase.
Classification, bounding box, and box-triplet association labels of 5 videos will be used for method testing.
Method Evaluation
Submitted methods will be tested on three criteria:
  1. Classification AP for action triplet recognition
  2. Localization AP for surgical instrument localization
  3. Detection AP for box-triplet association
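All three criteria are average-precision (AP) metrics. As a minimal pure-Python sketch (not the official challenge evaluation code), per-class AP can be computed by ranking frames by score and averaging precision at each true positive, then mean-averaging over classes:

```python
def average_precision(y_true, y_score):
    """AP for one class: precision averaged at each positive, ranked by score."""
    positives = sum(y_true)
    if positives == 0:
        return 0.0  # no ground-truth positives for this class
    # Rank sample indices by descending predicted score.
    order = sorted(range(len(y_score)), key=lambda i: -y_score[i])
    hits, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i]:
            hits += 1
            ap += hits / rank  # precision at this recall point
    return ap / positives

def mean_ap(Y_true, Y_score):
    """Mean AP over classes; Y_true and Y_score are [class][sample] lists."""
    aps = [average_precision(t, s) for t, s in zip(Y_true, Y_score)]
    return sum(aps) / len(aps)
```

Localization and detection AP additionally require an IoU-based matching step between predicted and ground-truth boxes before a prediction counts as a true positive; that matching logic is omitted here.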

Valid Submission
Submissions must be made as Docker containers. The three sub-tasks are treated as a single challenge: one docker must produce the classification, localization, and box-triplet association outputs for every frame. The docker may contain a single model or several linked models, provided all three outputs are produced in one docker run.




We provide a Colab notebook and a GitHub repository with sample code to help you get started.
Still have questions? Check our FAQ or email us at cholectriplet2021-support@icube.unistra.fr