Task
Surgical action triplet detection
Challenge Objectives

To detect surgical activities as triplets of {instrument, verb, target}, where:

- instrument is the tool used to perform the action
- verb is the action performed
- target is the underlying anatomy/object acted on
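As a minimal illustration of this structure (the label names used here are only examples, not necessarily drawn from the official class list), a triplet can be modeled as a small typed record:

```python
from typing import NamedTuple

class Triplet(NamedTuple):
    """One surgical action triplet: <instrument, verb, target>."""
    instrument: str  # the tool used to perform the action
    verb: str        # the action performed
    target: str      # the underlying anatomy/object acted on

# Illustrative example values
t = Triplet(instrument="grasper", verb="retract", target="gallbladder")
print(t)  # -> Triplet(instrument='grasper', verb='retract', target='gallbladder')
```

A `NamedTuple` keeps the three components ordered and named, which makes per-component predictions easy to compare against ground truth.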
Challenge Tasks

The challenge is divided into three sub-tasks:

- recognize all action triplets in every frame of a video
- localize all used instruments with bounding boxes
- associate each localized bounding box with its corresponding action triplet
Challenge Method

New machine learning models, or customizations of existing state-of-the-art models.
Challenge Data
CholecT50: an endoscopic video dataset that has been annotated with action triplet labels.
Method Supervision Labels

- The challenge dataset contains binary triplet labels for full supervision of triplet recognition.
- Bounding box labels are not provided for localization training; in this regard, the challenge focuses on weak supervision.

N.B.: A sample of bounding box and box-triplet association labels will be provided during the validation phase. Classification, bounding box, and box-triplet association labels of 5 videos will be used for method testing.
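For the fully supervised recognition sub-task, a frame's annotation can be encoded as a binary multi-label vector over the triplet classes. The sketch below assumes 100 triplet classes (the figure commonly reported for CholecT50); the class indices are purely illustrative:

```python
import numpy as np

NUM_TRIPLET_CLASSES = 100  # assumed class count; check the official dataset docs

def encode_frame_labels(present_class_ids):
    """Binary multi-label vector: 1 for every triplet class present in a frame."""
    y = np.zeros(NUM_TRIPLET_CLASSES, dtype=np.int8)
    y[list(present_class_ids)] = 1
    return y

y = encode_frame_labels([3, 42])  # illustrative class indices
print(int(y.sum()))  # -> 2
```

A model trained against such vectors with a per-class sigmoid loss learns triplet presence without ever seeing box annotations, which is exactly the weak-supervision setting for localization.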
Method Evaluation
Submitted methods will be tested on three criteria:
- Classification AP for action triplet recognition
- Localization AP for surgical instrument localization
- Detection AP for box-triplet association
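As a conceptual sketch of the metric behind all three criteria, per-class average precision can be computed from ranked confidence scores and binary labels. This is a plain, non-interpolated AP; the challenge's official evaluation code may differ in detail:

```python
import numpy as np

def average_precision(scores, labels):
    """Average precision for one class: mean of the precision values
    measured at each positive in the score-ranked prediction list."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                          # true positives so far
    precision = tp / np.arange(1, len(labels) + 1)  # precision at each rank
    return float(precision[labels == 1].sum() / labels.sum())

# Perfectly ranked predictions give AP = 1.0
print(average_precision([0.9, 0.8, 0.1], [1, 1, 0]))  # -> 1.0
```

For localization and detection AP, a prediction additionally counts as a true positive only if its box overlaps the ground truth sufficiently (typically measured by IoU) before the same ranking logic is applied.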
Valid Submission

Submissions must be made via Docker. The three sub-tasks are treated as a single challenge: one Docker image must produce per-frame outputs for classification, localization, and box-triplet association. The image may contain a single model or several linked models, as long as all three outputs are produced in one Docker run.
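To make the single-run requirement concrete, a per-frame output could bundle all three sub-task results in one record. The keys and layout below are entirely hypothetical, not the official submission schema:

```python
import json

# Hypothetical per-frame output; field names are illustrative only.
frame_output = {
    "frame_id": 1042,
    # Sub-task 1: classification — confidence per recognized triplet
    "triplets": [{"triplet": "grasper,retract,gallbladder", "score": 0.93}],
    # Sub-task 2: localization — instrument boxes as (x, y, w, h), normalized
    "instruments": [{"name": "grasper", "box": [0.10, 0.20, 0.30, 0.40], "score": 0.88}],
    # Sub-task 3: association — each box index paired with a triplet index
    "associations": [{"box_index": 0, "triplet_index": 0}],
}

print(json.dumps(frame_output, indent=2))
```

Emitting one such record per frame lets a single model, or a pipeline of linked models, satisfy all three criteria in one pass.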
We provide a Colab notebook and a GitHub repository with sample code to help you get started.

Still have questions? Check our FAQ or email us at cholectriplet2021-support@icube.unistra.fr.