Rules
To qualify for prizes, the top 5 participants in each track of the challenge must submit a detailed technical report on their solution, along with the code used to train their model. This code must be made available in a repository, such as GitHub, with instructions for running both training and inference. In addition, the top 5 participants must generate their predictions using the Docker containers they submitted. Once the challenge results have been calculated, the winners will be asked to provide a description of their method and the code used to train the winning model; this information will be included in the challenge paper, to be published at a later date.
Participants may publish papers that include their official performance on the challenge dataset, provided proper reference is given to the challenge and the dataset. There is no embargo period for publication. The challenge organizers aim to publish a summary of the challenge in a peer-reviewed journal, and the first and last authors of the submitted paper will qualify as authors of the summary paper. Participating teams remain free to publish their own results in a separate publication, after coordinating with the organizers to avoid significant overlap with the summary paper.
Follow the Grand Challenge steps for submitting your model for the test phase: prepare-your-code-for-containerization.
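For orientation, below is a minimal sketch of what a container's inference entry point might look like, assuming the common Grand Challenge convention of inputs mounted read-only at /input and predictions written to /output. The file names, the output format, and the predict() helper are hypothetical placeholders; consult the linked Grand Challenge documentation for the exact interface expected in your track.

```python
# Minimal sketch of a container inference entry point (assumption:
# inputs are mounted at /input and outputs are expected in /output,
# as is common on Grand Challenge). Names below are placeholders.
import json
from pathlib import Path

INPUT_DIR = Path("/input")
OUTPUT_DIR = Path("/output")


def predict(case_path: Path) -> float:
    """Placeholder for model loading and inference.

    Replace this with your trained model's prediction logic.
    """
    return 0.5


def main() -> None:
    # Run inference on every case found in the input directory.
    results = {case.name: predict(case) for case in sorted(INPUT_DIR.glob("*"))}

    # Write one JSON file containing a prediction per input case.
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    with open(OUTPUT_DIR / "predictions.json", "w") as f:
        json.dump(results, f, indent=2)


if __name__ == "__main__":
    main()
```

The container's Dockerfile would then copy this script into the image and set it as the entry point, so that the platform can run inference without any manual intervention.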