Submission Instructions

Submission is open from July 27 to July 31 (first round) and from November 15 to November 20 (second round).

All participants who have sent us the signed DUA (linked under Data) will receive an upload link on July 27 (first round) and on November 15 (second round). Please note that separate DUAs must be sent for the two rounds.


Please pay attention to the following criteria:

  1. Reconstructions must be submitted as NIfTI files in the correct slice order. Helpful code for reordering slices and saving reconstructions as NIfTI files is available in our GitHub repository.
  2. Please include a README in your submission that states whether you are submitting as an individual or as a team, the team name, the affiliation, the names of all participants, the name and email address of the contact person, and whether additional publicly available training data was used. Furthermore, please include a summary (optionally as PDF) of the methods used for the reconstructions (preprocessing, model descriptions, etc.). We encourage all participants to make their code publicly available, e.g. on GitHub.
  3. The reconstruction files should be named 'Recon_nod_t1.nii' and 'Recon_nod_t2.nii' for the respective tasks.
  4. The reconstructions for the different test subjects should be placed in separate folders, named after the test subject IDs. For example, when submitting reconstructions from both tasks, the folder structure should look like (subject IDs shown here are examples):

              Team1/
                  Subject_01/
                      Recon_nod_t1.nii
                      Recon_nod_t2.nii
                  Subject_02/
                      Recon_nod_t1.nii
                      Recon_nod_t2.nii

              The folder 'Team1' (substitute your team name) should be compressed as a .tar.gz file before uploading.
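The folder layout and compression step described above can be sketched with Python's standard library. The team name, subject IDs, and placeholder files below are illustrative assumptions; substitute your actual team name, the real test subject IDs, and your NIfTI reconstructions:

```python
import os
import tarfile

# Placeholder team name and subject IDs -- substitute your own.
TEAM = "Team1"
SUBJECTS = ["Subject_01", "Subject_02"]
RECON_FILES = ["Recon_nod_t1.nii", "Recon_nod_t2.nii"]

# Create the expected layout: Team1/<subject_ID>/Recon_nod_*.nii
for subject in SUBJECTS:
    subject_dir = os.path.join(TEAM, subject)
    os.makedirs(subject_dir, exist_ok=True)
    for fname in RECON_FILES:
        # Empty placeholder files; in practice these are your
        # reconstructed NIfTI volumes.
        open(os.path.join(subject_dir, fname), "wb").close()

# Compress the whole team folder as Team1.tar.gz for upload.
with tarfile.open(f"{TEAM}.tar.gz", "w:gz") as tar:
    tar.add(TEAM)

# List the archive contents to verify the structure.
with tarfile.open(f"{TEAM}.tar.gz", "r:gz") as tar:
    for member in sorted(tar.getnames()):
        print(member)
```

Unpacking the resulting archive (e.g. with `tar -xzf Team1.tar.gz`) reproduces exactly the folder structure shown above.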


First round: Questionnaire - MICCAI special interest group on biomedical image analysis challenges

Together with your submission, we kindly ask you to complete a questionnaire prepared by a new initiative on biomedical challenges. The link to the survey will be sent in a separate email. This initiative involves many research institutions and is led by the MICCAI special interest group on biomedical image analysis challenges (contact person: Lena Maier-Hein, German Cancer Research Center (DKFZ)). We were asked to support this initiative and have agreed to do so.

What is the initiative about? In the past few years, the initiative has been working on bringing biomedical image analysis to the next level of quality [1, 2, 3, 4, 5]. While the focus was on the meta research question “Is the winner really the best?”, the goal now is to go one step further and analyze challenge participation characteristics (e.g. expertise of team, algorithm design, computational infrastructure used). To this end, we are planning to perform a meta-analysis of the challenges conducted in 2021 (ISBI and MICCAI).

As in the previous Nature Communications paper [1], the results will be presented in an anonymized and aggregated fashion, such that findings are generally not linked to specific challenges.

As a further incentive, the initiative is pleased to offer you a co-authorship on the arXiv publication of the statistical analysis. Careful completion of the survey will be a prerequisite for co-authorship. Additionally, you can choose to be considered for prizes that will be raffled among the pool of ISBI and MICCAI 2021 challenge participants that submit the questionnaire.

References:

[1] Maier-Hein, L., Eisenmann, M., Reinke, A., et al. 2018. Why rankings of biomedical image analysis competitions should be interpreted with care. Nat. Commun. 9, 5217. https://doi.org/10.1038/s41467-018-07619-7

[2] Reinke, A., Eisenmann, M., Onogur, S., et al. 2018. How to Exploit Weaknesses in Biomedical Challenge Design and Organization, in: Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G. (Eds.), Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. Springer International Publishing, Cham, pp. 388–395.

[3] Maier-Hein, L., Reinke, A., Kozubek, et al. 2020. BIAS: Transparent reporting of biomedical image analysis challenges. Med. Image Anal. 66, 101796. https://doi.org/10.1016/j.media.2020.101796

[4] Wiesenfarth, M., Reinke, A., Landman, B.A., et al. 2021. Methods and open-source toolkit for analyzing and visualizing challenge results. Sci. Rep. 11, 1–15. https://doi.org/10.1038/s41598-021-82017-6

[5] Roß, T., Bruno, P., Reinke, A., et al. 2021. How can we learn (more) from challenges? A statistical approach to driving future algorithm development. arXiv:2106.09302 [cs].