Website: http://www.imageclef.org/2018/VQA-Med
Visual Question Answering (VQA) is an exciting problem that combines natural language processing and computer vision techniques. Inspired by the success of visual question answering in the general domain (http://visualqa.org/), we are organizing a pilot task this year as part of the CLEF 2018 conference in Avignon, France (http://clef2018.clef-initiative.eu/) that focuses on visual question answering in the medical domain. Given a medical image accompanied by a clinically relevant question, participating systems are tasked with answering the question based on the visual image content.
*Motivation*
With the increasing interest in artificial intelligence (AI) to support clinical decision making and improve patient engagement, opportunities to generate and leverage algorithms for automated medical image interpretation are currently being explored. Since patients can now access structured and unstructured data related to their health via patient portals, there is also a need to help them better understand their conditions in the context of their available data, including medical images.
Clinicians' confidence in interpreting complex medical images can be significantly enhanced by a "second opinion" provided by an automated system. In addition, patients may be interested in the morphology/physiology and disease status of anatomical structures around a lesion that has been well characterized by their healthcare providers, and they may not necessarily be willing to pay significant amounts for a separate office or hospital visit just to address such questions. Although patients often turn to search engines (e.g. Google) to disambiguate complex terms or obtain answers to confusing aspects of a medical image, results from search engines may be nonspecific, erroneous, misleading, or overwhelming in terms of the volume of information.
*Data*
The data will include a training set (~5K) and a validation set (0.5K) of medical images accompanied by question-answer pairs, and a test set (0.5K) of medical images with questions only. To create the datasets for the proposed task, we considered medical domain images extracted from PubMed Central articles (essentially a subset of the ImageCLEF 2017 caption prediction task).
*Important Dates*
- 08.11.2017: registration opens for all ImageCLEF tasks (open until 27.04.2018)
- 06.03.2018: training and validation data release
- 20.03.2018: test data release
- 01.05.2018: deadline for submitting the participants' runs
- 15.05.2018: release of the processed results by the task organizers
- 31.05.2018: deadline for submission of working notes papers by the participants
- 15.06.2018: notification of acceptance of the working notes papers
- 29.06.2018: camera-ready working notes papers due
- 10-14.09.2018: CLEF 2018 (http://clef2018.clef-initiative.eu), Avignon, France
*Participant Registration*
Please refer to the general ImageCLEF registration instructions (http://www.imageclef.org/2018#registration) to participate in the challenge. Registration is open until *27th April, 2018*.
*Organizing Committee*
Sadid Hasan, Philips Research, USA
Yuan Ling, Philips Research, USA
Oladimeji Farri, Philips Research, USA
Joey Liu, Philips Research, USA
Henning Müller, University of Applied Sciences, Switzerland
Matthew Lungren, Stanford University Medical Center, USA
For more details and updates, please visit the task website at: http://www.imageclef.org/2018/VQA-Med and join our mailing list: https://groups.google.com/d/forum/imageclef-vqa-med .
Thank you.
*Sadid Hasan, PhD.*
Senior Scientist, Artificial Intelligence Lab
Philips Research North America
Web: www.sadidhasan.com