The EPIC-QA shared task focuses on answering COVID-19 questions for both the scientific/medical communities and healthcare consumers. The task will be co-located at TAC 2020. Details below and at https://bionlp.nlm.nih.gov/epic_qa/ and https://tac.nist.gov/2020/index.html
August 21, 2020: Preliminary evaluation cycle begins.
September 21, 2020: Preliminary evaluation cycle ends.
October 26, 2020: Preliminary evaluation judgments become available.
November 2, 2020: Final evaluation cycle begins.
November 20, 2020: Final evaluation cycle ends.
December 23, 2020: Release of individual evaluated results to participants.
January 15, 2021: Deadline for system reports (workshop notebook version).
January 25-26, 2021: Thirteenth TAC workshop (online).
March 1, 2021: Deadline for system reports (final proceedings version).
The Text Analysis Conference (TAC) is a series of shared tasks, evaluations and workshops organized to promote research in Natural Language Processing and related applications, by providing a large test collection, common evaluation procedures, and a forum for organizations to share their results.
In response to the COVID-19 pandemic, the Epidemic Question Answering (EPIC-QA) track at the Thirteenth Text Analysis Conference (TAC 2020) challenges teams to develop systems capable of automatically answering ad-hoc questions about the disease COVID-19, its causal virus SARS-CoV-2, related coronaviruses, and the recommended response to the pandemic.
While COVID-19 has spurred a large body of emerging scientific research and inquiry, it also raises questions for consumers trying to respond to the pandemic. The rapid growth of the coronavirus literature and the evolving guidelines on community response make it difficult for both the general public and the scientific and medical communities to stay up-to-date on the latest developments. Consequently, the goal of EPIC-QA is to evaluate systems on their ability to provide timely and accurate expert-level answers as expected by the scientific and medical communities, as well as answers in consumer-friendly language for the general public.
EPIC-QA has two tasks. For both tasks, systems must extract answers from a collection of documents that includes scientific and government articles about COVID-19, SARS-CoV-2, related coronaviruses, and information about community response.
Task A (Expert QA): In Task A, teams are provided with a set of questions asked by experts and are asked to provide a ranked list of expert-level answers to each question. In Task A, answers should provide information that is useful to researchers, scientists, or clinicians.
Task B (Consumer QA): In Task B, teams are provided with a set of questions asked by consumers and are asked to provide a ranked list of consumer-friendly answers to each question. In Task B, answers should be understandable by the general public.
While each task will have its own set of questions, many of the questions will overlap. This is by design, so that the collection can be used to explore whether the same approaches or systems can account for different types of users.
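To make the shared setup concrete: in both tasks a system takes a question, scores candidate answer passages from the document collection, and returns them as a ranked list. The sketch below is purely illustrative and is not an official baseline or submission format; the token-overlap scoring, function names, and example sentences are all hypothetical, standing in for a real retrieval or QA model.

```python
import re

def rank_answers(question, candidates, top_k=3):
    """Rank candidate answer sentences by simple token overlap with the
    question -- a toy stand-in for a real retrieval/QA model."""
    def tokens(text):
        # Lowercase and keep alphanumeric tokens (hyphens preserved,
        # so e.g. "sars-cov-2" stays one token).
        return set(re.findall(r"[a-z0-9\-]+", text.lower()))

    q_tokens = tokens(question)

    def score(sentence):
        s_tokens = tokens(sentence)
        # Fraction of the sentence's tokens shared with the question.
        return len(q_tokens & s_tokens) / (len(s_tokens) or 1)

    return sorted(candidates, key=score, reverse=True)[:top_k]

# Hypothetical candidate passages from the collection.
candidates = [
    "SARS-CoV-2 spreads mainly through respiratory droplets.",
    "The weather was pleasant in many regions.",
    "Masks can reduce the spread of respiratory droplets.",
]
ranked = rank_answers("How does SARS-CoV-2 spread?", candidates)
```

A real Task A or Task B system would replace the overlap score with a learned ranker and, for Task B, additionally favor passages written in consumer-friendly language.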
Organizers: Travis Goodwin & Dina Demner-Fushman (U.S. National Library of Medicine), Kyle Lo & Lucy Lu Wang (Allen Institute for AI), William R. Hersh (Oregon Health & Science University), Hoa T. Dang & Ian M. Soboroff (National Institute of Standards and Technology)