[Corpora-List] Deadline Extension for the Workshop on "Benchmarking: Past, Present and Future" (ACL-IJCNLP2021 workshop)

Valia Kordoni evangelia.kordoni at anglistik.hu-berlin.de
Mon Apr 26 15:40:51 CEST 2021

Apologies for cross-posting
------------------------------------------------------

Dear colleagues,

Due to the recent surges of Covid-19 around the world, we are extending to the *5th of May* the deadline for the ACL-IJCNLP2021 workshop on

"Benchmarking: Past, Present and Future"

to be held on August 5-6, 2021

Webpage: https://github.com/kwchurch/Benchmarking_past_present_future/blob/master/README.md

Important Dates

* May 5, 2021: Paper submission deadline
* May 28, 2021: Notification of acceptance
* June 7, 2021: Camera-ready papers due
* August 5-6, 2021: Workshop dates

Please see further details below.
--------------------------------------------------------

It is easier to talk about the past than the future. These days, benchmarks evolve more bottom-up (for example, via Papers with Code). There used to be more top-down leadership from government (and from industry, in the case of systems, with benchmarks such as SPEC). Going forward, there may be more top-down leadership from organizations like MLPerf and/or influencers like David Ferrucci, who was responsible for IBM's success with Jeopardy, and has recently written a paper suggesting how the community should think about benchmarking for machine comprehension (To Test Machine Comprehension, Start by Defining Comprehension). Tasks such as reading comprehension become even more interesting as we move beyond English. Multilinguality introduces many challenges, and even more opportunities.

Keynote Talks

We have an amazing collection of invited talks, many from speakers with direct first-hand knowledge of the history, and many with insights for the future:

1. Past

i. John Makhoul

ii. Mark Liberman

iii. Ellen Voorhees

2. Present

i. Ming Zhou

ii. Hua Wu and Jing Liu

iii. Neville Ryant

iv. Brian MacWhinney and Saturnino Haider

v. Samuel Bowman

vi. Douwe Kiela

vii. Eunsol Choi

viii. Anders Søgaard

3. Future

i. Greg Diamos

ii. David Ferrucci

iii. Ido Dagan


We accept two types of submissions, long papers and short papers, both following the ACL2021 style and the ACL submission policy: https://www.aclweb.org/adminwiki/index.php?title=ACL_Policies_for_Submission,_Review_and_Citation

Long papers may consist of up to eight (8) pages of content, plus unlimited references; short papers may consist of up to four (4) pages of content. Final versions will be given one additional page of content so that reviewers' comments can be taken into account.

Submissions should be made in electronic form, using the Softconf START conference management system. Please choose the appropriate submission type (long/short): https://www.softconf.com/acl2021/w01_Benchmarking-2021/

We invite original research papers from a wide range of topics, including but not limited to:

1. What important technologies and underlying sciences need to be fostered, now and in the future?
2. In each case, are there existing tasks/benchmarks that move the field in the right direction?
3. Where are there gaps?
4. For the gaps, are there initial steps that are accessible, attractive, and cost effective?
5. How large should a benchmark be?

a. How much data do we need to measure significant differences?
b. How much data do machines need to obtain good performance?

c. How much data do babies need to learn language?

Submissions are open to all and must be made anonymously. All papers will be refereed through a double-blind peer review process by at least three reviewers, with final acceptance decisions made by the workshop organizers.

The workshop is scheduled to last for one day, either August 5th or 6th. If you have any questions, please contact us at pc-benchmarking-ws-acl2021 at googlegroups.com

Workshop organizers

Kenneth Church (Baidu USA)
Mark Liberman (University of Pennsylvania)
Valia Kordoni (Humboldt-Universität zu Berlin)
