[Corpora-List] Composes workshop: call for participation

Marco Baroni marco.baroni at unitn.it
Fri Apr 8 15:43:55 CEST 2016


(Apologies for multiple postings and reminders)

Composes end-of-project workshop: Call for participation

Workshop website: http://clic.cimec.unitn.it/composes/workshop.html

The end-of-project workshop of the Composes project (http://clic.cimec.unitn.it/composes/) will take place on Sunday August 14th 2016 in Bolzano (Italy), as a satellite event of ESSLLI 2016 (http://esslli2016.unibz.it/).

The workshop will be an occasion to discuss some exciting topics in computational semantics, with a strong line-up of invited speakers/panelists leading the discussion. We foresee a mixture of position statements on the topics below by the invitees, together with audience participation in the form of open debates.

Speakers/Panelists:

- Nicholas Asher
- Marco Baroni
- Stephen Clark
- Emmanuel Dupoux
- Katrin Erk
- Adele Goldberg
- Alessandro Lenci
- Hinrich Schütze
- Jason Weston

Topics:

- Lessons learned from the Composes project: Which problems were we trying to solve? Have we solved them? Have new-generation neural networks made compositional distributional semantics obsolete?

- End-to-end models and linguistics: What is the role of linguistics in the (new) neural network/end-to-end/representation learning era? Do such systems need linguistics at all? Are some linguistic theories better tuned to them than others? Is there an appropriate vocabulary of linguistic units for end-to-end systems? Is compositionality a solved problem? Which linguistic challenges are difficult to tackle with neural networks?

- "Fuzzy" vs "precise" (concepts vs entities, generics vs specifics,

lexical vs phrasal/discourse semantics, analogy vs reasoning, sense

vs reference): Are learning-based statistical methods only good at

fuzzy? Can new-generation neural networks (Memory Networks, Stack

RNNs, NTMs etc) handle both fuzzy and precise? Is fuzzy a solved

problem?

- Learning like humans do: If we want to develop systems that reach human-level language understanding, what is the appropriate input? What should training data and objective functions look like? What are appropriate tests of success? Assuming our methods are much more data-hungry than human learning is, why is this the case? What ideas are there for fixing it? How can we teach our models to understand, other than through expensive labeling of data?

Please visit the workshop website for information about (free) registration and for updates:

http://clic.cimec.unitn.it/composes/workshop.html
