[Corpora-List] Fully-funded PhD position in NLP, University of Tartu

Kairit Sirts kairit.sirts at gmail.com
Wed Apr 14 17:30:40 CEST 2021


Dear all,

A PhD position, starting from September 2021, is available in the NLP group at the Institute of Computer Science, University of Tartu, Estonia. The position is fully funded for four years.

There are two project topics, but only one position will be filled, depending on the applicant's preferences; see the titles and short summaries below.

Interested applicants should send their CV and motivation letter by the end of April at the latest (sirts at ut.ee <mailto:sirts at ut.ee>). Please don't hesitate to contact me if you have any questions.

Regards,
Kairit

--
Kairit Sirts, PhD
Research Fellow at TartuNLP <https://tartunlp.ai/branches/machine-learning>
Institute of Computer Science, University of Tartu
ksirts.github.io <http://ksirts.github.io/>

******

Topic 1: Neural text analysis models enhanced with external linguistic resources

Deep neural models are good at learning representations for natural language, but they can only learn the linguistic regularities present in the training data. If annotated training data is abundant, then the chances that the model learns the most relevant linguistic regularities are very high. For many languages, however, the amount of annotated data is not large enough. One option to tackle this problem is to direct resources into annotating more data. Another is to leverage already existing linguistic resources to enhance the neural systems. Over several decades, many linguistic resources in the form of rule-based systems or dictionaries have been developed. The goal of this thesis project is to study effective ways of integrating existing linguistic resources into deep neural models, across a range of NLP tasks. Integrating all available linguistic resources can help considerably in low-resource settings. However, even in well-resourced settings, making use of existing lexicon-based or rule-based resources can help improve predictions on more irregular and less frequent inputs.
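
As a rough illustration only (not part of the project description; all names and dimensions below are made up), one simple way to combine a lexicon with a neural model is to concatenate dictionary-derived features to the learned word representations before classification:

# Hypothetical sketch: lexicon features (e.g. "listed as a verb in the
# dictionary", "matched by rule X") are concatenated to word embeddings.
import torch
import torch.nn as nn

class LexiconAugmentedTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim, num_lex_features, num_tags):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Classifier sees both the learned embedding and the lexicon features.
        self.classifier = nn.Linear(emb_dim + num_lex_features, num_tags)

    def forward(self, token_ids, lexicon_features):
        # token_ids: (batch, seq_len)
        # lexicon_features: (batch, seq_len, num_lex_features)
        embedded = self.embedding(token_ids)
        combined = torch.cat([embedded, lexicon_features], dim=-1)
        return self.classifier(combined)

# Toy usage: 2 sentences of 3 tokens, 4 binary lexicon features per token.
model = LexiconAugmentedTagger(vocab_size=100, emb_dim=16,
                               num_lex_features=4, num_tags=5)
tokens = torch.randint(0, 100, (2, 3))
lex = torch.zeros(2, 3, 4)
logits = model(tokens, lex)  # shape: (2, 3, 5)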

Topic 2: Explainable neural network models for predicting mental health problems

Recent advances in artificial neural network technology have made it possible to considerably improve the accuracy of many artificial intelligence systems, such as image and text classification. However, there are domains where simply improving classification accuracy is not enough: for the systems to be usable, the predictions also have to be explainable to humans. Using AI systems to predict mental health problems based on textual data is one such domain. Potential applications of such models include assisting psychiatrists and clinical psychologists in making a diagnosis, or serving as part of self-help systems for tracking a patient's status over time. In order for these systems to be trusted by their users, they should be able to supply each prediction with a supporting explanation. These explanations can be snippets of text that were crucial to the model's prediction, potentially classifying each snippet into one of the predefined categories of relevant symptoms. The goal of this thesis project is to systematically study the methods suggested in recent literature for explaining neural models' predictions and, based on that, to develop methods aimed specifically at making systems that predict mental health problems more transparent and therefore more trustworthy for their potential users.
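
Again purely as an illustration (this is not the project's prescribed method; the sentence and model below are made up), one explanation technique from the literature is gradient saliency: the gradient of the predicted score with respect to the input embeddings indicates which tokens influenced the prediction most, and the highest-scoring tokens can be returned as an explanatory snippet:

# Hypothetical sketch: gradient saliency over a toy text classifier.
import torch
import torch.nn as nn

vocab = ["i", "feel", "hopeless", "and", "tired", "today"]
embedding = nn.Embedding(len(vocab), 8)
classifier = nn.Linear(8, 2)  # e.g. two classes: "at risk" / "not at risk"

token_ids = torch.arange(len(vocab))
embedded = embedding(token_ids)
embedded.retain_grad()                     # keep gradients on the embeddings

logits = classifier(embedded.mean(dim=0))  # mean-pooled sentence representation
predicted = logits.argmax()
logits[predicted].backward()               # backpropagate the predicted score

saliency = embedded.grad.norm(dim=-1)      # one importance score per token
top = saliency.topk(2).indices
print("most influential tokens:", [vocab[i] for i in top.tolist()])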


