[Corpora-List] PhD project at UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence "Symbolic knowledge representations for time-sensitive offensive language detection"

McGillivray, Barbara barbara.mcgillivray at kcl.ac.uk
Fri Jan 28 10:39:09 CET 2022

The UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence has approximately 12 fully funded doctoral studentships available each year. Apply now for entry in September 2022. The next application deadline is Tuesday 15 February 2022.

Committed to providing an inclusive environment in which diverse students can thrive, we particularly encourage applications from women, disabled and Black, Asian and Minority Ethnic (BAME) candidates, who are currently under-represented in the sector.

You can see all proposed projects here: https://safeandtrustedai.org/apply-now/#projects .

The following project will be supervised by Albert Meroño Peñuela (Department of Informatics) and Barbara McGillivray (Department of Digital Humanities, King's College London).

Project Title: Symbolic knowledge representations for time-sensitive offensive language detection

Project description: Language models learned from data have become prevalent in AI systems, but they can exhibit undesired behaviour that poses risks to society, such as producing or failing to flag offensive language. The task of automatically detecting offensive language has attracted significant attention in Natural Language Processing (NLP) due to its high social impact. Policy makers and online platforms can leverage computational methods of offensive language detection to counter online abuse at scale. State-of-the-art methods for automatic offensive language detection, typically relying on ensembles of transformer-based language models such as BERT, are trained on large-scale annotated datasets.

Detecting offensive language is complicated by the fact that the meaning of words changes over time, and conventional, neutral language can evolve into offensive language at short time scales, following rapid changes in social dynamics or political events. The word karen, for example, went from a neutral personal name to acquiring an offensive meaning in 2020, turning into a "pejorative term for a white woman perceived as entitled or demanding beyond the scope of what is normal". Adapting to the way the meaning of language changes is a key characteristic of intelligent behaviour. Current AI systems developed to process language computationally are not yet equipped to react to such changes: the artificial neural networks they are built on do not capture the full semantic range of words, which only becomes available if we access additional knowledge (e.g. author, genre, origin, register) that is typically contained in external, symbolic, and linguistic world knowledge bases.

This project aims to develop new, time-sensitive computational methods for offensive language detection that combine distributional information from large textual datasets with symbolic knowledge representations. Specifically, the project will build representations of word meaning from textual data and from external knowledge bases containing relevant linguistic and world knowledge, such as lexicons, thesauri, semantic networks, knowledge graphs (e.g. Wikidata), and ontologies, embedding this knowledge into distributional word vectors derived from time-sensitive text data (diachronic corpora) and exploring various approaches for combining these representations.
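To illustrate the kind of combination the project envisages, a minimal sketch is shown below: time-sliced distributional vectors are concatenated with symbolic features drawn from a knowledge base, and the drop in similarity between time slices signals semantic change. All data here (the toy diachronic vectors, the feature names, the `combined_representation` helper) are hypothetical placeholders, not part of the project's actual method.

```python
import numpy as np

# Toy diachronic embeddings: one vector per (word, time slice),
# standing in for vectors trained on time-sliced corpora.
diachronic = {
    ("karen", 2015): np.array([0.9, 0.1, 0.0]),
    ("karen", 2020): np.array([0.2, 0.1, 0.8]),
}

# Toy symbolic knowledge base: binary features that might come from
# a lexicon or knowledge graph (feature names are hypothetical).
kb_features = {
    "karen": np.array([1.0, 0.0]),  # e.g. [is_personal_name, listed_as_slur]
}

def combined_representation(word, year):
    """Concatenate the time-specific distributional vector with the
    word's symbolic knowledge-base features."""
    return np.concatenate([diachronic[(word, year)], kb_features[word]])

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

v2015 = combined_representation("karen", 2015)
v2020 = combined_representation("karen", 2020)

# Lower similarity across time slices suggests the word's meaning drifted.
drift = cosine(v2015, v2020)
```

Real approaches are much richer (e.g. retrofitting vectors to semantic networks or learning joint embeddings of text and knowledge graphs), but the sketch shows the core idea of fusing distributional and symbolic signals over time.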

Full description: https://safeandtrustedai.org/project/symbolic-knowledge-representations-for-time-sensitive-offensive-language-detection/ .

Applications Deadline: 15-Feb-2022

Web Address for Applications: https://safeandtrustedai.org/apply-now/

Best wishes,

Barbara McGillivray | @BarbaraMcGilli <https://twitter.com/BarbaraMcGilli>
Lecturer in Digital Humanities and Cultural Computation
Department of Digital Humanities, King's College London
Strand Campus, Strand, London, WC2R 2LS, Room 3.28
Office hours (online): Wednesdays 15:00-16:00 and Fridays 15:00-16:00.

Turing Fellow <https://www.turing.ac.uk/people/researchers/barbara-mcgillivray>, The Alan Turing Institute
Editor-in-chief of Journal of Open Humanities Data <https://openhumanitiesdata.metajnl.com/>

