[Corpora-List] SemEval 2020 Task 8 - Understanding the Emotions of Memes

Amitava Das amitava.santu at gmail.com
Mon Sep 16 10:41:03 CEST 2019


Memotion Analysis - Understanding the Emotions of Memes

http://www.amitavadas.com/Memotion.html
http://alt.qcri.org/semeval2020/index.php?id=tasks
https://competitions.codalab.org/competitions/20629

*RATIONALE*

Information on social media comprises various modalities, such as text, images, and audio. The NLP and computer vision communities often study social media by leveraging only one prominent modality in isolation. Computational processing of Internet memes, however, needs a hybrid approach. The growing ubiquity of Internet memes on social media platforms such as Facebook, Instagram, and Twitter further suggests that we cannot ignore such multimodal content anymore. To the best of our knowledge, meme emotion analysis has so far received little attention. The objective of this proposal is to bring the attention of the research community to the automatic processing of Internet memes. The Memotion Analysis task will release 8K annotated memes with human-annotated tags, namely sentiment and type of humor, that is, sarcastic, humorous, or offensive.

*The Multimodal Social Media* In the last few years, the growing ubiquity of Internet memes on social media platforms such as Facebook, Instagram, and Twitter has become a topic of immense interest. "Meme" has become one of the most typed English words in recent times (Sonnad, 2018). Memes are often derived from our prior social and cultural experiences, such as a TV series or a popular cartoon character (think: One Does Not Simply, a now immensely popular meme taken from the movie The Lord of the Rings). These digital constructs are so deeply ingrained in our Internet culture that, to understand the opinion of a community, we need to understand the type of memes it shares. Gal et al. (2016) aptly describe them as performative acts, which involve a conscious decision to either support or reject an ongoing social discourse.

*Online Hate - A Brutal Job:* The prevalence of hate speech in online social media is a nightmare and a great societal responsibility for many social media companies. The latest entrant, the Internet meme (Williams et al., 2016), has doubled the challenge. When malicious users upload something offensive to torment or disturb people, it traditionally has to be seen and flagged by at least one human, either a user or a paid worker. Even today, companies like Facebook and Twitter rely extensively on outside human contractors, from start-ups like CrowdFlower or companies in the Philippines. But with the growing volume of multimodal social media, this approach is becoming impossible to scale. The detection of offensive content on online social media is an ongoing struggle. OffensEval (Zampieri et al., 2019) is a shared task that has been organized at SemEval for the last two years. But detecting an offensive meme is more complex than detecting offensive text: it involves both visual cues and language understanding. This is one of the motivating aspects that encouraged us to propose this task.

*The Memotion Analysis Task* Memes typically induce humor and strive to be relatable. Many of them aim to express solidarity during certain life phases and thus to connect with their audience. Some memes are directly humorous, whereas others take a sarcastic dig at daily life events. Inspired by the various humorous effects of memes, we propose three tasks as follows:

- *Task A - Sentiment Classification:* Given an Internet meme, the first task is to classify it as a positive or negative meme. We presume that a meme is not neutral.

- *Task B - Humor Classification:* Given an Internet meme, the system has to identify the type of humor expressed. The categories are sarcastic, humorous, and offensive meme. If a meme does not fall under any of these categories, it is marked as an "other" meme. A meme can have more than one category; for instance, Fig. 3 is an offensive meme but sarcastic too.

- *Task C - Scales of Semantic Classes:* The third task is to quantify the extent to which a particular effect is expressed. Details of such quantifications are reported in Table 1. Appropriately annotated data will be provided.
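As an illustration of Task B's multi-label setup, the humor categories could be encoded as a binary indicator vector, with "other" assigned only when none of the three humor types applies. This is a minimal sketch; the category names follow the task description above, but the actual data format the organizers will release may differ.

```python
# Hypothetical encoding of Task B's multi-label humor annotation.
# Category names follow the task description; the official release
# format may differ.

HUMOR_CATEGORIES = ["humorous", "sarcastic", "offensive", "other"]

def encode_humor(labels):
    """Map a set of humor labels to a binary indicator vector.

    A meme may carry several labels at once (e.g. both offensive and
    sarcastic, as in Fig. 3); a meme with none of the three humor
    types falls back to 'other'.
    """
    labels = set(labels)
    if not labels & {"humorous", "sarcastic", "offensive"}:
        labels = {"other"}
    return [1 if cat in labels else 0 for cat in HUMOR_CATEGORIES]

print(encode_humor(["offensive", "sarcastic"]))  # [0, 1, 1, 0]
print(encode_humor([]))                          # [0, 0, 0, 1]
```

Representing the labels this way also makes per-category metrics (e.g. macro-averaged F1) straightforward to compute, since each category reduces to an independent binary decision.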

We will release 8K human-annotated Internet memes labelled along semantic dimensions, namely *sentiment* and type of humor, that is, *sarcastic*, *humorous*, or *offensive*. The humor types are further quantified on a Likert scale, as in Table 1. The dataset will also contain the captions/texts extracted from the memes.

*The Memotion Analysis Task*

Taking into account the sheer volume of photos shared each day on Facebook, Twitter, and Instagram, the number of languages supported on these global platforms, and the variations of the text, the problem of understanding text in images is quite different from those solved by traditional optical character recognition (OCR) systems, which recognize the characters but do not understand the context of the associated image (Sivakumar et al., 2018).

For instance, the caption of Fig. 1 is sufficient to sense the sarcasm, or even dislike, towards a feminist man. In fact, the image has no significant role to play in this case, and the provided text is good enough to sense the pun. But in Fig. 2, the final punchline on the unavailability of Deep Learning based OCR tutorials depends on the expression of the man in the meme who is trying to read a small piece of paper: the facial expression aids in interpreting the struggle to find OCR tutorials online. To derive the intended meaning, one needs to establish an association between the provided image and the text. If we solely process the caption, we lose the humor and also the intended meaning (lack of tutorials). In Fig. 3, to establish the sense of the provided caption, along with the racism against the Middle Eastern woman, we need to process the image, the caption, and dominant societal beliefs.
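One common way to combine the two modalities is late fusion: score the caption and the image separately, then blend the scores. The sketch below assumes per-modality sentiment scores in [-1, 1] are already available from separate (hypothetical) text and image classifiers; it is only an illustration of the fusion step, not a method prescribed by the task.

```python
def fuse_scores(text_score, image_score, w_text=0.5):
    """Weighted late fusion of per-modality sentiment scores in [-1, 1].

    text_score and image_score would come from separate text and image
    classifiers (placeholders here); w_text balances the two modalities.
    """
    return w_text * text_score + (1.0 - w_text) * image_score

def classify(score):
    """Task A presumes a meme is not neutral: positive or negative only."""
    return "positive" if score >= 0 else "negative"

# A Fig. 2-style case: the caption alone reads roughly neutral, but the
# facial expression (image model) signals struggle, pulling the fused
# prediction negative.
fused = fuse_scores(text_score=0.1, image_score=-0.8, w_text=0.4)
print(classify(fused))  # negative
```

A caption-only system would collapse to `w_text=1.0` and miss exactly the cases described above, which is the point of requiring both modalities.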

*Data & Resources*

This shared task will release 8K annotated memes categorized into sentiment classes and types of humor. As discussed earlier, both textual and visual cues are indispensable for meme emotion analysis. Thus, extracting captions from memes is an essential step towards automatic meme emotion analysis. Keeping that in mind, this shared task will also provide the captions of the memes as part of the dataset. The extraction is performed using the Google OCR system and then manually corrected via crowdsourcing services. Therefore, the participating teams' performance will not depend on OCR accuracy.

*Quality:* To ensure the quality of the data annotation, we make sure that each meme is annotated by at least two annotators. After SemEval-2020, the labels for the test data will be released as well. We will ask the participants to submit their predictions in a specified format (within 24 hours), and the organizers will evaluate the results for each participant. We will make no distinction between constrained and unconstrained systems, but the participants will be asked to report what additional resources they used for each submitted run.

Thanks,
Amitava


