[Corpora-List] CfP: IJCNLP 2017 Shared Task - Dimensional Sentiment Analysis for Chinese Phrases

Lung-Hao Lee lunghaolee at gmail.com
Wed Jun 7 11:52:06 CEST 2017


----------------------------------------------------------------------
The 8th International Joint Conference on Natural Language Processing (*IJCNLP 2017*)
November 27 - December 1, 2017, Tainan, Taiwan
http://ijcnlp2017.org/
----------------------------------------------------------------------
(With apologies for cross-posting)

*Call for Participation*

*IJCNLP 2017 Shared Task: Dimensional Sentiment Analysis for Chinese Phrases*
http://nlp.innobic.yzu.edu.tw/tasks/dsa_p/

Sentiment lexicons with valence-arousal (VA) ratings are useful resources for developing dimensional sentiment applications. Because such VA lexicons have limited availability, especially for Chinese, the objective of this task is to automatically acquire the valence-arousal ratings of Chinese affective words and phrases.

Given a word or phrase, participants are asked to provide a real-valued score from 1 to 9 for both the valence and arousal dimensions, indicating the degree from most negative to most positive for valence, and from most calm to most excited for arousal. The input format is “term_id, term”, and the output format is “term_id, valence_rating, arousal_rating”. Below are the input/output formats for the example words 好 (good), 非常好 (very good), 滿意 (satisfied), and 不滿意 (not satisfied).

- *Example 1*:

Input: 1, 好

Output: 1, 6.8, 5.2

- *Example 2*:

Input: 2, 非常好

Output: 2, 8.500, 6.625

- *Example 3*:

Input: 3, 滿意

Output: 3, 7.2, 5.6

- *Example 4*:

Input: 4, 不滿意

Output: 4, 2.813, 5.688
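
To illustrate the submission format above, here is a minimal sketch of reading the input and writing predictions (Python assumed; the file names and the predict_valence_arousal placeholder are illustrative only, not part of the task materials):

    def predict_valence_arousal(term):
        # Placeholder for a participating system's model; returns
        # (valence, arousal) as real values in the range 1-9.
        return 5.0, 5.0

    # Read "term_id, term" lines and write "term_id, valence_rating, arousal_rating".
    with open("test_terms.txt", encoding="utf-8") as fin, \
         open("submission.txt", "w", encoding="utf-8") as fout:
        for line in fin:
            if not line.strip():
                continue
            term_id, term = [field.strip() for field in line.split(",", 1)]
            valence, arousal = predict_valence_arousal(term)
            fout.write(f"{term_id}, {valence:.3f}, {arousal:.3f}\n")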

*Data*

- Training set:

  - For words: 2,802 single words annotated with valence-arousal ratings (CVAW 2.0) (Yu et al., 2016a).

  - For phrases: 2,250 multi-word phrases annotated with valence-arousal ratings.

- Test set:

  - 750 single words and 750 multi-word phrases. The policy of this shared task is an open test: participating systems are allowed to use other publicly available data, but the use of such data should be specified in the final technical report.

*Evaluation*

The performance is evaluated by examining the difference between machine-predicted ratings and human-annotated ratings (valence and arousal are treated independently). The evaluation metrics include:

- Mean absolute error

- Pearson correlation coefficient
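
To make these metrics concrete, the following sketch shows how they might be computed over a set of predicted and gold ratings (Python/NumPy assumed; this is not the official evaluation script, and the example numbers are purely illustrative):

    import numpy as np

    def mean_absolute_error(pred, gold):
        # Average absolute difference between predicted and gold ratings.
        pred, gold = np.asarray(pred, dtype=float), np.asarray(gold, dtype=float)
        return float(np.mean(np.abs(pred - gold)))

    def pearson_correlation(pred, gold):
        # Pearson correlation coefficient between predicted and gold ratings.
        pred, gold = np.asarray(pred, dtype=float), np.asarray(gold, dtype=float)
        return float(np.corrcoef(pred, gold)[0, 1])

    # Hypothetical valence ratings for four terms (arousal is scored separately).
    gold_valence = [6.8, 8.5, 7.2, 2.813]
    pred_valence = [6.5, 8.0, 7.0, 3.1]
    print(mean_absolute_error(pred_valence, gold_valence))   # ~0.32 (lower is better)
    print(pearson_correlation(pred_valence, gold_valence))   # ~0.999 (higher is better)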

*Registration*

Participants need to register in order to obtain the training and test data. To register, please send the following information to Lung-Hao Lee (lhlee at ntnu.edu.tw).

- Team Name

- Organization of your team

- Name and E-mail address of contact person for your team

*Important Dates*

- Registration opens: May 15, 2017

- Release of training data: May 15, 2017

- Registration closes: August 11, 2017

- Release of test data: August 14, 2017

- Testing results submission due: August 21, 2017

- Release of evaluation results: August 31, 2017

- System description paper due: September 15, 2017

- Notification of acceptance: September 30, 2017

- Camera-ready deadline: October 10, 2017

- Shared task date: December 1, 2017

*Organizers*

- Liang-Chih Yu (Yuan Ze University)

- Lung-Hao Lee (National Taiwan Normal University)

- Jin Wang (Yunnan University)

- Kam-Fai Wong (The Chinese University of Hong Kong)

--
Lung-Hao Lee (李龍豪), Ph.D.
Postdoctoral Fellow & Adjunct Assistant Professor
Graduate Institute of Library and Information Studies
National Taiwan Normal University
Email: lhlee at ntnu.edu.tw
Web: http://web.ntnu.edu.tw/~lhlee/


