[Corpora-List] Syntactic parsing performance by humans?

Amir Zeldes Amir.Zeldes at georgetown.edu
Fri May 13 19:17:37 CEST 2016


Hi all,

I think the numbers would vary considerably depending on the annotation scheme, the training offered to the students, and what we're measuring (attachment, labeling accuracy, or both).
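
To make that distinction concrete, here is a minimal sketch in Python of how attachment and labeling are typically scored separately (the toy data below is invented), which is why the two kinds of figures can diverge:

    # Toy example: score each token's head (attachment) and relation label.
    gold = [(2, "nsubj"), (0, "root"), (2, "obj")]   # (head, label) per token
    pred = [(2, "nsubj"), (0, "root"), (1, "obj")]   # hypothetical parser output

    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n   # unlabeled attachment
    la  = sum(g[1] == p[1] for g, p in zip(gold, pred)) / n   # label accuracy
    las = sum(g == p for g, p in zip(gold, pred)) / n         # head and label must match

    print(f"UAS={uas:.2f} LA={la:.2f} LAS={las:.2f}")   # UAS=0.67 LA=1.00 LAS=0.67

Labels can be right even where heads are wrong, which is one reason labeling figures tend to run higher than attachment figures.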

If you're interested in student and machine parsing performance on Web data with the popular Stanford scheme, the article below reports attachment accuracy in the mid 80s for the Stanford parser, and student performance in the mid 90s for manual correction of that output, with some variation by genre across several types of relatively standard English Web data. Labeling precision is much higher, in the upper 90s (see the article).

Zeldes, Amir (2016) The GUM Corpus: Creating Multilayer Resources in the Classroom. Language Resources and Evaluation. https://corpling.uis.georgetown.edu/amir/pdf/GUM_paper_prepub.pdf

For the task of student annotation from scratch, the paper below found individual beginner annotators averaging 79% attachment accuracy on French Wikipedia data, but it also shows that with 4-5 annotators per sentence, accuracy in the mid 90s can be reached using weighted adjudication scores that assign more validity to reliable annotators.

Gerdes, Kim (2013) Collaborative Dependency Annotation. In Proceedings of the Second International Conference on Dependency Linguistics (DepLing 2013). Prague, 88-97. http://www.aclweb.org/anthology/W13-3711
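
As a rough illustration of that kind of weighted adjudication (this is not Gerdes's actual implementation; the annotator weights and votes below are invented), combining several annotators' head choices for a token might look like:

    from collections import defaultdict

    def adjudicate_head(votes, reliability):
        """votes: annotator -> proposed head; reliability: annotator -> weight."""
        scores = defaultdict(float)
        for annotator, head in votes.items():
            scores[head] += reliability[annotator]
        return max(scores, key=scores.get)   # head with the most weighted support

    votes = {"ann1": 2, "ann2": 2, "ann3": 4, "ann4": 2, "ann5": 4}
    reliability = {"ann1": 0.9, "ann2": 0.8, "ann3": 0.5, "ann4": 0.85, "ann5": 0.4}
    print(adjudicate_head(votes, reliability))   # 2 - the reliable annotators win out

In practice the reliability weights would presumably be derived from each annotator's observed agreement, which is what lets a group of beginners reach the mid-90s figures reported there.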

In both cases, students received only a modest amount of training.

Best,
Amir

------------
Dr. Amir Zeldes
Asst. Prof. of Computational Linguistics
Department of Linguistics
Georgetown University
1437 37th St. NW
Washington, DC 20057

http://corpling.uis.georgetown.edu/amir

-----Original Message-----
From: corpora-bounces at uib.no [mailto:corpora-bounces at uib.no] On Behalf Of John F Sowa
Sent: Friday, May 13, 2016 10:22
To: corpora at uib.no
Subject: Re: [Corpora-List] Syntactic parsing performance by humans?

On 5/13/2016 7:55 AM, Darren Cook wrote:
> Are there really no studies of human performance?! Surely some
> professor has hinted to their PhD students that it is a nice bit of
> relatively easy linguistics research, that should also get them cited
> a lot...

I strongly doubt that it is "relatively easy linguistics research".

Any parser will use some set of conventions for parsing and annotating sentences. Google found "that linguists trained for this task agree in 96-97% of the cases". That is what they defined as "human performance".

I assume that those linguists were Google employees or other researchers who worked on projects published in the literature.

For a PhD student to do comparable work would (a) take years of unpaid labor, (b) require the kind of funding that Google has, or (c) use some magic methodology for determining the implicit "mental parses" of untrained native speakers.

John Sowa

