[Corpora-List] News from LDC

Linguistic Data Consortium ldc at ldc.upenn.edu
Thu Apr 26 23:55:25 CEST 2012

/In this newsletter:/

*- LDC Timeline -- Two Decades of Milestones -*

/New publications:/

LDC2012V01 *- 2005 NIST/USF Evaluation Resources for the VACE Program - Broadcast News -*

LDC2012T03 *- 2009 CoNLL Shared Task Part 1 -*

LDC2012T04 *- 2009 CoNLL Shared Task Part 2 -*

LDC2012S05 *- USC-SFI MALACH Interviews and Transcripts English -*

------------------------------------------------------------------------

*LDC Timeline -- Two Decades of Milestones*

April 15 marks the "official" 20th anniversary of LDC's founding. We'll be featuring highlights from the last two decades in upcoming newsletters, on the web and elsewhere. For a start, here's a brief timeline of significant milestones.

1992: The University of Pennsylvania is chosen as the host site for LDC in response to a call for proposals issued by DARPA; the mission of the new consortium is to operate as a specialized data publisher and archive guaranteeing widespread, long-term availability of language resources. DARPA provides seed money with the stipulation that LDC become self-sustaining within five years. Mark Liberman assumes duties as LDC's Director with a staff that grows to four, including Jack Godfrey, the Consortium's first Executive Director.

1993: LDC's catalog debuts. Early releases include benchmark data sets such as TIMIT, TIPSTER, CSR and Switchboard, shortly followed by the Penn Treebank.

1994: LDC and NIST (the National Institute of Standards and Technology) enter into a Cooperative R&D Agreement that provides the framework for the continued collaboration between the two organizations.

1995: Collection and transcription of conversational telephone speech and broadcast programming commences. LDC begins its long and continuing support for NIST common task evaluations by providing custom data sets for participants. Membership and data license fees prove sufficient to support LDC operations, satisfying the requirement that the Consortium be self-sustaining.

1996: The Lexicon Development project, under the direction of Dr. Cynthia McLemore, begins releasing pronouncing lexicons in Mandarin, German, Egyptian Colloquial Arabic, Spanish, Japanese, and American English. By 1997, all six are published.

1997: LDC announces LDC Online, a searchable index of newswire and speech data with associated tools to compute n-gram models, mutual information and other analyses.
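Analyses of the kind LDC Online offered can be illustrated in miniature. The sketch below computes pointwise mutual information over adjacent word pairs in a token stream; it is a hypothetical toy for illustration, not LDC's implementation, and the function name is mine:

```python
import math
from collections import Counter

def bigram_pmi(tokens):
    """Pointwise mutual information for each adjacent word pair.

    PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ), estimated from raw
    unigram and bigram counts over the token stream.
    """
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni = len(tokens)
    n_bi = len(tokens) - 1
    pmi = {}
    for (w1, w2), count in bigrams.items():
        p_xy = count / n_bi
        p_x = unigrams[w1] / n_uni
        p_y = unigrams[w2] / n_uni
        pmi[(w1, w2)] = math.log2(p_xy / (p_x * p_y))
    return pmi
```

High-PMI pairs are word pairs that co-occur far more often than their individual frequencies would predict, which is one standard way to surface collocations in newswire text.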

1998: LDC adds annotation to its task portfolio. Christopher Cieri joins LDC as Executive Director and develops the annotation operation.

1999: Steven Bird joins LDC; the organization begins to develop tools and best practices for general use. The Annotation Graph Toolkit results from this effort.

2000: LDC expands its support of common task evaluations from providing corpora to coordinating language resources across entire programs. Early examples include the DARPA TIDES, EARS and GALE programs.

2001: The Arabic Treebank project begins.

2002: LDC moves to its current facilities at 3600 Market Street, Philadelphia, with a full-time staff of approximately 40 people.

2004: LDC introduces the Standard and Subscription membership options, allowing members to choose whether to receive all or a subset of the data sets released in a membership year.

2005: LDC makes task specifications and guidelines available through its project web pages.

2008: LDC introduces programs that provide discounts for continuing members and those who renew early in the year.

2010: LDC inaugurates the Data Scholarship program for students with a demonstrable need for data.

2012: LDC's full-time staff of 50 and 196 part-time staff support ongoing projects and operations, including collecting, developing and archiving data, data annotation, tool development, sponsored-project support and collaborations with various partners. The general catalog contains over 500 holdings in more than 50 languages. Over 85,000 copies of more than 1,300 titles have been distributed to 3,200 organizations in 70 countries.

*New Publications*

(1) 2005 NIST/USF Evaluation Resources for the VACE Program - Broadcast News <http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2012V01> was developed by researchers at the Department of Computer Science and Engineering <http://www.cse.usf.edu/>, University of South Florida (USF), Tampa, Florida and the Multimodal Information Group <http://nist.gov/itl/iad/mig/> at the National Institute of Standards and Technology (NIST). It contains approximately 60 hours of English broadcast news video data collected by LDC in 1998 and annotated for the 2005 VACE (Video Analysis and Content Extraction) tasks. The tasks covered in the broadcast news domain were human face tracking (FDT), text string detection and tracking (TDT) over glyphs rendered within the video image, and word-level text strings (TDT_Word_Level) for the videotext OCR task.

The VACE program was established to develop novel algorithms for automatic video content extraction, multi-modal fusion, and event understanding. During VACE Phases I and II, the program made significant progress in the automated detection and tracking of moving objects including faces, hands, people, vehicles and text in four primary video domains: broadcast news, meetings, street surveillance, and unmanned aerial vehicle motion imagery. Initial results were also obtained on automatic analysis of human activities and understanding of video sequences.

Three performance evaluations were conducted under the auspices of the VACE program between 2004 and 2007. The 2005 evaluation was administered by USF in collaboration with NIST and guided by an advisory forum including the evaluation participants.

The broadcast news recordings were collected by LDC in 1998 from CNN Headline News (CNN-HDL) and ABC World News Tonight (ABC-WNT). CNN-HDL is a 24-hour/day cable-TV broadcast which presents top news stories continuously throughout the day. ABC-WNT is a daily 30-minute news broadcast that typically covers about a dozen different news items. The full ABC-WNT broadcast and up to four 30-minute sections of CNN-HDL were recorded each day. The CNN segments were drawn from the portion of the daily schedule that happened to include closed captioning.


(2) 2009 CoNLL Shared Task Part 1 <http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2012T03> contains the Catalan, Czech, German and Spanish trial corpora, training corpora, development and test data for the 2009 CoNLL (Conference on Computational Natural Language Learning) Shared Task Evaluation <http://ufal.mff.cuni.cz/conll2009-st/>. The 2009 Shared Task developed syntactic dependency annotations, including semantic dependencies that model the roles of both verbal and nominal predicates.

The Conference on Computational Natural Language Learning (CoNLL) <http://www.cnts.ua.ac.be/conll/> is accompanied every year by a shared task intended to promote natural language processing applications and evaluate them in a standard setting. In 2008, the shared task focused on English; it employed a unified dependency-based formalism and merged the task of syntactic dependency parsing with the task of identifying semantic arguments and labeling them with semantic roles. That data has been released by LDC as 2008 CoNLL Shared Task Data (LDC2009T12 <http://www.ldc.upenn.edu/Catalog/catalogEntry.jsp?catalogId=LDC2009T12>). The 2009 task extended the 2008 task to several languages (English plus Catalan, Chinese, Czech, German, Japanese and Spanish). Among the new features were comparison of time and space complexity based on participants' input, and learning curve comparison for languages with large datasets.

The 2009 shared task was divided into two subtasks:

(1) parsing syntactic dependencies

(2) identification of arguments and assignment of semantic roles for each predicate
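The shared task data is distributed in a tab-separated, one-token-per-line format with a blank line between sentences; both gold and predicted columns are present, and one extra APRED column is appended per predicate in the sentence. A minimal reader sketch, assuming the standard CoNLL-2009 column layout (the dictionary keys are my own naming):

```python
# Fixed columns of the CoNLL-2009 format; the P* columns hold
# automatically predicted counterparts of the gold annotations.
COLUMNS = ["id", "form", "lemma", "plemma", "pos", "ppos", "feat", "pfeat",
           "head", "phead", "deprel", "pdeprel", "fillpred", "pred"]

def read_conll09(lines):
    """Yield sentences as lists of token dicts.

    Any columns beyond the fixed fourteen are the per-predicate
    argument columns, collected under the 'apreds' key.
    """
    sentence = []
    for line in lines:
        line = line.rstrip("\n")
        if not line:                      # blank line ends a sentence
            if sentence:
                yield sentence
                sentence = []
            continue
        fields = line.split("\t")
        token = dict(zip(COLUMNS, fields))
        token["apreds"] = fields[len(COLUMNS):]
        sentence.append(token)
    if sentence:                          # file may lack a trailing blank line
        yield sentence
```

Subtask (1) corresponds to predicting the head/deprel columns; subtask (2) to filling the pred column and the per-predicate apreds columns.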

The materials in this release consist of excerpts from the following corpora:

AnCora <http://clic.ub.edu/ancora/> (Spanish + Catalan): 500,000 words each of annotated news text developed by the University of Barcelona, the Polytechnic University of Catalonia, the University of Alicante and the University of the Basque Country

Prague Dependency Treebank 2.0 <http://ufal.mff.cuni.cz/pdt2.0/> (Czech): approximately 2 million words of annotated news, journal and magazine text developed by Charles University; also available through LDC (LDC2006T01)

TIGER Treebank <http://www.ims.uni-stuttgart.de/projekte/TIGER/> + SALSA Corpus <http://www.coli.uni-saarland.de/projects/salsa/> (German): approximately 900,000 words of annotated news text with FrameNet annotation developed by the University of Potsdam, Saarland University and the University of Stuttgart


(3) 2009 CoNLL Shared Task Part 2 <http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2012T04> contains the Chinese and English trial corpora, training corpora, development and test data for the 2009 CoNLL (Conference on Computational Natural Language Learning) Shared Task Evaluation <http://ufal.mff.cuni.cz/conll2009-st/>. The 2009 Shared Task developed syntactic dependency annotations, including semantic dependencies that model the roles of both verbal and nominal predicates.

The materials in this release consist of excerpts from the following corpora:

Penn Treebank II <http://www.cis.upenn.edu/%7Etreebank> (LDC95T7) <http://www.ldc.upenn.edu/Catalog/catalogEntry.jsp?catalogId=LDC95T7> (English): over one million words of annotated English newswire and other text developed by the University of Pennsylvania

PropBank <http://verbs.colorado.edu/%7Empalmer/projects/ace.html> (English): semantic annotation of newswire text from Treebank-2 developed by the University of Pennsylvania

NomBank <http://nlp.cs.nyu.edu/meyers/NomBank.html> (LDC2008T23) (English): argument structure for instances of common nouns in Treebank-2 and Treebank-3 (LDC99T42) texts developed by New York University

Chinese Treebank 6.0 <http://www.cis.upenn.edu/%7Echinese/ctb.html> (Chinese): 780,000 words (over 1.28 million characters) of annotated Chinese newswire, magazine and administrative texts and transcripts from various broadcast news programs developed by the University of Pennsylvania and the University of Colorado

Chinese Proposition Bank 2.0 <http://verbs.colorado.edu/chinese/cpb/> (LDC2008T07) (Chinese): predicate-argument annotation on 500,000 words from Chinese Treebank 6.0 developed by the University of Pennsylvania and the University of Colorado


(4) USC-SFI MALACH Interviews and Transcripts English <http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2012S05> was developed by the University of Southern California's Shoah Foundation Institute (USC-SFI), the University of Maryland, IBM and Johns Hopkins University as part of the MALACH (Multilingual Access to Large Spoken ArCHives) Project <http://malach.umiacs.umd.edu/>. It contains approximately 375 hours of interviews from 784 interviewees along with transcripts and other documentation.

Inspired by his experience making Schindler's List, Steven Spielberg established the Survivors of the Shoah Visual History Foundation in 1994 to gather video testimonies from survivors and other witnesses of the Holocaust. While most of those who gave testimony were Jewish survivors, the Foundation also interviewed homosexual survivors, Jehovah's Witness survivors, liberators and liberation witnesses, political prisoners, rescuers and aid providers, Roma and Sinti (Gypsy) survivors, survivors of eugenics policies, and war crimes trials participants. In 2006, the Foundation became part of the Dana and David Dornsife College of Letters, Arts and Sciences at the University of Southern California in Los Angeles and was renamed as the USC Shoah Foundation Institute for Visual History and Education.

The goal of the MALACH project was to develop methods for improved access to large multinational spoken archives; the focus was advancing the state of the art of automatic speech recognition (ASR) and information retrieval. The characteristics of the USC-SFI collection -- unconstrained, natural speech filled with disfluencies, heavy accents, age-related co-articulations, un-cued speaker and language switching and emotional speech -- were considered well-suited for that task. The work centered on five languages: English, Czech, Russian, Polish and Slovak. USC-SFI MALACH Interviews and Transcripts English was developed for the English speech recognition experiments.

The speech data in this release was collected beginning in 1994 under a wide variety of conditions ranging from quiet to noisy (e.g., airplane over-flights, wind noise, background conversations and highway noise). Approximately 25,000 of the interviews collected by USC-SFI are in English, and they average approximately 2.5 hours each. The 784 interviews included in this release are each a 30-minute section of the corresponding larger interview. The interviews include accented speech spanning a wide range of native languages (e.g., Hungarian, Italian, Yiddish, German and Polish).

This release includes transcripts of the first 15 minutes of each interview. The transcripts were created using Transcriber <http://trans.sourceforge.net/en/presentation.php> 1.5.1 and later modified.
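Transcriber saves its transcripts as XML (.trs files) in which speaker turns are interleaved with time-alignment markers. As a rough sketch of reading such a transcript, assuming the standard Transcriber element names (Trans, Turn, Sync) and using only the Python standard library:

```python
import xml.etree.ElementTree as ET

def turns_from_trs(trs_string):
    """Extract (speaker, text) pairs from a Transcriber .trs document.

    Turn text is interleaved with empty <Sync time="..."/> markers, so
    we gather every text fragment inside each <Turn> element.
    """
    root = ET.fromstring(trs_string)
    turns = []
    for turn in root.iter("Turn"):
        speaker = turn.get("speaker", "")
        text = " ".join(t.strip() for t in turn.itertext() if t.strip())
        turns.append((speaker, text))
    return turns
```

A real .trs file also carries Episode/Section structure, speaker tables and timing attributes; this sketch keeps only the speaker attribute and the running text.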

------------------------------------------------------------------------


Ilya Ahtaridis
Membership Coordinator
--------------------------------------------------------------------
Linguistic Data Consortium            Phone: 1 (215) 573-1275
University of Pennsylvania            Fax: 1 (215) 573-2175
3600 Market St., Suite 810            ldc at ldc.upenn.edu
Philadelphia, PA 19104 USA            http://www.ldc.upenn.edu

