[Corpora-List] Tokenizer for English Web Corpus
a.ferraresi at gmail.com
Tue Mar 13 12:40:04 CET 2007
I am currently embarking on a research project aimed at building a large
corpus of English through automatic crawls of the web. For this purpose I
would be grateful for suggestions of an efficient tokenizer for English,
ideally one that takes into account specific aspects of Web writing (such
as the treatment of emoticons, typos, and commonly used abbreviations).
Does anyone know of such a tool?
I will post a summary of the answers I (hopefully!) receive.
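For illustration, here is a minimal sketch of the kind of web-aware tokenization being asked about, written as a single ordered regular expression in Python. The pattern set (URLs, ASCII emoticons, dotted abbreviations) is an assumption about what "Web writing" should cover, not a reference to any existing tool; a production tokenizer would need a much larger emoticon and abbreviation inventory.

```python
import re

# Ordered alternatives: URLs and emoticons must be matched before the
# generic punctuation rule would split them into fragments.
TOKEN_RE = re.compile(r"""
      (?:https?://\S+)                             # URLs
    | (?:[:;=8][\-o\*']?[\)\]\(\[dDpP/\\:}{@|])    # common ASCII emoticons
    | (?:[A-Za-z]\.(?:[A-Za-z]\.)+)                # abbreviations like e.g., U.S.A.
    | (?:\w+(?:[-']\w+)*)                          # words, incl. hyphens and clitics
    | (?:[^\w\s])                                  # any remaining punctuation mark
""", re.VERBOSE)

def tokenize(text):
    """Return a list of tokens from raw web text."""
    return TOKEN_RE.findall(text)

print(tokenize("Thx :-) see http://example.com e.g. don't"))
# → ['Thx', ':-)', 'see', 'http://example.com', 'e.g.', "don't"]
```

The design choice worth noting is the ordering of alternatives: a regex engine tries them left to right, so multi-character web tokens must precede the catch-all single-punctuation rule.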