[Corpora-List] Tokenizer for English Web Corpus

Adriano Ferraresi a.ferraresi at gmail.com
Tue Mar 13 12:40:04 CET 2007


Hi everybody,

I am currently embarking on a research project that aims to build a large
corpus of English through automatic crawling of the web. For this purpose I
would appreciate suggestions for an efficient tokenizer for English, ideally
one that takes into account specific aspects of Web writing (such as the
treatment of emoticons, typos, commonly used abbreviations, etc.). Does
anyone know of such a tool?
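To illustrate the kind of behaviour I have in mind, here is a minimal sketch (a hypothetical regex-based approach in Python, not an existing tool) that keeps emoticons and period-separated abbreviations together as single tokens instead of splitting them into punctuation:

```python
import re

# Hypothetical sketch of a web-aware tokenizer: alternatives are tried in
# order, so emoticons and abbreviations win over plain words and punctuation.
TOKEN_RE = re.compile(r"""
    [:;=8][-o*']?[)(\[\]DdPpOo/\\|]   # emoticons such as :-) ;P =D
  | (?:[A-Za-z]\.)+                   # abbreviations such as U.S.A.
  | \w+(?:['-]\w+)*                   # words, incl. contractions and hyphens
  | [^\w\s]                           # any remaining punctuation mark
""", re.VERBOSE)

def tokenize(text):
    """Return the list of tokens found in the input string."""
    return TOKEN_RE.findall(text)

print(tokenize("Great site :-) btw the U.S.A. corpus is huge!"))
```

A real tool would of course need a much richer inventory (URLs, e-mail addresses, creative spellings), but something along these lines is what I am looking for.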

I will post a summary of the answers I (hopefully!) receive.

Thank you.

Adriano Ferraresi