Invoke IT Blog

Menu

Frequency Word Lists

I originally created these word lists while I was trying to improve the dictionaries I used for my Windows Phone app, Slydr.

Of course there were commercial options – however, I was quoted about £500 per language for a nicely cleaned word list.. Me, of course, being a cheap git.. I decided to create my own.

If you decide to use it, please let me know what you are using it for. It's yours to use.

Note: I used public / free subtitles to generate these and, like most things, they will have errors.

I would like to thank opensubtitles.org, as their subtitles form the basis of the word lists. I would also like to thank Tehran University for the Persian Language corpus, which allowed me to build the Persian / Farsi word list (2011 version).

While the subtitles are free, donations do motivate further work. If you would like to donate, please click the Donate button to donate via PayPal.

If you'd like to create your own word lists, here's something to get you started. Download FrequencyWordsHelper. When you run the app, it will ask for a directory to scan and then for an output filename. Once you provide both, it will scan the directory for all txt files and create a word list out of them. The app requires .NET Framework 4.5.
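For anyone curious what such a helper does under the hood, here is a rough sketch in Python. The original tool is a .NET app and its exact tokenisation rules are not documented here, so treat the splitting and filtering choices below as assumptions:

```python
import os
import re
from collections import Counter

def build_word_list(directory, output_file):
    """Scan a directory for .txt files and write a 'word frequency' list.

    A minimal sketch of what a tool like FrequencyWordsHelper does;
    the splitting rule (whitespace and hyphens) and the length filter
    are assumptions, not the original implementation.
    """
    counts = Counter()
    for root, _dirs, files in os.walk(directory):
        for name in files:
            if not name.lower().endswith(".txt"):
                continue
            path = os.path.join(root, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for line in f:
                    # split on whitespace and hyphens, lower-case everything
                    for word in re.split(r"[\s\-]+", line.strip().lower()):
                        if len(word) > 1:  # single characters are ignored
                            counts[word] += 1
    with open(output_file, "w", encoding="utf-8") as out:
        # most frequent words first, "word count" per line
        for word, freq in counts.most_common():
            out.write(f"{word} {freq}\n")
```

Pointing it at a folder of subtitle text files and an output path reproduces the general shape of the downloadable lists.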

You are welcome, and thank you. I stumbled upon Wiktionary when I was looking for word lists, and I know it's not easy to find many free / extensive sources. Since I only frequented English Wiktionary, I only posted links on the English version of the page.

What other details are available? For example, what is the size of the corpus? I'm comparing your frequency counts with counts from here -> http://hnc.ilsp.gr/default.asp and my own small personal corpus.

Well, my overall subtitle corpus for all languages was a 53 GB compressed archive. Unfortunately I deleted everything except the original archive. Let me open it and I can at least give you an idea of the number of files. Based on my tests, frequency lists generated from a decent amount of data should be comparable. I can assure you that there were a lot more entries than the 50k I used and provided for download.

I suggest you lemmatize your word lists rather than only presenting them as word forms (group the verb forms walks, walked under walk_V, and the noun forms a walk, the walks under walk_N), and similarly for the other languages. Here is an overview of software to do so: http://en.wikipedia.org/wiki/Constraint_Grammar

Well, I could do that, but the word lists I consume are for a keyboard app, and I need raw words to match user input. In fact, when I started I came across a few word lists and could not use them for my requirement, purely because, depending on user input, I would want to show "walked" in my app – and lemmatised word lists would make loading a lot slower.

Maybe at some point, once I am done with my current workload, I will look into lemmatising the lists. Thanks for writing.

That's fantastic work indeed. I am using it for studying Russian now. Learning the most frequent words helps you understand the spoken language much more easily and quickly. And it's really fabulous that there are people who share that. Thanks.

There.. I've added the links to the Korean list I had.. it was small.. after initial clean-up it was only 14k or so words. Not a very high quality frequency list, I must admit – but at least something to start with.

The size of the word lists depends on how many subtitles are used (and their size and the unique word count in them). I have a log somewhere of the number of files / words per language:
Total files: 15
Unique word count: 19215
Total word count: 369216225
See how low the file count was. The higher the file / word count, the better the quality of the word list.

Well, the format I used for the word list is a fairly generic one I found around. It goes like this:
word1 wordfrequency1
word2 wordfrequency2
word3 wordfrequency3
The word comes first, followed by its frequency (a number), with a space in between.
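The format above is trivial to parse; a small Python sketch (the filename and the skip-malformed-lines behaviour are my own assumptions):

```python
def load_word_list(path):
    """Parse a 'word frequency' list: one entry per line,
    word first, then the count, separated by a space."""
    freqs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                continue  # skip malformed lines
            word, count = parts
            freqs[word] = int(count)
    return freqs
```

Loading one of the 50k files this way gives you a word-to-count dictionary ready for lookups.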

Thanks, Dave.) But isn't that just a crazy figure? ~700 BILLION words – can it be true? @_@ As far as I know, the largest corpus available is COCA, and that is made up of only around 425 million words!! http://www.wordfrequency.info

The unique word count is the count of unique occurrences. Subtitles have a lot of errors; for example, the words "i don't" sometimes occur as "idon't", which would count as a unique word. However, its occurrence would be a lot further down the list than "i" and "don't", as those two words are used far more often. The total word count is just a count of all words in all subtitles. Each word could be repeated thousands of times.

Dave, I meant the total words figure, of course – the one that makes up your corpus. It appears that this figure of ~690 billion words is just incorrect, because a simple calculation such as "Total word count / Total files" gives six and a half million words per file, which, you must acknowledge, simply can't be true, especially for an average subtitle file, which is supposed to be quite small in terms of the number of words it contains.

6.5 million words per file would mean a file of at least 10 MB. So, again: this can't be a true figure — Total word count: 690,318,369,316.

Judging by word occurrences in your corpus, its size can't exceed that of COCA.

top10 from yours

you — 21,953,223
the 18609293
to 13815857
and 8134383
it 7913344
of 7131178
that 6534300
in 6010124
is 5924671
me 5619307

top10 from coca’s

the a 22,038,615
be v 12545825
and c 10741073
of i 10343885
a a 10144200
in i 6996437
to t 6332195
have v 4303955
to i 3856916
it p 3872477

So, it turns out the total word count of your corpus is slightly less than that of COCA, which means the total number of words in yours must be around 400,000,000.

You were correct. I re-ran the word list generator a couple of times and found the mistake I had made in computing the total count. The other details are correct; however, the total word count came to 765,703,147, not 690,788,712,769.

I was about to think the question had been closed, but yet another thought has just occurred to me, this time about the total number of files.)

This number of 107,000 files, as it seems to me, might also be incorrect… why? Because the total number of subtitle files on http://www.opensubtitles.org is currently 1,593,684. But this figure seems not to refer to the number of UNIQUE files on the site, but rather to the number of all available files, while it is a well-known fact that sometimes several or even a dozen files can be attached to a single movie. For example, 62 English subtitles are currently attached to the movie Avatar: http://www.opensubtitles.org/ru/search/sublanguageid-eng/idmovie-19984
Thus, all I want to say is: are you sure all the files making up your English corpus are really unique? The current figure of 107 thousand files seems dubious, as it implies there are 107 thousand English-language movies, which is an enormous number.

That is true… based on my random scan, I would say most directories had only one file; however, there were a few directories with multiple files (repeated subtitles), and in at least one case I saw 10 entries. So yes, there is a flaw – though some subtitles are split into parts and some are not.

Well, as long as there are recurring files in the corpus, it can't be fully authentic. Moreover, this also means the total words figure should be lowered even further, so I think you have to solve this problem somehow for the sake of accuracy, although it might prove to be not an easy task this time, indeed…

True.. I do understand how a corpus is created. I might have to write my own directory scan mechanism – it's just a problem of logic and time.. it takes ages to generate word lists… more than a day, as I couldn't be bothered to multi-thread it (the overall data is more than 50 GB compressed). I'll try to update the files over the next few days, depending on what I am doing.

My reworked frequency list builder works well.. I managed to process all languages, though I need to rework English to handle subtitle annoyances with don't etc… it's usually split into don' t as two words.. hopefully that will be done at some point this week.. right now both desktop and laptop are busy downloading maps for another project.

This is great news, Dave!.. Have you thought of finding some way to lemmatise your corpora, at least the English one? It would be even greater!.. I'm not an expert, but I think I've seen a piece of software for that purpose somewhere on the internet…

I haven't gotten around to reworking the English dictionary yet.. I have been asked about lemmatising the corpora, but so far I haven't gone that route for two reasons: 1) I consume straightforward word lists, and that's why I build them in this manner; 2) I'd need to look into it, and since I didn't need it for my own consumption it became low priority. Anyway, I've been busy with Christmas. I will try to get the English list sorted tomorrow and then probably upload the raw lists.

Total word count is the total count used for the frequency list.
Overall word count is the actual word count: some words have junk characters or are only one character long, and those are ignored. Hence Total word count <= Overall word count.
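The filtering rule described above can be sketched as a simple predicate. What exactly counts as a "junk character" in the original tool is not stated, so the regex here (ASCII lowercase letters plus apostrophe) is purely an assumption:

```python
import re

# assumption: only ASCII lowercase letters and apostrophes are "clean";
# the original tool's junk rules are unknown
JUNK = re.compile(r"[^a-z']")

def is_countable(word):
    """Length-1 words and words containing junk characters are skipped,
    which is why Total word count <= Overall word count."""
    return len(word) > 1 and not JUNK.search(word)

words = ["don't", "ok", "a", "x3", "hello"]
total = sum(1 for w in words if is_countable(w))   # words kept for the list
overall = len(words)                               # all words seen
```

Here `total` is 3 ("don't", "ok", "hello") while `overall` is 5, illustrating the inequality.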

Luigi,
The word lists I have generated ignore one-letter words like a and i. It's difficult to validate a single-character word across multiple languages unless you know the language or can spend time tuning rules per language. I know a bit about this, as I have done something similar for accents across various Latin-based European languages. If you really want one, I can generate a one-off and email it to you.

The details of the corpus are available – you should check the log file. Most languages have a log file entry in the table.

I found an extract someone had already done across various languages. I just consumed what I found – I think the resource was up to date with all movies across various genres. The concept of a frequency list dictates that it should be close to, if not representative of, actual usage. UK English usage is different from US English, which is different from that in Canada, Australia, India, etc.

This word list is a general one that doesn't represent en-UK or en-US specifically – just English in general.

Hi Dave,
never mind, I found the way to view it correctly.
Good!
Please, would you mind letting me know which other rules you adopted to construct the corpora?
– exclude words with one char
– … ?

Do you also have an idea where I could find a digital resource for Russian (and other languages) with definitions of words that I can import easily (a list in txt, csv or xml is perfect…, while pdf is not…)?

I unfortunately don't have them locally (they are on the hosting server). If I get around to generating them again, I will zip them up in a single archive.

Having said that, I will try to generate torrent files: one that references all the 50k zips and another that references all the full zips. Once I generate these, I will update this page with the torrent files.

I didn't, however, get what you mean by "by column too"! Each column currently offers a 50k zip and a full zip for that language. Providing all-language downloads in the same column would be confusing – rather, it should be a single entry at the very top, or possibly above the table itself.

For the columns, I mean all the 50k lists for every language, and all the full lists for every language, as two separate downloads like you described first – not everything in one column! It is just an extra option for downloaders to choose… not urgent or important, really :)

You can host the files on a hosting site if bandwidth is a problem. Multi upload is a good option here.

:) I have downloaded them off my small business Live hosting account.. and packaged them up… took a lot of clicks.
The 50k one can be uploaded to my host; the full one is about 80 megs and is not allowed.. I'll have to upload it to megaupload.. it's been a while since I uploaded anything there.. will have to do it from home..

Oops… sorry, I know about megaupload… I'd be out of touch if I didn't know that.. I meant multiupload – I used to host the Windows Mobile ROMs I used to create there :)… for some reason the upload is stuck at "initializing"..

Well, I used to have an excellent package which would give me tons of bandwidth and allow me to host a couple of gigs of data; however, I was not using it.. I don't even know if it still works (actually, I will check in a bit).. eventually I moved my email hosting to Microsoft Live a while back and moved my site hosting there as well.. worse is WordPress.. they allow you to upload tons of things, including movies, but not zipped files..

Your word lists are interesting. So they are composed entirely of tokenised movie subtitles? I am currently working on Twitter research and am trying to set up lists to rate words in many of these languages. Our master lists were generated by tokenising Google Books, but that tokeniser separates strings at apostrophes. This was a huge problem for our French word list, since words like c'est were appearing as two different words, c' and est, which made the set unusable. Are the words on your French word list split at apostrophes?

Yes, they are very interesting – gives you something to think about. In the case of subtitles, I used space and '-' to split the words. The French subtitles were in good condition. English subtitles, however, often had don't appearing as don' t, and my code would assume don' and t were two different words. So I changed the logic for English: if the last character of a word is an apostrophe, join it to the next word – and that worked perfectly. Try something like that.
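The join rule described above is simple to express in code. This is a sketch of the idea, not the author's actual C# implementation, and the function name is my own:

```python
def join_split_apostrophes(tokens):
    """Re-join tokens that were split after an apostrophe,
    e.g. ["don'", "t"] -> ["don't"]: if a token ends in an
    apostrophe, glue the next token onto it."""
    out = []
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok.endswith("'") and i + 1 < len(tokens):
            out.append(tok + tokens[i + 1])
            i += 2  # consume both halves
        else:
            out.append(tok)
            i += 1
    return out
```

For example, `join_split_apostrophes(["don'", "t", "know"])` yields `["don't", "know"]`.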

Not under usual circumstances. However, I am moving my host (WordPress doesn't allow files to be hosted), so they are unavailable for that reason. I should have a solution (a place to host the files) soon. If you want, I can email them to you at your hotmail.fr address – which ones would you like?

Hello, it's mentioned above that the lists are being moved to a different host. Is this still in progress? I can't find a date on any of these entries, so I have no idea whether this is an abandoned project or not. I am particularly interested in a Korean list, but the others as well.

Well, I have a bit of C# code that churns through files. What format are your files? Are they UTF-8 / Unicode text files? Are they XML files? I have two sets of routines: one deals with data in text files and another with specialised XML files.

Did you really get the Chinese word list to work? I downloaded zh_50K.txt, but no matter what options I choose when opening it in Microsoft Word and OpenOffice (both on Mac), it just displays corrupted characters. Any way around this?

Hi Dave,
I’m creating a MS Word add-in to optimize the AutoCorrect. The add-in is useful to shorten typing, not to correct typographical errors. One part of its function is to shorten typing English words with suffix.

For example: caref – careful, darkn – darkness, acceptg -accepting.

So, I need a list of English words (probably around 4,000) to create a database for the add-in. I intend to share the add-in with the public as freeware and open source. Can I get your permission to use the list of English words from your word frequency list?
Thank you.

I’ve been using the German wordlist for some Psychology experiments. We needed emotionally neutral words for the task we designed, and the top few were a great starting point. Extremely helpful, thanks very, very much!

Hi. Well done for a great site. Such a great resource. I am developing a word game and plan to use the lists for foreign language versions. It’s a crowded market so don’t expect to make any money but enjoying doing it. I plan to explain about the source of the word list but if you have developed a more “formal” list of any languages except for English, then I would be interested in obtaining (and paying) for them. If not then, your excellent list is a fantastic fall back position for me. Regards. Mitch.

Hi Dave. Thanks for your prompt reply. I think I am doing more clean up than that so I will continue working on your files (except English). This might seem a bit mad but hey ho, you never know … if you ever need a list of words in alphabetical order, where the frequency has been deleted, virtually all the words contain at least three consonants, all accents/acutes/etc have been either replaced with ordinary caps or deleted, no dupes then I’m your man! Did I say a BIT mad! It’s what I need for my app.
Once again, thanks for a great resource. If I publish my game I’ll let you know!
Regards.
Mitch.

Thanks for the wonderful collection of word lists from various languages!

I am using some of the English word lists as a marker against a dictionary word list in my word game to determine the difficulty level for individual words. It is an indirect use of your work. I would like to know how I can acknowledge/credit you?

Hi Hermit, sorry it took so long to respond. What I actually need is a document frequency list: for each word, a count of the documents in which that specific word appears… do you think this is doable for you?

Dave, thanks for your work, it can be put to so many uses. I have recently learned that the creator of a smartphone keyboard (which all use wordlists for prediction/validation) used your lists (as one input among others). I have now noted that there seem to be an unusually high number of words that are falsely spelt in lower case instead of capitals. In many languages this only affects proper nouns (which is bad enough), but for some languages which use capitals for regular nouns (like German), your list needs a lot of cleaning up before it can be relied on.
So my question is: do you do any processing that can cause this effect, or are all these errors really in the subtitle files?
Feel free to answer by e-mail if you like.

The repo I used is an open source repo. Additionally, I have little knowledge of how capital letters work in non-English languages. The two combined meant that I wouldn't know whether I could rely on each creator using capitalisation as required, or whether I could actually understand that part myself.

For that reason, I force all words to lower case to build the frequency word list. I myself used it in Slydr (a keyboard-like app for Windows Phone) that I created last year. I can look into it further, but again, without language-specific input I am helpless :(

Thanks for the explanation, Dave. I guess it probably depends on what you actually want to do with the lists. If you want to use the frequency data alone (whatever they might be good for on their own) forcing lower case might be fine. However, if you want to re-use the actual words, I’m afraid that forcing all words to lower case might do more harm than good, particularly (but not only) in languages with extensive use of capitalisation like German.

All other corpus-based word lists I've come across so far leave the data untouched¹. After all, anyone can convert a list to lower case (and recompute frequencies if desired) with minimal effort. However, restoring the original state from a lower-case list is impossible without external sources (and difficult even *with* external help, for instance dictionaries or spellcheckers). So here's an emphatic vote to leave the data unchanged, even if this might mean double entries for many words – which again would also carry potentially useful information, e.g. on the likelihood of occurrence of a particular lemma at the start of an utterance.

¹ I have seen one corpus-based list carry additional entries (with asterisks, e.g. That* or Man*) for upper-case occurrences of words whose dictionary form is lower-case, presumably where context indicated that the upper case was attributable to the position of the word (beginning of sentence or paragraph). Similarly, in your case one might argue that in occurrences where it’s reasonably likely that the capitalisation is due to the word’s position (rather than being a basic attribute of the word like in proper nouns), it makes sense to convert to lower case before processing (e.g. computing frequencies). This way, you might even provide added value to the users of the lists (who don’t have context information to make that distinction). In contrast, with indiscriminate lc conversion you do what any user can do if they want (so no real value added), but at the same time you corrupt the list for many uses.

You make a fine point. It's easy to rework it to compute frequencies in lower case but persist the case-specific word. I will, however, need a few days. Thanks for persisting and pushing your logic in a clear manner.
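The compromise discussed here – count case-insensitively but keep the original spelling – can be sketched like this. Reporting each entry under its most frequent surface form is one possible interpretation of the idea, not the author's confirmed implementation:

```python
from collections import Counter, defaultdict

def count_case_aware(words):
    """Count frequencies case-insensitively, but report each entry
    under its most frequent original spelling. A sketch of the
    idea discussed above, not the site's actual tool."""
    by_lower = defaultdict(Counter)
    for w in words:
        by_lower[w.lower()][w] += 1
    result = {}
    for variants in by_lower.values():
        # pick the most common surface form as the representative
        surface, _ = variants.most_common(1)[0]
        result[surface] = sum(variants.values())
    return result
```

With input like `["Haus", "haus", "Haus", "the", "The", "the"]` this keeps the German noun capitalised ("Haus": 3) while merging sentence-initial "The" into "the" (3).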

Hi Dave,
I'm still trying to open the Hebrew file with MS Word on OS X. I've tried 20 different encodings, incl. Unicode UTF-8, and I still get gibberish – the same with nearly all the files. Is there a quick fix? Also, concerning the English files, did you come across a resource with PoS and/or IPA transcriptions? Kind regards, Nick

Thanks a lot for the lists, Hermit! Now, three brief questions:
1) What version of the opensubtitles corpora did you use? Did you use only this source for the lists?
2) In the end, did you use all the available translations for each movie or just one?
3) For some reason, the Hebrew list seems to have a high degree of dissimilarity with equivalent lists from purely written language. Any idea why?

that is correct. I however tend to rebuild them at the same time and I am sure I did add single character entries after some discussion here. let me check it tomorrow and if required rerun the code again

Hi Dave,
Thanks for the frequency lists! I am writing a little class term paper (totally non-binding) and need to explain the source of the Bulgarian corpus (besides the info you put in the log text file). Do you know where you took the Bulgarian frequency lists from? Was it from the Bulgarian National Corpus (which is written), or did you also get data from a spoken corpus? And how do I cite your frequency lists? Thanks for your help!

Hi Dave. Awesome list. For each corpus did you only use subtitles for movies that were in their native language (i.e., only French film subtitles like Amélie for the French corpus)? Or did you also include subtitles that were translated from different languages?

The corpora include translated material (in fact in most languages other than English, an overwhelming majority of the corpus will consist of subtitles translated from English). This does introduce a certain skew that is particularly noticeable with names – while non-English corpora typically contain English names in dozens of variants, many names from the respective language are missing or underrepresented. All in all, the picture that the corpora give you represent the language as it is used in blockbusters in your local cineplex, *not* a more general picture of the language at large. That’s an inevitable consequence of the sample used and not a deficiency per se, just something to keep in mind.

How did you do these? Do you use a script for it? I ask because I would like to find somewhere, or build myself, such a list, but for specific purposes – I mean for narrow subjects, like the most frequent words for nurses, lawyers, construction workers and other such groups. I would appreciate it very much if you could help me in any way to build such lists, or give me some tools or advice on how to do it in a relatively easy, fast and cheap way.
Hoping to hear from you soon, I wish you all the best.

If I don't find any ready-made lists, I will have to do it by hand, putting words and expressions into an Excel document. There is no single source for it. I have found a few books that teach English as a second language for specific purposes – for law, nursing or medicine, for instance – and I will also use the relevant dictionaries, books for students of law, medicine and nursing school, some websites where this kind of vocab is used, etc. It will take me weeks of hard work :( So I will probably end up with lists of a few thousand words for some of the disciplines. And then I have to check the frequency of every word or expression (maybe with Google search) to choose the most frequent ones. That is very hard work, so I am looking for any possibility of doing it in a faster or easier way. So far I have not found any better idea. Do you have one, maybe?

The way I have done is using files. I can share my solution with you which can scan all files in a directory and generate word list out of it. All you’d need to do is create relevant files and run the app. Let me know if that’s good enough.

Hi Hermit,
I'm so sorry to answer you after such a long time, but I had so much work and other obligations… Yes, I think your proposal is very kind and your application sufficient for my needs. I would appreciate it very much if you shared your application with me. I will then build a folder with the documents I have already found and run your application. It would save me a lot of time and hard work.
Kind regards,
Monika

Hello Hermit,
thank you very much for helping me. I have finally managed with your application :) It works :) You're an angel :) Thank you. Tell me:
1) does it scan only one text document in the folder, or all .txt documents in the folder? Because it scanned only one of all the .txt files in the chosen folder.
2) do you have any idea how to copy the results from one column (the numbers and words in the .txt document are, let's say, in one "column") into two columns in an Excel document, with numbers in column A and words in column B?
Regards,
Monika

OK, I will try again with another folder and other files :) It was quite hard for me the first time; I couldn't at first understand what it was all about :) Maybe the second time will be easier :) I need to get used to this application. From today I will not have any access to a computer or the internet for 5 days, so I will let you know next week if I succeed this time :) Thank you, Hermit, for helping me. You're very kind to me.

thank you very much for your app.
I have finally succeeded in running it.
It worked this time.
All .txt files were scanned and I got a frequency list of all of them.
It will facilitate my teaching job a lot!

Would it be possible in any way to use it for PDF documents, as I have a lot of books in PDF format, or to make a frequency list from some websites? For instance, I would like to prepare frequency lists for my students from online journals like Le Figaro or Le Monde.

Glad it worked. The problem with PDF is manifold: it can be text + image, or image only, etc. It's difficult to work out. The easier solution is to extract the text from the PDF and operate on the extracted data.

PDF readers have an option of saving contents in text files.

Websites are easier, but they are a different kettle of fish, as the code will have to deal with markup etc. It's not difficult – just annoying, as it's easy to break such a mechanism. Plus, websites do not like screen scraping and move swiftly to block IPs.

Thanks for your work. I downloaded the Simplified Chinese list but found there are a lot of Traditional characters in it. Maybe the resources you used are a mixture of Traditional and Simplified Chinese? Not sure. But it's very useful anyway.

I had two sets of files: one was a common Chinese dictionary which had words in both Simplified and Traditional Chinese side by side, and the other was the subtitles – though those were only in one script, I can't remember which. I used the subtitle word list and then the other dictionary to build a dictionary covering both.

Unfortunately, I don't know enough to comment on the mixing of characters. I apologise.

Hello. My name is Paul. Thanks for these word lists! I will use the word list 'en.zip' for my research on English article readability. But can I ask how you built these word lists? Then I can build one myself if I need to (because I may build a word list for ESL/EFL learners).

Hi, Hermit Dave. I’m guessing your objective by using subtitles is to provide guidance to the study of spoken language. With this in mind, would it be possible to generate the frequency of combinations of words? For instance, in the previous sentence, if you searched for two word combinations, it would check the frequency of “would it”, “it be”, “be possible”, and so on. This could pick up on more frequently used phrases, idioms, etc that may have a meaning in their combination that isn’t revealed in a basic study of the individual words. It would really help in the study of idioms.

Thanks. You could do it for two consecutive words, three consecutive words, four consecutive words, etc. It would be the ones with the most hits that would be relevant as worth studying, particularly with the larger word combos, and the lower hits could be ignored. The results, if possible to import into spreadsheet cells, could then program those cells to “highlight” (ie, background changes to yellow, for instance) if the contents also appear in second spreadsheet made up of common idioms (pulled from an idiom dictionary.) Then the student could go down the list of highlighted results and study those idioms in order of frequency.
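The word-combination idea above is a standard n-gram count. A minimal Python sketch (the tokenisation here is a plain whitespace split, which is an assumption; real subtitle text would need the cleaning steps discussed elsewhere on this page):

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count frequencies of n consecutive words
    (n=2 gives bigrams, n=3 trigrams, and so on)."""
    return Counter(
        " ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)
    )
```

For example, over the tokens of "would it be possible would it", `ngram_counts(tokens, 2)` counts "would it" twice and every other bigram once; sorting by count surfaces the most frequent phrases.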

Hi Hermit Dave,
I just found your website while writing up my master's thesis on validating the vocabulary of spoken Malay for use by people with speech and language impairments. I always struggle to find high-frequency words in Malay, since few rigorous studies have used SPOKEN language as their resource. Do you mind if I email you regarding the details of how you generate the list? I will definitely cite your work in my thesis, since the word list you have overlaps substantially with mine. It is worth discussing our findings and developing the knowledge together. Thank you for such an awesome job!

I used your word list in https://github.com/dw/cheatcodes/ , a simple function for mapping BitTorrent magnet URIs to spoken English. This was just a Sunday afternoon project, but I might try to improve it later (e.g. minimising the Soundex/Levenshtein score of the chosen words).

you need to open the file as UTF8 encoded text file.
On windows I use Notepad and it can open it without any issues.

50K files are word list containing top 50000 or 50K entries.
Full files are full wordlists
Log files are logs of word list generation containing a few metrics like total word count, unique word count and total number of files processed.

Hi there. Fantastic initiative and very useful. Thanks very much! About the Korean lists: it seems the ko-2011 wordlist has inadvertently been run interspersed with Russian (according to Google Translate auto-detection)? Would it be possible at all to re-run this without Russian co-mingled? Also, this list (ko.txt) doesn’t display properly in Notepad by default as a result (I think the Russian confuses it). It can be opened in a web browser or MS Word just fine (but still has the Russian entries). Also, the ko-2012 list appears to be missing from Skydrive. There is a kk-2012 list (no idea which language this is though). This is the only subtitle-based Korean list I’ve found on the Internet so far so I’m super keen to get a working file! Thanks again :)

I have seen such wrong-language entries while checking larger lists like Arabic and Hebrew – like Mandarin, those are easier to spot. Sadly, beyond hard-coding language-specific character ranges etc., it's difficult to generate these cleaned.

I did, however, clean a few when I was consuming them. I simply rename the files to CSV, open them in Excel, sort alphabetically, cut out what I don't need and save the output again in whatever format I need.
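The same clean-up can be scripted using the hard-coded character ranges mentioned above. As an illustration for the Korean case, this keeps only entries containing at least one Hangul syllable (U+AC00–U+D7A3); the function and the range-based heuristic are my own, not part of the site's tooling:

```python
def keep_hangul_lines(lines):
    """Keep only word-list entries whose word contains at least one
    Hangul syllable (U+AC00 to U+D7A3) -- a crude way to strip, e.g.,
    Russian entries out of a Korean list, as an alternative to the
    Excel sort-and-cut approach."""
    kept = []
    for line in lines:
        parts = line.split()
        word = parts[0] if parts else ""
        if any("\uac00" <= ch <= "\ud7a3" for ch in word):
            kept.append(line)
    return kept
```

Running it over a mixed list drops the Cyrillic entries while preserving the Korean ones (and their counts) unchanged.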

I will check my output repo at home for the 2012 Korean list. KK refers to Kazakh.

OK – cool. I’ll simply employ the same technique for now and cleanse using Excel :)
Ahhh so it was Kazakh! It even stumped Google, that one! Thanks again – you are doing the language learning communities of the world a massive favour.

I want to “lemmatize” these lists, but first I’d like to get the 53 GB (as of a few years ago) of files from opensubtitles.org. How did you accomplish this? Did you email them and ask nicely, or what? I know that subtitles are a new paradigm in word frequency studies and I’d like to create my own. I’ll share them with you to post when I get done!

Thanks. I’m using this to supplement a Spanish word list I created using the EuroParl corpus. I just wanted to detect Spanish words (vs. English words), and unfortunately the EuroParl corpus is too formal for the type of data I’m using (it doesn’t contain common insults, for example). Subtitles are much more conversational, which is what I need.
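A crude way to do that kind of detection with two frequency lists is to compare how often a word appears in each; a sketch, assuming both lists have already been loaded into word-to-count dictionaries:

```python
def guess_language(word, es_counts, en_counts):
    """Pick the language in which the word is more frequent.

    es_counts / en_counts map word -> count, e.g. built from the
    Spanish and English lists on this page. Returns None when the
    word appears in neither list (or the counts are tied)."""
    es = es_counts.get(word.lower(), 0)
    en = en_counts.get(word.lower(), 0)
    if es == en:
        return None  # unknown / ambiguous
    return "es" if es > en else "en"
```

Raw counts from corpora of different sizes are not directly comparable, so dividing each count by its corpus total (the log files include total word counts) would make this more robust.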

Hello!
Thank you for your word lists. They are very helpful for a project I am currently working on: an OCR project for the University of Applied Sciences in Salzburg, where I am using them to determine the quality of an OCR framework. Currently I am using your German, English and Spanish word lists.
Greetings from Austria,
Stefan Auer, BSc

When I initially commented I clicked the “Notify me when new comments are added” checkbox, and now each time a comment is added I get four emails with the same comment. Is there any way you can remove people from that service?
Many thanks!

Hey, thanks – this is great information and a great place to start for creating my own. I’m working in Scala and playing around with some custom word extraction this weekend. Question: how do you go about selecting subtitle files to download? Is there some kind of bulk option to grab a bunch of them, or did you find yourself selecting specific movies? Thank you

Thanks for these lists. I’m building them into my open-source password tool, OWASP Passfault. They are great. I’m combining 2011 and 2012 and throwing out the #1 and #2 occurrences as outliers (they seem to have the most typos). Thanks for making them available!
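The merge-and-threshold step described above might look like this in Python (a sketch only, not Passfault's actual code; it assumes both lists are word-to-count dictionaries and drops any word whose combined count stays below a cutoff):

```python
from collections import Counter

def merge_lists(counts_2011, counts_2012, min_count=3):
    """Combine two word->count dictionaries, dropping words whose
    combined count is below min_count (words seen only once or
    twice are disproportionately typos)."""
    merged = Counter(counts_2011)
    merged.update(counts_2012)  # adds counts for shared words
    return {w: c for w, c in merged.items() if c >= min_count}
```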

Great work! Is there a nice easy way to point this at a wikipedia in a language and get a word frequency list from that? I’m mainly interested in the small to mid sized languages with under 50,000 articles. From there it might be possible to create some games to verify words and improve some of the dictionaries for a few of these.

While many have done so and it could be done, I needed spoken language, which is why I went with the subtitle source. Wikipedia does allow its content to be used.. I am sure I have come across corpora generated from Wikipedia.

I’m using your word lists to decide which words I need to learn to gain some fluency in a language – specifically Arabic, and likely French and Japanese in the future. I’m always worried about spending time on words that I won’t ever use.

When you run the app and press the “Build frequency list” button, it asks you to:
1) Select a directory (containing the text files)
2) Type a name (and location) for the generated frequency list in a Save As dialog
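For anyone without .NET, the same two-step flow is easy to reproduce. A rough Python equivalent (a sketch under the assumption of simple whitespace/word tokenization – the actual FrequencyWordsHelper rules may differ):

```python
import os
import re
from collections import Counter

def build_frequency_list(directory, output_path):
    """Scan a directory tree for .txt files and write "word count"
    lines, most frequent first."""
    counts = Counter()
    token = re.compile(r"\w+")  # Unicode-aware by default in Python 3
    for root, _dirs, files in os.walk(directory):
        for name in files:
            if name.lower().endswith(".txt"):
                path = os.path.join(root, name)
                # errors="ignore" skips the odd badly-encoded subtitle
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for line in f:
                        counts.update(t.lower() for t in token.findall(line))
    with open(output_path, "w", encoding="utf-8") as out:
        for word, count in counts.most_common():
            out.write(f"{word} {count}\n")
    return len(counts)  # number of unique words
```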

Sorry for the late reply. Along with the word list, I package 2 additional log files:
de-s.log contains a list of the files used, along with the word count in each file
de.log contains summary info like total words and unique word count etc. Have a look at those.

Hi there, great work. A couple of comments: it would be quite easy to clear other lists of the “pollution” from other character sets, e.g. Russian and Mandarin in English.
In general the algorithm does not seem to find words that include a “-”, so some important, frequent words will be missing, e.g. in Danish.
Of course, composite words and idioms would be the next great achievement – do you know of any open sources that offer these, e.g. in English? Thanks!

Yes, it is easy to clean the lists.. when consuming them I tend to do it in Excel. It can be done programmatically as well, as long as I define the correct character range for each language.

I do split words containing “-” into two words when building the list. Languages like French use ’ within words, and it is difficult to generate a list without fully understanding the language intricacies behind it.
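To make the hyphen behaviour concrete, here is a toy tokenizer sketch (hypothetical, not the helper app's actual code) showing both options – splitting on “-” as these lists do, or keeping hyphenated words whole:

```python
import re

def tokenize(text, split_hyphens=True):
    """Toy tokenizer: hyphenated words are either split in two
    (the behaviour used for these lists) or kept intact.
    Apostrophes are kept inside words either way."""
    if split_hyphens:
        text = text.replace("-", " ")
    return re.findall(r"[\w'-]+", text)
```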

Thanks for your reply. I’d be happy to show you an example of the positive impact it would have if “-”-combined words were kept as such. One other point: there seems to be a systematic quality issue with the input; in e.g. German and Dutch, but also other languages, there are many words with “ii” where the correct spelling is a single i, or il, or li. They don’t feel like normal typos but rather some interpretation error made by a computer – can you comment? Thanks in advance, Olfert

Just to say thanks a lot for creating the Greek frequency list – I am currently using it to set up a ‘Memrise’ vocab course. Do you have a longer list to hand by any chance? Or is it simple to generate using the app?

I downloaded your Ukrainian frequency list but I don’t recognize the words. I see they are in the Roman (English) alphabet not Cyrillic (Ukrainian). But I don’t recognize them in either language. Can you clue me in here?? Thanks!

Hi Dave,
You mentioned quotes from a commercial source of about £500 per language for a cleaned word list. Could you please tell me which sources sell reliable word frequency lists? We are building a keyboard app, and we need reliable word frequency lists for different languages, which we are happy to pay for. Please assist!

Thanks so much–this is awesome. I’m using the Hebrew 2012 list for my dissertation on morphology.
I hate to bother you ,since you’ve already provided such an amazing service, but do you by any chance have a record of how many subtitles were included when you downloaded Hebrew 2012 from open subtitles.org, or maybe the date when you downloaded, since people add more there all the time? Thanks!