I found something

I found a simple, small decompiler using Google at http://tankado.com/index.php?2008/06/21/281-babylon-bgl-decompiler
I checked, and it worked with the older version of the BGL file I had (it didn't work with the new ones).
I also verified with Filemon that it doesn't mess around (still not taking responsibility; it's not my file).
Hope this helps.

------------------ EDIT: sorry, I just realized it's the same app as bglgls, previously posted in this thread.

Last edited by d8o8s8; December 2nd, 2008 at 08:25.

I promise that I have read the FAQ and tried to use the Search to answer my question.

Szereshki, I haven't used anything special: just DevC++ 4.9.9.2 and, of course, zlib 1.2.3 downloaded as a DevC++ package. The only things to remember are the casting problem (not implicit in C++; see my posts #35 and #36), the inclusion of ctype.h in the main file, and finally the linker problem, solved by pointing the DevC++ linker to the folder containing libz.a.
If it would be useful, I can post my entire DevC++ project.
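For anyone who wants to experiment before getting the C toolchain and libz.a linked up: the inflate step the code does through zlib is also available in Python's standard zlib module. A minimal sketch (the real BGL payload's offset and framing are not shown here; this only demonstrates the inflate call, and `inflate_block` is my own name):

```python
import zlib

def inflate_block(data):
    # decompressobj tolerates trailing bytes after the end of the
    # compressed stream, unlike zlib.decompress on the whole buffer;
    # whatever follows the stream ends up in unused_data.
    d = zlib.decompressobj()
    out = d.decompress(data)
    return out, d.unused_data

# round-trip demo with some trailing bytes after the compressed stream
blob = zlib.compress(b"sample glossary entry") + b"trailing bytes"
text, rest = inflate_block(blob)
```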

regards


Could you please post the DevC++ project? I have problems with the linker; would you please tell us the details? Sorry.

Since I got most of my help from here, I figured I'd register and post what I've discovered for the benefit of all, to give something back.

I'm working specifically on the Japanese->English dictionary http://info.babylon.com/glossaries/4E9/Babylon_Japanese_English_dicti.BGL. My goal was to decompile it, extract all the data, then add additional entries from a different dictionary.

I don't know why (not a huge C person) but the code provided thus far misses a lot of entries, and, more importantly in my case, doesn't extract the alternate spellings from each record, which is critical for word recognition in Japanese. So I decided to write my own extractor (in Python, because it's awesome).

I don't know about other dictionaries, but in this one the record structure is:

- header byte - record type/length byte as described earlier by Bilbo
- length bytes - 1-2 bytes holding the length of the record
- term length byte - byte holding the length of the term
- term - the dictionary entry for this record
- 0x00 byte
- unknown byte - never figured out what this does. I suspect it specifies the record contents, e.g. has a definition, has an alternate spelling, has a classification, etc.
- definition - the term's definition, including HTML code and such
- 0x14 - separator byte (or end byte if the definition was the last part of the record)
- 0x02 - classification specifier - means a word type (noun, verb, etc.) will follow
- classification - in this case, was between 0x30 and 0x3b and was mapped in one of the 'id' records earlier in the dictionary
- alternate spellings - separated by 0x## between 0x00 and 0x30 (it seems arbitrary what the separator character is)

Note that the record length does not include the record header byte or the length bytes themselves.
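The layout above can be sketched as a small parser. This is only an illustration of the described structure, not the poster's actual program: the rule for choosing 1 vs 2 length bytes comes from Bilbo's earlier post and is not repeated here, so the count is taken as a parameter, and the big-endian length and all field names are my own assumptions.

```python
import re

def parse_record(buf, pos, length_bytes=2):
    # Parse one record of the layout described above, starting at pos.
    header = buf[pos]
    pos += 1
    rec_len = int.from_bytes(buf[pos:pos + length_bytes], "big")
    pos += length_bytes
    body = buf[pos:pos + rec_len]      # record length excludes header/length bytes

    term_len = body[0]
    term = body[1:1 + term_len]
    i = 1 + term_len + 2               # skip the 0x00 byte and the unknown byte

    sep = body.find(b"\x14", i)        # definition ends at 0x14, or at end of record
    definition = body[i:] if sep == -1 else body[i:sep]
    rest = b"" if sep == -1 else body[sep + 1:]

    classification = None
    if rest[:1] == b"\x02":            # 0x02 announces a word-type byte (0x30-0x3b)
        classification = rest[1]
        rest = rest[2:]

    # alternate spellings, separated by arbitrary single bytes below 0x30
    alternates = [a for a in re.split(rb"[\x00-\x2f]+", rest) if a]

    return {
        "type": header,
        "term": term,
        "definition": definition,
        "classification": classification,
        "alternates": alternates,
    }, pos + rec_len
```

Running it over a whole file would just mean looping `rec, pos = parse_record(data, pos)` until `pos` reaches the end.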

Anyway, armed with this I created a quick and dirty program to parse it, and lo and behold, it works. The resulting file can be run through the Glossary builder and, at least as far as I've tested, appears to be identical to the original.

The code clearly isn't designed to be flexible or anything - I seriously just threw it together in an hour - but hopefully it might provide some insight as to how to go about making the perfect decompiler :P

Last edited by Ulrezaj; December 7th, 2008 at 16:35.
Reason: spelling


I've also attached a compiled bglgls exe with a larger buffer.

It doesn't work. I tried a big BGL (>9 MB). The previous bgl2gls works great but has the problem of cutting long definitions. This one, however, exports a 600 KB GLS (incomplete) and also doesn't delete the 50 MB .dat temporary file.

I increased all the 1024s to a larger value and it worked. I also changed the character validation function to always return 1 (so characters other than English pass), except for the 0x1E and 0x1F characters (which are placed before and after a bitmap file name). I changed the target language and alphabet from English and Default to Arabic (in my case).
Now it can generate a GLS from my big BGL. Two problems still exist:
1- I tried the hFrasi advanced version (a Persian dictionary, 9.38 MB), and the reproduced BGL is 4.78 MB. Some definitions are still cut off. This problem is not related to the buffer size (try "forces" or "cut" to see).
2- Now it recognizes the bitmap file, but doesn't correctly include it in the BGL.
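The validation change described above, accepting every byte except the two that bracket a bitmap file name, amounts to something like the following. This is a sketch in Python rather than the tool's C, and both function names are mine:

```python
def is_valid_char(byte):
    # Accept everything (so non-English characters pass through),
    # except 0x1E and 0x1F, which surround an embedded bitmap file name.
    return byte not in (0x1E, 0x1F)

def strip_bitmap_names(data):
    # One possible use of the markers: drop any 0x1E ... 0x1F span,
    # including the marker bytes themselves.
    out, skipping = bytearray(), False
    for b in data:
        if b == 0x1E:
            skipping = True
        elif b == 0x1F:
            skipping = False
        elif not skipping:
            out.append(b)
    return bytes(out)
```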

These represent a basic defect in the code. Compare the same word (the author's name) from the original and reproduced BGLs:


OK szereshki, I'm happy about your progress in solving the compilation errors.
I think we need the help of the original author of the code.

Please post the code you modified here, so we can start looking at it and think about what to do.

regards
