I noticed that the encoder fails to work for regular ANSI filenames if the path happens to contain Unicode characters. For example, open a command prompt in a directory called "Pośród kwiatów i cieni" and try to encode a simple "test.wav" there. Usually encoders, even if they don't support Unicode, have no problems doing this.

Hey, I just wanted to say that TAK is now faster than ever and I love it! My music playlist, which consists of 2300+ songs, is all in TAK because it's much faster than FLAC both when encoding and decoding. (I use OGG on my DAP, which is a Sansa ClipZip.)

This is not by any means a neutral test, it is the least-FLACable piece of music from my collection, and it is the least flattering for TAK that I have ever come across (it is WavPack-friendly as long as time is no object ...). Consider it a worst-case test. Music is Merzbow: I Lead You Towards Glorious Times, track 3 from Venereology (1994). For a listen: http://www.youtube.com/watch?v=gTRZdFqAOGA
Hardware: an old P4 3.06 with a spinning hard drive.

tak.exe (GUI) used. No tweaking of options except the standard presets. No firm procedure for repetition (this is not meant to be taken as hard science), but the decoding tests were repeated because they surprised me.

Encoding using “test” (as I guessed writing is the bottleneck, which definitely looks to be the case) and “compress”; p3, p3e and p3m omitted because I'm lazy, but they all fail to break the 100% threshold.

Surprise – the higher-compressed files decode faster!? It would have made sense if there were large differences in file size which had to be read from the drive, but these are virtually identical in size. The experiment was repeated due to the surprising result.

Decoding using “decompress”, writing to file; repeated because the previous result was such a surprise. Somehow I don't get very consistent results here, but this is not way off:

I noticed that the encoder fails to work for regular ANSI filenames if the path happens to contain Unicode characters. For example, open a command prompt in a directory called "Pośród kwiatów i cieni" and try to encode a simple "test.wav" there. Usually encoders, even if they don't support Unicode, have no problems doing this.

I suppose this fails because TAK always generates a full path specification. In your case it will determine the current directory and add it to the file name. Unfortunately this fails because of TAK's lack of Unicode support. Indeed, the lack of Unicode support is annoying...

QUOTE (eleria @ Jun 23 2013, 16:32)

Hey, I just wanted to say that TAK is now faster than ever and I love it! My music playlist, which consists of 2300+ songs, is all in TAK because it's much faster than FLAC both when encoding and decoding. (I use OGG on my DAP, which is a Sansa ClipZip.)

So thanks TBeck ! I wish the best for this format's future.

Thank you very much!

However, I don't expect TAK to decode much faster than FLAC. Your system must be something special.

QUOTE (Porcus @ Jun 23 2013, 19:22)

This is not by any means a neutral test, it is the least-FLACable piece of music from my collection, and it is the least flattering for TAK that I have ever come across (it is WavPack-friendly as long as time is no object ...). Consider it a worst-case test.

I am always interested in such special files. Could you send me a short snippet?

I just want to say thank you to TBeck for making such an awesome codec. I just started using TAK yesterday and must say I'm really impressed by it. Excellent compression and encoding time. In -p4m I can encode on my system at ~108x. I've been using FLAC for over 10 years now for all my music CDs as well as recorded vinyl, and I'm seriously thinking of converting it all to TAK.

I must also congratulate you on the best command line information I've seen in a CLI tool. You really don't need a manual for it.

I will! Well, if time permits... I've just optimized a new compression technique which was encoding far too slowly: 0.1 * realtime on my PC. Now it's more than 250 * realtime and therefore practicable. But its integration would require a format change, and the compression improvement isn't big enough to justify it. So I will have to look for more tuning opportunities. At some point the sum of improvements may be sufficient.

It's simply getting more and more difficult to squeeze out some more compression without significantly affecting the decoding speed.

I'm trying to get takc to work with CUETools to run some tests, but it always fails with "Takc.exe has exited prematurely with code 2: The pipe has been ended."

This is the command line I'm using: takc -e -p%M -md5 -overwrite - %O (%M is replaced by the profiles and %O by the output file by CUETools). If I try to run the command on a test file in the command prompt, it works, since I don't use a pipe in that case. WavPack works fine with CUETools and a similar configuration; what am I missing?

I wish Rockbox supported TAK; I'm interested in what battery times I would get with it.

Argh. It's because the outfile contains non-ANSI characters, so it's the dreaded "unicode not supported" case again >.< I'm not that knowledgeable about it, but why didn't you include support for Unicode right from the start?

Try foobar instead? It uses temporary filenames when encoding and then renames them when done, so it really doesn't matter whether the encoders support Unicode or not.

In many cases it works for non-Unicode-supporting CLI encoders because they don't care, or they use the full path name of those temporary filenames. However, as reported by Case and confirmed by TBeck in this thread, it will fail for takc if the target directory name contains Unicode characters not present in your locale, since fb2k creates the temporary file in the target directory and takc constructs the full path name from it.
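The failure mode described here can be sketched in a few lines: when a tool builds the full path through 8-bit (ANSI) APIs, any character missing from the active code page gets replaced, so the resulting path no longer names the real directory. This is a generic illustration (cp1252 standing in for a Western-European locale), not takc's actual code:

```python
# Sketch of the code-page round trip an ANSI-only tool performs.
# 'ó' exists in cp1252, but 'ś' does not, so it degrades to '?'
# and the constructed path no longer matches the real directory.
path = "C:\\Pośród kwiatów i cieni\\test.wav"
lossy = path.encode("cp1252", errors="replace").decode("cp1252")
print(lossy)  # the 'ś' is gone; opening this path will fail
```

A Unicode-aware tool would keep the path in wide characters end to end and never hit this lossy conversion.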

I will! Well, if time permits... I've just optimized a new compression technique which was encoding far too slowly: 0.1 * realtime on my PC. Now it's more than 250 * realtime and therefore practicable. But its integration would require a format change, and the compression improvement isn't big enough to justify it. So I will have to look for more tuning opportunities. At some point the sum of improvements may be sufficient.

It's simply getting more and more difficult to squeeze out some more compression without significantly affecting the decoding speed.

I want to venture that most TAK users would not have a major issue with a format change, as long as it is accompanied by a major revision number (i.e. TAK 3.xx) and backwards compatibility for decoding previous versions exists.

Of course, I expect you (the developer) already know this. I merely state in public that fears over 'format change' might be exaggerated (unless *somehow* the reverse-engineered TAK decoder gained a lot of traction :shrug:). Looking at all the other lossless codecs, changing the format seems to derive from necessity and evolution via cumulative enhancements.

Whatever may be decided, I will try to stay updated and active with TAK, hopefully as a contributor rather than an encumbrance.

I want to venture that most TAK users would not have a major issue with a format change, as long as it is accompanied by a major revision number (i.e. TAK 3.xx) and backwards compatibility for decoding previous versions exists.

I'm not that knowledgeable about it, but why didn't you include support for Unicode right from the start?

Because my old Delphi 6 from 2001 provides very little Unicode support (none for the GUI), and TAK uses some libraries I wrote long ago which don't support Unicode either. Even the implementation of Unicode support for the command line version alone would be a lot of work. Currently I am not sure if I will implement Unicode support before porting TAK to C++.

Because this will definitely take quite a while, and of course I understand how important Unicode support is, I may use a top-down approach for the first step of the port:

- Put the Delphi codec core into a library (DLL) which can be called from C++.
- Port the much less comprehensive application logic to C++ and add Unicode support.

But which road I will take depends on many factors I can't foresee, so I can't make a clear statement.

QUOTE (Destroid @ Jul 1 2013, 11:09)

Of course, I expect you (the developer) already know this. I merely state in public that fears over 'format change' might be exaggerated (unless *somehow* the reverse-engineered TAK decoder gained a lot of traction :shrug:). Looking at all the other lossless codecs, changing the format seems to derive from necessity and evolution via cumulative enhancements.

QUOTE (d125q @ Jul 1 2013, 12:24)

QUOTE (Destroid @ Jul 1 2013, 11:09)

I want to venture that most TAK users would not have a major issue with a format change, as long as it is accompanied by a major revision number (i.e. TAK 3.xx) and backwards compatibility for decoding previous versions exists.

I wholeheartedly agree with this.

I definitely don't intend to remove decoding support for older codec versions. If I had ever thought about it, the latest release of the great Monkey's Audio would have taught me better... But at some point I will remove the assembler optimizations for old versions. This will simplify the work on an open source decoder release. It's quite possible that one of the next TAK releases will come without assembler optimizations for V1.x files. I don't think that's a big issue; decoding will still be quite fast.

What I would like: yes, I have changed my mind... I don't think I will alter the format without releasing an open source decoder. I would like to make it easy for the ffmpeg developers to implement the new format.

Try foobar instead? It uses temporary filenames when encoding and then renames them when done, so it really doesn't matter whether the encoders support Unicode or not.

Just to note that I've converted over 60 albums from FLAC with foobar so far and haven't experienced any problems with filenames containing lots of characters like ´ ` ~ ^ ç and so on; it converts everything flawlessly. So maybe it really depends on how the applications pass the original names to the *.tak destination file, or how they use the pipe for that matter.

The filename is taken care of by fb2k, so you should have no problem if the path to the destination directory doesn't contain Unicode characters not present in your code page. Otherwise it will fail. If you don't get it, try encoding to C:\❤\☀\ or something.

The filename is taken care of by fb2k, so you should have no problem if the path to the destination directory doesn't contain Unicode characters not present in your code page. Otherwise it will fail. If you don't get it, try encoding to C:\❤\☀\ or something.

Well I don't know, this is what I'm getting if I try to convert this album:

Yes, the example above is quite unnaturally made up and you might not run into this kind of problem. However, in my environment (Japanese, CP932), quite a few artist/album names (which are naturally used as folder names) actually trigger the problem, because Latin accented letters are missing from our code page and available only through Unicode.

ChronoSphere, I'm quite certain your problem is caused by incorrect command line parameters. If you use pipes with foobar2000 you need to add the -ihs parameter for TAK; otherwise it will delete the encoded file on the assumption that something went wrong when the length didn't match.
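For what it's worth, the behaviour described here can be sketched as a simple check. This is an assumption about takc's internals for illustration only (the function name and the placeholder size value are made up); the documented part is just that -ihs makes takc ignore the header's size information:

```python
# Hypothetical sketch of the length check described above; not
# takc's real code. Without -ihs, a size mismatch between the
# header and the actual piped data is treated as a broken transfer
# and the encoded output is discarded.
def output_is_kept(declared_data_size, bytes_received, ignore_header_size):
    if ignore_header_size:          # the -ihs switch
        return True                 # trust whatever arrived on the pipe
    return declared_data_size == bytes_received

# A piping frontend may write a placeholder size it cannot know in
# advance (placeholder value assumed here), so the check fails
# without -ihs and passes with it:
kept_without_ihs = output_is_kept(0xFFFFFFFF, 176400, False)
kept_with_ihs = output_is_kept(0xFFFFFFFF, 176400, True)
```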

Adding -ihs fixes that, indeed. I did read the takc help, but it wasn't clear to me that the -ihs parameter is mandatory when piping. Why not set the parameter automatically? Or is it something specific to the way foobar is piping the file?

CUETools still doesn't work, though, but only with non-ANSI names.

One more thing: is the way TAK works suitable for a (future) GPU implementation? I remember bryant saying that WavPack, for example, isn't.

Adding -ihs fixes that, indeed. I did read the takc help, but it wasn't clear to me that the -ihs parameter is mandatory when piping. Why not set the parameter automatically? Or is it something specific to the way foobar is piping the file?

Because it's possible that a program writes a valid wave header with valid size data to the pipe. Then you would like to store it in the TAK file to be able to restore the original file with bit-identical metadata. With -ihs applied, TAK will store no header and create its own on decoding, which can differ from the original.
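To make the "creates its own header" point concrete: a decoder that regenerates the header typically emits a minimal canonical 44-byte PCM WAV header like the sketch below, so any extra chunks or nonstandard fields from the original header are gone, and the restored file can differ bit-for-bit even though the audio is identical. The layout here is just the standard PCM WAV format; the function itself is illustrative, not takc's code:

```python
import struct

# Minimal canonical 44-byte PCM WAV header: the kind of header a
# decoder would synthesize when the original one was not stored.
def canonical_wav_header(num_samples, channels=2, rate=44100, bits=16):
    block_align = channels * bits // 8          # bytes per sample frame
    byte_rate = rate * block_align              # bytes per second
    data_size = num_samples * block_align       # size of the data chunk
    return (b"RIFF" + struct.pack("<I", 36 + data_size) + b"WAVE"
            + b"fmt " + struct.pack("<IHHIIHH", 16, 1, channels, rate,
                                    byte_rate, block_align, bits)
            + b"data" + struct.pack("<I", data_size))

hdr = canonical_wav_header(44100)   # one second of CD audio
```

Any original header that carried, say, a LIST chunk or padding would not round-trip through this.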

QUOTE (ChronoSphere @ Jul 1 2013, 18:47)

One more thing: is the way TAK works suitable for a (future) GPU implementation? I remember bryant saying that WavPack, for example, isn't.

At least as well as FLAC. Basically TAK uses the same kind of prediction filter as FLAC, the asymmetric mode of MPEG-4 ALS, and LPAC. Possibly it can be implemented more efficiently, because it only requires 16 * 16 bit integer multiplications with a 32-bit accumulator. But I don't know if current GPUs provide appropriate instructions to take advantage of the simpler arithmetic.
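As a rough illustration of the arithmetic being described (16 * 16 bit multiplies summed in a 32-bit accumulator), here is a generic fixed-point prediction filter sketch. The coefficients, Q-format, and shift are made up for the example; this is not TAK's actual filter:

```python
# Generic fixed-point LPC-style predictor: 16-bit samples times
# 16-bit coefficients, accumulated in (at most) 32 bits, then
# shifted back down to the sample range.
def predict(history, coeffs, shift):
    acc = 0
    for sample, coeff in zip(history, coeffs):
        acc += sample * coeff       # each product fits in 32 bits
    return acc >> shift             # rescale from fixed point

# Made-up Q14 coefficients for a 4-tap filter (sum to ~1.0 in Q14):
history = [100, 110, 120, 130]      # previous decoded samples
coeffs = [-2048, 4096, -8192, 22528]
prediction = predict(history, coeffs, 14)
residual = 140 - prediction         # the encoder stores this value
```

If the prediction is good, the residual is small and cheap to entropy-code; the decoder runs the same filter and adds the residual back.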

But I wouldn't expect a compression advantage from a more extensive evaluation of compression parameters similar to what FLAC achieves. In most of my evaluations TAK's fast heuristics came very close to a brute force approach which tries most of the possible parameter combinations.