Would anyone be willing to make a script/tool to unpack the files from Black Clover Quartet Knights? I'd upload the files, but they're pretty big and would take too long to upload. The DATH files look like file directories, I think; they can be grabbed down below.

The format is very simple, but there are no filenames stored, so be ready for some headache browsing about 13000 nameless files (13000 for the first dat alone, so double that number in total): http://aluigi.org/bms/black_cover_gdath2.bms

Also, are you sure that the files are not compressed? I was looking at one of the model files: http://www.mediafire.com/file/46cqv2woi ... l.rar/file . I am not sure, but looking at the bone names around 0xB0, they seem compressed. Some variation of RLE, maybe?

There is still something off, though. The output is fine for small files, but with slightly larger ones it seems inaccurate. The vertex/index buffer sizes don't match the sizes given in the header, and you can also see that the indices are weird (compared to the correct output on smaller files). Maybe there is padding (unnecessary bytes) that messes up the decompression?

I am not sure how I can find the source of the problem, but I will give it a try.

Edit: It wasn't unnecessary bytes, but chunk sizes. The file is separated into chunks. The smaller files with a single chunk were working fine, but those with multiple chunks were failing. So the second "DUMMY" long in your script is the chunk size. After you read that many bytes, there will be another long, which is the size of the next chunk.
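The chunk walk described above can be sketched like this. This is a guess at the layout based only on the post (a 4-byte size, then that many payload bytes, then the next size); the field width and little-endian byte order are assumptions, not confirmed facts about the game's format.

```python
import struct

def read_chunks(data: bytes, start: int = 0) -> list:
    """Split a chunked blob into its raw chunk payloads.

    Assumed layout: a 4-byte little-endian chunk size, followed by
    that many payload bytes, then the next size, and so on until the
    data runs out.
    """
    chunks = []
    pos = start
    while pos + 4 <= len(data):
        (size,) = struct.unpack_from("<I", data, pos)
        pos += 4
        if size == 0 or pos + size > len(data):
            break  # hit padding or ran off the end of the buffer
        chunks.append(data[pos:pos + size])
        pos += size
    return chunks
```

With this, the "read that many bytes, then another long" loop from the post becomes a single call that yields each compressed payload in order.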

There is something weird about the chunks: apparently the chunk boundaries are ignored, and the decompression must be applied to the whole data (basically, the "context" is maintained across the decompression of the chunks) instead of decompressing each chunk separately. That's something that can't be supported by quickbms, so I tried collecting all the chunks into a buffer and decompressing that buffer, which is indeed what's expected, but test.mdl failed:
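The collect-then-decompress approach amounts to a one-liner: because later chunks may back-reference output produced by earlier ones, the compressed payloads have to be joined and fed to the decoder in a single pass. A minimal sketch (the `decompress` callable stands in for whatever raw-LZ4 decoder is used):

```python
def decompress_across_chunks(chunks, decompress):
    """Decompress chunked data as one continuous stream.

    Decompressing each chunk separately fails here because a match in
    a later chunk can point into output produced by an earlier chunk;
    joining the compressed payloads first preserves that context.
    """
    return decompress(b"".join(chunks))
```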

The problem is that the last LZ4 sequence before a new chunk isn't complete: the 2-byte offset (the offset for copying from the output buffer) is missing. In my own implementation the 2-byte offset was necessary even when the token is 0, so it caused problems. I haven't checked how you implemented LZ4 in quickbms, but maybe quickbms also needs it?

I had to change my implementation so that, for the last sequence, only the literals are copied; copying from the output buffer (with the offset and token) is skipped. I am not sure how this can be achieved in quickbms. The closest I could get to correct output was by inserting a 0 as the 2-byte short (like a dummy offset). It runs on test.mdl without error, but the output is still not exactly correct:
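For reference, here is a minimal raw LZ4 block decoder showing the fix described above. In the standard LZ4 block format, the final sequence legally ends right after its literals, with no 2-byte offset following, which is exactly the boundary case the post hit. This is a plain sketch for study, not a hardened decoder (no bounds checking beyond the basics):

```python
def lz4_decompress_block(src: bytes) -> bytes:
    """Decode one raw LZ4 block.

    Each sequence is: token byte, literal-length extension bytes,
    literals, 2-byte little-endian match offset, match-length
    extension bytes. The last sequence may stop after its literals,
    with no offset, and must be accepted as-is.
    """
    out = bytearray()
    i, n = 0, len(src)
    while i < n:
        token = src[i]; i += 1
        # literal length: high nibble, 15 means "add 255-run bytes"
        lit_len = token >> 4
        if lit_len == 15:
            while True:
                b = src[i]; i += 1
                lit_len += b
                if b != 255:
                    break
        out += src[i:i + lit_len]
        i += lit_len
        if i >= n:
            break  # last sequence: literals only, no offset follows
        # 2-byte little-endian offset back into the output buffer
        offset = src[i] | (src[i + 1] << 8)
        i += 2
        # match length: low nibble + 4, same 255-run extension
        match_len = (token & 0x0F) + 4
        if (token & 0x0F) == 15:
            while True:
                b = src[i]; i += 1
                match_len += b
                if b != 255:
                    break
        # copy byte-by-byte: matches may overlap the bytes just written
        pos = len(out) - offset
        for _ in range(match_len):
            out.append(out[pos]); pos += 1
    return bytes(out)
```

The `if i >= n: break` line is the whole fix: treating end-of-input after the literals as a valid terminator instead of demanding two more offset bytes.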

// hard choice here:
// LZ4_decompress_safe returns errors if there are additional bytes after the compressed stream (because it's raw)
// LZ4_decompress_safe_partial returns no errors if the stream is valid
// currently I opt for the second one because it gives more freedom to quickbms and its scanner

If you're interested in how the various algorithms are implemented in quickbms, check the perform_compression function in perform.c.
