I currently hold a lot of random knowledge about security, and am writing a book on security (which is on hold). I'm also writing a very large framework for software development in the Objective-C programming language.

I came back to Advogato because I'm looking for a place where I can write articles. Articles are different from blogs and diary entries, in my mind: a diary entry or a blog is a personal log, while an article diffuses information out to the community to increase its range of knowledge and understanding.

I'm quite hackish with simple blog entries; they're just recountings of personal experience, and don't need to be polished. Articles, books, and web sites, on the other hand, require a gentle polish to give them that firm yet smooth shine. Please be courteous.

Currently, I maintain the chpax init.d script for Hardened Gentoo, to ensure that applications which break under PaX are properly relieved of the restrictions that break them. That list needs to shrink, but braindamaged programmers don't care; thus, it only grows.


I'd like to thank all those who certified me oh so long ago, back before I forgot about this account.

I'm starting an article, called Bloat, about how bloated and clunky software is. I suppose I'll make diary posts and work on SVTK in the meantime. Please excuse the unkempt, unproofread nature of this entry.

I'm tired of the bloat these days. Look at Firefox. Look at Dillo. Do you think the extra ~100M of code and the 80-500M increase in resident size comes from adding JavaScript, client-side XML, and a plug-in layer to translate Netscape plug-ins? I doubt it. Maybe the system requirements will go up to a 12GHz CPU as well?

While I ponder this, I'm working on an object-oriented framework in Objective-C, SVTK. This framework will be pretty bloated itself, I guess; but I'll still make it well. We're talking about 44 inherent bytes in the base SVObject, plus two pthread_key_t and two pthread_mutex_t locks gotten with malloc(), however big those are.
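For the curious, the base layout I have in my head looks roughly like the sketch below, in plain C. Every field name here is a working guess of mine, not a frozen layout:

    #include <pthread.h>

    /* Sketch of the SVObject base -- field names and order are working
     * guesses, not a frozen ABI.  The keys and mutexes live in separate
     * malloc()ed allocations, pointed at from here. */
    struct SVObject {
        void *isa;                     /* class pointer */
        unsigned int retain_count;     /* reference count */
        /* ... roughly 44 inherent bytes of bookkeeping in all ... */

        pthread_key_t   *retain_key;   /* thread-local retain tracking */
        pthread_key_t   *lock_key;     /* thread-local held-lock tracking */
        pthread_mutex_t *member_lock;  /* guards ordinary member access */
        pthread_mutex_t *swizzle_lock; /* held across a swizzle */
    };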

NOTE: All numbers ahead are approximate and qualitative; do not take them as actual statistics. I gathered these approximations with a few test cases and `time`.

The need for this mass comes from my need to support fault recovery. The overhead from this should be low (very low). Let's tally the costs, in units of one simple assignment (j = k):

These numbers aren't achievable in a threaded environment; because real locks are used when threaded, the thread safety cost becomes 12, fault tolerance becomes 7.5, and the total becomes 25.

In truth, I'm going to have to sacrifice the equivalent of 12 lines of (j = k) to be thread safe, 16 if you want to be able to swizzle an object, and 6.5 to clean up after faults in the program. This is a total of 34.5 simple assignments; 22.5 in members that don't need to do special locking (which will be most members).

This doesn't include message-passing overhead. It's also based on simple, single-assignment lines of code; the line if (my_field == anArg->field) is probably ~5-7 on this scale. These estimates are extremely arbitrary anyway, and give no accurate measure of how much overhead I'll actually be adding.

This is complex shite. I supply all of this to make the programmer's life simpler. The actual swizzling process these facilities support is a major operation with a lot of overhead, and would likely be impossible to code on top of the framework.

Let's finish up with fault tolerance. The fault tolerance machinery is there to make sure that when a thread is killed off--or whenever a thread ends, really--it will attempt to release retained objects and unlock held locks. It costs a trivial amount of overhead, but it does one magical thing: it prevents memory leaks and deadlocks.

Deadlocking happens when a program encounters a blocking lock held by a thread which is either itself waiting for that lock to unblock, or which has ceased to run. In either situation, the code that locked the lock is no longer executing, so the lock can never be unlocked. Any code encountering this lock will deadlock and never continue.
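A minimal demonstration of the second case, in plain pthreads (nothing SVTK-specific here):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Locks the mutex and exits without ever unlocking it --
     * exactly the "thread has ceased to run" case. */
    static void *doomed(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        pthread_exit(NULL);         /* dies holding the lock */
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, doomed, NULL);
        pthread_join(t, NULL);

        puts("about to deadlock...");
        pthread_mutex_lock(&lock);  /* blocks forever: nobody can unlock it */
        puts("never printed");
        return 0;
    }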

My threading object will take a void *(*function)(void *) function pointer and its void * argument as arguments to create a thread. It will encase these in a structure, and pass that structure and a pointer to the threading condestructor to pthread_create() (or whatever native equivalent there is).

The condestructor will de-encapsulate the structure it is passed and execute the initializer with the argument. Upon return, it will release all local autorelease pools and then return whatever the initializer function returned.

In a memory-leak or deadlock scenario, such as when an exception terminates a thread, control will either return into the condestructor OR the condestructor will be called again with NULL as the argument. In the latter case, it simply proceeds straight to releasing the autorelease pools and then returns an error.

Once the autorelease pools are freed, we can proceed to release retained objects and unlock held locks in any order we please. This has the pleasant effect of preventing over-releasing (releasing objects and then letting autorelease pools release them again) while still releasing everything the thread retained. It also unlocks held locks, allowing other threads waiting on those locks to continue.
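Here's roughly how I picture the whole wrapper fitting together, in plain C; every name below is a placeholder for whatever SVTK ends up calling it, and the cleanup body is stubbed with comments:

    #include <pthread.h>
    #include <stdlib.h>

    /* Sketch of the threading wrapper described above. */
    struct sv_thread_args {
        void *(*function)(void *);
        void *argument;
    };

    static void sv_thread_cleanup(void)
    {
        /* 1. release all local autorelease pools      */
        /* 2. release objects retained by this thread  */
        /* 3. unlock locks still held by this thread   */
    }

    /* The "condestructor": unpack the structure, run the initializer,
     * then clean up.  Called with NULL, it skips straight to cleanup. */
    static void *sv_condestructor(void *raw)
    {
        void *result = NULL;

        if (raw != NULL) {
            struct sv_thread_args args = *(struct sv_thread_args *)raw;
            free(raw);
            result = args.function(args.argument);
        }
        sv_thread_cleanup();
        return result;   /* with raw == NULL, this is the error return */
    }

    /* Thread creation packs the user's function and argument and hands
     * the condestructor to pthread_create(). */
    int sv_thread_create(pthread_t *t, void *(*fn)(void *), void *arg)
    {
        struct sv_thread_args *args = malloc(sizeof *args);
        if (args == NULL)
            return -1;
        args->function = fn;
        args->argument = arg;
        return pthread_create(t, NULL, sv_condestructor, args);
    }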

I am pondering adding an sv_trash() to put data in a thread-local "trash bin" which is emptied upon thread termination. This would help avoid memory leaks from regular malloc()ed data.
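A rough sketch of how sv_trash() could work with a pthread key destructor; none of this exists yet, so all names are hypothetical:

    #include <pthread.h>
    #include <stdlib.h>

    /* Hypothetical sv_trash(): a thread-local list of malloc()ed
     * pointers, all free()d automatically when the thread ends. */
    struct trash_node {
        void *data;
        struct trash_node *next;
    };

    static pthread_key_t trash_key;
    static pthread_once_t trash_once = PTHREAD_ONCE_INIT;

    /* Key destructor: runs at thread exit and empties the bin. */
    static void empty_trash(void *head)
    {
        struct trash_node *node = head;
        while (node != NULL) {
            struct trash_node *next = node->next;
            free(node->data);
            free(node);
            node = next;
        }
    }

    static void make_trash_key(void)
    {
        pthread_key_create(&trash_key, empty_trash);
    }

    /* Toss a malloc()ed pointer into the current thread's bin. */
    void sv_trash(void *data)
    {
        struct trash_node *node;

        pthread_once(&trash_once, make_trash_key);
        node = malloc(sizeof *node);
        if (node == NULL)
            return;                 /* can't track it; it leaks, sadly */
        node->data = data;
        node->next = pthread_getspecific(trash_key);
        pthread_setspecific(trash_key, node);
    }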

Ah well. I should shut up now. This post is poorly written and is mostly just me talking to myself. :)

First I'd like to say, I am not affiliated with Xiph or xiph.org at all. That said...

Well, I'm working on an Ogg structure for fcomp2 ( http://fcomp.sourceforge.net/__fcomp2.plan ). I'll have to modify the format a bit. Here's what I'm thinking right now, with N frames of compressed data (one for each file) and m Master Dictionaries.

Now I'm starting to see. Streams. heh.

First off, each stream starts with 4 bytes of data that identify its type. If the Attribute or File List Streams are damaged, mirror streams can be located this way.

Our interlacing scheme is simple: BOS each stream. Then, for each stream IN ORDER, place the entire stream contiguously. Then place an EOS. This is a way to "fool" Ogg into doing a sort of self-contained Interleaved-Chained stream (i.e. [A][b][endb][c][endc][ENDA] is illegal, so it's [A][b][c][bdata][bdata][ENDb][cdata][cdata][ENDc][ENDA], which is legal). You CAN interleave however you want, but why bother slowing down the decoding process? It would (preferably) look like this:

Stream 1: Attributes Stream
Stream 2: File List Stream
Stream 3..(m+2): Master Dictionaries (originals)
Stream X..(X+N-1): Compressed Files (mirrors interleaved as desired)
Second to Last Stream: File List Mirror
Final Stream: Attributes Mirror

Stream 1: Attributes Stream

The first and last streams are Attributes Streams. They store the following information:

Magic Number -- 0x00000000

Stream Numbers of File List Streams -- There are usually 2 File List Streams, sometimes more. These are stored as a set, giving the serial number of each such stream.

Stream Numbers of Attribute Mirrors -- Attribute Mirror serial numbers. If this Attributes Stream is corrupted, these are sought first and their magic numbers checked, to avoid a long-winded full physical stream scan (if possible).

Master Dictionary Index -- Gives a list of Master Dictionary numbers, and a list of stream serial numbers for each of them.

May itself be gzip'd. If it's gzip'd, its magic number is 0x80000000

Stream 2: File List Stream

The File List Stream stores the path, file name, stream serial number, and Master Dictionary of each file, one entry per file.

Magic Number: 0x00000001

May itself be gzip'd. If it's gzip'd, its magic number is 0x80000001

Stream 3..(m+2): Master Dictionaries (originals)

These don't exist if you have no Master Dictionaries. If they do exist, they contain both the data and the length of each string in the dictionary. This means that the first occurrence of a dictionary entry in the fcomp2 compressed stream CAN be replaced with a dictionary index.

Master Dictionaries store both the data AND the length; the indices are determined during the reading of the dictionary (sketched below, after the field list).

Magic Number: 0x00000002

May itself be gzip'd. If it's gzip'd, its magic number is 0x80000002
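Reading a dictionary might look something like this. The length encoding (32-bit little-endian) is an assumption on my part; I haven't nailed the on-disk layout down yet:

    #include <stdio.h>
    #include <stdlib.h>

    /* Each dictionary entry is a length followed by that many bytes of
     * data; the index is simply the order in which entries are read. */
    struct dict_entry {
        size_t length;
        unsigned char *data;
    };

    /* Reads `count` entries from `fp`; entry i gets dictionary index i. */
    struct dict_entry *read_dictionary(FILE *fp, size_t count)
    {
        struct dict_entry *dict = calloc(count, sizeof *dict);
        size_t i;

        if (dict == NULL)
            return NULL;

        for (i = 0; i < count; i++) {
            unsigned char lenbuf[4];
            if (fread(lenbuf, 1, 4, fp) != 4)
                break;
            /* 32-bit little-endian length -- an assumption */
            dict[i].length = lenbuf[0] | (lenbuf[1] << 8)
                           | (lenbuf[2] << 16) | ((size_t)lenbuf[3] << 24);
            dict[i].data = malloc(dict[i].length);
            if (dict[i].data == NULL
                || fread(dict[i].data, 1, dict[i].length, fp) != dict[i].length)
                break;
        }
        return dict;
    }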

Stream X..(X+N-1): Compressed Files

The compressed files can go into any streams. Spewed or interleaved throughout are whatever mirrors of the Master Dictionaries, the Attributes Streams, and the File List Streams you want for error recovery.

I've got to write out an adapted fcomp2 storage format for this. For now, we'll go with the following:

Magic Number: 0x00000003

Master Dictionaries -- A list of Master Dictionaries. If more than one is specified, then each is appended to the end of the dictionary as it is read. Also, the stream may have its own dictionary, which is appended to the end of this.

Dictionary -- This dictionary, if present, is appended to the end of the final dictionary built from the Master Dictionaries.

Huffman Encoding Information -- Flags and values that describe how to set up the Huffman encoding.

Compressed Stream -- The compressed stream. This uses all of the above to decode to the original stream.

May itself be gzip'd. If it's gzip'd, its magic number is 0x80000003. This is provided for consistency, i.e. the code can just check whether the MSB of the Magic Number is on to see whether any stream type is gzip'd. Gzipping already-compressed data is pointless, though; don't do this. :)
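In code, the magic-number scheme boils down to something like this (the constant names are mine, not part of the format):

    #include <stdint.h>

    /* Stream-type magic numbers as described above; the MSB is the
     * gzip flag. */
    #define FC2_GZIP_FLAG 0x80000000u

    enum fc2_stream_type {
        FC2_ATTRIBUTES = 0x00000000u,
        FC2_FILE_LIST  = 0x00000001u,
        FC2_DICTIONARY = 0x00000002u,
        FC2_FILE_DATA  = 0x00000003u
    };

    /* Is this stream gzip'd?  Just test the MSB. */
    static int fc2_is_gzipped(uint32_t magic)
    {
        return (magic & FC2_GZIP_FLAG) != 0;
    }

    /* The underlying stream type, with the gzip flag stripped. */
    static uint32_t fc2_stream_type(uint32_t magic)
    {
        return magic & ~FC2_GZIP_FLAG;
    }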

Second to Last Stream -- File List Mirror

A mandatory File List Mirror at the end of the physical stream, for error control.

Final Stream -- Attributes Mirror

The primary Attributes Mirror. If the Attributes Stream is dead, this one should at least exist as a failsafe, even if no other mirrors do.

The above should provide enough to allow seeking through the stream to any file; a type of "Solid" compression (shared dictionary; a real "Solid" archive would be a compressed tarball); and enough to actually stream, by sending the Attributes and File List, letting a client select a file, and then sending the Master Dictionary and File Streams. This is especially good for compressing entire web sites and letting a web server supply them to a (compatible) browser, since the server can find everything from the browser request.

Right now I'm hoping to find someone who understands trackers (music software) to help me finish the Advanced Tracker pre-planning. Understand: IT WILL HAVE A SEQUENCER BACKEND! The entire playback engine is event-driven. The trick is converting legacy modules to AT format. It will also be object-oriented C++.

fcomp2 should be pretty nice for compression size. I have someone working on that currently--I think. :) He's to look at the doc and implement it from scratch on his own, except that I'll give him the fcomp code (which doesn't currently work) so he can rip out things like the fcomp_scan functions.

fdber is what, sleeping?

Sampletune is one I need for Advanced Tracker, to add extended sample control (auto-tuning, actually). It'll be cool: no more booting Windoze, opening AnalogX SampleTune, tuning a ton of ripped samples, and then loading Modplug under Wine in Linux to fine-tune. Load AT right up in Linux or Windows or on a Mac or whatever, load the converted SPC or the ripped audio, hit tune, fine-tune. Bingo. No intermediate steps.

And wtf is daedalus? He's supposed to be working on CPD/UPD, a database for package management on Linux/Unix designed to replace RPM, DPKG, Ports/Portage/emerge, and whatever Slackware .tgz uses. It is intended to integrate with all package managers; see http://cpdupd.sourceforge.net/cpm.html for an explanation.

I'll get back to ESIC one day... don't tell NT I said this, but I'm really annoyed with him wanting to design it to look like C, i.e. with brackets and C-ish functions. I want to just extend BASIC to have pointers, classes, and the struct keyword. I also want to header-ize it so that everything isn't crammed together, and make it non-specific to the system.

E-MAIL ADVANCING

The next project I'm starting after all of this is a mailer message control plug-in structure, and an e-mail client that uses it, to supply a simple and easy way to pass messages through spam filters, compression, encryption, and so on. I'm thinking eventually all e-mail clients will support compression, spam filtering, and address-to-address cypher auto-negotiation.
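The plug-in interface I'm picturing is tiny. A sketch in C, with every name hypothetical:

    /* Hypothetical plug-in interface: a message passes through each
     * plug-in's outgoing hook in order when sent, and back through
     * the incoming hooks in reverse order when received. */
    struct mail_plugin {
        const char *feature;   /* "bzip2", "cipher", "spamfilter", ... */

        /* Transform a message body; return a new buffer and its length
         * via out_len, or NULL on failure.  Caller frees the result. */
        char *(*outgoing)(const char *data, long len, long *out_len);
        char *(*incoming)(const char *data, long len, long *out_len);
    };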

The auto-negotiation requires a database of e-mail addresses that builds itself automatically. First, when sending to an unknown address, you can tell the client to send a message there querying supported features. The return message gives a response for each. E-mail query header:
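Something like the following; I haven't frozen the field names, so take these as illustrative:

query-features: compress bzip2 cypher
Requested-cypher: 00-F9-CC-A7-B9-24-17-FF

And the reply:

feature-response: compress=no bzip2=yes cypher=yes
cypher-request-response: accept 00-F9-CC-A7-B9-24-17-FF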

The Requested-cypher is a number representing the cypher key to be used; it's 64-bit hex, as seen there. If there is a known cypher between the two addresses, the sender sends one of these, and the client on the other side replies with a cypher-request-response header.

In this example, compress (proprietary LZ0) is unsupported at the queried address. Once the response (or a "Mailer Error" reply) comes back, the message is dealt with (i.e. compressed/cyphered and sent, or halted with the user informed of the "Mailer Error"). Note that the cypher-request-response is {accept/deny} [key-ID].

A cyphered message may need a sync-up first, if there is no known cypher between the two clients. Header:
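Illustrative again (field names aren't final):

negotiation-cypher: 3DES

And the reply, carrying the algorithm and the agreed key's ID:

negotiation-cypher-response: 3DES 00-F9-CC-A7-B9-24-17-FF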

After all these messages go back and forth, the mailer clients finally have a key matched to each other. The negotiation-cypher-response has two parts: the algorithm, and the key's ID. Now to send messages back and forth:

send-features: bzip2 cipher
send-cypher: 00-F9-CC-A7-B9-24-17-FF

Take note that the send-features are processed in order, left to right. In this example, the message data (not the headers) is bzip2'd, then encrypted with the key referenced by 00-F9-CC-A7-B9-24-17-FF (which is 3DES); the receiver undoes these in reverse order.
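A sketch of that left-to-right processing, with stand-in transforms that just print which step would run:

    #include <stdio.h>
    #include <string.h>

    /* Stand-ins for the real plug-in hooks. */
    static void apply_bzip2(const char *body)  { printf("bzip2(%s)\n", body); }
    static void apply_cypher(const char *body) { printf("cypher(%s)\n", body); }

    struct feature {
        const char *name;
        void (*apply)(const char *body);
    };

    static const struct feature features[] = {
        { "bzip2",  apply_bzip2  },
        { "cipher", apply_cypher },
    };

    int main(void)
    {
        const char *send_features = "bzip2 cipher";   /* from the header */
        char buf[64];
        char *tok;
        size_t i;

        /* Walk the header value left to right, applying each feature. */
        strncpy(buf, send_features, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';
        for (tok = strtok(buf, " "); tok != NULL; tok = strtok(NULL, " "))
            for (i = 0; i < sizeof features / sizeof features[0]; i++)
                if (strcmp(tok, features[i].name) == 0)
                    features[i].apply("message body");
        return 0;
    }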

Now we've negotiated the send-features and send-cypher by sending about 4 messages back and forth. Both sides CACHE the response values, so each knows the other's features. This process can be forced to occur again later, i.e. if you add an fzip2 module for fcomp2 compression.