Support For Offsystem
Support for the Owner Free File System

It would be great to implement support for the OFFSystem within eMule, to avoid all the problems with prosecution of copyright infringements in the future, to promote this ingenious concept, and to help establish the OFFSystem.

It would be great to implement support for the OFFSystem within eMule, to avoid all the problems with prosecution of copyright infringements in the future, to promote this ingenious concept, and to help establish the OFFSystem.

The OFFSystem has a 100%+ overhead. While it's true that it provides something in the way of deniability, it's just too wasteful for practical use.

It would be great to implement support for the OFFSystem within eMule, to avoid all the problems with prosecution of copyright infringements in the future, to promote this ingenious concept, and to help establish the OFFSystem.

+1

The overhead is 30%, and you have the option to make two files out of one block, so it is quite efficient.
Speeds are very fast.

http://offload.sf.net solves all these problems that eMule has.
It should be integrated as a hybrid or under the hood, running in parallel.

eMule has 150,000 downloads per day; Ares Galaxy has 1.5 million downloads, so ten times as many, which shows eMule's lack of security.
Offload in eMule would be a new perspective.

I don't know why, but that's something very special with Ares at the moment. eMule may not be at its highest point ever, but it does not look dead to me. Besides, what do the download numbers say other than how often the installer is downloaded? eMule has not been updated for a while; shall we download it every day just to keep the numbers high? We can also download it through ed2k, which is not counted.


The overhead is 30%, and you have the option to make two files out of one block, so it is quite efficient.
Speeds are very fast.

That would IMO be a good price for what you gain, but on Wikipedia it's 150%, which again is not acceptable.

I was looking at that again recently, and as I understand the whole thing (after reading it a few times), the overhead might indeed be a lot smaller if you choose different values, for example for the percentage of "randomizers". So with a different implementation it might eventually get to a more reasonable/acceptable level.

I mean, the system itself is a great idea. It provides plausible deniability, which is IMO a much more "intelligent" approach than simply routing the files over a few nodes. An implementation of it in eMule as Kad3 or whatever (eventually with eMule-compatible block sizes, i.e. 180 KB instead of 128 KB, or 9500 KB, whatever seems better) could possibly slowly transfer all the files we now have in the network into a new one, where basically nobody could ever prove what is shared by whom. And that would be something new anonymous networks don't have, and which is IMO the reason why they are not popular: lots of users and files basically from the start.
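For reference, the XOR scheme the OFF System builds on can be sketched like this (a toy sketch: two randomizers per source block, random stand-in data, and the 128 KB block size are assumptions here; the real client's tuple and descriptor handling differ):

```python
import os

BLOCK_SIZE = 128 * 1024  # OFF's block size; eMule chunks would be 180 KB / 9500 KB

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR any number of equally sized blocks byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Store: combine a file chunk with two randomizer blocks; only the
# resulting 'stored' block (plus the randomizers, already in the
# network) is ever published -- none of them is the file itself.
source = os.urandom(BLOCK_SIZE)  # stand-in for a real file chunk
rand1, rand2 = os.urandom(BLOCK_SIZE), os.urandom(BLOCK_SIZE)
stored = xor_blocks(source, rand1, rand2)

# Retrieve: whoever holds the descriptor (the list of block IDs)
# XORs the same blocks together and gets the chunk back.
recovered = xor_blocks(stored, rand1, rand2)
assert recovered == source
```

Every published block is indistinguishable from random data on its own; only the descriptor gives it meaning.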

With the idiots thinking about ACTA and the like, that's IMO something to think about, even if it might cost some bandwidth.

There is a major difference between how eMule works and how OFF works: OFF works like Freenet, meaning that you have to donate not only your bandwidth but also some of your storage space to store blocks.

I'm not sure about the true overhead; even 200% (unoptimized) is extremely optimistic, or at least incomplete.

When you upload a file to the OFF system, you have to store each block redundantly. So while yes, the download overhead is 200% (unoptimized),
you need to add an at least one-time 1000% overhead for the initial release of the file.
And in reality you will every now and then have some additional overhead to restore the target redundancy when nodes go down.
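In rough numbers (the 3-blocks-per-chunk and 10-copy replication figures below are illustrative assumptions, loosely matching the figures in this post):

```python
file_size_mb = 100

blocks_per_chunk = 3   # assumed: 1 stored block + 2 randomizers fetched per chunk
replication = 10       # assumed: copies of each block seeded at release time

# Download side: 3 MB fetched per MB of file recovered -> 200% overhead.
download_traffic = file_size_mb * blocks_per_chunk
download_overhead = (download_traffic - file_size_mb) / file_size_mb  # 2.0

# Release side: every block pushed out 'replication' times, once -> 1000%.
release_traffic = file_size_mb * replication
release_overhead = release_traffic / file_size_mb  # 10.0

print(f"download: {download_overhead:.0%}, initial release: {release_overhead:.0%}")
```

Traffic to re-establish redundancy when caching nodes disappear would come on top of these one-time figures.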

The whole point of OFF is that there is no one out there saying "look at me, I have the entire file, me, me, me, take me!"
Everyone has only meaningless blocks that he did not generate himself but got from someone else.

I don't think that such a major design change could easily be implemented in eMule; it would be much easier to write a new client that takes such modes of operation into account from the beginning.

Except for the (in most civilized countries anyway legal) download side of the file sharing, it is not much different from one-click hosters, except for having much worse speed, but a search function instead.

David X.

NeoLoader is a new file sharing client supporting ed2k/eMule, BitTorrent and one-click hosters;
it is the first client able to download the same file from multiple networks.
NL provides the first fully decentralized, scalable torrent and DDL keyword search,
it implements its own novel anonymous file sharing network, providing anonymity and deniability to its users,
as well as many other new features.
It is written in C++ with Qt and is available for Windows, Linux and MacOS.

There is a major difference between how eMule works and how OFF works: OFF works like Freenet, meaning that you have to donate not only your bandwidth but also some of your storage space to store blocks.

With current disk capacities and prices I don't think that's an issue anymore.

DavidXanatos, on 20 February 2012 - 05:58 PM, said:

When you upload a file to the OFF system, you have to store each block redundantly. So while yes, the download overhead is 200% (unoptimized),
you need to add an at least one-time 1000% overhead for the initial release of the file.
And in reality you will every now and then have some additional overhead to restore the target redundancy when nodes go down.

Well, in the real world you also don't upload a file just once with the current system. With everyone storing random blocks over which he has no influence, the releaser will eventually not have to upload the file that often once it is in the network. With a fixed-size block cache, unsharing a file would not necessarily mean the source is gone; the blocks would still be there for a while. And in such a network nobody would be in a hurry to unshare like now, so it might have some positive effects, also regarding dead files.

DavidXanatos, on 20 February 2012 - 05:58 PM, said:

The whole point of OFF is that there is no one out there saying "look at me, I have the entire file, me, me, me, take me!"
Everyone has only meaningless blocks that he did not generate himself but got from someone else.

I don't think that such a major design change could easily be implemented in eMule; it would be much easier to write a new client that takes such modes of operation into account from the beginning.

Developing Kad wasn't done in one day either; still, it was done when it appeared necessary. It's hard to tell whether writing a new client is easier; OTOH a completely new client wasn't written for Kad either, so why for the next big step? And a new client for a new network has one big disadvantage: no files and no users. eMule (still) has plenty of both; without further development according to the needs of users (and anonymity for P2P is one of those), that might change.

DavidXanatos, on 20 February 2012 - 05:58 PM, said:

Except for the (in most civilized countries anyway legal) download side of the file sharing, it is not much different from one-click hosters, except for having much worse speed, but a search function instead.

Worse speed... well, RS throttled free users down to 30 KB/s recently, so not really fast. Also, files in the network could not be deleted like on one-click hosters.

Well, in the real world you also don't upload a file just once with the current system. With everyone storing random blocks over which he has no influence, the releaser will eventually not have to upload the file that often once it is in the network.

Yes, but NO.

The releaser needs less bandwidth all in all, but the other uploads come from other clients that are then missing said bandwidth.
The point is that all transfers from the releaser to the block caches are wasted overhead, as they do not go to any user actually wanting the file.
In eMule, all upload goes to users wanting the file.
So on eMule you have n uploads for n users, but in OFF you have n+10 or n+m uploads for the same n users; the n uploads don't come from the releaser himself but are still a drain on the overall network upload bandwidth.
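Made concrete with hypothetical figures (n = 100 downloaders, m = 10 seeded cache copies; one unit = one full file upload; both numbers are assumptions for illustration):

```python
n = 100  # users who actually want the file (assumed)
m = 10   # cache copies the releaser must seed in OFF (assumed)

emule_total = n      # every upload unit reaches someone who wants the file
off_total = n + m    # the m seeding units reach nobody who wants the file

print(emule_total, off_total, off_total - emule_total)  # 100 110 10
```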

Quote

Developing Kad wasn't done in one day either; still, it was done when it appeared necessary. It's hard to tell whether writing a new client is easier; OTOH a completely new client wasn't written for Kad either, so why for the next big step?

Well, KAD integration was simple in comparison to OFF integration.
For KAD you didn't have to change almost anything about how eMule handled file transfers etc.; you just added a few extra pieces of code.
Of course, developing KAD itself is a different matter and very complex, but the integration into eMule was simple, as almost no changes to existing code were necessary.
Integrating OFF, however, would require basically rewriting each and every core part of eMule.

Quote

And a new client for a new network has one big disadvantage: no files and no users. eMule (still) has plenty of both; without further development according to the needs of users (and anonymity for P2P is one of those), that might change.

You could give the client ed2k network support.


Well, in the real world you also don't upload a file just once with the current system. With everyone storing random blocks over which he has no influence, the releaser will eventually not have to upload the file that often once it is in the network.

Yes, but NO.

The releaser needs less bandwidth all in all, but the other uploads come from other clients that are then missing said bandwidth.
The point is that all transfers from the releaser to the block caches are wasted overhead, as they do not go to any user actually wanting the file.
In eMule, all upload goes to users wanting the file.
So on eMule you have n uploads for n users, but in OFF you have n+10 or n+m uploads for the same n users; the n uploads don't come from the releaser himself but are still a drain on the overall network upload bandwidth.

Well, we don't need to implement OFFS in that way; basically all we need is the plausible deniability part, i.e. we transfer no parts of the actual file, but some random blocks which our client happens to have in its cache. Those blocks could still be uploaded only to those clients who actually request that file, i.e. handle the blocks like we handle chunks right now, only that the one who receives the block from us would not know why we had that block: is it because we also have that file, or is it part of another file? Some random spreading of blocks could (maybe even should) of course also be implemented, so we would also have that part of plausible deniability, but nothing that uses all our bandwidth. We don't need OFFS as storage space in the network, we don't need redundancy; all we need is that the one receiving a block from us can't tell why we had it in our cache.

DavidXanatos, on 20 February 2012 - 09:00 PM, said:

Of course, developing KAD itself is a different matter and very complex, but the integration into eMule was simple, as almost no changes to existing code were necessary.
Integrating OFF, however, would require basically rewriting each and every core part of eMule.

Well, IMO it does not really matter which part of the job is the big one.

DavidXanatos, on 20 February 2012 - 09:00 PM, said:

You could give the client ed2k network support.

Well, I don't know if writing a completely new client with ed2k/Kad/OFFS, with all or at least most of the features eMule has, would be easier... but in the end it does not matter for the users; the result for them is the same.

Well, we don't need to implement OFFS in that way; basically all we need is the plausible deniability part, i.e. we transfer no parts of the actual file, but some random blocks which our client happens to have in its cache. Those blocks could still be uploaded only to those clients who actually request that file, i.e. handle the blocks like we handle chunks right now, only that the one who receives the block from us would not know why we had that block: is it because we also have that file, or is it part of another file? Some random spreading of blocks could (maybe even should) of course also be implemented, so we would also have that part of plausible deniability, but nothing that uses all our bandwidth. We don't need OFFS as storage space in the network, we don't need redundancy; all we need is that the one receiving a block from us can't tell why we had it in our cache.

No, this wouldn't work. If someone can prove that you have 100% (or nearly all) of the blocks belonging to the same file (and they could, if they looked for the blocks of that file), it would be enough for a conviction, as it is statistically extremely unlikely.
Remember, it's not a criminal charge but usually civil lawsuits, which don't have such a high standard of proof.

The important part of the OFF concept is that the people you can point your finger at never have any substantial amount of the file you are interested in.

David X.


No, this wouldn't work. If someone can prove that you have 100% (or nearly all) of the blocks belonging to the same file (and they could, if they looked for the blocks of that file), it would be enough for a conviction, as it is statistically extremely unlikely.
Remember, it's not a criminal charge but usually civil lawsuits, which don't have such a high standard of proof.

Well, sure, today you don't even need an internet connection to be sued by them, but "statistically extremely unlikely" still should not be accepted if the case goes to court.

DavidXanatos, on 20 February 2012 - 10:22 PM, said:

The important part of the OFF concept is that the people you can point your finger at never have any substantial amount of the file you are interested in.

That could be implemented in a way so that all blocks of a file are never shared at the same time. That would slow down the release process somewhat, but since the randomizer blocks would be parts of your previous releases, or files you downloaded, or random blocks you got just to have them, people starting to download the new file could start with those (and some part of the new ones, which would slowly change as blocks are uploaded), since they are in the network already.

Link64, on 20 February 2012 - 09:35 PM, said:

Some random spreading of blocks could (maybe even should) of course also be implemented, so we would also have that part of plausible deniability, but nothing that uses all our bandwidth.

If we had, for example, one upload slot which we would use for random block spreading (while all others are used for actually requested blocks), we could use that one for big parts of a new file instead of really random blocks, but we would upload just one block per user per upload session (or maybe 2 or 3, whatever seems reasonable) that way.

BTW, with the real OFFS: if you download a file, you must have all its blocks in your cache at some point, so there must be a way around that.

Well, sure, today you don't even need an internet connection to be sued by them, but "statistically extremely unlikely" still should not be accepted if the case goes to court.

That is not what I meant.

I mean the following:
If a peer attempts to download a file, he necessarily needs a way to locate all peers that have blocks belonging to the file he is interested in.
Now, in OFF every single peer he finds has only a few randomly spread blocks.
In the system you proposed he would find some peers that have not 0.1% but 10%, 50%, even 100% of the blocks, which would be sufficient proof to assert that they did not get the blocks randomly but are actually sharing this file.

Link64, on 20 February 2012 - 10:15 PM, said:

That could be implemented in a way so that all blocks of a file are never shared at the same time.

That would be pointless; if the adversary observed the peer for a few days or weeks, he would eventually gather enough info to prove that the peer is actually sharing the file.

Link64, on 20 February 2012 - 10:15 PM, said:

BTW, with the real OFFS: if you download a file, you must have all its blocks in your cache at some point, so there must be a way around that.

I don't know for sure, but I would assume that the people who download the blocks for their own use won't advertise in the network that they have them.

At least it should be like this.

Kind of like KAD: the nodes that index keywords and sources are not the same nodes that share the respective files.

David X.


If a peer attempts to download a file, he necessarily needs a way to locate all peers that have blocks belonging to the file he is interested in.
Now, in OFF every single peer he finds has only a few randomly spread blocks.
In the system you proposed he would find some peers that have not 0.1% but 10%, 50%, even 100% of the blocks, which would be sufficient proof to assert that they did not get the blocks randomly but are actually sharing this file.

Well, I'm not a lawyer, but...

1. AFAIK, you can now be sued for sharing even a single block/chunk, basically even a single byte of a file that is incriminating (in your country), if that's the size of the last chunk. That's because you are actually sharing the file or a part of it. With OFFS you never share a file, but blocks consisting of provably random data, which can't be copyrighted/illegal until they are converted back into the file.

2. Plausible deniability is still there even if you have a large part of the blocks belonging to an illegal file, simply because a user who had this file in his share could have released lots of other files which you downloaded, so the blocks belonging to this file became randomizers for those files.

Let's consider an extreme case:
Step 1:
Someone releases a larger series of legal files, but at the same time he has lots of "illegal files" (whatever that might be, it doesn't matter) in his share. Now the blocks originally belonging to the illegal files are used as randomizers for the legal files; if the illegal files are small, with file sizes under the block size, basically entire files are used as randomizers. Note, we are considering the release of a series, maybe well over a hundred files, which might be large and need many randomizer blocks. Now people downloading the series will get a set of blocks from which the legal files, as well as maybe even quite a few illegal files, can be generated. So even having the entire file would actually not be proof that you actually downloaded or shared this particular file.
Step 2:
The eMules which got the "illegal" blocks as randomizers for the legal files wouldn't know which randomizer blocks originally belonged to which file. So if those eMules use them again as randomizers for their own releases, even entire files including the randomizers could be used for a single file, i.e. someone downloading that single file would automatically have all the blocks needed for at least one of those illegal files.

I think it could also apply to large files: if someone who has just one big file in his cache and not many other blocks releases lots of other files, preferably belonging together, people downloading all those files will automatically get large parts of the illegal file as randomizers.
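The two steps above can be sketched as follows (a toy sketch: tiny block sizes and one randomizer per data block are simplifying assumptions):

```python
import os

BLOCK = 64  # toy block size; real OFF blocks are 128 KiB

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Step 1: blocks of some file A already sit in the releaser's cache.
file_a = [os.urandom(BLOCK) for _ in range(3)]

# The releaser publishes a new file B, reusing A's blocks as randomizers.
file_b = [os.urandom(BLOCK) for _ in range(3)]
published = [xor(d, r) for d, r in zip(file_b, file_a)]

# A downloader of B has to fetch the published blocks AND the randomizers,
# so all of file A's blocks land in their cache as a side effect.
downloader_cache = set(published) | set(file_a)
assert all(block in downloader_cache for block in file_a)

# Yet the same cache also reconstructs B, so holding A's blocks
# proves nothing about which file was actually wanted.
recovered_b = [xor(p, r) for p, r in zip(published, file_a)]
assert recovered_b == file_b
```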

DavidXanatos, on 20 February 2012 - 11:55 PM, said:

Link64, on 20 February 2012 - 10:15 PM, said:

That could be implemented in a way so that all blocks of a file are never shared at the same time.

That would be pointless; if the adversary observed the peer for a few days or weeks, he would eventually gather enough info to prove that the peer is actually sharing the file.

They can think/suspect it, but there's no actual proof, unless they confiscate the computer in the hope of finding the actual files, and even that might not help if the hard drive is encrypted, as then they are again sitting in front of random numbers. We are sharing blocks of random data and not any files. They have to prove you shared the file; all they can prove is that you shared random data blocks, which might eventually, by applying some funky calculations, be converted into this file.

A simple example: I can also take a 700 MB Linux image and a 700 MB movie, run the XOR thing over them, and share the resulting file. Am I sharing the movie? Or the Linux image? Am I sharing the movie as soon as I add the Linux image and the random data file to my share?
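In code, with 1 KiB stand-ins for the two 700 MB files (random data here just for the demo):

```python
import os

# Toy stand-ins for the Linux image and the movie.
linux_iso = os.urandom(1024)
movie = os.urandom(1024)

# The only thing actually shared:
shared = bytes(a ^ b for a, b in zip(linux_iso, movie))

# 'shared' alone is noise; relative to the ISO it "is" the movie...
assert bytes(s ^ a for s, a in zip(shared, linux_iso)) == movie
# ...and relative to the movie it equally "is" the ISO.
assert bytes(s ^ m for s, m in zip(shared, movie)) == linux_iso
```

The shared bytes have no fixed identity; which file they "contain" depends entirely on what you XOR them against.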

But my point is this: say that, out of a trillion blocks in the entire network, you have 1000, and 20 of these belong to a single 20- or 30-block file (randomizer + data blocks).
The plaintiff can assert that it is almost a statistical impossibility that you got these blocks by chance.

So it is implausible deniability; in a civil suit there is no "innocent until proven guilty beyond any reasonable doubt", the side that sounds more plausible wins.
So you need actual plausible deniability that the judge will believe.

Also, in some countries like Germany, having the 10 file blocks + 10 randomizer blocks in your share, even if you only ever downloaded Linux ISOs, would be illegal. It is called "Störerhaftung": it means that if you allow someone to abuse your Internet connection to commit a crime, you are responsible for his actions due to your own gross negligence.

Yes, that's a big problem for Freenet nodes and Tor exit nodes, etc., and probably a violation of human rights.

The point is that in such countries the judge doesn't ask whether you did it on purpose or even knew about it; he just asks: is it so? Can someone reconstruct a copyrighted file entirely, or to a big part, from blocks you are caching? It is irrelevant why the blocks are in your cache, just that they are there and that they enable someone who knows how to reconstruct the file.

The only way to be safe as a user providing blocks is to never have any substantial portion of any one file in your block cache.

David X.


Also, in some countries like Germany, having the 10 file blocks + 10 randomizer blocks in your share, even if you only ever downloaded Linux ISOs, would be illegal. It is called "Störerhaftung": it means that if you allow someone to abuse your Internet connection to commit a crime, you are responsible for his actions due to your own gross negligence.

Yes, that's a big problem for Freenet nodes and Tor exit nodes, etc., and probably a violation of human rights.

Since I assume you know German: here the opposite is stated, namely that "Mitstörerhaftung" is not possible due to how the OFFS works; parts 6 and 7 of the PDF are especially interesting.