by virtue of modern compilers alone a clone would be faster... don't forget winmx was designed to run on windows 95 revision A, in the days when single-core/single-thread 400 MHz CPUs were a luxury and multiple CPUs almost hedonistic for anything but industry use...

Quote

And BTW, it would be fun to keep a current list of all files on the WPN on a website, searchable. Search for something something, get back a list of names and hashes.

fun? yes. doable? actually very much yes. but don't forget that the one who has this list also takes legal responsibility for it... which is why opennap isn't around as much anymore...

Sorry to spoil the xmas fun, but the idea of using winmx hashes has been tried before, in early 2005, by .. KM (with myself and Me_here acting as human filters to remove any dodgy hash entries). Anyone remember the "P2P-Revolution" site?

The main drawbacks of the site were the potential for legal issues and the poor quality of hashing available in the WinMX program itself. In short, the hashes can be faked by anyone who knows how (hence the extensive manual checking of hashes - very time consuming, but we thought you folks were worth it), and in fact Media Defender had such an application built into their "anti-p2p interdiction system", codenamed "TrapperKeeper". This weakness is well known, and for this reason alone it's best we don't rerun this idea.

I still rely on the Ping function to dump many entries of junk replay-attack traffic; if the client's offline or has changed IP, the ping will obviously fail.
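The ping-based cleanup described above can be sketched roughly like this. The `Entry` shape and the `ping` callback are hypothetical stand-ins for the real WPN ping message; in real code `ping` would send the actual protocol packet and wait with a timeout.

```python
# Sketch: walk a result list and drop any entry whose claimed host no
# longer answers a ping. Replayed or spoofed results usually point at
# hosts that are offline or have changed IP, so the ping fails and the
# junk is dumped.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Entry:
    name: str
    host: str          # "ip:port" the result claims to come from

def prune_dead(entries: List[Entry],
               ping: Callable[[str], bool]) -> List[Entry]:
    """Keep only entries whose claimed host still answers a ping."""
    alive_cache = {}   # avoid pinging the same host twice
    kept = []
    for e in entries:
        if e.host not in alive_cache:
            alive_cache[e.host] = ping(e.host)
        if alive_cache[e.host]:
            kept.append(e)
    return kept
```

Caching per host matters here, since junk replay traffic tends to repeat the same few dead hosts many times over.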

I think the order of the day is to keep our methods variable and manifold, as I expect that as soon as we settle on something in great numbers our enemies will look to that area to maximise their effectiveness. By using multiple methods of operating we force them to work for it.

this is actually a somewhat viable method, but i could only get around 1500 results for type mp3 on a single test run... a single person would have more than that visible in a browse... it would need to be repeated several times, over quite a span of time, while keeping out duplicates, to generate a decent list of files/hashes...
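The "repeat the search while keeping out duplicates" step above amounts to merging passes keyed on something stable. A minimal sketch, assuming each result carries `hash`, `size` and `name` fields (field names are my assumption, not the actual WPN result layout):

```python
# Merge several search passes into one list, keyed by (hash, size) so
# the same file seen in different runs is only recorded once.
def merge_runs(runs):
    """runs: iterable of result lists; each result is a dict with
    'hash', 'size' and 'name' keys. Returns one de-duplicated list."""
    seen = set()
    merged = []
    for run in runs:
        for r in run:
            key = (r["hash"], r["size"])
            if key not in seen:
                seen.add(key)
                merged.append(r)
    return merged
```

Keying on (hash, size) rather than the name means renamed copies of the same file still collapse into one entry.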

I agree with Miners Lantern. I have more than once DL'd multiple files in the hundreds range as a joke, just to find that WinMX just about pukes on itself trying to get the job done. Once the program locks up it's still running fine, just unresponsive, but it takes FOREVER. Then you can highlight the files for deletion in the transfers window... You can watch it in Task Manager: it climbs in roughly 3 MB intervals until it finally highlights the workload, at which point it starts running again and you can select delete, which takes about double that same amount of time to actually delete them. Whereas if you close out WinMX, then use Explorer to navigate to the folder and select all/delete, it takes about one second. It also pegs the CPU/core during these operations.

I actually meant to write my own post about this a year or two ago. Just kind of forgot.

This is not a bad idea. Now if you had a custom program that would index the returned files and add them to a database, then we could search the database, whether it is local, on a website, or even on IRC. The key is, if it is online, you don't give any live links or anything resembling links, and you could have the online version filter out any incriminating words. Then throw in some light encryption (like a simple ROT). Even a new client could make use of such data.
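The "light encryption (like a simple ROT)" idea can be sketched as follows: the index stores only ROT13-obfuscated names, and queries are ROT13'd before matching, so the stored database never contains the plain search terms. This is purely an illustration of the ROT idea from the post, not real protection.

```python
import codecs

# A toy index whose stored rows are ROT13-obfuscated, so plain
# incriminating words never appear in the database file itself.
class RotIndex:
    def __init__(self):
        self._rows = []            # list of (rot13 name, hash)

    def add(self, name: str, file_hash: str) -> None:
        self._rows.append((codecs.encode(name.lower(), "rot_13"), file_hash))

    def search(self, term: str):
        # rotate the query instead of un-rotating the rows
        rot_term = codecs.encode(term.lower(), "rot_13")
        return [(codecs.decode(n, "rot_13"), h)
                for n, h in self._rows if rot_term in n]
```

Since ROT13 is its own inverse, searching works by rotating the query rather than decoding every stored row.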

local would be best... a program that attaches as secondary to the local primary that indexes the network... since it is running on the users own machine there is no legal bullshittery anyone can pull...

Quote

local would be best... a program that attaches as secondary to the local primary that indexes the network... since it is running on the users own machine there is no legal bullshittery anyone can pull...

Agreed, but more costly against an already failing network. I don't think legalities would be a problem, at least if any web version is hosted where there are no treaties. Still, each user building their own is safer.

Or a hybrid solution. What if everyone only built part of the database? Then a client could ask each host to search their own database from the WPN and give over any relevant results. Similar to how filesharing clients already work, but a new primary protocol using databases built from the old network.
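The hybrid idea above, with each host building only part of the database and searches fanning out to every host, can be sketched like this. Hosts are modeled as plain in-memory objects; on the wire each `ask()` would be a request over the hypothetical new primary protocol.

```python
# Each host indexes only part of the network; a search asks every
# shard and merges what comes back, de-duplicating by hash since
# different hosts may have indexed the same file.
class Shard:
    def __init__(self, rows):
        self.rows = rows           # list of (name, hash) this host indexed

    def ask(self, term):
        return [(n, h) for n, h in self.rows if term in n.lower()]

def network_search(shards, term):
    term = term.lower()
    results, seen = [], set()
    for shard in shards:
        for name, h in shard.ask(term):
            if h not in seen:
                seen.add(h)
                results.append((name, h))
    return results
```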

Maybe then it could be a secondary to the old and a primary to the new.

Not as likely if it's transported using the new protocol. You build from the old network, and share its results with the new...

This idea is for a stopgap client, not the old nor fully the new. And if you later find that you have connected to another hybrid client, then you do all further communication using the new. And on such a hybrid, you also do little things to harden the old protocol where possible (but not when downloading a database from the old). Like I said before, keep things as stateless as possible and don't allocate any memory until you know what you are getting is good.
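The "don't allocate any memory until you know what you are getting is good" rule can be sketched as follows: read a small fixed header first, sanity-check the declared length against a hard cap, and only then accept the payload. The 8-byte (type, length) header layout here is my assumption for illustration, not the actual WPN frame format.

```python
import struct

MAX_PAYLOAD = 64 * 1024            # refuse anything claiming to be bigger

def read_frame(buf: bytes):
    """Return (msg_type, payload) or raise ValueError on a bad frame.

    The length field is validated BEFORE the payload is touched, so a
    hostile frame claiming a gigabyte payload never causes a large
    allocation.
    """
    if len(buf) < 8:
        raise ValueError("short header")
    msg_type, length = struct.unpack("<II", buf[:8])
    if length > MAX_PAYLOAD:
        raise ValueError("declared length exceeds cap")
    if len(buf) < 8 + length:
        raise ValueError("truncated payload")
    return msg_type, buf[8:8 + length]
```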

And for those who call this imbalanced or leeching, that's why you put out an upgrade announcement, assuming that functionality still works.

I wonder, is there any code for the old primary floating around? I wonder, since the old primary is already compromised, would there be any additional harm in releasing a working implementation of it as it stands? If that were circulated, then those who want to code fixes for it can. Or if anyone wants to use it as a basis for a new protocol, they can.

It seems the best of WMP, G2, and others could be used. Add ideas such as "syn cookies" (a TCP hardening trick that will work whether the other party uses it or not) and my double-checked search idea.
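An application-level analogue of the "syn cookie" trick mentioned above can be sketched as follows: instead of allocating per-peer state on first contact, reply with a cookie that is an HMAC of the peer's address and a coarse timestamp, and only create state once the peer echoes a valid cookie back, which a spoofed-source flood cannot do. This is an analogue of TCP SYN cookies, not the kernel mechanism itself.

```python
import hashlib, hmac, os, time

SECRET = os.urandom(16)            # per-session server secret
WINDOW = 60                        # seconds a cookie slot stays valid

def make_cookie(peer, now=None):
    """Stateless cookie: HMAC over the peer address and a time slot."""
    slot = int((time.time() if now is None else now) // WINDOW)
    msg = f"{peer}|{slot}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).digest()[:8]

def check_cookie(peer, cookie, now=None):
    """Accept the current and previous slot so a cookie issued just
    before a slot boundary doesn't expire mid-handshake."""
    t = time.time() if now is None else now
    return any(hmac.compare_digest(cookie, make_cookie(peer, t - d * WINDOW))
               for d in (0, 1))
```

As the quoted line notes, this works whether or not the other party implements anything special: a legitimate peer just echoes back the bytes it was sent.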

As far as that goes, the minor protocol tweaks and post-processing are compatible with what exists. There are a number of ways to harden the existing implementation, but we all know it cannot last forever and that the hardening only delays the inevitable.

It seems a future idea would be to include some sort of minor encryption negotiated by the hubs. Maybe have each hub make a new key each time it loads and then give that key to those who are active with it. If a client has cause to be snubbed, then negotiate a new key. The idea is to prevent casual eavesdropping and monitoring by rogue clients. If they cannot tell what you are looking for or doing unless they are directly connected, maybe that would decrease some of the interference, though I admit I don't know. Or better yet, not only a hub group key, with regular or provoked key changes, but individual keys for each connected client that the primary negotiates. Thus even if a snubbed attacker knows the old group key, any key change is made using the individual keys.

Oh, and any snubbing should be done locally only. Once you turn it into a "political" thing, you open up all sorts of doors to problems, such as all the attackers being white-listed and all the good sharers being black-listed. If you use a reputation or voting system, it can be intentionally corrupted or subverted (just like any nation's politics).

Quote

It seems the best of WMP, G2, and others could be used. Add ideas such as "syn cookies" (a TCP hardening trick that will work whether the other party uses it or not) and my double-checked search idea.

the only real part of the original protocol worth saving is the chat system, minus the method to retrieve room names... ourmx filters search but i don't know if it uses syn cookies ...

Quote

I wonder, is there any code for the old primary floating around?

as irony would have it the only primary code is ourmx... unless there is something hidden deep on an old hard drive or lost to the sands of time... the mxsock dll would be the closest you could get as far as historical code goes...

Quote

would there be any additional harm in releasing a working implementation of it as it stands?

unfortunately yes... due to the DDoS tool possibility... the original protocol is.. let's say.. very basic in parts... a little too basic... winmx was intended to work with winmx and nothing else, so the client blindly trusts that that's what it's doing...

Quote

Oh, and any snubbing should be done locally only.

most modern p2p apps do automatic snubbing of bad clients... it wouldn't be political, it would be modernizing...

----

you mention G2.. this idea seems to be disliked by certain individuals... however i have been on the side of bolting WPN chat onto a modded G2 for quite some time...

G2 is simply a whole new area of operations for me; the terminology is all different, and the current sources I have seen are all based around languages incompatible with my jumping from one client to another, so I have to decline any such work or research myself. But others may feel it's of benefit and should perhaps try their hand if they have the time; all good comes from staying busy.

I think my "political" comment was misunderstood. I was calling anything broader than local snubbing/dumping a "political" thing, since that can be exploited. The idea of sacrificing the one for the all works nicely in filesharing: if you dump a misbehaving client for the good of the node cluster, or a misbehaving superpeer/hub for the good of the network, that is good. But carrying the information about snubs beyond one level, or using a reputation system, only invites trouble. It would be nice if the network could dynamically figure out the disruptors and pass that data around, but that has the potential only to harm. If each machine has a "vote" on those that should go on permanent block lists, then the system to block disruptors can be subverted and used as another type of DoS attack. Hence the disruptors ally and make themselves the "good guys" and the rest of us the "bad guys," thus doing the opposite of what such an arrangement intends.

So I was only agreeing with what we said a long time ago: a simple snub is fine, but a system of one hub or client speaking for others is bad. Now do you get the political metaphor? As long as each node acts as an individual, cleaning up only the things immediately around it, and not collectively with reputation systems, global snub lists, etc., then we are fine. Such a reputation system would have the potential for harm and would likely be resource hungry.

As for G2 v. WMP, a lot of the terms can be crossed over: we have hubs, they have supernodes, etc. G2 is nice in that it allows for growth and is an extensible protocol, so new clients can add changes to the protocol without breaking it for the others. If a new tag is added, older clients are simply agnostic to it. So the G2 envelope is used by all G2 clients while also serving as a vehicle for information specific to a given client software.
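The extensibility property described above boils down to tag-length-value framing: a client that meets a tag it doesn't know simply skips `length` bytes and carries on. The 2-byte tag / 2-byte length framing below is my assumption for illustration; G2's real packet framing is more involved than this.

```python
import struct

def parse_tlv(buf: bytes, known_tags: set):
    """Parse a buffer of (tag, length, value) triples, keeping only
    tags the caller knows and silently skipping the rest, so older
    clients remain agnostic to newer extensions."""
    out = {}
    i = 0
    while i + 4 <= len(buf):
        tag, length = struct.unpack_from("<HH", buf, i)
        i += 4
        value = buf[i:i + length]
        i += length
        if tag in known_tags:      # unknown tags are skipped, not errors
            out[tag] = value
    return out
```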

And I meant more taking the concepts from G2, not actually using its code: taking what is said about it that sounds good and incorporating that.