Posted
by
CmdrTaco
on Sunday March 09, 2003 @10:04AM
from the could-this-be-the-end-of-smtp dept.

m00nun1t writes "CNET has an article about the Internet Engineering Task Force (IETF) looking at what they can do about spam. According to the article, many of the proposals seem to "require changes in basic e-mail technology", which presumably means SMTP (and about time!). Maybe they are looking beyond just SMTP - anyone have any insights here?"

If an alternative to SMTP were developed, the protocol would not be likely to disappear immediately after its successor was created. The transition would be gradual, since backward compatibility would remain necessary for several years afterward. As the release of Apache 2.0 suggests, not every server administrator adopts a "technological improvement" until it becomes an adequately proven and stable product.

"If an alternative to SMTP were developed, the protocol would not be likely to disappear immediately"

The problem would be: do servers accept legacy SMTP connections (which means spammers can just connect over SMTP and take advantage of the lack of identification), or do servers refuse legacy SMTP connections (which means that either everyone has to upgrade at once, or people using SMTP software have their connections dropped)?

The problem would be: do servers accept legacy SMTP connections (which means spammers can just connect over SMTP and take advantage of the lack of identification), or do servers refuse legacy SMTP connections (which means that either everyone has to upgrade at once, or people using SMTP software have their connections dropped)?

Presumably AMTP servers (a name I'm making up, A for authenticated) would accept connections from legacy SMTP servers, but prefiltered with various ad-hoc spamblock techniques we use now (Bayesian filtering, limits on connection rates, etc.)
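A toy sketch of what that legacy-side prefiltering might look like, just to make the idea concrete (the class name, word probabilities, and thresholds here are all invented for illustration, not from any real proposal):

```python
import time
from collections import defaultdict

class LegacyGateway:
    """Hypothetical prefilter for connections from legacy SMTP hosts:
    a per-host connection-rate limit plus a naive word-probability
    (Bayesian-style) spam score on the message body."""

    SPAMMY = {"viagra": 0.95, "winner": 0.9, "free": 0.7, "meeting": 0.1}

    def __init__(self, max_per_minute=10):
        self.max_per_minute = max_per_minute
        self.recent = defaultdict(list)  # host -> connection timestamps

    def allow_connection(self, host, now=None):
        now = time.time() if now is None else now
        # Keep only timestamps from the last 60 seconds.
        window = [t for t in self.recent[host] if now - t < 60]
        self.recent[host] = window
        if len(window) >= self.max_per_minute:
            return False
        window.append(now)
        return True

    def spam_score(self, body):
        words = body.lower().split()
        probs = [self.SPAMMY[w] for w in words if w in self.SPAMMY]
        if not probs:
            return 0.5  # no evidence either way
        return sum(probs) / len(probs)

gw = LegacyGateway(max_per_minute=2)
assert gw.allow_connection("relay.example", now=0.0)
assert gw.allow_connection("relay.example", now=1.0)
assert not gw.allow_connection("relay.example", now=2.0)  # rate limit hit
assert gw.spam_score("free viagra winner") > 0.8
assert gw.spam_score("meeting") < 0.2
```

A real gateway would of course train the word probabilities from a corpus rather than hard-code them; this just shows where the two filters sit.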

Adopting a new protocol is very different from upgrading to a new version of an implementation of a protocol. In the case of a new protocol, there might be two different kinds of things going on at the same time, either with the same MTA or different MTAs. In the case of Apache 2.0, you can't have the same web site available under the new version and the old version at the same time. With a new protocol, you can easily have a transition period because of the window of concurrency. With a new version of an implementation of the same protocol, deployed for a single instance of usage (e.g. a domain), it's basically one or the other. You can run Apache 2.0 on www.test-site.example.com while Apache 1.3 still runs www.example.com. But you can't have www.example.com running both very easily.

As this comment [slashdot.org] indicates, I was merely attempting to point out that the adoption rate of Apache 2.0 is lethargic. My statement was not intended to compare two processes. I concur; migrating to another protocol entirely would be arduous and intensive.

I concur; migrating to another protocol entirely would be arduous and intensive.

Actually, his point seemed to be that actual adoption of a new protocol is simpler than upgrading your webserver since you can just add support for the new protocol without removing support for the old one. Presumably support for the old one wouldn't go away until some critical mass of [nonspam] email volume had moved from the old system to the new. If the servers were set up to receive both kinds but to only send the new kind (when dealing with a recipient that accepted the new kind), that determination would be pretty straightforward, and adoption wouldn't be very scary.

Anyway, the rate of Apache upgrade is mainly determined by a mental calculation of perceived downtime risk to a perfectly-fine existing installation versus the perceived benefit of running 2.0 instead. It's too idiosyncratic to be a very useful predictor for even other product upgrades in general, much less for adoption of a method of processing email in addition to the not-perfectly-fine current system.

Apache 2.0 does the same job as previous versions, except better,

You've obviously never tried developing anything serious with Apache2+PHP then. The fact is, perfectly usable code has to be rewritten to overcome problems, it uses more system resources, and is far from stable. I had a server running PHP + MySQL scripts that would be thrashing swap within 6-7 hours of light usage (the apache server would have to be restarted every few hours to "fix" this). The exact same scripts are now running fine back on Apache 1.3.27.

I'd advise ANYONE not to use Apache 2 if they are considering PHP. It's just a bad idea...

Convincing "larger ISPs" to implement an alternative standard would also require prodigious effort.

Actually, the more mail your site receives, the more interested you tend to be in stopping the flow of spam. If you consider how much in resources they spend dealing with spam in terms of capacity (for storage, bandwidth, processing volume, and filtering) and user complaints, it isn't that surprising. If a workable implementation ever comes out of this, you can expect the larger ISPs to have test servers up pretty quickly.

I hadn't seen this before. It's interesting, but I don't think I like this reversal for several reasons:

I don't like giving the sender that much information about where I'm reading my mail from, when I'm reading it, etc. I have to connect to their server whenever I want the whole message. Unless I automatically pull everything right away to my server... but then it's not really a pull model, just an overly complicated push.

This is a big fat lie: "In IM2000, the receiver's ISP can keep notifications in memory." If the notification is lost, the message itself might as well be lost, unless you check that store frequently. The only situation in which I could actually see that being true is something like a mailing list where it's extremely likely I'll get another message soon from the same place.

With the old systems, once the sender sends a message, it can't be changed unless they also control the receiver's system. I like that. Here they can change it up until the point the receiver actually fetches the message, and even afterward unless the receiver caches it locally, so I think receivers always will.

It's only marginally true that people can avoid fetching messages they aren't interested in. If they know based on the notification that they aren't interested, sure. But to really know you aren't interested, you likely need to see more information than the simple notification that a message is available. At least the headers will need to be sent.

It doesn't answer the question of how the receiver should authenticate to the sender when pulling the message. That's an important one.

Really, the only problem I see this absolutely solving is the mailing list one. I'm not willing to abandon a favorable model for everything else just so mailing lists are better. They can either cope or use a separate protocol without dragging everything else along with them.

I'm much more in favor of a new, authenticated push protocol. I think much of the spam problem could be eased by a more reliable way of knowing who sent a message.

The article didn't say much beyond "gee whiz, there's a spam problem!" -- not exactly a revelation. It does hint at technical solutions to the spam problem, but I wonder if that's an approach that will ever work. I think at the heart this is a problem that can only be fixed with legislation (like it or not). Something along the lines of making it illegal to send spam with forged headers would help a lot.

But it did say that the IETF is willing to get involved in developing (technical) measures, probably making or suggesting changes to the base protocols, and that carries more weight than any particular party trying to do the same.

Among many, many others, I saw Vernon Schryver, the guy behind Distributed Checksum Clearinghouse [rhyolite.com], on the list. It's been pretty high volume, though, and I haven't had a chance to really spend some time reading it yet.

The really interesting thing about dupes is that they tend to suggest that there are large numbers of readers who pay more attention to the site than the guys running it.

If I was running slashdot, I'd probably push the people who had the power to approve stories to read each and every story that gets approved. It seems like a reasonable minimal commitment to the community even for volunteers, and presumably some of these guys are drawing actual paychecks for the work they do here.

The dupes show that the guys approving the stories don't really care enough to take the time to do that.

What they need is some software that can take care of this for them as they are going through the stories; just rewrite SLASH to take care of it (I would, but I am up to my eyeballs in greased potbelly pigs atm).

I don't know -- how many stories get posted in a day? I don't think it would be that much harder than reading a newspaper. I doubt it's harder than what you have to do for your job, or for what most people have to do for their jobs.

I'm not suggesting that people ought to be blamed for not remembering a story from 3 months ago -- but 3 days ago seems reasonable.

I don't believe that an automated system to detect dupes would be simple or effective. There are often different articles about the same thing, and often dupes come through someone submitting a completely different article. I don't think that it's usually from the same submitter (although I haven't checked that), and repeated word scans seem to be something that would be very difficult to pull off.

Someone posted a response to another spam story a few weeks ago, sadly I can't find it, but they described an interesting mail delivery system they'd created.. and it sounded, to me, as if it could certainly be the future of mail delivery.

They said that when someone sent a mail, it simply went to the local server, and no further.

It sounded like a 'reverse IMAP' style system to me. That is, your outgoing mail simply went to a folder on your server, which allowed you to edit and even delete mails BEFORE they were picked up by the recipient. The recipient's e-mail server would only receive a 'notice' that someone had mail for them.

When the recipient went to collect their mail, their own mail server would then have a basic list of where the e-mails for the recipient are, and then it'd go ask for them from the remote servers and feed them through.

So, how does this help spam?

It allows spam to be truly filtered on the OUTGOING rather than the incoming!

Why's that a great thing? Well, it means that if you're an AOL or MSN user, you're not going to lose 80% of your mail simply because of over-zealous filtering by your ISP. Instead, spam mail will not even be sent, let alone received!

Of course, bad eggs could always set up servers with no filtering systems on them and send their spam that way.. but BECAUSE e-mail will be picked up FROM the senders server with this system, it means blacklisting is a whole lot easier! You just ban a server and you know you've got rid of the bad eggs.. whereas the current SMTP system allows open relays and all sorts of 'trickery' to get around filtering systems.

So.. the conclusion is.. make e-mail stay on the sender's server until it's time for it to be collected. It allows you to edit or delete mail before the recipient collects it, it stops spam, and it reduces bandwidth(!) -- if someone never collects their mail, then the mail has never gone across the net.. it's still on the sender's server.
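A rough sketch of the sender-side half of that scheme, just to make the mechanics concrete (the class, the notice format, and all names are invented, not from any actual proposal):

```python
import itertools

class SenderStore:
    """Hypothetical sender-side mail store: messages stay on the
    sender's server; recipients get only a small notice and pull
    the body later.  The sender (or their ISP) can edit or delete
    a message any time before it is collected."""

    _ids = itertools.count(1)

    def __init__(self, host):
        self.host = host
        self.outbox = {}  # message id -> body

    def send(self, recipient, body):
        mid = next(self._ids)
        self.outbox[mid] = body
        # Only this tiny notice crosses the network at send time.
        return {"server": self.host, "id": mid, "to": recipient}

    def edit(self, mid, new_body):
        if mid in self.outbox:          # only possible until pickup
            self.outbox[mid] = new_body

    def cancel(self, mid):              # e.g. the ISP nuking spam
        self.outbox.pop(mid, None)

    def collect(self, mid):
        # The recipient pulls the body; afterwards the sender can
        # no longer change it (we drop it from the outbox here).
        return self.outbox.pop(mid, None)

store = SenderStore("mail.example.org")
notice = store.send("alice@example.com", "Draft: meet at 3pm")
store.edit(notice["id"], "Final: meet at 4pm")     # fixed before pickup
assert store.collect(notice["id"]) == "Final: meet at 4pm"
assert store.collect(notice["id"]) is None         # gone once collected
```

Note the bandwidth point falls out directly: until `collect` is called, only the small notice dict has ever left the sender's machine.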

I hope the original poster of this idea will pop up here again and correct me if I got his ideas wrong, but he was certainly on to something.

1) If a person sees an e-mail in their inbox, then they can read it, and they are happy. Can you imagine the hordes of people who would now see that they got an e-mail, but could not get it for one reason or another? This makes e-mail *seem* fragile. Please explain to my step-father why he can see that he has e-mail, but he cannot read it on the plane. This is not a technical issue, but a psychological one, which is much harder to program around:-)

2) By what criteria could you filter the email? If you have not received the e-mail, you probably won't have enough information to tell if it is spam or not. The only information that you could go on is what is in the "notice" message.

1) If a person sees an e-mail in their inbox, then they can read it, and they are happy. Can you imagine the hordes of people who would now see that they got an e-mail, but could not get it for one reason or another? This makes e-mail *seem* fragile. Please explain to my step-father why he can see that he has e-mail, but he cannot read it on the plane. This is not a technical issue, but a psychological one, which is much harder to program around:-)

Just like his e-mail sits on a POP3 server until he downloads it at which point its possible to store it locally if desired. Once the transaction completes, it can still be on his local HD.

2) By what criteria could you filter the email? If you have not received the e-mail, you probably won't have enough information to tell if it is spam or not. The only information that you could go on is what is in the "notice" message.

The server on which the e-mail is stored according to the notice is info enough. If you trust that server, you'll accept that they have already done the spam filtering and canceling for you. If you do not trust that server, you simply ignore all notices from that server. Servers that are ignored by large chunks of the population suddenly become an undesirable place to send from.

Just like his e-mail sits on a POP3 server until he downloads it at which point its possible to store it locally if desired.

With POP3, until that point, the person has no knowledge of the e-mail, so from their point of view they do not have it. If the user chooses to download it onto her hard drive, you've now defeated the purpose of IM 2000.

The server on which the e-mail is stored according to the notice is info enough.

I suppose just knowing that it comes from yahoo, hotmail, or the IRS is enough, eh?:-)

> Some of my friends use hotmail.com and yahoo.com. Filtering based on servers isn't enough.

If I understand the proposal correctly...

1. If someone *did* try to send spam through hotmail/yahoo, the techies would be able to delete the spam from those servers BEFORE most recipients got to it. That alone is a huge advantage. All reputable ISPs would likely do that.

2. No forging of headers/server names. If the notice said it came from hotmail, your e-mail client would ONLY go to hotmail to pick it up!

> This solution doesn't stop spam.

Maybe not 100%, but I think it would eliminate the bulk of the spam *problem*. It's better than any other ideas I've heard. Do you have a better idea? If so let's hear it!

I see some flaws with this from the user end. Would mail clients have to negotiate a connection for every mail message? One of the things I like about email is that if a message appears in my mailbox, it is there ready to download (via IMAP). One of the things I dislike about pull technologies such as HTTP is that I never know when I request a page if the page will be available.

In addition from a user end it can make things more confusing because of the need to negotiate different policies for how long messages are retained. What happens when I need to grab that 6 month old bit of administrivia that I didn't bother to read then but became less trivial in the last hour? Having the sender control the duration and content of email can be a problem for things like email invoices.

When the checking-mail process begins, the client would go to the receive-side server to get the list of notifications received. It would first apply any local filter rules to strike out unacceptable notifications, then go one-by-one to the servers to confirm that they sent the message the notification claims, that the server is still offering the message, and then ask for the message itself.
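That checking-mail pass might look something like this toy sketch (the function name, notice format, and toy servers are all made up):

```python
def check_mail(notifications, blocked_servers, fetch):
    """Hypothetical receive-side pass over queued notifications:
    drop anything from servers on the local blocklist, then go to
    each remaining origin server to pull the message.
    `fetch(server, msg_id)` stands in for the network round trip
    and returns the body, or None if the origin server has since
    withdrawn (e.g. cancelled) the message."""
    inbox, dropped = [], []
    for note in notifications:
        if note["server"] in blocked_servers:
            dropped.append(note)        # local filter rule struck it out
            continue
        body = fetch(note["server"], note["id"])
        if body is None:
            dropped.append(note)        # withdrawn before we collected it
        else:
            inbox.append(body)
    return inbox, dropped

# Toy origin servers: what each one is still offering right now.
offers = {
    ("good.example", 1): "hello from a friend",
    # ("spamhaus.example", 9) was cancelled by its operator: no entry.
}
notes = [
    {"server": "good.example", "id": 1},
    {"server": "spamhaus.example", "id": 9},
    {"server": "blocked.example", "id": 5},
]
inbox, dropped = check_mail(notes, {"blocked.example"},
                            lambda s, i: offers.get((s, i)))
assert inbox == ["hello from a friend"]
assert len(dropped) == 2
```

A real client would also verify an authentication token per notice; that step is omitted here.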

If the message has been declared spam by the server operator, then the server will intentionally pull the message from availability and essentially vaporize it before it hits a majority of inboxes. Server owners have an incentive to do this... because it'd be extremely easy to add server owners who don't into a local blacklist.

Yeah, a verbose log file can be made available for the geeks that wanna know what happened under the hood, but the average end user wouldn't see the message pop into their Inbox until the message has been successfully cleared and transmitted. Once it's in the Inbox, it's a local object that the user can do what they want with.

Yes, this is correct. The end user would never even know that they have received the spam. It would go into the bit-bucket (if it was spam) before it ever appeared in their in-box if the sending server (or for that matter the senders themselves) canceled it before it was picked up. So it would be totally transparent to the user and this avoids having confused users wondering where their mail is.

Since most spam affects 100's if not 100's of 1,000's of people, using a local blackhole list would allow e-mail to be self-moderating. After 1 or N people had reported a given server or server/user as a source of spam, it would be automatically added for a period of N days, and this would last until the spam barrage was over. So while yes, some spam would get out, only very few would ever see it before it was canceled by the originating ISP or by others who received the spam before you did.
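A minimal sketch of such a self-expiring blacklist, assuming a simple "N distinct reports triggers a block for a fixed number of days" rule (all names and thresholds invented):

```python
class ExpiringBlacklist:
    """Hypothetical self-moderating blacklist: a source is blocked
    once `threshold` distinct users report it, and the block lapses
    automatically after `days` days, so a listing only lasts as
    long as the spam barrage does."""

    def __init__(self, threshold=3, days=3):
        self.threshold = threshold
        self.ttl = days * 86400          # days -> seconds
        self.reports = {}   # source -> set of reporters
        self.blocked = {}   # source -> expiry timestamp

    def report(self, source, reporter, now):
        self.reports.setdefault(source, set()).add(reporter)
        if len(self.reports[source]) >= self.threshold:
            self.blocked[source] = now + self.ttl

    def is_blocked(self, source, now):
        expiry = self.blocked.get(source)
        if expiry is None:
            return False
        if now >= expiry:               # barrage over: lapse the block
            del self.blocked[source]
            self.reports.pop(source, None)
            return False
        return True

bl = ExpiringBlacklist(threshold=2, days=3)
bl.report("spam.example", "alice", now=0)
assert not bl.is_blocked("spam.example", now=0)   # one report isn't enough
bl.report("spam.example", "bob", now=10)
assert bl.is_blocked("spam.example", now=20)
assert not bl.is_blocked("spam.example", now=10 + 4 * 86400)  # lapsed
```

Counting distinct reporters (a set, not a tally) gives some resistance to a single user repeatedly reporting a site they dislike.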

When the checking-mail process begins, the client would go to the receive-side server to get the list of notifications received. It would first apply any local filter rules to strike out unacceptable notifications, then go one-by-one to the servers to confirm that they sent the message the notification claims, that the server is still offering the message, and then ask for the message itself.

The big problem I see with this is that it would work very well over robust, high-speed networks where all servers have 24/7 reliability. How well will it work over less robust or fast networks? The latency involved in querying and fetching 100 messages adds up pretty darn quick.

If the message has been declared spam by the server operator, then the server will intentionally pull the message from availability and essentially vaporize it before it hits a majority of inboxes. Server owners have an incentive to do this... because it'd be extremely easy to add server owners who don't into a local blacklist.

I think a much better option would be to stop it before it becomes submitted. But I see significant power issues involved with giving sysadmins the power to retroactively nuke messages by content. Yeah, it helps to stop spam but it also gives the sysadmin the power to nuke political content as well.

In addition, I can see how such a system can be technically circumvented by spammers. Set up a server to broadcast bogus notifications and just send a single file out. Blacklists are not effective then for the same reason they are not effective now: the cost of setting up on a new IP is trivial.

Yeah, a verbose log file can be made available for the geeks that wanna know what happened under the hood, but the average end user wouldn't see the message pop into their Inbox until the message has been successfully cleared and transmitted. Once it's in the Inbox, it's a local object that the user can do what they want with.

Ok, the initial description just sounded like some kind of a distributed peer-to-peer IMAP where, instead of storing the messages on the recipient server, the messages are fetched as they are read. But I disagree that this process will be transparent to the user, because of the added latency as the recipient server authenticates each individual message. Checking my mail with IMAP, I know what is available within a second after I open a connection (using local mailboxes is even quicker). I don't see how a "pull" system that authenticates, verifies and fetches for each mail message can match that performance.

> How well will it work over less robust or fast networks? The latency involved in querying and fetching 100 messages adds up pretty darn quick.

Indeed that's a minor drawback. But we're talking about stopping spam here (or at least the vast majority of it). It is WORTH a bit of pain to get this problem behind us!

Having said that, I doubt this would really be that big a deal. It would be like loading 100 web pages, except that e-mail is far smaller than a web page and hopefully the server would be optimized to respond quickly. Also, the mail client could be threaded, so the bandwidth could usually be maxed out, not just sitting around waiting for servers.
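To illustrate the threading point: with a worker pool, a batch of simulated 0.2-second fetches overlaps instead of queueing up serially (a toy model of the latency, not a real mail client):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_message(note):
    """Stand-in for one pull from a remote origin server; the
    sleep models a slow network round trip."""
    time.sleep(0.2)
    return f"body of message {note}"

notes = list(range(20))

start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    bodies = list(pool.map(fetch_message, notes))
elapsed = time.time() - start

# 20 fetches at 0.2 s each would take ~4 s serially; with ten
# worker threads the waits overlap, and the whole batch finishes
# in roughly two round-trip times.
assert len(bodies) == 20
assert elapsed < 2.0
```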

> but it also gives the sysadmin the power to nuke political content as well.

Someone's ISP already has the power to do that to web pages. I don't see how them having that power with e-mail is really that much more of a problem. And any reputable ISP won't meddle with peoples' mail until they are suspected of being spammers.

There are trade-offs to anything, but overall, I think this proposal solves far more problems than it causes, and that's a pretty good deal if you ask me.:)

Indeed that's a minor drawback. But we're talking about stopping spam here (or at least the vast majority of it). It is WORTH a bit of pain to get this problem behind us!

I disagree. Any spam solution that offers any reduction in performance over current technology for legitimate users is not a "solution". In fact most of the arguments for a peer-to-peer pull solution can be rolled into existing "push" server technology. It should not be a big deal to implement sender-side filtering (perhaps with a challenge/response system for suspicious messages), especially given that in excess of 95% of spam involves malformed mailheaders. Throttling mail servers that automatically deny relaying to ip numbers that make a suspiciously large number of requests can serve the same purpose.
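As a toy illustration of the sender-side filtering idea, here is a minimal malformed-header check of the kind a submitting server might run before accepting a message. The specific checks and heuristics are invented examples, not any real standard:

```python
import re

def header_problems(raw_headers):
    """Hypothetical sender-side sanity check: flag the kinds of
    malformed headers said to account for the bulk of spam,
    before the message ever leaves the submitting server."""
    problems = []
    headers = {}
    for line in raw_headers.splitlines():
        if ":" not in line:
            problems.append(f"unparseable line: {line!r}")
            continue
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    for required in ("from", "to", "date"):
        if required not in headers:
            problems.append(f"missing {required.title()}: header")
    sender = headers.get("from", "")
    if sender and not re.search(r"[^@\s]+@[^@\s]+\.[^@\s]+", sender):
        problems.append("From: is not a plausible address")
    return problems

good = ("From: alice@example.com\nTo: bob@example.com\n"
        "Date: Sun, 9 Mar 2003 10:04:00 -0500")
bad = "From: FREE MONEY!!!\nSubject: hi"
assert header_problems(good) == []
assert "From: is not a plausible address" in header_problems(bad)
assert any(p.startswith("missing") for p in header_problems(bad))
```

Messages that trip such a check could be bounced outright or routed into the challenge/response path mentioned above.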

Having said that, I doubt this would really be that big a deal. It would be like loading 100 web pages, except that e-mail is far smaller than a web page and hopefully the server would be optimized to respond quickly. Also, the mail client could be threaded, so the bandwidth could usually be maxed out, not just sitting around waiting for servers.

I frequently find that latencies of greater than 3 seconds are not unheard of, even on fast, well-connected networks. Compared to a typical IMAP connection, this is unsatisfactory. The issue is not volume, but negotiation.

There are trade-offs to anything, but overall, I think this proposal solves far more problems than it causes, and that's a pretty good deal if you ask me.:)

Well, as far as a solution to spam, I think it would be extremely easy to circumvent for a bunch of reasons.

1: This is based on the naive assumption that spammers would use sender-side servers that store a copy of every message sent. It would be a trivial task to create a server to send bogus notifications, then reply to requests for messages with a dynamically generated message. All it takes is an IP number.

2: It suffers from the same flaw that makes spam a problem with SMTP, a dependence on paranoid sysadmins. It is relatively trivial for a good sysadmin to prevent spam relaying; the problem is the large number of bad sysadmins who don't care.

1: This is based on the naive assumption that spammers would use sender-side servers that store a copy of every message sent. It would be a trivial task to create a server to send bogus notifications, then reply to requests for messages with a dynamically generated message. All it takes is an IP number.

That's why this plan would still require blacklists of IP numbers that aren't behaving. The difference now is that spammers would have to have their own server behind that IP address... getting an open relay to do their bidding for them just isn't going to be an option.

2: It suffers from the same flaw that makes spam a problem with SMTP, a dependence on paranoid sysadmins. It is relatively trivial for a good sysadmin to prevent spam relaying; the problem is the large number of bad sysadmins who don't care.

Yes, but now it's much easier to identify servers run by bad sysadmins and nuke them. By putting some authentication into the From: field, you now know for sure that a message from Spammer@yahoo.com really passed through Yahoo's hands... and my guess is that's not going to happen often.

> Any spam solution that offers any reduction in performance over current technology for legitimate users is not a "solution".

Of course, remember that if you're downloading 100 messages, many will likely be spam. Getting rid of most of those will automatically improve the performance some.

> In fact most of the arguments for a peer-to-peer pull solution can be rolled into existing "push" server technology.

I kind of doubt that. There are a LOT of advantages to this scheme. I'm sure you can put some band aids on the existing protocols to make them work better, but overall I think this solution has more advantages.

> I frequently find that latencies of greater than 3 seconds are not unheard of, even on fast, well-connected networks.

Right. Like I said, though, the mail client should be threaded, so many of these three second waits can happen at the same time.:)

> 1: This is based on the naive assumption that spammers would use sender-side servers that store a copy of every message sent. It would be a trivial task to create a server to send bogus notifications, then reply to requests for messages with a dynamically generated message. All it takes is an IP number.

The other reply addressed your two points, but to add to this... it's not a problem at all that this type of thing would be allowed! In fact I believe Bernstein pointed out that listservs would behave this way.

The advantage is that there is more accountability and spam is more easily nuked or blacklisted before most recipients get it.

If you can think of a specific way to amend the current system so that it has all these advantages, please suggest it! I'm open to considering any idea, but right now I think this is one of the best!

This sounds perfect, and here is how it can be implemented with backward compatibility.

Its implementation could also be made rather interesting. Rather than a completely new protocol that is totally impractical (since it would require everyone to upgrade simultaneously), this kind of scheme could be implemented in a completely backward-compatible manner. Allow me to describe what I mean.

Your email server has been upgraded to the new system and you send an email. Your outgoing server stores the email and forwards a very simple message on to the recipient's email server. This small email contains the appropriate subject line and an extra chunk (with an appropriate MIME type) containing the information necessary to retrieve the full message (i.e., server details, email ID, and probably an authentication token of some sort). If your client software supports the new standard, it receives the stub email and retrieves the full message appropriately. This stub email is not an extra compatibility thing; we are simply using the existing SMTP infrastructure to tell the recipient that they have a piece of email.

But what if the recipient has not yet upgraded? Here comes the clever bit. HTML email works as an extra MIME chunk that enabled clients automatically decode and show the reader, while non-enabled clients see the standard plain-text version of the message that is also present. This mechanic can be used to our advantage here: the normal text or HTML portion of the stub email contains a hyperlink back to the sending server, with a URL designed to bring up a basic webmail page with the recipient's message.

Using this implementation scheme, it would be possible for a sender who upgraded from day one to send an email to anyone with complete confidence that the recipient would be able to read the full text of the message. The only proviso is that the recipient has access to a web browser.
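For what it's worth, Python's standard email library can already build the kind of stub message described here; this sketch invents the MIME subtype, payload layout, and URL scheme purely for illustration:

```python
from email.message import EmailMessage

def make_stub(sender, recipient, subject, server, msg_id, token):
    """Hypothetical backward-compatible 'stub' notice, carried over
    ordinary SMTP.  Upgraded clients read the custom MIME part and
    pull the full message; legacy clients just see the plain-text
    web-mail link."""
    stub = EmailMessage()
    stub["From"] = sender
    stub["To"] = recipient
    stub["Subject"] = subject
    url = f"https://{server}/webmail/{msg_id}?token={token}"
    # Plain-text fallback for non-upgraded clients.
    stub.set_content(
        f"You have a message waiting.\nRead it here: {url}\n")
    # Extra part an upgraded client would decode to fetch the body;
    # the subtype "x-pull-notice" is made up for this sketch.
    stub.add_attachment(
        f"server={server}\nid={msg_id}\ntoken={token}\n".encode(),
        maintype="application",
        subtype="x-pull-notice",
        filename="notice.txt",
    )
    return stub

stub = make_stub("alice@a.example", "bob@b.example", "Lunch?",
                 "mail.a.example", "42", "s3cret")
assert stub["Subject"] == "Lunch?"
assert stub.is_multipart()
parts = [p.get_content_type() for p in stub.iter_parts()]
assert "application/x-pull-notice" in parts
```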

In addition, I can see one other advantage to your proposed scheme that has not been mentioned: the email system becomes inherently more secure. Since the sending server must actively hand over the email, it can record that this has been done and tell the recipient if the message had been read before. Although, as with anything else, strong cryptography would be required to ensure that nobody could get hold of the authentication token (and thereby read the email), it would be possible for the sending server (providing you trust it) to tell you authoritatively that nobody else had retrieved the message contents.

I was the original poster. You got it basically right in your synopsis too.

To recount, my idea is that when someone sends a mail message, the message itself goes onto the local mail server and only the header for the message goes to the recipient. The recipient's mail client would then download the message. However, it would be possible for the mail server to delete the message _before_ the mail client ever sees it, in which case the mail server tells the client this, the client throws away the header, and the end user _would_never_even_know_ that the mail (spam) had been sent. This would be totally transparent. It would also allow, of course, for the sender of the mail to be able to tell if/when a mail message had been picked up. (Not read, but simply picked up.)

One of the big advantages of a system such as this is that you know for 100% certainty where the spam (or other e-mail) is coming from. You don't have to spend time looking through forged headers etc. in order to send a complaint to the ISP.

ISPs, on the other hand, would be capable of canceling spam after it had been sent and before it was picked up by the end user. For example, someone sends 100,000 spams from an AOL account. Somebody notifies AOL that they received spam from the offending person, AOL looks into it, and AOL is then capable of cancelling all unpicked-up spams -- before they are ever delivered to the end user. Alternatively, AOL could also simply look on their servers and say: "hey, we have 100,000 messages that are waiting to be picked up, we had better look into this" and then make a determination from that point as to whether the mail should be canceled -- again before anybody (or very few) sees the spam.

Blacklists could easily be created too, where the site is blacklisted for only a certain period of time. So after three days (for example) the blacklisting would go away automatically. This avoids the problem that many ISPs have where they get blacklisted due to a single user and then can't figure out how to get off the blacklist. Using this approach, the blacklisting would only last for as long as the spam barrage continued.

Blacklists could also easily be created within certain organizations or groups of people who have similar "moderating" views, rather than trying to make one (or very few) blacklist(s) meet everybody's needs as is now the case -- which often hurts people's ability to send and receive legitimate mail.

The protocol could not only specify what server the mail came from but also the user. So, for example, if someone were spamming from AOL, it might not be a great idea to blacklist AOL but only that user from AOL. This would work for mail systems where you know it is a legitimate business but with a few unruly users.

So, using this technique, it would be possible for a spammer to get a few spams out but it would be nearly impossible for them to spam very many people before it was caught by their ISP and canceled or the user/ISP was blacklisted for a period of time.

For the most part, I don't like online anything, e.g. the WWW and all the things that have been pushed into a WWW interface. What this does is take e-mail and REQUIRE online reading. If the message resides on the sender's server, you have to depend on the sender's server remaining online, and you have to be online yourself. Forget about archiving mail... as soon as that server goes down, all the e-mail sent from it will be gone.

Of course, you COULD have the e-mail downloaded locally, but guess what... you've just turned it back into SMTP again.

The only problem with this is scalability. Sure SMTP has had its problems, but the nice thing about SMTP is you control the server. You control how fast mail comes in, from who it comes in, how fast people can give you e-mail, and how fast you give it out to subscribers/recipients. All of these schemes seem to remove that control from your machine.

Instead of adding a band-aid solution to spam, let's sit down and list what we need for an a-mail server. Scalability, reliability, fault tolerance, expandability and distributed servers top my list. I'm sure that there are other better ones out there too. If you're going to revamp the protocol, try to get everything in the first time, and let's try to get it right.

The fundamental difference that protects most communication systems from spam-like abuse is that the sender is responsible for the majority of the costs of the message. Yes, there are telemarketers and junk postal mail, but they are seriously limited by the fact that there is a noticeable cost associated with each additional message they send. The fact that it costs money to send such communications makes it impractical to bother people with offers that have an extremely low response rate.

SMTP/POP3 e-mail presently leaves the receive-side server holding the cost of storing the message while it waits for the intended reader to become available. The spammer doesn't even have to maintain a constant and consistent Internet connection.

Under the current system, a sender can send 100 MB of messages in an hour without penalty. However, a receiver who gets 100 MB of messages in an hour usually will find any other messages sent to them bouncing.

Requiring that the message be held on a sender-side server instead would transfer the costs of sending a large volume of e-mail onto the sender, and therefore discourage the practice better than any law ever could.

I don't get it. Presumably, a spammer is sending out a million copies of the same e-mail. So he sends out a million 'hey, pick up your mail' notices to these new mail servers... and when the people go to check their mail, their servers go and talk to the spammers, which stored it remotely. Now, why does the spammer's server need to store more than one copy of the e-mail? Where is the increase in cost?

It's not just the storage space for the message. It's also the bandwidth costs. That's the nice thing about this approach: it doesn't bother ordinary users, but it is death for spammers. Instead of making a hit-and-run raid on a few SMTP servers, they have to keep their servers running while a million readers all come calling. It's like they have set up a DDoS attack on themselves. It would be fine for non-spam businesses like amazon.com to do that, as they maintain some whompin' big servers. But it would kill the small spammer, as the capital outlay for spamming would go up a lot.

Because the spammer's server would have to transmit the million payload messages over the course of a few days rather than in a hit-and-run instant. The spammer now has the responsibility of keeping his server online, and can't exactly rely on somebody's carelessly left-open relay anymore.

Moreover, it's likely that in the first few seconds 1000 or so of the million will see that e-mail, identify it as spam, and a few dozen of that 1000 will put that server on multiple blacklists. To the 900,000 remaining people who subscribe to any one of those blacklists, their software drops the notification into the bit bucket, and the payload never makes it.

So now, an attempt to reach 1,000,000 people only connects with 11% of that... and 90% will not even bother with anything more from that sender (be it the username/domain combo or the whole domain depending on the blacklist listing) again until the blacklist operators say otherwise.

Currently the spammer is likely to be sending a few thousand copies of the email to someone else's mail server, each specified as being for a few hundred recipients. The mail server expands this to a million copies.

Right now we're on a receiver pays system because if somebody sends you an e-mail... you're the one responsible for paying somebody to hold that e-mail until you're ready to pick it up. If you were to get 20 MB of e-mail in a day, most ISPs will cut you off.. you won't be able to get any more until you sign on and clear out your mailbox.

If the sender is responsible for keeping a server online and keeping up with the bandwidth associated with what they've sent, then they've got to pay for the volume of e-mail they send. Now, if you're a typical user and only send out a handful of e-mails a day, you'll be fine. But if you send an outrageous volume of e-mails, it's going to be your disk quota that fills up first, and you'll have to retract some of your undelivered e-mail to send more.

You might get as much junk mail if the sender pays but you know what? You won't get the same quality of spam. You will get ads from your local grocery store instead of "MAKE MONEY FAST!!!!!!!" and "Add 12 inches to your penis!!"

Spam largely consists of cheap bullshit products, scams, porn, and crap that appeals to the insecurities of stupid people. Do you know why this is?

It's because the individuals who "run" the "businesses" in question are getting something for nothing. In other words, they're all just running scams. If you were to charge $0.05 for every e-mail sent, then it would become a cheap way to advertise, rather than a *free* way to advertise. Suddenly, sending out 5 million e-mails costs you $250,000 instead of $250 (if you include your own labour). For the paltry 0.01% (or less... I don't know what kind of numbers spammers get) response rate, suddenly it becomes a big losing game for everyone that's selling Viagra and porn and pyramid scams. That advertising would be replaced by people selling cars and groceries and books, and while they may still be capitalist scum, the products they're hawking are orders of magnitude less offensive than the crap that's currently being hawked by e-mail.

$0.05 per e-mail isn't going to break the bank for us regular joes - a couple hundred e-mails a month comes out to all of $10.00. Don't forget that it actually costs money to run an e-mail server, and it would be the people running that server that would be collecting the postage. It might even be a good idea to hand the whole system over to your friendly neighbourhood national post office.

Personally, I think this would be a small price to pay for vastly improving the quality (not to mention stemming the quantity) of the advertising in my mailbox. I'll also be happy with the fact that the people doing the advertising would be paying to support the e-mail system instead of working hard to bring about its collapse. As far as I can tell, this is a simple, elegant solution.

Seriously, though: even if it is all legal-ified and ID'd correctly, there will still be such things as ticking the little box to say you don't want any spam from services X, Y, and Z. In fact, the way online revenues are going, I can see receiving solicited spam as being the only way you will be able to read Salon -- if it's still going by then.

It would be nice(?) to have a better system, but I never forget the age-old adage that no system is tamperproof. Lots of enterprising folks enjoy anonymity for non-spam purposes, so naturally some form of workaround should emerge fairly quickly. Oh lord, I'm sounding like Toffler.

This has been explained before, but people still don't get it, so I'll keep trying.

> doesn't really solve anything.

It solves just about *everything*. Seriously, think it through!

> i mean, with disk space at $1/gig, what's this going to do?

It's not about the cost of the disk space. It's about accountability and stopping abuse.

It will eliminate the forged header problem. Only a notification is sent to the recipient's mail server, and when the recipient connects, his mail client checks the notification and knows exactly where to go to get the message.

It would be much easier to blacklist, since there would be no more open relays. You would be able to blacklist by domain, by IP address, or down to the USER name (for domains like AOL and Yahoo).
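To make the notification idea above concrete, here is a rough sketch of what a "mail waiting" pointer and the recipient-side blacklist check might look like. All field names are invented for illustration; no such protocol actually exists.

```python
# Hypothetical "mail waiting" notification: instead of the message body,
# the sender's server pushes only a pointer telling the recipient's
# client where to fetch the payload. Every field name here is made up.
notification = {
    "from": "alice@aol.com",                # authenticated sender
    "held-at": "mail.aol.com",              # server that stores the payload
    "message-id": "20030309-0001@aol.com",  # handle used to fetch it
    "subject": "Lunch on Tuesday?",
    "size": 2048,
}

def fetch_decision(notif, blacklist):
    """Recipient-side check: drop the notification (never fetching the
    payload) if the sender or the holding server is blacklisted."""
    if notif["held-at"] in blacklist or notif["from"] in blacklist:
        return "drop"
    return "fetch"

print(fetch_decision(notification, set()))             # fetch: nobody is listed
print(fetch_decision(notification, {"mail.aol.com"}))  # drop: whole domain blocked
print(fetch_decision(notification, {"alice@aol.com"})) # drop: just this user blocked
```

Because the notification names both the holding server and the user, blocking can be as coarse or as fine-grained as the post above suggests.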

If someone sent hundreds of thousands of spams from any reputable ISP, that ISP would be able to *cancel* the spam BEFORE it was accessed by most users. If it was through a rogue spam friendly place, like I mentioned, it would be much easier to blacklist.

This new protocol idea also has a lot of advantages for listservs. Seriously, read it!

I'm amazed that there are so many people picking nits with this idea. We've all (mostly) said that we want to find a technical solution to spam, instead of getting the government involved. Well, this is precisely the technical solution we've been looking for. Let's get off our arses and implement it!

Laws might not be able to stop spammers, but protocols have a better shot.

Simply put, it's too easy to create spam over the SMTP protocol. The From and Reply-To fields are completely free text, with no requirement that they reflect the actual sender of the message.

However, if SMTP were to fall out of favor for a new protocol, that new protocol could start the rules over and require that the server that is named in the from field must confirm that the username provided actually sent the message. Spoofing for the use of spam would then become practically impossible.

Once we have a confirmed From address, it puts a responsibility on the sender to stand behind each e-mail sent through a server. Moreover, once spam abuse has been detected, a reputable server operator could simply stop authenticating that sender's e-mail, causing automatic deletion before it's presented to a user.
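A minimal sketch of the confirm-the-sender idea, assuming some way to ask the origin domain what it actually sent. The `lookup` function below stands in for a network callback to the origin server; nothing like this exists in SMTP today, and all names are invented.

```python
def origin_confirms(domain, username, message_id, lookup):
    """Ask the sending domain whether this user really sent this message.
    `lookup` stands in for a network call back to the origin server."""
    sent = lookup(domain)  # set of (user, message_id) the origin vouches for
    return (username, message_id) in sent

# Toy origin-side record of what was actually sent:
origin_log = {"aol.com": {("alice", "msg-123")}}

def lookup(domain):
    return origin_log.get(domain, set())

print(origin_confirms("aol.com", "alice", "msg-123", lookup))    # True: genuine
print(origin_confirms("aol.com", "mallory", "msg-999", lookup))  # False: forged From
```

The receiving server would reject anything the origin refuses to vouch for, which is what makes spoofed From addresses "practically impossible" in this scheme.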

...new protocol could start the rules over and require that the server that is named in the from field must confirm that the username provided actually sent the message. Spoofing for the use of spam would then become practically impossible.

To me, trying to stop spam is like trying to keep people from copying digital media. There is always a way around it.

When you think about it, the two concepts "I'll let anyone try to send me mail" and "I'll only actually get the mail I want to get" are not very compatible at all.

I think most of slashdot understands this. The only reason there aren't a bunch of "wonder how long before Spam gets through whatever they come up with" mails is that we hate Spam, and want to believe it can be stopped.

How one Microsoft employee gets promoted to VP of E-services Technology:

"I propose we go with TMSP protocols instead of SMTP, because it will allow us to move goal posts, get on the same page, keep ahead of the game, reach out and manage expectations. Also, e-services facilitate gap analysis that is goal directed to overcome security contingencies in a consumer driven brand-limited distribution channel. TMSP is also client-centric."

There is a SMTP command called STARTTLS which will enable SMTP over SSL. It's defined in RFC 2487. Sendmail supports it with a compile-time option, and so do most other MTAs. It's backwards compatible with normal SMTP.

Free email certificates [omegasphere.net] are available. Yes, the page says they are for outlook, but that is more of a reference to the fact that the ability to use them is built into outlook already.
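For what it's worth, the STARTTLS upgrade mentioned above can already be exercised from Python's standard smtplib, which issues the command when the server advertises it. A minimal sketch; the host and addresses are placeholders, and it needs a reachable mail server to actually run.

```python
import smtplib

def send_over_tls(host, sender, recipient, body):
    """Deliver one message over SMTP, upgrading to TLS via STARTTLS
    (RFC 2487) when the server advertises the extension."""
    with smtplib.SMTP(host) as smtp:
        smtp.ehlo()
        if smtp.has_extn("starttls"):
            smtp.starttls()  # upgrade the channel to TLS
            smtp.ehlo()      # re-issue EHLO over the encrypted channel
        smtp.sendmail(sender, recipient, body)

# Usage (placeholder host, so not executed here):
# send_over_tls("mail.example.com", "me@example.com", "you@example.com",
#               "Subject: hello\r\n\r\nEncrypted in transit.")
```

Note that this is backwards compatible in exactly the way the post describes: a server that doesn't advertise STARTTLS simply gets plain SMTP.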

Well, I guess you won't be receiving any email from me anytime soon (unless you don't block cable as well). I don't know why there are so many people who feel that those without T1 lines (or better) shouldn't be able to send and receive email without going through a 3rd party. Maybe it's just elitism ("I have a T1 so I won't be affected") or they really do get lots of spam and actually think blocking average people will help their problem.

But here's my suggestion: if you don't want to get email, don't operate a public email address. A public email address means that anyone (including spammers and other people you don't want to talk to) will be able to send you an email. That's the way it is. Spam is a problem, but it's a social one, not a technological one. And it's very hard to solve a social problem through technology (go ask the RIAA/MPAA).

It looks like you've somewhat chosen to not have a public email address. And that's your choice. I just hope you aren't in a position to force that on others. I also hope you don't try to pass off your private email address as a public one.

Personally, I like the ideal of being able to communicate freely with others on the Internet (well, relatively freely; bandwidth costs money). Sure, spam is annoying, but only for the .1 seconds it takes me to detect and delete it (autopreview in *gasp* Outlook is great for detecting spam -- a feature I haven't seen in any other mail reader). And even then, I get very little spam, even on addresses I've had for more than 4 years.

So, by all means, continue blocking me. But remember that not all people on lesser connections are spammers. Some are just Linux users running their own mail servers (the way email was MEANT to be).

All that said, however, one solution might be some sort of trust based system (provided it's relatively free). If you could authenticate my server and tell that I'm not a spammer (because I'm not:), then you would receive email from me. And this would be a lot more robust than your current system of banning all those with consumer connections. The implementation would be key, though, and it would have to be available to everyone (not just those with money), or it'll never work.

If any mail infrastructure reorganization had been done before the discovery of this sendmail hole [slashdot.org], that would have been a good way to get a mostly forced deployment of compliant mail servers around the world.

I think it is possible to move to a different protocol than SMTP by building a protocol on top of it, rather than throwing it out.

The article notes that one of the major problems is the filtering of genuine mail by aggressive spam filters, necessitated by cleverer spammers. Consider this analogous to dropping some packets at the network layer. Just as the transport layer handles that problem, we can build a higher-level protocol to handle filtered mail.

Note that having a mechanism to handle dropped mail allows us to employ aggressive filtering: one that is sure to stop 100% of spam.

What I have in mind is as follows: when Bob receives a mail from Alice (i.e., it has passed through Bob's filter), his client software sends a confirmation mail back to Alice. This is not a regular mail that Alice will see in her inbox; it has a special header flag that marks it as a confirmation. Alice's client software keeps track of the confirmation messages; by looking at her "sent-mail" folder she can see which of her messages have not been confirmed (and are hence likely to have been mistaken for spam).

Finding that Bob has filtered her mail, Alice can either re-word it and send it again or do something like (assuming that Bob knows Alice): "Hi Bob, this is me, Alice. Your filter blocked this so I've rot13'd it to get past the filter. rot13 what follows to read my mail." Another option is to encrypt the mail with Bob's public key (assuming that spammers' scripts won't be clever enough to get your public key from your web page). Note that 99% of the time the mail is going to get through. You have to make that little effort to prove you are a human only once in a long while.

There is a minor problem with requiring the receiver to send a confirmation message: Bob might check his mail only after a couple of days, during which time Alice may assume that her mail was blocked. There are two solutions: either Bob runs a script to filter his mail regularly, or he has his ISP implement his filter for him.

Note that this won't work if you have the receiver send a reply whenever the message did get blocked: the reply could itself get blocked etc. (This is called the red army - blue army problem in networking).
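The sender-side bookkeeping Alice's client would need is simple to sketch. This assumes a hypothetical X-Delivery-Confirmed header carrying the confirmed message ID; the header name and all addresses are made up for illustration.

```python
# Sketch of the sender-side tracking described above: Alice's client records
# every message she sends and crosses it off when a confirmation (a reply
# flagged with a special header) comes back from the recipient's client.

CONFIRM_HEADER = "X-Delivery-Confirmed"  # hypothetical flag

sent = {"msg-1": "bob@example.com", "msg-2": "carol@example.com"}
confirmations = [
    {CONFIRM_HEADER: "msg-1"},           # Bob's client confirmed msg-1
]

confirmed = {c[CONFIRM_HEADER] for c in confirmations if CONFIRM_HEADER in c}
unconfirmed = {mid: rcpt for mid, rcpt in sent.items() if mid not in confirmed}

# msg-2 was never confirmed -- likely eaten by Carol's filter, so Alice
# knows to re-word it, rot13 it, or encrypt it as suggested above.
print(unconfirmed)  # {'msg-2': 'carol@example.com'}
```

Since only unconfirmed mail needs human attention, the "prove you are a human" step stays rare, as the post argues.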

2. The mail client returns mail in some specific format and asks for a confirmation mail. If the other client supports it, it can autogenerate a confirmation. The client would only autogenerate a response if it knew it had sent email for the requested confirmation.

As an added benefit, it would also send a huge amount of extra traffic to spoofed aol.com return email addresses. A new DOS attack is born every day!

This sounds better than the first post, where Alice tries to send mail again manually. In this setup, it sounds like Alice's mail gets through after the confirmation step.

Couldn't this be built in to the internet mail servers? They could always do this step, and stop forwarding mail that the return addresses don't think they sent?

Spam is tricky, but the average person can block most spam by filtering all messages from users not in their address book since the average grandma doesn't get messages from anyone but people she knows.
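Grandma's address-book filter amounts to a simple whitelist check, sketched here with invented addresses:

```python
# Accept only mail from addresses already in the address book; everything
# else is set aside as probable spam. Addresses are placeholders.

address_book = {"son@example.com", "pastor@example.org"}
inbox = [
    ("son@example.com", "Happy birthday!"),
    ("win-big@spam.example", "MAKE MONEY FAST"),
]

kept = [(frm, subj) for frm, subj in inbox if frm in address_book]
held = [(frm, subj) for frm, subj in inbox if frm not in address_book]

print(kept)  # mail from known correspondents
print(held)  # everything else, held for review
```

The obvious weakness, given forged From headers under today's SMTP, is that a spammer who guesses an address-book entry gets straight through, which is why the authentication ideas above matter.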

World of Ends [worldofends.com], recently [slashdot.org]
discussed [slashdot.org] on Slashdot, discusses why the simplicity (or stupidity) of the Internet is so useful. "The Internet isn't a thing. It's an agreement," they say.

That same argument applies to e-mail. Following their logic, it is best to leave SMTP alone. Simpler protocols are better. Leave the "value-added" pieces to the edge, and let the simple message transfer protocol alone.

The Internet was built with base-level protocols that assumed everybody using the network would not abuse it, because they would always have something to lose: their jobs.

Therefore, a base assumption was made that every message sent would be a message the marked recipient would want to get. Clearly, that's not the case once you let untrusted public users onto the network... and DDOSes, Spam, and other unpleasantries result.

I hope that it involves authentication of some sort or another. IANAP, but the only way I can see to get rid of spam is to tell the SMTP server that you will allow mail to be delivered to you. If someone sends you an email and you "unsubscribe," they have to remove you from the list -- the SMTP server just hops the mail along regardless. If the SMTP servers themselves maintained a list of "unsubscribed" or blocked addresses, they couldn't send you an email.

I know - I don't write code - and this probably sounds stupid. But I don't really see any way of forcing someone to quit sending you email. SMTP is short and sweet - but it can't continue to just hop mail. It has to be checked somehow. And it would slow down the mass emailers a lot. Hopefully someone a lot smarter than I in this area can come up with something.

Watch for any attempt to impose digital certificates or other revenue-generating schemes. Wouldn't Verisign love it if not only those who wanted SSL to work without presenting dire warnings to customers, but everyone who wanted to be able to send email at all, had to pay Verisign's extortion money for a certificate recognized by MSIE?

Email is by far the most widely used of all Internet services. I belong to an organization many of whose members are retirees on fixed incomes, and it is only within the last two years that the number of people with email has grown to a critical mass (about 2/3 of the membership).

Many members of the lay public who regularly use email as a means of communication do not have the level of technical comfort that most Slashdot readers take for granted.

Of people who use email, the percentage who know how to use a web browser is much less than 100%. The percentage who can google for information is much less than 100%. The percentage who can successfully extract and decode an email attachment is much less than 100%. The percentage who can view a government form or a corporate brochure in PDF format and read it with Acrobat is much less than 100%.

And the average age of their computers and operating systems is much more than three years--and they're not likely to update their email programs.

Whatever is done needs to be 100% backward compatible with existing email clients, not requiring even simple upgrades, or an astonishing proportion of real-world Net users will be disenfranchised.

(And please, let's not have any facile expressions of contempt for AOL users or webtv clients or people who bought email appliances -- that includes one of the retirees I mentioned.)

Whatever is done needs to be 100% backward compatible with existing email clients, not requiring even simple upgrades, or an astonishing proportion of real-world Net users will be disenfranchised.

Whatever is done needs to be able to deliver email while effectively correcting the system of incentives that encourages spamming. Any other considerations, like it or not, can only be addressed insofar as they don't interfere with achieving that goal. The current email system is broken, and an ever-increasing amount of noise is flooding into it. The end result is that the delivery system (which, as you have pointed out, is important to so many people) is in the process of collapsing. You talk as if the choice is between a healthy email system and some new one that we don't really need, when it's really a choice between a system that will inevitably be rendered useless by spam volume and a new one. And that new one has to include whatever features are required to avoid a repeat performance.

If the best solution is all server-side (and some proposals are), it may also be able to deliver the kind of backward compatibility that you feel is required. But make no mistake: we aren't doing anyone any favors if we don't actually fix the current system, even if the fix does eventually require a client upgrade when the last parts of the old system are finally phased out. If it is any comfort, I would expect that "AOL users or webtv clients or people who bought email appliances" will be the least affected, since their providers understand that market and control both ends of that particular client-server implementation.

The end user never sees SMTP. SMTP can be thrown out the window as long as we still use POP3 and/or IMAP to retrieve mail. Any solution that replaces SMTP could still provide POP3/IMAP mail retrieval for backwards compatibility.

A participant in the NANOG (North American Network Operators' Group) mailing list recently posted a Best Current Practice proposal [merit.edu] regarding spam to that list. He was fairly heavily flamed by some of the frequent posters on the list, but his idea (which has a basis in sociology) does have some merit.

He uses the idea of emergent structure. To quote: "if all (or even most) players expect other players to act in a certain way, a predictable pattern of behavior emerges which becomes compelling for all players. This is the way all organizations work."

Replace "IETF" with "Microsoft" and you have this slashdot story [slashdot.org] from a whopping two weeks ago. Of course, the slant then was how evil Microsoft was for daring to make people pay for email (which of course was not true... the article was about email accountability to reduce spam).

When the protocols we all use now were developed, everybody trusted each other. There wasn't a real need for advanced security options. Nowadays, with the current commercialization of the net (which also provides me with my income), it looks as if the commercials are winning. By "commercials" I mean those who have absolutely no respect for other people's rights or bandwidth. Let's not forget that spam isn't the only problem: DoS attacks are a real threat too.

Because the original designs were not really secure, I'm quite sure that the spam problem cannot be solved without fundamental changes in the way we use email today. Perhaps the policy regarding blacklisting can be changed: at this moment most people accept mail from everybody except a few blacklisted sites. It's likely that this will change to: we don't accept your mail unless we know who you are. Unfortunately, even then there will always be people who abuse it -- hopping from one account to another, or suing every single ISP that has the guts to disconnect them after spamming. In short: it's not simply a technical matter; there will be a need for *globally uniform* legislation too. Legislation alone won't do the trick either. No, it's time for Mr Geek to marry Miss LawAndOrder.

Don't forget that the IETF is not the first to attempt to find a solution. RIPE [ripe.net] has its anti-spam workgroup for example.

Actually, it's the IRTF -- not the IETF -- that is undertaking this work. To quote from the IRTF home page [irtf.org] - "[Mission] To promote research of importance to the evolution of the future Internet by creating focused, long-term and small Research Groups working on topics related to Internet protocols, applications, architecture and technology."

Four days ago when this was mentioned on slashdot [slashdot.org], I posted the following summary of what had been discussed. Sadly, this summary is still pretty complete.

From what I take from all this discussion is that the only "solution" to spam is to do the types of things that we have been doing for years, but to do more of it and quicker. Use well run DNS blacklists (Spamhaus SBL [spamhaus.org], ordb [ordb.org], dsbl [dsbl.org], etc.), use good content filters (bayesian filters, etc.), use bulk mail detectors such as DCC [rhyolite.com] or vipul's razor [sourceforge.net], etc.) and per-user whitelists and blacklists.

Or, combine all of the above techniques by using SpamAssassin [spamassassin.org]

--

I've been subscribed to the list since near the beginning and have been following it fairly closely. Much of the discussion has been rehashes of old topics such as "what exactly is spam?", "make the sender pay something, either money or CPU", etc.

The most interesting discussions that I've seen so far are:

Mail transfer programs (MTA) such as sendmail, exim, qmail, etc., should keep track of sender-recipient pairs. The first time the sender-recipient pair shows up, sendmail (or whatever) should issue a "temporary delivery failure". This will force the sending mail transfer program to queue the mail and resend it later. This is completely backwards compatible and doesn't require end users to do anything.

Most spam specific programs will not queue and retry, and thus the spam will be dropped.

Spammers that use real mail transfer programs or open relays will need to be able to hold all their outgoing spam for a while, increasing the spammer's costs and slowing down the delivery of spam. Legitimate email will not be thrown out, it will only be delayed and only for the first time.

Of course, you don't really want the databases to remember every sender-recipient pair forever, nor do you want to remember pairs that were added by spam so this really isn't a "first time" database, but it is close.

Apparently the "canit" program already does this, but I had not heard of this technique before.
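The tempfail-on-first-contact technique described above (what "canit" reportedly does) can be sketched in a few lines. A real MTA would also key on the sending IP address and expire old pairs; this toy version only tracks sender-recipient pairs in memory.

```python
# Sketch of the tempfail idea: the first delivery attempt for a new
# sender-recipient pair gets a temporary failure (SMTP 451), forcing a
# well-behaved MTA to queue and retry; spamware that never retries loses.

seen_pairs = set()

def smtp_response(sender, recipient):
    """Return the SMTP status an MTA would issue for this delivery attempt."""
    pair = (sender, recipient)
    if pair in seen_pairs:
        return 250          # OK: this pair has been seen before, accept it
    seen_pairs.add(pair)
    return 451              # temporary failure: queue and retry later

print(smtp_response("alice@example.com", "bob@example.org"))  # 451 first time
print(smtp_response("alice@example.com", "bob@example.org"))  # 250 on retry
```

As the post notes, this is fully backwards compatible: a legitimate MTA already knows how to handle a 4xx response, so only the first message from a new correspondent is delayed.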

Spam filtering really needs to be done while the email is being received. Sendmail can already do this with the milter filter, but other MTAs should also. Most mail servers are I/O bound, not CPU bound so this really isn't much of a burden on the server.

If you filter during the email receive process, you can make the sending MTA do the bounce. This means that you will not have to deal with spammers forging "from" and "reply-to" headers. You won't have to clean up bounces that never succeed, nor will you be responsible for bouncing spam to another victim that the spammer selected for the "from" or "reply-to" headers.

Also, false positives will receive a bounce message instead of just disappearing. This reduces the danger of important email being lost.

There are also several proposals to deal with ways of verifying that email being sent from a given IP address and claiming to be from a certain domain is actually authorized to send email claiming it is from that domain.

Right now, there are DNS records that tell you which IP addresses are valid to send email to for a given domain (the MX records), but many ISPs have different machines for sending and receiving email. There are currently no DNS records that tell you which IP addresses a domain will send email from.

The problem with this kind of proposal is that there are many people who think they have legitimate reasons to forge "from" or "reply-to" addresses. It also forces ISPs to make sure that every time they add a new outgoing mail server, they need to update the list of valid IP addresses. If they forget to do this, then only bleeding edge spam filters will detect a problem.
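The designated-outgoing-server idea can be sketched as a simple lookup. The record format and the table below are invented stand-ins; the proposals under discussion would publish this information in DNS rather than in a local dictionary.

```python
# Hedged sketch: a domain publishes the IPs allowed to send mail claiming
# to be from it, and the receiving MTA checks the connecting IP against
# that list. The table stands in for a DNS lookup.

authorized_senders = {
    "example.com": {"192.0.2.10", "192.0.2.11"},  # IPs allowed to send as example.com
}

def claimed_domain_is_plausible(connecting_ip, mail_from_domain):
    allowed = authorized_senders.get(mail_from_domain)
    if allowed is None:
        return True  # domain publishes no record: can't conclude anything
    return connecting_ip in allowed

print(claimed_domain_is_plausible("192.0.2.10", "example.com"))   # True
print(claimed_domain_is_plausible("203.0.113.5", "example.com"))  # False: likely forged
```

The failure mode the post warns about is visible in the code: the check is only as good as the published list, so an ISP that forgets to register a new outgoing server starts failing the test.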

As other people have pointed out, this is not an IETF working group - it's an IRTF research group, and as such its purpose is to try to understand the problem and the solution space rather than to propose a concrete solution for standardization. Many of the folks in the group don't seem to have figured this out yet either. As a result there have been a number of naive proposals for how to solve the problem, along with a huge number of arguments of the form "your proposal is broken in X way".

But the group just got started a few days ago, and is only barely starting to move in any particular direction. Trying to make any predictions about the group's eventual output from what has been discussed so far is probably not a useful exercise.

...allow binary transfers. I can't tell you how much CPU time has been wasted by base64 encoding binaries, sending them over an inefficient protocol, and decoding them on the other end. yEnc does a good job but the whole encoding shenanigan is a major pain for anyone trying to send family photos or the latest AFI album. Please, IETF, make a better 8-bit clean push protocol, because SMTP is the only one we have.
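The base64 penalty being complained about is easy to measure: every 3 payload bytes become 4 ASCII characters, roughly a 33% size increase even before line breaks and the encode/decode CPU time are counted.

```python
import base64

# Measure the base64 size overhead on an arbitrary binary payload.
payload = bytes(range(256)) * 300       # 76,800 bytes of binary data
encoded = base64.b64encode(payload)

print(len(payload), len(encoded), len(encoded) / len(payload))
# 76800 102400 1.3333333333333333
```

An 8-bit clean transport would ship those 76,800 bytes as-is, which is the whole argument for building binary transfer into any SMTP successor.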

And make the default text encoding UTF-8. The Linux Standards Base will soon make UTF-8 the default system encoding for Linux (well, the major ones that follow the LSB). Win & Mac are now Unicode-based. XML is UTF-8 by default.

Having the full character set in use by the OS (Unicode/ISO 10646) includable in a default email with no corruption or additional processing would certainly be a great feature.

This would allow even English speakers to have an enormously increased range of characters they can include in their text: math symbols, musical symbols, the full text of a Time Magazine article (professional publishers go way beyond Latin-1 in the characters they include in an article, and an email version these days usually corrupts the original to some extent), as well as mixtures of any other languages of interest.

XML in default form (UTF-8) could be included as simple text without corruption and without requiring that it be converted to an attachment.

If it's the default, UTF-8 will be quickly adopted by modern (non-abandoned) email clients and servers, meaning that they can all talk to each other in any language without reconfiguration.
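A quick illustration of why UTF-8 makes a comfortable default: plain ASCII passes through byte-for-byte unchanged, while math symbols and other scripts round-trip losslessly. The sample strings are arbitrary.

```python
# UTF-8 handles everything from ASCII to mixed scripts with one encoding.
for text in ["plain ASCII", "∑ f(x) dx ≈ π", "naïve café", "日本語"]:
    raw = text.encode("utf-8")
    assert raw.decode("utf-8") == text   # round-trips losslessly
    print(text, "->", len(raw), "bytes")

# ASCII text is byte-for-byte identical in UTF-8, which is what makes it
# a painless default for existing English-language mail:
assert "plain ASCII".encode("utf-8") == b"plain ASCII"
```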

Complain to your vendor. The BDAT ESMTP extension has been out for a while now - RFC3030 is an update of RFC1830, from 1995.

For the record, a number of vendors, Sendmail included, don't support BDAT yet because of a rather annoying corner case. It's unclear what you should do if you are a mail gateway, accept a message via BDAT, and then find that the mail server that you need to pass it along to doesn't support BDAT. Or to quote section 3 of RFC3030:

If the receiver-SMTP does not support BINARYMIME and the message to
be sent is a MIME object with a binary encoding, a sender-SMTP has
three options with which to forward the message. First, if the
receiver-SMTP supports the 8bit-MIMEtransport extension [8bit] and
the content is amenable to being encoded in 8bit, the sender-SMTP may
implement a gateway transformation to convert the message into valid
8bit-encoded MIME. Second, it may implement a gateway transformation
to convert the message into valid 7bit-encoded MIME. Third, it may
treat this as a permanent error and handle it in the usual manner for
delivery failures. The specifics of MIME content-transfer-encodings,
including transformations from Binary MIME to 8bit or 7bit MIME are
not described by this RFC; the conversion is nevertheless constrained
in the following ways:

1. The conversion MUST cause no loss of information; MIME
transport encodings MUST be employed as needed to insure this
is the case.

2. The resulting message MUST be valid 7bit or 8bit MIME. In
particular, the transformation MUST NOT result in nested Base-
64 or Quoted-Printable content-transfer-encodings.

Note that at the time of this writing there are no mechanisms for
converting a binary MIME object into an 8-bit MIME object. Such a
transformation will require the specification of a new MIME content-
transfer-encoding.

So we have options (1) and (2) that *may* work sometimes and have no existing standard method of implementation, and option (3) to drop back 10 and punt. Gives you warm-and-fuzzies about its reliability, doesn't it?
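The gateway's three-way choice from RFC 3030 section 3 can be sketched as a decision function. This is illustrative only (the function and parameter names are mine, not from any real MTA), but it shows why option (3) is the only path that always works:

```python
def forward_strategy(receiver_exts, fits_in_8bit, can_reencode_7bit=True):
    """Pick how a gateway forwards a binary-encoded MIME message,
    per RFC 3030 section 3. Names here are illustrative only.

    receiver_exts: set of ESMTP extensions the next hop advertised.
    """
    if "BINARYMIME" in receiver_exts:
        return "send as-is"
    if "8BITMIME" in receiver_exts and fits_in_8bit:
        return "re-encode to 8bit"    # option 1: no standard transform exists yet
    if can_reencode_7bit:
        return "re-encode to 7bit"    # option 2: base64/quoted-printable downgrade
    return "permanent failure"        # option 3: bounce it
```

Options (1) and (2) both depend on a lossless transformation the RFC itself says isn't fully specified, which is exactly the corner case that keeps vendors away.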

If you try to deliver mail to user@domain.com, you already have a system (MX records) for finding out which server should receive domain.com's mail. This should also work the other way around: any mail from domain.com should come from one of those same servers. Note that a host doesn't have to actually do both; it just has to be on the same "acceptable" server list.

Now what happens to all the people that want to send mail from a different host? They need to auth with their "real" mail server. So if I want to send from otherdomain.com with a domain.com address, it'll have to go like this:

otherdomain.com -> domain.com -> receivingdomain.com

This would not stop spam per se, but it would stop faked From: addresses, which are extremely annoying.
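The check itself is trivial once the domain's server list is known. A toy sketch (the DNS lookup that would populate the list is assumed, not shown, and all names are made up):

```python
def sender_allowed(claimed_domain, connecting_host, acceptable_servers):
    """Accept mail claiming From: <...@claimed_domain> only if the
    connecting host is on that domain's published server list.

    acceptable_servers: dict mapping domain -> set of hostnames,
    assumed to have been populated from DNS (lookup omitted here).
    """
    return connecting_host in acceptable_servers.get(claimed_domain, set())

servers = {"domain.com": {"mx1.domain.com", "mx2.domain.com"}}
print(sender_allowed("domain.com", "mx1.domain.com", servers))        # True
print(sender_allowed("domain.com", "relay.otherdomain.com", servers)) # False
```

This is why, in the scheme above, mail from otherdomain.com with a domain.com address must first be relayed through domain.com's own servers.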

What you are describing is commonly known as a chain of trust. A common usage of this is in public-key infrastructure deployments in global organizations such as the military and pharmaceutical companies or in chained SSL/TLS servers.

In those types of environments (internal to an organization) there is ultimately a trusted root server that acts as the certifying authority, telling all servers and users within a circumscribed domain that you can trust servers X, Y and Z.

The problem for e-mail and spam is that in cross-domain environments (external to an organization), this type of infrastructure does not currently exist. How do I know that I should trust mail from domain XYZ.com or ABC.com unless I have already negotiated a trust agreement with those domains?

If we only trust the servers, we implicitly trust that the administrators of the servers have authenticated the users. This is usually not the case, especially in the case of spammers.

Authenticating users would likely have to be done through a certificate-based system which raises all kinds of privacy concerns.
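The trust-graph logic itself is simple, even though the crypto and the cross-domain agreements are the hard part. A toy model (no real signatures, every name here is invented) that walks the vouching chain back to a known root:

```python
# issuer[x] = who vouched for x; a root vouches for itself
issuer = {
    "root-ca": "root-ca",
    "server-X": "root-ca",
    "alice@X": "server-X",
}

def trusted(entity, trusted_roots, issuer):
    """Follow the vouching chain; trust only if it reaches a known root."""
    seen = set()
    while entity not in seen:          # stop on cycles
        if entity in trusted_roots:
            return True
        seen.add(entity)
        entity = issuer.get(entity)
        if entity is None:             # nobody vouched for this entity
            return False
    return False

print(trusted("alice@X", {"root-ca"}, issuer))    # True
print(trusted("spammer@Z", {"root-ca"}, issuer))  # False
```

The problem the parent describes is precisely that, between unrelated domains, there is no shared `trusted_roots` set to walk back to.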

RBLs in conjunction with access controls (if well maintained by the community) could conceivably keep spam to a manageable level, except for all of the free e-Mail services. Hotmail, Yahoo, Excite, and Netscape need to charge everyone and enforce their AUPs rigorously, or their business model will be taken away if the IETF mandates a new standard.

The most frustrating thing about maintaining access lists is that the majority of spam comes from ISPs you can't block. The temptation lately to add aol.com to access lists is becoming very compelling, but as soon as you do that, all your clients will jump ship. It's irresponsible to block mail from such huge providers. But there is a two-way street here: these players need to be responsible as well. It's very ironic that AOL can block a billion spams to its own customer base in one day, yet cannot seem to prevent that same user base from sending well over a billion spams to the rest of the world.

If a $10 non-refundable setup fee were charged each time you wanted a new e-Mail account, this might be enough of a deterrent. Possibly not, but it's a better place to start than mandating protocol changes to a standard that worked so well for three decades, until the slime subverted SMTP into the problem child it is today.

Additionally, I'd love to figure out why all these third parties are spamming me with anti-virus offerings. If someone were doing that with a product or service I offered, I certainly would not compensate them for maligning the good name of my organization. So can we compel Norton and others of their ilk to stop encouraging spammers to advertise for them? (Just an aside from someone who wonders where viruses come from, anyway...)

I've been using email on the net for 20+ years and have been as grateful as anyone that it hasn't cost.

However, of late, with other people so grossly abusing the world's mail systems that I feel I am (and all of us are) paying anyway, it might be worth revisiting this question. I'm not 100% convinced that paying per piece of mail sent would be more expensive than what we do now, which is effectively paying (in both time and dollars spent on ISP infrastructure) per piece of mail received. I send a lot less mail than I receive, and I bet most people do.
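A back-of-envelope version of that comparison, with every number an assumption made up purely for illustration:

```python
# All numbers below are assumptions for illustration only.
price_per_message = 0.01       # hypothetical sender-pays fee, dollars
sent_per_day = 20
received_per_day = 200         # mostly spam

sender_pays = sent_per_day * price_per_message            # $0.20/day

# What you implicitly pay today: your time plus your share of the
# ISP infrastructure cost, per message received (assumed figure).
implicit_cost_per_received = 0.005
receiver_pays = received_per_day * implicit_cost_per_received  # $1.00/day

print(sender_pays < receiver_pays)
```

Under these (made-up) numbers the typical user comes out ahead paying per message sent; the real question is whether the implicit per-received cost is anywhere near that high.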

Computationally, that's a lot of public key encryption in action. For sites that process large amounts of email, this is going to hurt. But let's say we can throw money and CPU at that problem. And I suppose we can do the same for the problem of the tens of millions of key/address pairs we'll need to store centrally. Not so bad, then.

Socially, the existence of anonymous email may be important and valuable. But I suppose anonymous remailers could appear and use their own corporate keys to sign messages in your scheme.

Practically, you'd need a way to prevent denial-of-service attacks against someone's email: generating enough fraudulent 'bad reports' could cause their key to be centrally revoked. This seems bad.

Nothing totally insurmountable, but still pretty annoying to deal with.

(P.S. In most cases, no packet on the Internet is truly identified by its source; IP addresses can be forged fairly trivially. This doesn't really bear on your proposal, but I thought you should know.)

About the "denial of service attack"... I am not sure I understand you clearly, but I can foresee the situation where the "reporter" sends the same fingerprint again and again even though s/he received only one message signed by that key. To prevent this sort of thing, a very simple measure could be taken: when a message is originally sent, it is assigned a message ID. So when you report a bad fingerprint, you attach the message ID as well, and the CA cannot count the same fingerprint twice when you actually received only one bad message.
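The CA-side counting would then deduplicate on the (fingerprint, message ID) pair. A small sketch (names invented for illustration):

```python
def count_bad_reports(reports):
    """reports: iterable of (key_fingerprint, message_id) pairs.
    Repeated reports of the same message count only once per key."""
    seen = set()
    counts = {}
    for fingerprint, msg_id in reports:
        if (fingerprint, msg_id) in seen:
            continue               # same reporter resending the same message
        seen.add((fingerprint, msg_id))
        counts[fingerprint] = counts.get(fingerprint, 0) + 1
    return counts

# One message reported twice plus one distinct message: counts as 2, not 3
reports = [("key1", "msg-42"), ("key1", "msg-42"), ("key1", "msg-43")]
print(count_bad_reports(reports))  # {'key1': 2}
```

So a single bad message can never push a key past the revocation threshold N on its own, no matter how often it is re-reported.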

But if you mean a real DoS attack against the CA itself... with or without my proposal, CA servers must be protected against DoS attacks, right?

About a human right to send anonymous messages: anonymous means "some part or all of your personal identification is hidden." Hidden from whom? When you meet me on the street, I have the right to refuse your request to know my name, unless you are a cop. This is essential in any civilization: there are authorities to whom you must identify yourself if they ask for it. Of course, only if they ask, and of course they cannot share your information. And of course, if you don't have an ID, then the immigration service (INS) may give you a lot of trouble, up to suspending many of your rights and freedoms.

Same in email. If you want to send an anonymous message, then get some sort of anonymous key from the CA (without your actual name on it, but still unique and with backtracking to your actual identity). As long as you aren't doing anything wrong (like spamming), nobody will guess who is behind the key. But once your key is revoked, the authorities may come (using the backtracking info recorded when you originally ordered that anonymous key) and ask you to pay for what you've done.

Speaking about money: set good spam penalties and the investment returns quickly. Of course many spammers will shut down their business, but others will adapt. Good, dynamic, flexible optimization of N (the trigger threshold for firing a penalty) and of the penalty amount will keep the money flowing. On the other side, use some portion of Internet taxes. That's the way we pay cops, right? We do business, we pay taxes, cops get some money from that and keep our business safe (or at least they try to).

Spam is highly redundant commercial advertisement, and we don't want it. So the basic approach would be to exploit this redundancy to filter it out of the incoming message stream.

No. Not all redundant mail is spam. There are plenty of legitimate distribution lists out there.

However, highly localized approaches like personal mail filters will always fail due to the high variety of spam.

Yes. However, if a high enough percentage of people point at an email message and say "spam," then it's spam. People are very good at spotting spam, so a mechanism that records what people think about a mail and doesn't deliver it to anyone who hasn't already read it could be very successful. If 90% of the first few thousand people to read an email say it's spam, the other million need never see it at all.
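The delivery-holdback rule described above can be sketched in a few lines (the threshold and minimum vote count are the made-up figures from the comment, not a tuned design):

```python
def withhold_delivery(votes, threshold=0.9, min_votes=1000):
    """votes: list of True (reader marked it spam) / False, from the
    earliest readers. Once enough votes are in, stop delivering if the
    spam fraction reaches the threshold. Numbers are illustrative."""
    if len(votes) < min_votes:
        return False               # too early to judge; keep delivering
    return sum(votes) / len(votes) >= threshold

early_votes = [True] * 950 + [False] * 50   # 95% of first 1000 readers: spam
print(withhold_delivery(early_votes))        # True: the other million are spared

mailing_list = [False] * 1000                # legit list: nobody flags it
print(withhold_delivery(mailing_list))       # False: keeps flowing
```

As the earlier reply notes, the threshold matters: legitimate high-volume lists are also "redundant," so the decision has to rest on reader votes, not on redundancy alone.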