SPF, merely wish to put slight brakes on the system and ensure that only people who can get agreement from someone who owns a domain can set up a new e-mail system. More radical anti-spam activists mutter that if only everyone were required to use S/MIME certificates, we wouldn’t have a problem. Such proposals make it more bothersome to send e-mail, but they don’t actually negate the statement, so ultimately they will prove ineffective.
To be fair, SPF and the like would have one useful purpose, if they worked: they would prevent Joe jobs. That would be a good thing—speaking as someone who has been Joe-jobbed—but it would only impact an infinitesimal fraction of the spam messages out there.
More extreme sender-side validation for ensuring that not everyone can send e-mail, such as government-issued “e-mail licenses” that can be revoked if you spam, is just never going to fly with the average Internet user, and is politically impractical on a global Internet.
A real success in negating statement 3 would be to implement a system where only “good guys” could send mail. This is what Bayesian analysis software such as SpamAssassin and CRM114 tries to achieve. Unfortunately, this approach, which I will simply call “filtering”, has major problems.
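To make the Bayesian approach concrete, here is a minimal naive Bayes spam classifier. The tiny corpus, the token-counting scheme, and the smoothing constants are all invented for illustration; real tools like SpamAssassin and CRM114 are vastly more sophisticated than this sketch.

```python
# Toy naive Bayes spam filter. Training data and smoothing are illustrative.
import math

def train(messages):
    """Count, per token, how many spam and ham messages contain it."""
    spam_counts, ham_counts = {}, {}
    n_spam = n_ham = 0
    for text, is_spam in messages:
        if is_spam:
            n_spam += 1
        else:
            n_ham += 1
        counts = spam_counts if is_spam else ham_counts
        for token in set(text.lower().split()):
            counts[token] = counts.get(token, 0) + 1
    return spam_counts, ham_counts, n_spam, n_ham

def spam_probability(text, spam_counts, ham_counts, n_spam, n_ham):
    """Combine per-token likelihoods in log space; return P(spam)."""
    log_odds = math.log(n_spam / n_ham)
    for token in set(text.lower().split()):
        # Laplace smoothing so an unseen token doesn't zero out the product.
        p_spam = (spam_counts.get(token, 0) + 1) / (n_spam + 2)
        p_ham = (ham_counts.get(token, 0) + 1) / (n_ham + 2)
        log_odds += math.log(p_spam / p_ham)
    return 1 / (1 + math.exp(-log_odds))

corpus = [
    ("buy cheap pills now", True),
    ("limited offer buy now", True),
    ("meeting agenda for monday", False),
    ("lunch on monday?", False),
]
model = train(corpus)
print(spam_probability("buy pills", *model) > 0.5)      # True
print(spam_probability("monday meeting", *model) > 0.5)  # False
```

The key point for the argument that follows: the classifier only ever sees a message after it arrives, and its verdict is a probability, not a certainty.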
The first problem is that filtering only gets to stop the e-mail when it arrives at the destination server. At that point it has already wasted network bandwidth and disk space.
The second problem is that filtering is never perfect. “Never” is a strong word, so I’d better spend some time justifying it.
Not too long ago, many businessmen had highly intelligent filtering systems which operated on all their incoming communications. They called them “secretaries”. These filters were imbued with full human intelligence, a complex understanding of natural language, and the ability to notice subtle cues. Yet in spite of their sophistication, “secretaries” regularly made mistakes. They would bring messages to the boss’s attention, only to have him grumpily reply “Don’t bother me with any more of that nonsense”; or worse, they would fail to notify the boss of an important message.
My definition of spam is that it is defined on a per-user basis: spam, to person X, is whatever person X says it is. That may not seem like a useful definition, but it encompasses most of the others I’ve seen, as well as matching common usage.
While I believe that we will eventually succeed in developing artificial intelligence with human-like levels of understanding, I don’t believe it’s going to be any time soon. Even if it were, I’m doubtful that we will ever produce AI with the kind of mind-reading intelligence necessary to work out whether something is spam to me.
So in the meantime, some non-spam mail gets blocked, and some spam gets through. That is a big problem, because once you take into account the marginal cost of sending e-mail, and the fact that filtering only happens at the receiving system, you realize something unpleasant: a filter that is 90% effective merely means that the senders will spew 10× as much spam. Which means bigger disks, higher-capacity network links, and an ever-rising price for an Internet connection.
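The arithmetic behind that claim is simple enough to sketch; the figures below are illustrative.

```python
# Back-of-envelope check: if spammers need a fixed number of messages
# actually delivered, how many must they send past a filter?
def required_sends(deliveries_needed, filter_effectiveness):
    """Messages a spammer must send so enough slip past the filter."""
    return deliveries_needed / (1 - filter_effectiveness)

print(required_sends(1_000_000, 0.90))  # ~10 million: ten times the volume
print(required_sends(1_000_000, 0.99))  # ~100 million: a hundred times
```

Note the perverse incentive: every improvement in average filter effectiveness multiplies the volume the network as a whole must carry, because the filter acts only at the destination.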
The only thing delaying meltdown is the slow rate of deployment of effective filtering. Those of us who are on the leading edge of that technology curve have been privileged to have a mostly spam-free inbox as a result; but as the success rate of filtering on the Internet as a whole goes up, so the volume of spam goes up. Which is why spam is now the majority of all e-mail traffic, according to surveys.
So I think it’s pretty clear that filtering isn’t the solution.
There’s another class of technology people are using to try and get to the blissful state where only “good guys” can send e-mail. They set up a system to reply to incoming e-mail with another e-mail, the “challenge”. This asks the sender to click on a URL if he’s a “good guy”. If he does, his original message is passed on.
Now, there are a ton of technical problems with this approach, ranging from broken mailing lists to incompatibility between competing “verification” systems. There’s also the problem that we already know spammers are criminals, so what’s to stop them from lying? Plus, sending a response to incoming spam actually confirms your e-mail address to the spammer, resulting in more spam; and you have to send the challenge from your real e-mail address, or else legitimate senders won’t recognize it as valid. But I’m going to ignore all those problems and point out a more fundamental one.
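To pin down the mechanics being criticized, here is a toy sketch of the challenge/response flow. All the names and the in-memory state are hypothetical; a real system such as TDMA persists its whitelist and sends actual challenge e-mails.

```python
# Toy challenge/response mail gate. In-memory stand-in for a real system.
import secrets

held_mail = {}     # token -> (sender, message) awaiting verification
whitelist = set()  # senders who have answered a challenge
inbox = []

def receive(sender, message):
    """Deliver mail from whitelisted senders; challenge everyone else."""
    if sender in whitelist:
        inbox.append((sender, message))
        return None
    token = secrets.token_hex(8)
    held_mail[token] = (sender, message)
    # A real system would mail this token back to the sender as a URL.
    return token

def confirm(token):
    """The sender clicked the challenge link: whitelist and deliver."""
    sender, message = held_mail.pop(token)
    whitelist.add(sender)
    inbox.append((sender, message))

token = receive("friend@example.com", "lunch?")
confirm(token)
receive("friend@example.com", "moved to 1pm")  # delivered immediately now
```

Notice that the scheme works only if legitimate senders reliably receive the challenge and are willing to act on it, which is exactly the assumption attacked below.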
The moment sender verification becomes commonplace enough that people start responding to it, you’re going to see a ton of messages like this:
Hi. I recently changed e-mail addresses and wasn’t able to transfer my whitelist from my old ISP, so you’ll have to re-confirm that your message isn’t spam by clicking on the below link…
The link, of course, will take the user to a porn site or a site selling penis enlargement cream.
A few weeks after that, every Bayesian filter out there will have learnt that anything that looks like a verification request is actually spam. Sender verification will become totally ineffective, because either people will never receive the verification requests, or if they do receive them, they won’t want to click on the links.
In fact, if there are any spammers reading this, please go ahead and send out your next few spam runs crafted to look like messages from TDMA. That way we can be rid of this whole stupid C/R nonsense sooner rather than later.
So I have concluded that filtering doesn’t work. Or rather, it is impossible to make it work with the level of near-perfect efficiency that would be required to avoid having it simply result in ever larger volumes of spam.
And so we come to the final option, the elephant in the living room, the solution so horrible that people will engage in angry mockery rather than even discuss it: Maybe, just maybe, we need to make people pay to send e-mail.
As a recent New York Times article points out, there’s historical precedent. The postal service originally delivered mail at the recipient’s expense, until abuse of the system made it necessary to introduce postage stamps.
Now, there’s a lot of implementation detail that needs discussing, and I have plenty of ideas around that—my purpose in writing this article was simply to explain how I have reached the following conclusion:
The only way we are going to end e-mail spam is by making senders pay some amount to send each e-mail.
Sure, we can continue the filtering arms race, but ultimately it’s counter-productive. There is only one viable long-term solution to spam, and it’s an economic one.
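To see why the economics matter, consider a toy model of a spam campaign. All the figures are made up, but the asymmetry they illustrate is the point: at high volume and a tiny response rate, even a fraction of a cent per message flips the business from profit to loss.

```python
# Illustrative spammer economics under sender-pays. Figures are invented.
def campaign_profit(messages, response_rate, revenue_per_response, cost_per_message):
    """Net profit of one spam run: revenue from responses minus sending cost."""
    revenue = messages * response_rate * revenue_per_response
    cost = messages * cost_per_message
    return revenue - cost

# 10 million messages, 1-in-100,000 response rate, $20 per sale:
print(campaign_profit(10_000_000, 1e-5, 20.0, 0.0))    # ~$2,000 when sending is free
print(campaign_profit(10_000_000, 1e-5, 20.0, 0.001))  # ~-$8,000 at a tenth of a cent
```

An ordinary user sending a few dozen messages a day would barely notice the same price; the cost lands almost entirely on whoever sends in bulk.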
What form this solution should take, I discuss in part two.
© mathew 2017