Looking for self-hosted web-based email recommendations

I use Tiny Tiny RSS as my feed reader, instead of Google Reader or any other public cloud reader. I like it because it is web-based (I can use it at work, at home, on mobile) and because I maintain the operating environment (a VPS with a LAMP stack). If I want to move to another VPS provider, I can export my database and set up shop somewhere else. I maintain my own backups, and my data is my own.

I use Evolution as my mail client at home. All the mail is stored on my one local machine. This means that when I'm at work or on my phone, I can't manage my email or look back at old messages. I can send from an address or two while away from home, but I'd have to BCC myself to make sure I get a copy to store in Evolution later. I don't want to keep my mail on Google or my current email host, because I'd like to be able to switch hosts easily and maintain my own data.

I'd like a web-based email app much like TTRSS. It would be something that I can maintain myself. It would allow me to send as a number of addresses, and have decent email search and mailbox/folder capabilities.

I don't think I've ever seen a web-based email client of the kind you're asking about, i.e. one where you can enter POP3 or IMAP info and pull mail from another source (without running a local mail server), or one that will read your Evolution mail store and make it available via a web interface. Most web front-ends only let you access a mail account stored by the mail transfer agent (sendmail, postfix) on the local system.

That said, you could run a proper mail server. For web front-ends to mail servers, look at SquirrelMail and Roundcube.

What? No. Most webmail interfaces DO let you pull over IMAP, rather than directly from a local store. Roundcube, specifically, will even let you use arbitrary, user-specified remote servers and simply act as an ad-hoc IMAP client, if you leave it configured that way (so you can enter your own domain settings). (SquirrelMail also pulls over IMAP - really, the tough thing is if you DON'T want your client to use IMAP... Sqwebmail is about the only webmail package I'm aware of that WILL read directly from maildirs.)

The one thing Roundcube is missing, AFAIK (I might have missed something) is the ability to send as an arbitrary mail address - ie, send a mail from "iamjacksalias@jacksmail.com" while logged in as "jack@jacksmail.com".

This said: be aware that Roundcube doesn't have the best security history. You will need to keep on top of updates if you use it (or, really, any such client). Such is the price of rolling your own solutions that are web-accessible.

The one thing Roundcube is missing, AFAIK (I might have missed something) is the ability to send as an arbitrary mail address - ie, send a mail from "iamjacksalias@jacksmail.com" while logged in as "jack@jacksmail.com".

You can now. I get an "edit identities" link in 0.7 for that purpose.

Horde+Imp is the only other major alternative I'm aware of. I don't like Squirrelmail's interface, but I've fiddled with Horde and Roundcube a fair amount. Horde can probably do more, overall, but Roundcube is easier to work with. The documentation is better, the mailing list is more responsive, and the plugin system is easier to work with if you have to graft on custom features.

I guess I wanted my current host to remain my POP and SMTP server, and have my web service pull from the POP server and store the mail locally. Alternatively, maintaining the patches on my own POP/SMTP server might be annoying, but doable (I run a separate SMTP server for another project of mine).

The difference is, I understand how MySQL databases (and flat files) work as far as backups and restores go. If I run my own POP/IMAP server and store the mail in there, I'll have to learn where all that stuff goes, how to back it up, and how to fix it when it breaks.
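For what it's worth, if you end up with Dovecot storing mail in Maildir format, "where all that stuff goes" is just a directory tree with one plain file per message, so a backup is an ordinary file copy. A minimal sketch (all paths and the message filename are made-up examples; in practice you'd point rsync or cp at the real mail spool):

```shell
# A Maildir keeps one message per plain file under cur/, new/, and tmp/,
# so backup and restore are just file copies. These paths are throwaway.
mkdir -p /tmp/maildir-demo/cur /tmp/maildir-demo/new /tmp/maildir-demo/tmp
printf 'Subject: hello\n\nbody\n' > /tmp/maildir-demo/new/1330000000.demo.localhost

# In practice you would use rsync -a; cp -a behaves the same for this demo.
rm -rf /tmp/maildir-backup
cp -a /tmp/maildir-demo /tmp/maildir-backup
```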

edit: It looks like Roundcube might be damn close to what I was looking for. I'll mess with that first.

You can make it work with fetchmail, although I believe you need an MTA as well for final local delivery.

In theory you'd have incoming mail pulled from the provider to Erorus' VPS via fetchmail, postfix (or another MTA) delivering it to local mailbox(es), and dovecot (or another IMAP server) providing access for Roundcube.

The VPS would really be the final destination for messages, but the general headaches related to exposing MTA to the internet would be offloaded to the provider.
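If it helps, the fetchmail leg of that pipeline is only a few lines of configuration. A sketch of a ~/.fetchmailrc, with the hostname, user names, and password all placeholders (by default fetchmail forwards each retrieved message to the SMTP listener on localhost, which is where postfix would pick it up for final delivery):

```
# ~/.fetchmailrc - sketch only; hostname, users, and password are made up
set daemon 300                       # poll every 5 minutes
poll pop.current-host.example with proto POP3
    user "jack" there with password "secret" is "jack" here
    ssl
# Retrieved mail is handed to the MTA on localhost:25 (postfix here),
# which delivers it into the mailbox that dovecot serves to Roundcube.
```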

POP isn't going to work well for what you want. It's just not designed for permanent storage with remote access.

If your current host offers IMAP access, that's what you're going to want to use - in your client, as well as in Roundcube or whatever.

I want to store no mail on my mail server. I want to store mail on my VPS. This probably means storing it in a database or something. This way, when I want to switch mail hosts, I just change a value on my webmail client (like I do with my desktop client) and I don't have to worry about moving old mail here and there.

Alternatively, I can run my own mail server... but if I can continue to offload the maintenance of a public-facing mail server, I think I want to do that.

POP isn't going to work well for what you want. It's just not designed for permanent storage with remote access.

If your current host offers IMAP access, that's what you're going to want to use - in your client, as well as in Roundcube or whatever.

I want to store no mail on my mail server. I want to store mail on my VPS.

OH! Well, that changes the color of things significantly. If you want the mail stored on your VPS, then you want to set up Postfix, Dovecot, and optionally Roundcube. You want your current host to merely act as the public facing MX, which will then relay privately to your mailserver. (You don't list your own VPS in public DNS as an MX for the domain, so that spammers don't try to hammer it.) Ideally, you'll use iptables to completely firewall off port 25 from anybody BUT the public MX (your current host).
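A sketch of the iptables part, assuming the public MX lives at 203.0.113.10 (a placeholder address); adapt it to your existing ruleset:

```
# Accept SMTP only from the public-facing MX, drop it from everyone else.
# 203.0.113.10 stands in for your current host's address; run as root.
iptables -A INPUT -p tcp --dport 25 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 25 -j DROP
```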

You could also use a fetchmail setup, but... yuck. For a lot of reasons. Better just to do a forwarding MX setup as described above.

You could also use a fetchmail setup, but... yuck. For a lot of reasons. Better just to do a forwarding MX setup as described above.

Agreed, but that setup requires some cooperation from the current mail host. The sole advantage of fetchmail is that you can set it up yourself, should the current provider be unwilling or unable to set up forwarding. If the current host is something like Gmail or even a lower-end webhost plan, setting up a relay might not be an option.

You can make it work with fetchmail, although I believe you need an MTA as well for final local delivery.

In theory you'd have incoming mail pulled from the provider to Erorus' VPS via fetchmail, postfix (or another MTA) delivering it to local mailbox(es), and dovecot (or another IMAP server) providing access for Roundcube.

The VPS would really be the final destination for messages, but the general headaches related to exposing MTA to the internet would be offloaded to the provider.

My method is:

ISP Pop Account -> Fetchmail -> Dovecot IMAP Server

The IMAP server provides email for Roundcube, a GUI email client (Thunderbird) and also Alpine. (Alpine is handy when I'm sshing in).

There's sqwebmail - I can't vouch one way or the other for how well it's maintained. It's an actual *binary* running as CGI, IIRC - it's been quite a few years. It's very VERY fast - it accesses maildirs directly, with no IMAP ability - but the interface is very old-school and turns off most end users (I don't really mind it much, but I'm a long way from a typical end user).

Not sure why the particular abhorrence for PHP - most of the web runs on it, and I'm really not aware of php in and of itself having a much worse track record than anything else that will fit the bill.

The idea of putting a public facing PHP website on my server gives me the willies.

Stick it behind an HTTP auth stanza? That'll kill 99.999% of automated bots. And if you're feeling ultraplus paranoid about having PHP available in general, run PHP behind FastCGI (which itself runs as a very unprivileged user) and only enable the CGI handler on the HTTP-auth'd part of the site.

Bots are not what I worry about. If I have to worry about bots, then I know that things are _really_ terrible. Beyond redemption.

Quote:

FastCGI(which itself is running as a very unprivileged user)

I would have a VM dedicated to this particular task. System access is not what I would be concerned about; an attacker does not need to get to that point to be serious trouble. What worries me is an attacker having access to my email and passwords, and having the ability to modify the website to try to install malware on the PCs I use to check my mail.

I know that it's possible to write decent PHP apps, but it's still not ideal.

I am just curious if anybody has any recommendations. Zimba looks interesting.

Right, so it's not PHP you have trust issues with but the PHP ecosystem. Not sure there's really a solution to that, although I find it vaguely amusing you'd trust something like Zimbra(at least I assume Zimbra, Zimba doesn't exist), which has an enormous code base, over something like SquirrelMail.

*) Run web apps as standalone applications, in a user account, listening on localhost only
*) Use an Apache reverse proxy to access those applications
*) Use Kerberos HTTP auth
*) Make it only available over SSL (HTTP auth stuff is inherently insecure otherwise)
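A minimal sketch of that layout in Apache terms. The hostname, port, realm, and paths are all placeholders, and the Kerberos piece assumes mod_auth_kerb (or a similar module) is installed and configured:

```
# Sketch only; mail.example.org, port 8000, and all paths are made up.
# The webmail app itself listens only on 127.0.0.1:8000.
<VirtualHost *:443>
    ServerName mail.example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/mail.example.org.pem
    SSLCertificateKeyFile /etc/ssl/private/mail.example.org.key

    <Location />
        # Kerberos HTTP auth via mod_auth_kerb (assumed present)
        AuthType Kerberos
        AuthName "Webmail"
        KrbAuthRealms EXAMPLE.ORG
        Require valid-user
    </Location>

    ProxyPass        / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>
```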

Now that is technically 'secure'. The problem is that Apache and such can be very complex and sometimes difficult to configure correctly. I believe I get it right, but I can't be certain. Especially when you have to do weird things to make poorly written applications work properly behind a proxy.

Now I am _willing_ to install a PHP web app with such a configuration. I just don't like to. I was hoping that somebody would have some experience with something other than PHP applications.

^^ I think that's severe overkill TBH. Simply using auth_basic in a <Location /> section in an SSL vhost is plenty to do the trick - there's no weirdness about it, no quirky way around it if something's odd in the configs, <Location /> is <Location />, it covers the whole vhost, and that's really all there is to it.

You are, of course, depending on Apache itself to be secure - but you have to depend on SOMETHING, and Apache at least is VERY heavily maintained and very easy to update. (Not to mention which, you're really two-factoring it - an attacker must compromise both the auth_basic AND whatever built-in login functionality your web app has.)
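Concretely, that suggestion amounts to something like the following (Apache directives; the hostname and all paths are placeholders):

```
# Minimal sketch: SSL vhost, basic auth over the whole thing.
<VirtualHost *:443>
    ServerName mail.example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/mail.example.org.pem
    SSLCertificateKeyFile /etc/ssl/private/mail.example.org.key

    <Location />
        AuthType Basic
        AuthName "Private"
        AuthUserFile /etc/apache2/webmail.htpasswd
        Require valid-user
    </Location>
</VirtualHost>
```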

^^ I think that's severe overkill TBH. Simply using auth_basic in a <Location /> section in an SSL vhost is plenty to do the trick

It's S.O.P.

I have Kerberos at home, so I take advantage of it. That way I don't have to maintain a separate database of passwords for each application. In Linux it requires some configuration on the client side, but it is not terrible. Both Firefox and Chrome can be made kerberos-aware.

I do agree that SSL + http basic auth is sufficient. But it's easier to manage passwords from a central location and single sign on is a nice feature.

Quote:

there's no weirdness about it, no quirky way around it if something's odd in the configs, <Location /> is <Location />, it covers the whole vhost, and that's really all there is to it.

I like using Location with a reverse proxy and standalone web applications whenever possible. It makes all my applications automatically 'kerberos-aware', more or less. I don't want to put a lot of effort into figuring out how to hook each application up to my LDAP server or such nonsense. Plus I don't have to worry about conflicting requirements in the web server and such things. In addition, I can take advantage of caching in the reverse proxy, which reduces the strain on the app server.

So on and so forth. I can update applications piecemeal and move them around from system to system if necessary without having to reconfigure the clients or change URLs and such things. It just requires a slight change to the location statement.

Now the biggest problem with this approach is that many applications are poorly written and require URL re-write rules to work properly on the reverse proxy. This can be a PITA sometimes.

Quote:

You are, of course, depending on Apache itself to be secure

Yes. I don't run custom versions of Apache, and I want to avoid compiling any sort of custom "_mod" or anything like that, to make it easy to keep up to date. But like I said, it's complicated to configure sometimes, and it's difficult to know if you did it correctly. I try to keep the Apache side as simple as possible, but it still makes me a bit nervous.

So it's just another layer to security.

Quote:

- an attacker must compromise both the auth_basic AND whatever built-in login functionality your web app has

That's not true.

Currently it seems that most PHP installations run as CGI since 2004 are vulnerable to remote code execution, as long as the attacker is able to reach a URL whose content is generated by the CGI script. So they do not need to defeat the app login.

FOR EXAMPLE:

For PHP, all they need to do right now is have access to a PHP script on a web server that executes it via CGI.

Currently every PHP application run over CGI since 2004 is vulnerable to remote code execution through a simple edit of URL arguments. The CGI RFC says that, via the URL query string, it is possible to pass command-line arguments to your scripts. In 2004 a PHP developer removed the sanity checking that made it possible to run PHP safely as a CGI script. He did this because many of their regression tests were broken, and it was easier to remove the sanity checking than to fix the tests. Nobody replied to his email explaining why it was necessary.
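To make the mechanism concrete: a query string containing no "=" gets passed to the CGI binary as command-line arguments, so php-cgi ends up parsing attacker-supplied switches. The widely published examples of this (CVE-2012-1823) look roughly like the following; the hostname is illustrative:

```
# Dump the script's source by handing php-cgi its -s (show source) switch:
http://victim.example/index.php?-s

# Remote code execution by injecting -d configuration overrides; the PHP
# payload then goes in the POST body, which auto_prepend_file executes:
http://victim.example/index.php?-d+allow_url_include%3dOn+-d+auto_prepend_file%3dphp://input
```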

PHP recently released a patched version that was supposed to solve this issue, but they didn't design the fix correctly and a modified version of the attack was still possible. In response to that, they recommended a configuration change for administrators to mitigate the issue, but their recommendations were incorrect and still allowed a slightly modified version of the exploit to work.

All of this is very recent, as of this last week.

Now, if you are using fcgi, then you're safe. Not because fcgi is better security-wise; it's just dumb luck on the part of administrators if they chose/were able to use that fcgi configuration. Also, depending on how you run your local PHP config file, it may or may not help prevent remote code execution, which again is not because one configuration is 'more secure' than the other... it's just dumb luck.

That's not to say that next week there won't be a trivial exploit against PHP for people running fcgi, or anything like that.

Unfortunately this is not the first time this sort of thing has happened. It's not the first time PHP has had remote exploits or other security issues for which the PHP devs released patches that didn't fix the issue, along with configuration recommendations that were incorrect.

2. RewriteRule on php.net: The RewriteRule posted on PHP.net is useless. I can’t go into much detail here but it does not mitigate the issue properly.

3. RewriteRule on bugs.php.net: The comments in the original advisory show a RewriteRule that was posted on bugs.php.net originally. I tried this against an unpatched PHP 5.3.3-7+Squeeze8 and it seems to correctly filter all malicious query strings, including my variations. It has a tiny bit of overshoot though (for example, passing negative integers to the query string will trigger a 403), but it’s not practically relevant, I think.

4. Binary wrapper and option-stripping patch on eindbazen.net: The guys who wrote the original advisory wrote a patch, and a php-cgi wrapper that strips all "-xyz" options (or skips option processing entirely) when in CGI mode. This works well, too.

5. Patch “third option” on eindbazen.net: This patch does not properly mitigate the issue.
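For reference, the bugs.php.net rule mentioned in item 3 was of roughly this shape: reject (403) any query string that contains no "=" but does contain a dash. That behavior also explains the overshoot described above, since a bare negative integer in the query string matches both conditions. Reproduced from memory of the advisory discussion; verify against the original post before relying on it:

```
RewriteCond %{QUERY_STRING} ^[^=]*$
RewriteCond %{QUERY_STRING} %2d|\- [NC]
RewriteRule .? - [F,L]
```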

Who gives a crap what the default is? By default Redhat or Debian doesn't serve webmail either.

The fact of the matter is that using CGI is a perfectly and 100% valid configuration. Even when PHP and CGI are configured the way they should be, they will still allow trivial remote code execution.

The PHP response was to release a broken patch that did nothing to address the real problem. Once people pointed out that the patch didn't work, the PHP folks gave people a workaround configuration to block the attack, which also didn't work. What is more, they actually had implemented the correct sanity checking needed for CGI, but removed that code in 2004 because it broke some of their _regression_tests_.

You can't make this stuff up. If the PHP devs themselves can't figure out how to use PHP in a secure manner then what chance do I have?

If you want half a dozen other examples of why PHP sucks that have nothing to do with CGI, I can provide them. It's not a matter of a one-time mistake, the kind that crops up once in a blue moon with any software. I've seen this pattern of fail many times with PHP. It's just software with a terrible track record.

Now I didn't want to get into _why_ PHP sucks.

All I was asking was if anybody had any recommendations for non-PHP webmail.