"It Just Works" is more than just a slogan; it's a way of life!


Dell’s Linux Engineering team, based in Austin, TX, is hiring a Senior Principal Engineer. This role is one I’ve previously held and enjoyed greatly – ensuring that Linux (all flavors) “just works” on all Dell PowerEdge servers. It is as much a relationship role (working closely with Dell and partner hardware teams, OS vendors and developers, internal teams, and the greater open source community) as it is technical (device driver work, OS kernel and userspace work). If you’re a “jack of all trades” in Linux and looking for a very senior technical role to continue the outstanding work that ranks Dell at the top of charts for Linux servers, we’d love to speak with you.

The formal job description is on Dell’s job site. If you’d like to speak with me personally about it, drop me a line.

As the new upstream maintainer for the popular s3cmd program, I have been collecting and making fixes all across the codebase for several months. In the last couple of weeks it has finally gotten stable enough to warrant publishing a formal release. Aside from bugfixes, its primary enhancement is support for the AWS Signature v4 method, which is required to create S3 buckets in the eu-central-1 (Frankfurt) region, and is a more secure request-signing method usable in all AWS S3 regions. Since the s3cmd v1.5.0 release, Python 2.7.9 (shipped in, for example, Arch Linux) added SSL certificate validation. Unfortunately, that validation broke for SSL wildcard certificates (e.g. *.s3.amazonaws.com). Ubuntu 14.04 has an intermediate flavor of this validation, which also broke s3cmd. A couple of quick fixes later, and v1.5.2 is published now.

I’ve updated the packages in Fedora 20, 21, and rawhide. The EPEL 6 and 7 epel-testing repositories have these as well. If you use s3cmd on RHEL/CentOS, please upgrade to the package in epel-testing and give it karma. Bug reports are welcome at https://github.com/s3tools/s3cmd.

I took a few months off from dealing with my spam problems, choosing to stick my head in the sand. Probably not my wisest move…

In the interim, the opendmarc developers have been busy, releasing version 1.3.0, which also adds the nice feature of doing SPF checking internally. This lets me CLOSE WONTFIX the smf-spf and libspf2 packages from the Fedora review process and remove them from my system. “All code has bugs. Unmaintained code with bugs that you aren’t running can’t harm you.” New packages and the open Fedora review are available.

I’ve also had several complaints from friends, all @yahoo.com users, who have been sending mail to me and my family @domsch.com. In most cases, @domsch.com simply forwards the emails on to yet another mail provider – it’s providing a mail forwarding service for a vanity domain name. However, now that Yahoo and AOL are publishing DMARC p=reject rules, after smtp.domsch.com forwarded the mail on to its ultimate home, those downstream servers were rejecting the messages (presumably on SPF grounds – smtp.domsch.com isn’t a valid mail server for @yahoo.com).

My solution to this is a bit awkward, but will work for a while. Instead of forwarding mail from domains with DMARC p=reject or p=quarantine, I now store them and serve them up via POP3/IMAP to their ultimate destination. I’m using procmail to do the forwarding:
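The recipe looks roughly like the following sketch. The domain list, mailbox name, and forwarding address are illustrative assumptions, not my actual configuration:

```
# Sketch only: mail whose From: domain publishes a restrictive DMARC
# policy (p=reject/p=quarantine) is delivered to a local maildir, to be
# fetched later over POP3/IMAP by its ultimate destination, rather than
# re-forwarded (which would fail SPF at the downstream server).
:0
* ^From:.*@(yahoo\.com|aol\.com)
dmarc-hold/

# Everything else is still forwarded as before.
:0
! someone@example.net
```

The hour or so of delivery latency mentioned below comes from how often the destination account polls that held mailbox.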

This introduces quite a bit of latency (on the order of an hour) for mail delivery from my friends with @yahoo.com addresses, but it keeps them from getting rejected due to their email provider’s lousy choice of policy.

Tim Draegen, the guy behind the excellent dmarcian.com, is chairing a new IETF working group focusing on proper handling of “indirect email flows” such as mailing lists and vanity domain forwarding. I’m hoping to have time to get involved there. If you care, follow along on their mailing lists.

I’m choosing to ignore the fact that domsch.com is getting spoofed 800k times a week (as reported by 8 mail providers and visualized nicely on dmarcian.com), at least for now. I’m hoping the new working group can come up with a method to help solve this.

Do your friends use a mail service publishing DMARC p=reject? Has it caused problems for you? Let me know in the comments below.

The existential question, asked by everyone and everything throughout their lifetimes – who am I? High school seniors choosing a college, college seniors considering grad school or entering the job market, adults in the midst of their mid-life crisis—the question comes far easier than the answer.

In the world of technology, who you are depends on the technology with which you are interacting. On Facebook, you are your quirky personal self, with pictures of your family and vacations you take. On LinkedIn, you are your professional self, sharing articles and achievements that are aligned with your career.

What about on the myriad devices you carry around? On the smartphone in my pocket, I have several personas—personal, business, gamer (my kids borrow my phone), constantly context-switching between them. In the not-too-distant past, people would carry two phones—one for personal use and one for work, keeping the personas separate via physical separation—two personas, two devices.

If you have ever attended the Ottawa Linux Symposium (OLS), read a paper on a technology first publicly suggested at OLS, or use Linux today, please consider donating to help the conference and Andrew Hutton, the conference’s principal organizer since 1999.

I first attended OLS in the summer of 2003. I had heard of this mythical conference in Canada each summer, a long way from Austin yet still considered domestic rather than international for the purposes of business travel authorization, so getting approval to attend wasn’t so hard. I met Val on the walk from Les Suites to the conference center on the first morning, James Bottomley during a storage subsystem breakout the first afternoon, Jon Masters while still in his manic coffee phase, and countless others that first year. Willie organized the bicycle-chain keysigning that helped people put faces to names we only knew via LKML posts. I remember meeting Andrew in the ever-present hallway track, and somehow wound up on the program committee for the following year and the next several.

I went on to submit papers in 2004 (DKMS), 2006 (Firmware Tools), and 2008 (MirrorManager). Getting a paper accepted meant great exposure for your projects (these three are still in use today). It also meant my first invitation to the party-within-the-party – the excellent speaker events that Andrew organized as a thank-you to the speakers. Scotch tastings with a haggis celebrated by Stephen Tweedie. A cruise on the Ottawa River. An evening in a cold war fallout shelter reserved for Parliament officials, with the most excellent Scotch that only Mark Shuttleworth could bring. These were a special treat I always looked forward to.

Andrew, and all the good people who helped organize OLS each year, put on quite a show, intentional about building the community – not by the numbers (though for quite a while, attendance grew and grew) – but by providing space to build the deep personal connections that are so critical to the open source development model. It’s much harder to be angry about someone rejecting your patches when you’ve met them face to face; rather than assume it’s out of spite, you understand the context behind their decisions, and how you can better work within that context. Many of the Linux developers who became my colleagues over the last 15 years I first met face-to-face at OLS.

I haven’t been able to attend for the last few years, but always enjoyed the conference, the hallway talks, the speaker parties, and the intentional community-building that OLS represents.

Several economic changes conspired to put OLS into the financial bind it is today. You can read Andrew’s take about it on the Indiegogo site. I think the problems started before the temporary move to Montreal. In OLS’s growth years, the Kernel Summit was co-located, and preceded OLS. After several years with this arrangement, the Kernel Summit members decided that OLS was getting too big, that the week got really really long (2 days of KS plus 4 days of OLS), and that everyone had been to Ottawa enough times that it was time to move the meetings around. Cambridge, UK would be the next KS venue (and a fine venue it was). But in moving KS away, some of the gravitational attraction of so many kernel developers left OLS as well.

The second problem came in moving the Ottawa Linux Symposium to Montreal for a year. This was necessary, as the conference facility in Ottawa was being remodeled (really, rebuilt from the ground up), which prevented it from being held there. This move took even more of the wind out of the sails. I wasn’t able to attend the Montreal symposium, nor since, but as I understand it, attendance has been on the decline ever since. Andrew’s perseverance has kept the conference alive, albeit smaller, at a staggering personal cost.

Whether or not the conference happens in 2015 remains to be seen. Regardless, I’ve made a donation to support the debt relief, in gratitude for the connections that OLS forged for me in the Linux community. If OLS has had an impact in your career, your friendships, please make a donation yourself to help both Andrew, and the conference.

My team in Dell Software continues to grow. I have two critical roles open now, and am looking for fantastic people to join my team. The first, posted a few weeks ago, is for a Principal Web Software Engineer, a very senior engineering role. The second is for a Java web application developer (my team is using JBoss Portal Server / GateIn for the web UI layer). Both give you the chance to enjoy life in beautiful Austin, TX while working on cool projects! Let me know if you are interested.

Dell Software empowers companies of all sizes to experience Dell’s “Power to Do More” by delivering scalable yet simple-to-use solutions to drive value and accelerate results. We are uniquely positioned to address today’s most pressing business and IT challenges with holistic, connected software offerings. Dell Software Group’s portfolio now includes the products and technologies of Quest Software, AppAssure, Enstratius, Boomi, KACE and SonicWALL. For more information, please visit http://software.dell.com/.

This role is for a unique and talented technical contributor in the Dell Software Group responsible for a number of activities required to design, develop, test, maintain and operate web software applications built using J2EE and JBoss GateIn portal.

Role Responsibilities
-Develop server software in the Java environment
-Create, maintain, and manipulate web application content in HTML, CSS, JavaScript, and Java languages
-Unit test web application software on the client and on the server
-Provide guidance and support to application testing and quality assurance teams
-Contribute to process diagrams, wiki pages, and other documentation
-Work in an Agile software development environment
-Work in a Linux environment

Preferences
-Experience with Atlassian products, continuous integration
-Willingness and ability to learn new technologies and concepts
-Resourcefulness in resolving technical issues
-Advanced interpersonal skills; able to work independently and as part of a team

One of the things I’ve started to enjoy as a people manager at Dell, more than as an “individual contributor”, is that I have a lot of say in what skills my team needs, and can look for people I really want to have on my team. I’ve got one such opening posted now, for a Principal Web Software Engineer. Please contact me if you or someone you know would be a good fit.


Come work on a dynamic team creating software that impacts enterprise customers on a day-to-day basis. A great opportunity to work on cutting-edge technology using evolving development tools and methods.

This role is for a unique and talented technical contributor in the Dell Software Group responsible for a number of activities required to design, develop, test, maintain and operate web software applications built using J2EE and JBoss GateIn portal.
The position requires strong technical and creative skills as well as an understanding of software engineering processes, technologies, tools, and techniques including: Red Hat JBoss Enterprise Application Platform (or similar J2EE frameworks), JBoss GateIn, HTML5, CSS, JavaScript, jQuery, SAML, and web security.

Role Responsibilities
-Develop high-level and low-level design documents for modules of the web application
-Provide technical leadership to a team of software engineers
-Develop server software in the Java environment
-Create, maintain, and manipulate web application content in HTML5, CSS, and JavaScript
-Unit test web application software on the client and on the server
-Provide guidance and support to application testing and quality assurance teams
-Support field sales on specific customer engagements
-Work in an Agile software development environment
-Work in a Linux environment
-Experience with Atlassian products and continuous integration
-Experience with Integration Platform-as-a-Service (Dell Boomi)

Requirements
-8+ years of experience designing and developing Java-based web applications
-Deep understanding of the components and capabilities of J2EE; able to help make architectural decisions on component choices
-Visual web application design skills; implementation knowledge of internationalization and localization, HTML, CSS, JavaScript, and RESTful interface design; experience with OAuth or other token-based authentication and authorization
-Willing and able to learn new technologies and concepts
-Resourcefulness in resolving technical issues
-Advanced interpersonal skills; able to work independently and as part of the team

In parts 1 and 2 of this series, I’ve explored the current best practices for authenticating outbound email, validating inbound email, and my own system configurations for such.

When not busy with my day job, I also serve on the board of our neighborhood youth basketball league as registrar and co-webmaster. As part of these roles, I maintain the email infrastructure, and send most of the announcement emails.

I had made a point of “warming up” the IP address for this server. I’ve been using this same IP for several months, and it has a returnpath.com Sender Score of 99, so I thought I’d be in the clear.

We have about 1500 parent email addresses on our announcement mailing list, with over 160 of them going to a single domain – austin.rr.com. No surprise there – Roadrunner is a very popular ISP. The problem is this: Roadrunner’s mail servers don’t appreciate when my mail server sends an email, via this mailing list, to their inbound MX mail server. Even after mailman splits the message up so there are only 10 recipients per message, the first few messages get through and the rest get put on hold (SMTP 4.x.x try again later), allowing only a trickle of messages per hour from my one IP address.

During the Austin Snowpocalypse last week, we needed to get an announcement out to our parents that, because schools were closed, and because we rent all our court space from the area schools, our practices and games had to be cancelled for the night. I sent that note around noon. It took until 6pm before all the @austin.rr.com emails were allowed through – just in time to give notice of a cancelled 6pm event. Note – my message came from a valid SPF mail server, had a valid DKIM key attached, and the DMARC policy is “none”, so it wasn’t blocked at that level. It was blocked because my mail server’s IP address isn’t allowed to send more than a few messages to Roadrunner subscribers each hour.

Roadrunner does provide a way for you to request relaxed rate limits for your IP. I followed their process, which uses Return Path’s Feedback Loop Management service, but my request was denied, no explanation given. Perhaps they know it’s a cloud service IP, which in theory could be given to another customer at any moment. I’ll file a request with my ISP to see if they’ll sign up with ReturnPath to be responsible for their netblocks on behalf of their customers. Not sure how well that’ll go over – it could make a lot of work for the ISP mail technicians.

One other alternative is to use an outbound mail service such as Amazon Simple Email Service, Mailchimp or SendGrid, in order to get my mails out to our players’ families in a timely manner. Mailchimp appears to have the disadvantage of needing to migrate all my mailing lists to them, instead of my existing GNU Mailman setup. SendGrid has better pure SMTP integration, and with some sendmail smarttable hacking, I could probably make that work. All three involve some increased cost to us, in the months I send a lot of announcements.

Do you send bulk mail from a cloud service? How do you ensure your mails get through? Leave your comments below.

In part 1 of this series, I relayed a bit of my story about my use of SPF, DKIM, and DMARC to try to reduce the spam being sent as if from my personal domain, while increasing the odds that legitimate mail from my domain gets through.

In this part, I describe how these are actually implemented in my case.

First, let me describe my email setup. I have one cloud-hosted server, smtp.domsch.com, through which all authentic *@domsch.com email is sent. Senders may be either local to this server (such as postmaster@ which sends the DMARC reports to other mail servers), or may be family members who use a hosted email service (as it happens, all use GMail) as their Mail User Agent. Users make an authenticated connection to smtp.domsch.com, which then DKIM-signs the messages and sends them on toward their destination MX server. These users may also be subscribed to various mailing lists which would break (fail to get their legitimate message through to the expected receivers) if SPF policy were anything except softfail.

Outbound, user-authenticated mail from *@domsch.com should be treated differently than inbound mail. Outbound mail requires only a DKIM milter to sign each message. Messages are signed with a DKIM key, published in my DNS:

default._domainkey.domsch.com. 7200 IN TXT "v=DKIM1\; k=rsa\; s=email\; p=(some nice long hex string)"

I publish a DMARC DNS record so I can get reports back from DMARC-compliant servers:

_dmarc.domsch.com. 7200 IN TXT "v=DMARC1\; p=none\; rua=mailto:dmarc-aggregate@domsch.com\; ruf=mailto:dmarc-forensics@domsch.com\; adkim=r\; aspf=r\; rf=afrf"

Inbound mail to *@domsch.com should pass each message through an SPF milter which adds a Received-SPF header, a DKIM milter to check the validity of a DKIM-signed message which adds an Authentication-Results header, and the DMARC milter which decides what to do based on the results of these other two headers, and sends results to DMARC senders.
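For illustration, the headers those milters add look roughly like this. The host names, addresses, and results below are invented examples following the standard header formats, not output from my server:

```
Received-SPF: pass (smtp.domsch.com: domain of sender@example.org
    designates 203.0.113.5 as permitted sender)
Authentication-Results: smtp.domsch.com; dkim=pass (2048-bit key)
    header.d=example.org
```

opendmarc reads both of these results when deciding how to treat the message.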

smtp.domsch.com runs CentOS 6.x, sendmail, and a variety of milters. On outbound mail, it runs opendkim. On inbound mail, it runs smf-spf, opendkim, and opendmarc, before sending it on to its final destination. My sendmail.mc file is configured as such to allow the different milters to run depending on direction – outbound or inbound:
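A minimal sketch of what such a sendmail.mc milter setup can look like follows. The port numbers are assumptions (8891 and 8893 are the common opendkim/opendmarc defaults); note that opendkim itself decides whether to sign or verify a given message based on its own Mode and InternalHosts settings:

```
dnl Register the milters; each listens on a local TCP socket
INPUT_MAIL_FILTER(`smf-spf', `S=inet:8890@127.0.0.1')dnl
INPUT_MAIL_FILTER(`opendkim', `S=inet:8891@127.0.0.1')dnl
INPUT_MAIL_FILTER(`opendmarc', `S=inet:8893@127.0.0.1')dnl
```

The filters run in the order listed, so opendmarc sees the headers added by the SPF and DKIM milters before it.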

Why do the milters listen on a local TCP socket, instead of a UNIX domain socket? Simply, they don’t yet have SELinux policies in place that let them use a domain socket. Once these packages are properly reviewed and included in Fedora/EPEL, we will adjust the listening port to be a domain socket.

Of these milters, opendkim and opendmarc seem to be properly maintained still. smf-spf, for its whole ~1000 lines of code, has been largely untouched since 2005, and its maintainer seems to have completely fallen off the Internet. All my attempts to find a valid address for him have failed. There are a variety of other SPF filters, the most popular of which is python-postfix-policyd-spf – which, as the name implies, is postfix-specific, and as I noted, I’m not running postfix. Call me lazy, but sendmail works well enough for me at present.

These milters are currently under review (smf-spf, libspf2, opendmarc) in Fedora and will eventually land in the EPEL repositories as well. opendkim is already in EPEL.

If you are using SPF, DKIM, and DMARC, what does your configuration look like? Please leave a comment below.

We all dislike email spam. It clogs up our inboxes, and causes good
engineers to spend way too much time creatively blocking or filtering
it out, while spammers creatively work to get around the blocks. In
my personal life, the spammers are winning. (My employer, Dell, makes
several security and spam-fighting products. I’m not using them for
my personal domains, so this series is not related to Dell products in
any way.)

I recently came across DMARC, the Domain-based Message Authentication,
Reporting & Conformance specification. One feature of DMARC is that it
allows mail receivers, after processing a given piece of mail, to inform
an address at the sending domain of that mail’s disposition: passed,
quarantined, or rejected. This is the first such feedback method I’ve
come across, and it seems to be gaining traction. Furthermore,
services such as dmarcian.com have popped up to act as DMARC report
receivers, which then display your aggregate results in a clear
manner.

A DMARC-compliant outbound mail server provides several useful bits of
information. 1) The domain publishes a valid Sender Policy Framework
(SPF) record. 2) The domain signs mail using Domain Keys Identified
Mail (DKIM). These are best practices now, in place by millions of
domains. In addition, the domain publishes its DMARC policy
recommendation, what an inbound mail server should do if a message
purporting to be from the domain fails both SPF and DKIM checks. The
policies today include “none” – do nothing special, “quarantine” –
treat the message as suspect, perhaps applying additional filtering or
sending to a spam folder, and “reject” – reject the message
immediately, sending a bounce back to the sender.
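Mechanically, the policy is just a DNS TXT record of semicolon-separated tag=value pairs. A minimal sketch of parsing one (the record text here is illustrative, and real validators do much more, e.g. enforcing that v=DMARC1 comes first):

```python
def parse_dmarc(record):
    """Split a DMARC TXT record into a dict of tag -> value."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

policy = parse_dmarc("v=DMARC1; p=quarantine; pct=25; rua=mailto:reports@example.com")
print(policy["p"])    # quarantine
print(policy["pct"])  # 25
```

A receiver uses the "p" tag to pick its action, and the "rua" address to know where to send aggregate reports.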

A DMARC-compliant inbound mail server validates each incoming message
against two things: compliance with the Sender Policy Framework (SPF)
and checks the DKIM signature. The server then follows the policy
suggested by the sending domain (none, quarantine, or reject), and
furthermore, reports back the results of its actions daily to the
purported sending domain.

I’ve been publishing SPF records for my personal and community
organization domains for several years, in hopes this would cut down
on spammers pretending to be from my domains. I recently added DKIM
signing, the next step in the process. With these two in place,
publishing a DMARC policy is very straightforward. So I did this,
publishing a “none” policy – just send me reports. And within a few
days, I started getting reports back, which I sent to dmarcian.com for
analysis.
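For illustration, an SPF record of the kind described looks something like the following. The domain and host name are placeholders, and ~all (softfail) rather than -all (hard fail) is what keeps forwarded mail from being rejected outright:

```
; Illustrative SPF record: the MX hosts and one named smtp host may
; send for this domain; everything else softfails.
example.com. 3600 IN TXT "v=spf1 mx a:smtp.example.com ~all"
```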

What did I find?

On a usual day, my personal domain, used by myself and family members,
sends maybe a hundred total emails, as reported by DMARC-reporting
inbound servers. My community org domains may send 1000-2000 emails a
day, particularly if we have announcements to everyone on our lists.
That seems about right.

In addition, spammers, mostly in Russia and other parts of Asia, are
sending upwards of 20,000-40,000+ spam messages pretending to come from my
personal domain, again as reported by DMARC-reporting inbound
servers. Hotmail’s servers kindly are sending me reports for each
failed message they process thinking they were from me – a steady
stream of ~3600/day. No other DMARC servers have sent me such forensic
data yet.

Spam source by country for the last week
(Chart: spam source by country for the last week)

For several days, I experimented with a DMARC policy of “quarantine”,
with various small percentages from 10 to 50 percent. And sure
enough, dmarcian reports that the threat/spam mails were in fact
quarantined. It was really cool to wake up in the morning, check the
overnight results, and see the threat/spam graphs show half of the
messages being quarantined. It’s working!
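A staged rollout like this is expressed with the pct tag in the DMARC record. An illustrative example (not my actual record):

```
; Ask receivers to quarantine 25% of messages failing both SPF and DKIM
_dmarc.example.com. 7200 IN TXT "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc-aggregate@example.com"
```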

However, dmarcian also reported that some of my legitimate emails,
originating from my servers and being DKIM-signed, were also getting
quarantined. What? That wasn’t what I hoped for.

It turns out that authentic messages were in fact being forwarded –
some by mailing lists, some by individuals setting up forwarding from
one inbound mail address to another. Neither of which I can do
anything about.

This isn’t a new problem – it’s the Achilles heel of SPF, which DMARC then inherits. Forwarding email through a mailing list typically makes subtle yet valid changes while keeping the From: line the same.

The Subject: line may get a [listname] prepended to it. The body may
get a “click here to unsubscribe” footer added to it. These
invalidate the original DKIM signature. The list may strip out the
original DKIM signature. And of course, it remails the message,
outbound using its own server name and IP, which causes it to then
fail SPF tests.

Sure, there are suggested solutions, like getting everyone to use
Sender Rewriting Scheme (SRS) when remailing, and fixing Mailman and
every other mailing list manager. Wake me when all the world’s email
servers have added that; I will have been dead a very, very long time.

So, I switched back to policy “none”, and get the reports, aggravated
that there’s nothing I can directly do to protect the good name of my
domains. It’s hard both knowing the size of the problem, and knowing
we have no technological method of solving it today. Food for
thought.

In part 2 of this series, I will describe my system setup for using
the above techniques.

Do you use SPF? Do you use DKIM? Do you publish a DMARC policy? If so, what has your experience been? Leave comments below.