Posted
by
Soulskill
on Saturday October 19, 2013 @06:07PM
from the out-with-the-old dept.

An anonymous reader writes "NFTables is queued up for merging into the Linux 3.13 kernel. NFTables is a four-year-old project by the creators of Netfilter to write a new packet filtering / firewall engine for the Linux kernel to deprecate iptables (though it now offers an iptables compatibility layer too). NFTables promises to be more powerful and simpler, to reduce code complexity, improve error reporting, and provide more efficient handling of packet filter rules. The code was merged into net-next for the Linux 3.13 kernel. Iptables will still be present until NFTables is finished, but it is possible to try it out now. LWN also has a writeup on NFTables."
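A taste of the syntax difference, for the curious. This is an illustrative sketch only; the table and chain names are assumptions, and the nftables syntax shown should be checked against the current nft documentation:

```shell
# iptables: one rule per invocation, one tool per protocol family
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# nftables: a single "nft" tool, one syntax covering IPv4/IPv6 via
# the "inet" family, and sets matching several ports in one rule
nft add rule inet filter input tcp dport { 22, 80, 443 } accept
```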

And the iptables docs haven't even been finished yet. I was at the North Carolina Biotechnology Center at the Linux Expo in 1997 when one of the speakers talking about iptables promised they would write docs for it. I think I was the only teen girl and only black female there, so if you were there, you'll probably remember me. How about finishing what you start rather than screwing the users with half-assed, unfinished projects?

Don't you know? Open-source software doesn't need docs, because the best docs available are the sources.

It's the most comprehensive, but also one of the most incomprehensible for anyone not fluent in code.

Yes, because then you can say that subtle bugs are actually features! It is great!

Seriously, I know both of you are joking. But this is a bad joke that should be put down once and for all. Documentation describing the intended function of a program can help you find the bugs that cause inconsistent behavior. Using source as documentation is not even an option for the most skilled programmers. As long as we do not have mind-reading skills, there is no way of knowing what the original programmer intended.

I've always found the iptables tutorial from frozentux to be reasonably comprehensive. Maybe it's missing some really fancy stuff, but the important stuff about the theory of operation and what the targets do is all there.

Well, there's a point in abandoning a project that can't even document itself.

But I'd disagree. Iptables was a huge success, and the fact that the official docs aren't that good was eclipsed by how powerful the software is. But there's a point when you can't simply keep adding features to old software anymore, and need to start from scratch. Looks like we are at that point.

NFTables is brought to you by a group of coders formed when Alexey Kuznetsov decided to replace the low-level Linux network stack for Linux 2.2 to make it more like what Cisco provided in IOS. The result added a whole pile of new functionality to Linux (e.g. routing rules), and a shiny new, highly modular traffic control engine. Alexey produced beautifully written PostScript documentation [smc.edu] for the new userland routing tools (the "ip" command), and a 100-line howto [columbia.edu] for the far more complex traffic control engine tools (the "tc" command).

Technically it was a tour de force. But to end users it could at best be called a modest success. Alexey re-wrote the net-utils tools ("ifconfig", "route" and friends) to use the new system, and did such a good job that very few bothered to learn the new "ip" command, even though the documentation was good and it introduced a modest amount of new features. But the real innovation was the traffic control engine, and to this day bugger all people know how to use it.

At this point it could have gone two ways. Someone could have brought tc's documentation up to the same standard Alexey provided for ip, or they could ignore the fact that almost no one used the code already written and add more of the same. They did the latter.

It was also at this time the network code wars started in the kernel. Not many people know that a modest amount of NAT, filtering and so on can be done by Alexey's new ip command. But rather than build on that, Rusty Russell just ported the old ipfwadm infrastructure, called it ipchains (and later replaced it with iptables). There was some overlap between Rusty's work and tc, and this has grown over time. For example, the tc U32 filter could do most of the packet tests ipchains introduced over time, on day 1. Technically the modular framework provided by tc was more powerful than ipchains, and inherently faster. Tc was however near impossible for mere mortals to use even if they had good documentation. There were some outside efforts to fix this - tcng [sourceforge.net] was an excellent out-of-tree attempt to fix the complexity problems of tc. But in what seems like a recurring theme, it was out of tree and ignored. In contrast, Rusty provided ipchains with some of the best documentation on the planet. In the real world the result of these two efforts is plain to see - while man + dog uses iptables, there are maybe 100 people on the planet who can use tc.
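For anyone who has never seen it, here is roughly what a minimal tc u32 setup looks like, next to the ipchains equivalent of the same match. This is a sketch from memory; the device name and class IDs are assumptions, and it should make clear why ipchains won the popularity contest:

```shell
# tc: attach a prio qdisc, then steer TCP traffic to port 80 into band 1:1
tc qdisc add dev eth0 root handle 1: prio
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip protocol 6 0xff \
    match ip dport 80 0xffff \
    flowid 1:1

# ipchains expressed the same match in one self-explanatory line:
# ipchains -A input -p tcp -d 0.0.0.0/0 80 -j ACCEPT
```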

Another example of the same thing is IMQ [linuximq.net]. IMQ lets you unleash the full power of the traffic control engine on incoming traffic. (Natively the traffic control engine only deals with packets being sent, not incoming packets - a limitation introduced for purely philosophical reasons). IMQ was very well documented, and heavily used. The people who brought you tc had a list of technical objections to IMQ. I don't know whether they were real or just a case of Not Invented Here, but I'd give them the benefit of the doubt - they are pretty bright guys. So they replaced it with their own in-kernel-tree concoction. (For those of you who don't follow the kernel, "in-tree" means it comes with the Linux kernel. An out-of-tree module like IMQ means at the very least you have to compile the module source, and possibly the entire kernel.) For a while this discouraged the developers of IMQ so much they stopped working on it. If you follow that link, you will see it's back now. Why? Because the thing that replaced it had absolutely no documentation. They never do. So no one could use the replacement. Again, in the end, the code that was documented won the day.

By now you might be guessing where this is heading. We have two groups in the kernel competing to provide the

Don't worry, iptables and arptables aren't going to magically disappear. A ridiculous amount of infrastructure depends on both, and the nftables announcement is severely over-hyped. Having alternatives is a good thing, and it doesn't mean the sky is falling.

somebody decides they have a better way, and rather than keeping the two available until one stops being maintained they go and dump one as 'inferior'

To be fair, the kernel developers have (to my knowledge) never done this. If you have ever compiled a kernel yourself, you will have seen that new features are flagged as "experimental", older features as "deprecated", and defaults are applied judiciously.

You will most likely find that it is your distribution that is most guilty of foisting bleeding-edge, half-tested stuff on to its users. Linus and the kernel devs are (and have to be) almost fanatically conservative.

That's not the fault of "progress", it's just a Linux thing... Same thing happened with audio, file systems, and much more.

The BSDs:

* haven't changed their audio systems since their inception.

* Kept their file systems backwards-compatible for decades, and did not have a flood of XFS/JFS/ReiserFS/etc. options. There have been changes recently, but incredibly few by comparison.

* Used the powerful and simple IPF as their stateful firewall dating back to before many /.ers were born... at least 1993 or so. Only changed to PF (with very similar syntax) after IPF's license was changed, and all the BSDs still use it. There are some alternative projects, but again, even with several BSDs, there's still less churn than with Linux.

And they're down to 1.1% [w3techs.com] of all web servers, all FreeBSD. From the list of "Popular websites using FreeBSD" only one is in Alexa's top 500 and that's php.net [alexa.com]. The Alexa rankings:

It is literally less than a handful (the top four) that mean BSD even still has a presence, and 80% of that is probably just one site. I guess BSD code is in lots of places, like in OS X and embedded and routers and whatnot, but BSD is practically dead as a server (cue and queue the Netcraft and Monty Python jokes, please take a number). Who, at this point, would be interested in building a new network stack for BSD? I guess Juniper would, since they use it for Junos, but honestly not that many others...

BSD is interesting to me. I tried to use it for an internal netdisco install. I've been using Linux for at least a decade and I've built plenty of LAMP servers. Sure enough, I got NetDisco working and everything seemed great, until I decided to run an update on the system...

http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/nutshell.html#introduction-nutshell-users [freebsd.org]
You also forgot some biggies, like Netflix, oh and Apache themselves. Sampling an OS's usage numbers off of how many public-facing web servers are out there will give you very biased results. I have two FreeBSD servers running OpenBGPd and OpenOSPFd, and two that are NFS servers; there is absolutely no web server on them. They are ROCKS of stability. And this is just FreeBSD; a partner ISP I work with runs OpenBSD route reflectors.

Only changed to PF (with very similar syntax) after IPF's license was changed, and all the BSD still use it... there's still less churn than with Linux.

The BSD's are definitely more stable. Linux makes more progress, sometimes by adopting other projects' work when it's better. There's no way to have both rapid progress and stability, so it's good that the community has a choice (I avoided saying 'communities' on purpose).

I've been using BSD for routing and firewalling for about a decade, first with m0n0wall

cat iptables | ip2if_compile | iftables_decompile
Passing it through a compiler to the iftables virtual machine and then decompiling/describing avoids some "that phrase does not translate" problems. If one can't compile arbitrary iftables code for the vm, then the vm is formally incomplete (:-))
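The netfilter folks actually ship something close to this idea: the iptables-nft package includes an iptables-translate tool that prints the nft equivalent of an iptables rule. A sketch, with the output reproduced from memory, so treat it as approximate:

```shell
iptables-translate -A INPUT -p tcp --dport 22 -j ACCEPT
# prints something like:
#   nft add rule ip filter INPUT tcp dport 22 counter accept
```

Since the translation happens rule by rule in userspace, it sidesteps the "decompiling the VM" problem entirely: the legacy syntax is just another frontend.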

Kernel 2.4 works fine for my needs. You kids today have no idea what it is like upgrading thousands of computers at work! Especially when you have to justify to a beancounter upgrading an IP tables setup that has worked fine since October 2001. It is an enterprise standard that works, so why fix what isn't broken?

Last thing I need is another confusing IP table interface designed for teenagers.

With a modern AV I should be just fine if I do not go to questionable websites.

As long as iptables/ipchains works, and/or you don't have a ton of open ports, there's really no problem running old kernels. 80% of the routers in the world are running some really old kernels and have never gotten, and will never get, updates. Barring any newly discovered backdoors, they are as secure today as they ever were.

All malware today uses ports 80 and 443. Port-based firewalling is a meaningless ritual from the previous century.

I think you're confusing cause and effect. If we didn't have port-based firewalls, we'd still have Blaster-style worms spreading like wildfire. Because we've locked things down to a few approved ports, naturally that's where they try getting in.

Well, that was really my point. Modern malware has nothing to do with looking for an open port with a vulnerable service listening on it, and hasn't for quite some time. The malware uses port 80/443 in the sense that the "vulnerable services" are now browser plug-ins, and users who download and run stuff.

Heck, the continuous hassle of negotiating with IT over ports has moved most legitimate enterprise software onto only ports 80 and 443 as well. One reason why "everything's a web service" now is to get IT

We kids have no idea what it's like upgrading thousands of computers at work because, unlike you, grandpa, we use [Ansible [ansibleworks.com] / Salt [saltstack.com] / Chef [opscode.com] / CFEngine [cfengine.com] / Puppet [puppetlabs.com]]. And making changes to thousands and thousands of machines takes seconds to send out to all of them. A bit more time to verify, and any that are stuck can be rebuilt from scratch in a few more moments without even worrying about why it didn't work the first time.

Second point: why would you need some kind of interface to your firewall rules? It's a text f

Not to mention, keeping a lot of said management facilities operational can very often waste as much time as they save. Another example is proprietary switch stack clustering protocols. They cost as much to manage their quirks and work around their defects as they save, unless you have a truly massive and abnormally homogeneous set of systems.

It's as useless as having the option of having car windows painted black, but people wanted it, so it's there (so I've heard - I'm an end user, not a developer of this). When NAT came in, it was a pity we needed that shit due to a lack of numbers instead of having everything addressable, and now for some reason people like the smell of that shit. They think it smells like security. If you think I'm wrong, please spend at least five minutes learning how a firewall works and look up router on Wikipedia or something before you reply. You should work out from that that the devices that provide the security will still be upstream whether you have NAT or not.

The only thing I've ever heard that makes any sense is topology hiding. Not worth breaking the internet IMO, but it's the only thing about NAT that I can understand why people want, and can't really be done another way.

There is so much depth to iptables that not one in 10,000 people ever used, that getting your head around the basics was always a problem of separating wheat from chaff. You could literally route packets in circles, for what purpose I can't imagine.

I suspect the Netfilter folks haven't removed any of this, and merely hidden it.

If you weren't already +5 informative, I would have up-voted you. pf has syntax so logical it's almost like speaking English. Then, in comparison, you have to memorize a variety of command flags to get anything done with iptables.

Mind you, personally I'm a FreeBSD user and (I think?) you can't actually get iptables for *BSD, and I don't have much use for a complicated firewall setup,

You do know that most commercial software firewalls are Linux-based, right? They do not advertise this, but Checkpoint, Juniper,... are all Linux iptables under the hood. The problem with OpenBSD is that hardware support sucks.

The second point is however, that the rule syntax is _not_ the reason for the replacement.

I take it you don't live in a country where revolutions have taken place. A perfectly enforced law which no one dares break, may not be a bad idea to replace... Your argument is refuted. Protip: Try not to speak in absolutes, or you'll almost always be wrong.

Yes, the alarm bells go off in my head as well when someone says "rewrite": does this person know WTF he's talking about? I know most battle-tested code isn't all that pretty, and that many people like to rewrite what they don't understand, what doesn't follow their ideas of good code or their design patterns, or what isn't generic or flexible enough. Worst are those that take a look at a convoluted mess and decide it's too complex to under

Your horse and buggy _are_ broken for most applications in the modern world. Sure, they do their original job, but the job of transportation has changed. Iptables is not broken, it does the job it is supposed to do today just fine.

Iptables took me a while to get my head around back in the day (which I had to do in a hurry) but had a few advantages over ipchains. Maybe this one is a bit less confusing for newbies or has some other advantage? Let's give it some time.

You misunderstand what "broken" means. Rather obviously it means "broken for its main application". Iptables is not broken for its main purpose. (No, this is not about "great", we are talking engineering here, not marketing.) That horse carriage is broken, as it does not adequately fulfill the task of transporting goods and people today in most instances. The rest of your posting is just cheap polemics that try to distort what I wrote. Pathetic.

What's the point of rearchitecting it so that it can handle large rulesets efficiently if the whole thing will just be abstracted away in a VM anyway? What's next, a JavaScript interpreter in the kernel?

People say that, yet immediately turn around and say you should not expect a program written for Windows 95, or Linux in 1995, to run on a modern computer.

There is a difference between incremental upgrades and wholesale rewrites. Windows 7 (to take one example) isn't a new OS written from scratch, but an incremental improvement on NT, which dates back to 1993. Rewriting from scratch is almost always a bad idea [joelonsoftware.com].

People say that, yet immediately turn around and say you should not expect a program written for Windows 95, or Linux in 1995, to run on a modern computer.

Win 95 programs still run great on windows 7. I use a few of them regularly and the Linux ABI is fully backwards compatible. I expect not to see shit break.

Old code is great when changing it is going to inconvenience you personally, but replacing it with something newer is just fine and not a problem when it is only going to inconvenience someone who is not you.

It's called discipline. There is no technical reason existing shit must break to make progress. If you want to develop something new to replace something old, just provide a compatibility layer for the existing interface, as Linux has always done.

Moving packets in and out of kernelspace will kill performance... Well, I guess we'll see, anyway. iptables is used for more than just someone's DSL gateway, and even there, the hardware in use for those is already on the lean side.

Will NFTables still allow packets to be selectively (eg certain TCP SYN packets) passed to a user-space filter which both mangles the packet and dynamically changes the NATting? With IP tables this was relatively simple (both defining the Iptables rules and writing the userspace filter)
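For reference, the iptables side of that setup is indeed short. A sketch (the queue number and SYN match are assumptions; the userspace side reads packets via libnetfilter_queue and issues a verdict per packet):

```shell
# divert new TCP SYNs to userspace queue 0; the userspace filter can
# mangle each packet and return an ACCEPT or DROP verdict
iptables -t mangle -A PREROUTING -p tcp --syn -j NFQUEUE --queue-num 0
```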

There might be spots where this is true, but in others it will improve performance, e.g. skipping unneeded operations that occur on all rules, like incrementing counters. Remember, iptables is actually somewhat of an abstraction itself.

I haven't been keeping up with how much intelligence vendors are putting in ethernet cards these days, but as far as I know, actually having firewall rules use on-card features has sadly lagged way behind what is offered. It would seem to me that, on the one hand, using a simp

The problem with that is now you have no means of getting into your machine remotely over ip after the vendor fucks it up. Vendors shouldn't be disabling firewalls as permanent solutions, but while troubleshooting, it does make sense to do it temporarily in order to ensure the firewall is not at fault. If your system is a highly sensitive target, you should already have means in place to troubleshoot problems without exposing yourself. Tell the vendor the procedures for that.

1) IPCHAINS was nice, simple, and usable. IPTABLES has stuff scattered all over the place. This may affect me more as a Gentoo user who configures his own kernel. I have to remember to...
a) enable Netfilter
b) enable "Advanced netfilter configuration" so that I can specify multi-port matches
c) check off the necessary items in "Core Netfilter Configuration"
d) check off the necessary items in "IP: Netfilter Configuration"
That's on a simple home system that doesn't attempt NAT/Masq/Routing/etc.
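To illustrate the scatter, a minimal set of the options hiding behind those menus for a simple stateful host firewall. The option names are as I recall them for kernels of this era; double-check them in menuconfig:

```
# .config fragment (illustrative)
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
CONFIG_NF_CONNTRACK=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_FILTER=m
```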

2) A problem with putting detailed specifications into the kernel is that when I want to enable new features (not just new rules), I have to tweak the kernel, rebuild it, and reboot. If we had to do this with new MTAs or crons or other system programs, there would be a huge outcry. Moving this out of the kernel looks logical.

Is NFTables suitable as a generic packet classifier, or is it strictly limited to packet filtering? Van Jacobson's net channels offer the possibility of extraordinary improvements in efficiency and performance, great simplification of drivers, ease of development, and much improved flexibility. The one missing piece is a flexible packet classifier. While NFTables looks like it incorporates many of the essential ideas, it isn't clear whether it is built with this in mind. If not, I'd like to see this fixe

For good reason. That's just reflecting the fact that a program has to check a series of instructions. The code can't check multiple conditions at the same time. Branching out into a series of tables for different things is the best way to reduce the unnecessary checks by filtering out those that do/don't apply. My first rule is always to allow RELATED/ESTABLISHED packets, so only the first packet of any new connection goes any further.
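In iptables terms, that's the classic opening stanza. A sketch (the conntrack match superseded the older "state" match at some point, and the SSH rule is just an example):

```shell
# fast-path packets belonging to existing connections, then vet only
# the first packet of each new connection, then drop everything else
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -m conntrack --ctstate NEW -p tcp --dport 22 -j ACCEPT
iptables -P INPUT DROP
```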

openwrt puts that related/established rule first and it sucks. If I want an internet curfew I want it to take effect immediately, not when the current connection is finished. Same for an IP address that trips up the malware triggers. It has to stop and it has to stop now, not when the payload has finished downloading.

Every firewall GUI for Linux is underdeveloped or abandoned and lacks complete control over all functions.

Different use cases need different features. The abandoned GUIs were made by people who didn't understand this when they began their projects. The development taught them that lesson.

No one should be forced to manually configure this, it should be point and click similar to free firewalls for Windows (excluding discussion on app based blocking in Win).

one-click solutions to complex problems are always bad. That said, simplistic solutions are simple to configure with iptables the way it is.

I hope someone develops a GUI for "NFTables", because manually configuring iptables (using ufw, with its lack of a complete-control/fine-tuning GUI, or some other method) sucks. Some assume you know all about Linux networking.

Well, if you're designing a complex solution for a complex networking problem, you will need to know a bit more about the guts of a system regardless of the platform. A gui flexible enough

The tendency to try to abstract activities that are essentially programs into a set of non-program-like objects has generally led to accumulation of byzantine cruft. This is especially true in the network packet processing and authentication domains. This isn't just a problem with GUIs. CLI utilities often want to just present the user with "objects" like rules and lists, and try to conceal flow-control concepts by nesting contextually meaningful layers of these objects -- the meanings of these layers of

I have tried several GUIs for iptables, and also a few firewall scripts for both a few servers and my notebook. In the end, I have always been frustrated.

Finally, for my rather simple needs, I have a simple (~ 100-150 lines) file in "iptables-save" format, and a short custom init script which basically does iptables-restore from that file or saves to it.
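For anyone wanting the same setup, the whole "framework" is essentially two commands (the file path is my own convention, not a standard):

```shell
# save the running ruleset to a file, in iptables-save format
iptables-save > /etc/iptables.rules

# at boot (from the init script), load it back in one atomic operation
iptables-restore < /etc/iptables.rules
```

The atomicity is the underrated part: iptables-restore swaps in the whole table at once, so there is no window where the machine is half-firewalled, as there can be with a script that issues rules one by one.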

I am not convinced that a GUI would be clearer / more readable / more flexible than that. At least, the ones I tried were not.

Snapgear had a pretty good GUI for iptables with an option for command-line rules if you needed them. Of course once you start doing something repetitive, it's hard to beat a script with a loop, or even cut and paste, instead of ticking hundreds of boxes.

I don't think fucking needs a new GUI. The current touch-based interface works just fine. Most people don't need any documentation for it, but if you really need it, I think there's a lot of third-party stuff explaining every fucking detail. There are even videos demonstrating its use, look under "porn".