
NotInHere (3654617) writes In 1996, Markus F. X. J. Oberhumer wrote LZO, an implementation of Lempel–Ziv compression that is used in various places such as the Linux kernel, libav, OpenVPN, and the Curiosity rover. As security researchers have found, the code contained integer-overflow and buffer-overrun vulnerabilities in the part of the code responsible for processing uncompressed parts of the data. Those vulnerabilities are, however, very hard to exploit, and their scope depends on the actual implementation. According to Oberhumer, the problem only affects 32-bit systems. "I personally do not know about any client program that actually is affected," Oberhumer says, calling the news about the possible security issue a media hype.
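For readers unfamiliar with this bug class, here is a minimal sketch of how a length-accumulation overflow can defeat a later bounds check. This is an illustrative reconstruction, not the actual LZO source; the function name and the run-length encoding are invented for the example.

```cpp
#include <cstdint>
#include <cstddef>
#include <cstring>

// Illustrative sketch of the bug class (NOT the real LZO code): a
// literal-run length is accumulated from the input stream, and on a
// 32-bit system the repeated additions can wrap size_t around, so the
// later bounds check passes even though the copy then overruns "out".
// Returns the number of bytes copied, or 0 if the run was rejected.
size_t copy_literal_run(const uint8_t* in, size_t in_len,
                        uint8_t* out, size_t out_len) {
    size_t run = 0, i = 0;
    while (i < in_len && in[i] == 0) {  // long runs encoded as 0x00 bytes
        run += 255;                     // can overflow on 32-bit size_t
        ++i;
    }
    if (i >= in_len) return 0;
    run += in[i++];                     // final partial length byte
    if (run > out_len || run > in_len - i)
        return 0;                       // defeated if "run" wrapped small
    memcpy(out, in + i, run);
    return run;
}
```

This also shows why the bug is hard to trigger in practice: reaching the wraparound requires feeding in an enormous run of length bytes, i.e., a compressed block far larger than most callers will accept.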

A cynical man would say that none of those four things you listed is that important. :) I think only the compressed init filesystem is widely used on PCs, and we would have enough space to leave it uncompressed anyway.

I'm old enough to recall when many people argued we didn't have to worry about various (then theoretical) JPEG vulnerabilities because they would be "extremely hard to exploit". But once it becomes known that something is possible, people have repeatedly proven themselves extremely clever in figuring out how to accomplish it.

If I were on the Rover team, I might not worry -- but terrestrial users of LZO compression should at least start thinking about how to mitigate this.

In this case, it's not just "extremely hard to exploit" (which would mean the NSA had it done 10 years ago and the other black hats 5). It appears to be impossible: causing the overrun requires a compressed block larger than the affected programs will accept. (Of course, this doesn't preclude the possibility of other bugs that allow a larger compressed block through.)

File system drivers in general are not properly security vetted. You can do interesting stuff to a Linux box if you put ext4 on a fake device and start messing with what is on the disk while it is being read. Many device drivers have similar problems; you could find a Linux device driver with a problem and make a fake piece of hardware resembling the real thing while exploiting the bug.

This is pretty much unfixable. While most core OS code is of high quality these days, there is just too much driver code.

Whether you consider this issue hype depends on your answer to the "if a tree falls in a forest and there's no one to observe it..." thought experiment.

The author of LZ4 has posted a summary with regard to LZ4 [blogspot.co.uk] (both LZO and LZ4 are based on LZ77 compression, and both contained the same flaw): the issue has not been demonstrated to be exploitable in currently deployed programs, given their configurations (a rather angrier, later-redacted reply was originally posted [blogspot.co.uk]). So at present the issue is severe but of low importance. If a way is found to exploit it in popular deployed programs without changing their configuration, it will become highly important as well -- but since the flaw has now been patched, newly deployed systems hopefully won't be vulnerable.

Given the number of security issues related to buffer overruns, I wonder whether C and C++ should provide a safe buffer type that would help alleviate these issues. Sure, it might compromise performance slightly, but that may be acceptable compared with the alternative of unexpected failures from an unforeseen buffer overrun.
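For what it's worth, standard C++ already offers something close to that safe buffer: the bounds-checked at() accessor on std::array and std::vector throws an exception instead of overrunning. A minimal sketch (the helper function name is invented for illustration):

```cpp
#include <array>
#include <cstddef>
#include <stdexcept>

// Sketch of a "safe buffer" using what C++ already provides: at() is
// bounds-checked and throws std::out_of_range rather than silently
// reading past the end. (read_checked is an invented name.)
int read_checked(const std::array<int, 4>& buf, std::size_t i) {
    return buf.at(i);   // buf[i] would compile too, but is unchecked
}
```

The performance cost is one comparison per access, which matches the "slightly" hedged above; hot loops can still use unchecked indexing after validating the range once.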

Sigh. Every time someone suggests online that perhaps we should be putting better bounds-checking constructs into languages, someone else will flame on and say "programmers should just, well, program better!" True, yet a pipe dream at the same time -- as history shows.

Fact is, yes, programmers should be doing proper bounds-checking. But programmers are people, and people suck (I'm a member of both sets, so I can personally corroborate this). It's time we stopped saying over and over that "programmers should just program better."

And to further the argument: Is a glass manufacturer lazy/stupid/careless when he sells non-bulletproof glass for $X, instead of making it a point to only sell bulletproof glass at 100 * $X?

The same way I just have to accept that the door to my balcony is not going to stop a man with a ladder, a sledgehammer, and ~15 minutes of time, I have to accept that "normal" computer security won't keep me 100% safe unless I invest some time and effort myself, or pay someone to make the effort, way above "the norm," to make it so.

Because it's not the right solution for every problem, and if you make languages that *force* this kind of behaviour, the shitty programmer will just put their bugs elsewhere. The solution is to simply write better code.

The folks who designed ALGOL already thought about this and solved it. They even built entire mainframe operating systems using ALGOL variants, with fine-grained security protection inside each program and each library -- not just "the MMU will hopefully contain any C-based exploit at the process level." They have sold this as the MCP OS since 1961. They still do. The Russians have a similar OS, and so did Britain's ICL corporation.

But you know what? Cheap and nasty C, Unix, MS-DOS, and Windows have somehow eliminated them.

When I see that expression "C/C++" used in this particular context it raises my hackles, because it indicates the writer does not understand the difference.

In C the programmer is free to USE a buffer safely, by doing his own bounds checking. In C++ the programmer is free to use C++'s non-overflowing, dynamically allocated, self-growing constructs, as well as a simple C-style array or fixed-size buffer. C++ makes it substantially easier to use a buffer safely.
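To illustrate that distinction, here is a sketch of both styles side by side: a C-style append with a hand-written bounds check, next to a C++ append onto a self-growing std::vector. The function names are invented for the example.

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Sketch of the two styles described above. In C the programmer must
// do the bounds check by hand; in C++ a std::vector grows itself, so
// this overflow cannot happen. Names are invented for illustration.
void append_c_style(char* dst, std::size_t cap, std::size_t* len,
                    const char* src, std::size_t n) {
    if (*len + n > cap) return;        // manual bounds check, easy to forget
    std::memcpy(dst + *len, src, n);
    *len += n;
}

void append_cpp_style(std::vector<char>& dst, const char* src, std::size_t n) {
    dst.insert(dst.end(), src, src + n);  // vector reallocates as needed
}
```

The C version is only safe if every caller remembers the check (and gets it right -- note that *len + n can itself overflow); the C++ version centralizes that responsibility in the container.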

Three words: goto considered harmful.
Safe code isn't only about the ability to make it safe. It's about a set of rules the environment or language imposes on the programmer to make it *easier* to make it safe and *harder* to make it unsafe. The programmer can still choose the language, so it's a self-imposed set of rules.
Like garbage collection. Of course any decent programmer can do his own garbage collection. But if you can keep the programmer from having to do that, you (1) free him up to do other things and (2) eliminate a whole class of bugs.

Why still program in C? Simple: C is easy to glue to everything, because it assumes very little about your data structures. And because you have to reinvent data validation (safe buffers) for every interface again, there is a huge risk there.

The obvious solution is to use proven libraries for these problems (OpenSSL, liblzo). However, if one of these libraries has a bug (or is not obvious to use correctly), the problem is in many programs at once.

Once you start checking bounds, counting references, making strings safe, cleaning memory, and collecting garbage, you're in the realm of ObjC, Java, and other higher-level languages. They exist, they are available, and they can be used to implement any algorithm imaginable. Yet programmers still use C, assembler, and even PROMs...

If we're talking about stuff in the kernel, we're talking about needing performance with a minimal runtime. That usually rules out the usual garbage collection. At that point, you should probably consider C++, which provides string and array equivalents with bounds checking but is as efficient as C.

But that's what I mean. However, C++ is slower than C; C is simpler to implement, and virtually any platform has a C compiler, but C doesn't do a lot of things out of the box. You choose the tool best suited for the job. I can't program a PIC in JavaScript, but I can build a website with it.

C++ is very rarely slower than C (particularly if you can disable exceptions), and is sometimes faster (std::sort vs qsort). It doesn't actually require a complicated runtime. These are important differences between it and Java.
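A small sketch of the std::sort vs. qsort point: qsort goes through a function pointer for every comparison, while std::sort's comparisons can be inlined by the compiler, which is the usual explanation for the speed difference. Function names here are invented for illustration.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// qsort-style comparator: every element comparison is an indirect call
// through this function pointer, which the compiler cannot inline.
int cmp_int(const void* a, const void* b) {
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

// Sort the same data both ways; std::sort's operator< comparisons can
// be inlined at the call site, which is why it often benchmarks faster.
void sort_both(std::vector<int>& cpp_style, std::vector<int>& c_style) {
    std::sort(cpp_style.begin(), cpp_style.end());
    std::qsort(c_style.data(), c_style.size(), sizeof(int), cmp_int);
}
```

Both produce identical results; the difference is purely in how much the compiler can see through the comparison call.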

If you need C-like performance, want security, and have a good C++ compiler (g++, clang, to some extent VC++), C++ is a very good choice.

How could you think switching to C++ would solve any issues, knowing that a translator for the language can itself be compiled in C? It's the old Russian joke about the garage key. Your antivirus has no chance when the payload arrives in the higher-level language and then gets translated back down to C, because it has already been accepted at the door to the assembly line (the native language). Slick key, that garage key.