Trend Micro Chairman Claims Open Source Software Is Insecure

“Steve Chang, the Chairman of Trend Micro, has kicked up a controversy by claiming that open source software is inherently less secure. When talking about the security of smartphones, Chang claimed that the iPhone is more secure than Android because being an open-source platform, attackers know more about the underlying architecture.”

…it just sums up perfectly one of the reasons why I think antivirus manufacturers should be banned from the face of the Earth. As soon as possible, and definitely.

Now, if only we could get a similar quote from the head of McAfee… You know, just for the fun of it.

I sometimes wonder whether the majority of the worms/trojans/viruses out there are the result of anti-virus companies creating these infections to justify their continued existence – much the same way one sees ‘conditions’ being created in the US to sell medication: if you’re shy, apparently it isn’t a personality trait, it’s a ‘treatable illness’ :/

Really, Trend Micro seemed like a step above McAfee and Norton as a company years ago, but they’re quickly destroying their own image with their latest actions. Now they’re right up there with the big boys as a company I would never recommend, and in fact would recommend against. I don’t remember whether I ever paid for and used their suite in the past, but now I hope I didn’t.

Well, there’s always Microsoft Security Essentials, which so far is probably better at staying out of your way and not nagging you (no paid “subscriptions”). Too bad little Windows fleas like these guys would have a shit fit and cry “antitrust” to the legal system if Microsoft bundled its malware/virus protection with Windows, as should be done in an ideal world. In an ideal world, no matter what some pathetic “anti-virus” company says, improving the security of an operating system should always be allowed, under any circumstances… even despite antitrust concerns. It should be an exception.

Give the guy a break. What else would anyone expect him to say? He should have kept his “words of wisdom” to himself, though.

The guy doesn’t deserve a break. He’s sitting on a giant cushion of cash that continues to increase in size as he spreads complete bullshit FUD among people. And don’t forget his company’s past actions. Remember the patent lawsuit against ClamAV? Yeah… shows how much they care about YOUR security, when they don’t want you to be able to protect yourself from viruses with an anti-virus program that THEY didn’t make. F*** them. They’re every bit as bad as the “big two” anti-virus companies these days. They’ve made it very clear that the security of their bottom line is more important than the security of their customers.

Of course closed source is inherently more secure; after all, nobody ever found a crypto fuckup that exposed the root key of the PS3, while the nasty open-source OpenSSL is clearly vulnerable and no respectable site would ever use it as a security layer!

What’s that you say? OpenSSL was fixed eons ago and Geohot released the keys? NONSENSE!

The only “validity” I see in his claim comes from the fact that while on desktop Linux security patches reach the end user quickly, on Android you have to wait not only for Google or someone else to patch the Android source, but then for your phone manufacturer and wireless carrier to push said patch to end users. An exploit can take longer to get patched on a mobile phone, so it can cause more harm than on desktop or server Linux.

But that’s not a flaw of the open-source model. Only one of the Android ecosystem.

Agreed, hence I think the whole idea of calling ‘Android’ open source is a giant fraud from top to bottom – to me, ‘open source’ means grabbing a phone and upgrading it without having to ‘root’ a device that I paid for just to get access to it.

With that being said, being open source doesn’t automatically make something more secure, any more than being closed source does.

If you code two systems with equal amounts of similar buffer overflow vulnerabilities, I’ll grant that you’d exploit the open source one first.

However, the attacker’s advantage in exploiting the open source program decreases with the number of non-malicious people who view the code. So open source security is a function of how many people are reviewing the code. It may start off less secure than the closed source one, but become more secure over time.

The closed source one may have fewer people reviewing it, and thus less chance of the vulnerabilities being removed. This is especially compounded if the developers believe it’s less vulnerable because it’s closed source. Prior to XP Service Pack 2, Microsoft had a culture of insecure coding and an insecure review process. They’ve gotten a lot better, because they don’t believe what this clown said. They know they have crosshairs on them, and attackers have become very good at probing for vulnerabilities in closed source binaries.

Just a while ago I dealt with a ZyXEL router at a customer’s site that had a “security app” from Trend Micro. The thing kept popping up in the middle of any program session on Windows; it took me two days and a lot of emails to get it removed.

To be suitable for low-level programming, a programming language should have very low runtime requirements and not hide the CPU’s power. This is what makes C and its derivatives so attractive.

Putting checks on every pointer access or modification, for example, is not acceptable at the kernel level, nor is dropping pointers altogether. The best we can do is have “smarter” compilers, which do a more in-depth analysis of the code and notice more suspicious behaviors. But that would result in massive compilation slowdowns.

For higher-level layers, on the other hand, using safer languages is doable. But at this level, there is something much more important which we don’t do yet: massive sandboxing. Limiting an app’s capabilities to what it needs in order to operate is by far the best way to minimize the impact of exploits (because there will always be some, no matter which language people code in).

Ada, Modula-2, Modula-3, Oberon and Alef have proven that you can have a safer programming language and write OSs in them. The amount of assembly that had to be written was no different than if the OS were written in C.

Sadly, from that list only Ada survived, and only thanks to the DoD.

Many programmers prefer saving typing to having their programs perform safely. Only if you have never studied proper OS design can you be led to believe that C is the only way.

There were OSs written in higher-level languages before C came into existence, and surely other systems languages will eventually replace it.

I like C, but I really feel it is about time it got replaced with a safer systems programming language.

That is why I really hope Microsoft succeeds with its Singularity ideas. I am also watching how Go and D evolve over time.

“Ada, Modula-2, Modula-3, Oberon and Alef have proven that you can have a safer programming language and write OSs in them. The amount of assembly that had to be written was no different than if the OS were written in C.

Sadly, from that list only Ada survived, and only thanks to the DoD.”

According to my brother, who had to use it at university, Ada is probably the most annoying language he has ever used in his life, making the simplest things insanely complicated to write. Maybe we should investigate this if we want to understand why so few people are using it nowadays.

Let’s not get into conspiracy theories. If all those languages you mention have disappeared, it’s because they failed to deliver in some way. I sure loved cutting my teeth on Object Pascal, but I can also understand why the world around me chose C(++) instead.

“Many programmers prefer saving typing to having their programs perform safely.”

If this way of thinking is so widespread among programmers, and there’s no way to change it e.g. by educating them differently, then the tools must change to adapt themselves to the programmer, and not the reverse. Be it by creating a language which saves typing, is powerful, AND performs safely, or by putting better compiler checks on “unsafe” languages.

“Only if you have never studied proper OS design can you be led to believe that C is the only way.”

Well, where I studied OS design, there was no mention of a specific programming language. The examples happen to be written in C, for obvious reasons, but that’s all.

“There were OSs written in higher-level languages before C came into existence, and surely other systems languages will eventually replace it.”

Before C came along, a huge number of OSs were written in assembly. What C managed to do was introduce a big enough improvement over assembly that it convinced many people to use it.

The problems which high-level languages always have when used at a low level are:

- Realtime requirements

- Performance

- Control

C managed to give developers very high performance and a fair amount of control, without forcing them to first write a 40MB interpreter in assembly, which would more or less totally void the point of using C at all. Plus, it was more fun to play with than assembly. That’s why it was so successful.

I don’t doubt that someday a programming language will do to C what C did to assembly. But it really has to address those three points, and be as fun or more fun to use than C, in order to succeed.

For my kernel, I mainly use C++, but I can understand why many people don’t: its runtime requirements are quite high, which means that I either have to carefully avoid some language features or implement some support code before the most trivial things work. And by today’s standards, C++ really is a low-level language…

Before C existed, there were already a few operating systems written in BCPL, ALGOL and PL/I, even FORTRAN dialects, just to name a few old friends to everyone here who is old enough to remember them.

For example, did you know that the first versions of Mac OS were written in a mixture of Pascal and assembly?

C’s success is a consequence of UNIX’s widespread adoption. At the time everyone wanted to play with UNIX, and coding for UNIX meant using C.

I am quite sure that without UNIX, C would never have become popular.

That was the main problem with the aforementioned languages. For a language to be a successful systems programming language, it needs to be the official programming language of a successful operating system.

“That was the main problem with the aforementioned languages. For a language to be a successful systems programming language, it needs to be the official programming language of a successful operating system.”

There’s something which puzzles me in this conclusion. If I remember correctly, UNIX was not initially C-based, right?

So why did Ritchie et al. decide to create C? What was wrong with the existing systems programming languages of the day? Why didn’t they use the official programming language of a successful operating system instead of baking their own?

“So why did Ritchie et al. decide to create C? What was wrong with the existing systems programming languages of the day? Why didn’t they use the official programming language of a successful operating system instead of baking their own?”

Apparently, B – the successor to BCPL and predecessor of C – was an awkward fit for the PDP-11. Take a look at the section entitled “The problems of B” in Ritchie’s historical treatise.

The fact that some people have coded OSs in C# or Java in practice does not necessarily mean that it is good practice as a whole. I mean, I’m sure some people have also written OSs in BASIC in the past, just for the fun of it…

Unless, of course, there’s a way to write some heavily stripped-down C#/Java code, without all the management overhead, for the lowest-level parts. I think I’ve read somewhere that it’s what Singularity does. But that more or less voids the point of using those languages at all, in my opinion, since you’d get something like C(++) with a slightly tweaked syntax. In fact, it’s even a bad idea, since it gives developers a false sense of security, and frustrates them when they realize that the simplest features of such languages are library-based.

Removing all the useless features which make mainstream desktop OSs gigabyte-large + stripping down kernels to the point where their sole task is to manage user processes + testing vital components heavily would be simpler and more effective, in my opinion.