Indeed, its advantages are numerous and well known. Among the most compelling: it is free, it is open to everybody, users can customize it to fit their needs, and there is a community of thousands -- perhaps millions -- of eyes on the code to spot bugs or flaws so they can be fixed quickly, before cybercriminals exploit them.

When the source code is "open to the world, you are going to have multiple eyes viewing the same configuration," said Andrew Ostashen, security engineer at Redspin, "so if issues arise, the owners will be able to remediate faster."

Still, world conqueror or not, open source is not perfect, or even the right fit for everybody. That is the warning from a number of security and legal experts who, while they agree in general with Ostashen and are not issuing blanket condemnations of open source, continue to caution both organizations and individual users.

It is critical, they say, to be aware that some of the characteristics that make it so attractive also make it risky. Obviously, if the flaws in code are exposed for all to see, criminals can see them as well. And even millions of eyes on open-source code are no guarantee that every flaw will be found and fixed.

"There have been claims that open source software is inherently more secure due to the openness and the 'millions of eyes' that can review the source code," said Rafal Los, director of solutions research at Accuvant. "This was thoroughly debunked by bugs like Heartbleed and others."

Indeed, Kevin McAleavey, cofounder and chief architect of the KNOS Project, somewhat sardonically refers to it as "open sores."

"Open source publishes the source code, and many eyes claim to review it, thus exposing any possible bad code," he said. "And yet ... Heartbleed. The defective code was right there for those 'many eyes' to spot since its release in February 2012, yet nobody spotted it until more than two years later, after the exploits had become overwhelming."

Another example he and others cite is the "GHOST" exploit in the GNU C Library (glibc), which dates back to 2000 but was discovered only in 2015.

"Again, nobody ever spotted that one either until after exploits were piling up like cordwood," McAleavey said. "There was also the 'Shellshock' exploit in the Bash shell, which similarly was published, seen by many eyes and dates back to version 1.03, released in 1989."

That is because millions of eyes don't mean all those eyes are qualified to spot flaws.

"Just because you have a critical mass of people reviewing the code, are they qualified to do so?" asked Aaron Tantleff, a partner at Foley & Lardner. "There are no credentials to speak of, or certification that can be given to code reviewed by the open source community."

That is McAleavey's view as well. "Just because the source code is there doesn't mean that all of those eyeballs understand what the code actually does, or does incorrectly," he said.

And even if flaws are spotted and patches created, that doesn't guarantee they will be installed in every device or system that could be affected.

Tantleff said recent history is proof. "One need not look back very far to find examples of the risk of open source in one's environment," he said. "Park 'n Fly and OneStopParking.com suffered from attacks due to an open-source security vulnerability that existed in the Joomla content management platform.

"A security patch had been issued well before the attack, but unfortunately the patch was never installed," he said.

McAleavey, who said he started working with Linux, one of the most popular open-source operating systems, when it came on the scene more than 20 years ago, said this problem exists largely because open source tends to exist as "two separate entities."

In the case of Linux, "there is the 'kernel team,' which is the primary operating system itself, and then there are 'application maintainers,'" he said.

"Any changes to the Linux kernel itself still have to be approved by (Linux creator) Linus Torvalds personally or through one of his handful of trusted kernel maintainers. They, and only they, determine what happens to the core kernel OS itself," he said.

"But they have no interest whatsoever in what happens among the literally thousands of other open-source developers who each maintain a single application or 'package' included in the various distributions -- 'distros' -- of Linux. They're pretty much on their own."

That, he said, has led to "absolute anarchy in userland. And that's not good for stability or security. No one is in charge."

Los said closed-source software is "just as susceptible to being 'abandoned' as open source," but noted that the incentive to maintain and update commercial or proprietary software is there, "if the vendor truly cares for their product quality."

But, like McAleavey, Los said open-source components used in commercial applications "are a massive problem, primarily because they're forgotten. Take, for instance, the OpenSSL library and the issues that popped up when a series of major flaws were discovered in it. Open-source and commercial software alike fell victim to the dire need to patch, but where OpenSSL was used in commercial applications, many of the end users simply weren't aware that it was there and so didn't know it needed to be patched."
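One way for users to discover such forgotten copies is to scan application binaries for the version banners OpenSSL compiles into its code. The following is a minimal sketch, not a complete detection tool; the function names and the scanning approach are this article's illustration, not an official utility:

```python
import re

# OpenSSL embeds banners such as b"OpenSSL 1.0.1e 11 Feb 2013" in its
# compiled code, so a raw byte scan of a binary can reveal bundled copies.
VERSION_RE = re.compile(rb"OpenSSL \d+\.\d+\.\d+[a-z]?")

def find_openssl_versions(blob: bytes) -> list:
    """Return any OpenSSL version banners found in a binary blob."""
    return sorted({m.decode("ascii") for m in VERSION_RE.findall(blob)})

def scan_file(path: str) -> list:
    """Scan one file on disk for embedded OpenSSL version strings."""
    with open(path, "rb") as f:
        return find_openssl_versions(f.read())
```

Any banner in the range 1.0.1 through 1.0.1f would flag a binary potentially exposed to Heartbleed and worth patching or replacing.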

Tantleff noted the same problem. "Just because a patch exists doesn't mean the problem is gone. Someone still has to install it. Generally, there's no 'auto-update' for open-source applications," he said.

The benefit of customizing or modifying the code to fit the needs of a developer or organization can boomerang as well, he said. All those modifications lead to "many modified or forked versions of open-source programs. Many of those modified applications are re-published for the world to use. The question then becomes which version do you use? Sometimes you cannot tell."

That, he said, can mean that a user or developer "thinks he has the patched application, but in reality installed a version that was based on the unpatched application, and the vulnerability remains."

Open to modification also means it could be open to mischief. Tantleff said the code "could later be injected with malware, or worse, specifically written to address an issue that people are seeking open-source applications for, with malware hidden inside from the start -- malware by design. Unfortunately, none of this is theoretical, as there are examples of each of these."

Finally, a community of thousands makes it difficult to hold anyone accountable for legal or compliance problems. "If nobody's in charge, who do you sue?" asked McAleavey.

Of course, defenders of open source say the community surrounding it can be much more dependable than a company with a proprietary system. Writing in CMS Critic, Daniel Threlfall noted that "if the single company managing the proprietary system goes under, what then? An open-source CMS, on the other hand, has a life of its own. No one entity owns it. Thus, there will always (presumably) be a support network and stable foundation upon which it can exist. The community is the stability."

How, then, can users get the best out of open source while avoiding the worst?

The answer may not be easy, Tantleff said, but it is relatively simple: "Companies should treat open source like all other software," he said.

It starts with knowing the source. "It is important that one knows where the software is coming from -- that it's a trusted source," he said. "You should also gather diligence on the software from other trusted sources."

Even if the source is trustworthy, "they need a proper, controlled process and program for vetting software before deploying it within the enterprise."

Finally, "they also need a program in place to manage and monitor the software once in the environment, including making sure the organization is aware of vulnerabilities, available patches, and most importantly, ensuring that the patches are installed," he said.

Copyright 2016 IDG Communications. ABN 14 001 592 650. All rights reserved. Reproduction in whole or in part in any form or medium without express written permission of IDG Communications is prohibited.