
Originally posted on www.capitalgazette.com. By Rick Hutzell. Just weeks after Northrop Grumman got approval to begin building a new breed of mobile radar systems for the Marine Corps, the Corps has asked the defense contractor in Linthicum to change the operating system.

The Department of Defense announced a $10.2 million contract modification Wednesday to change the operator command and control software on its G/ATOR radar system from Microsoft Windows XP to a Defense Information Systems Agency-compliant Linux OS.

Ingrid Vaughan, director of the program, said the change would mean greater compatibility with the laptop computers used to control the system in the future.

In a statement released Friday, she said Microsoft Windows XP is no longer supported by the software developer, and the shift to a DOD-approved Linux operating system will reduce both the complexity of the operating system and the need for future updates.

Northrop Grumman got the go-ahead from the Marines in October to build the first four ground/air task oriented radar systems after years of development, a contract worth $206 million. If fully adopted by the military, the system could be worth $2 billion to the company.

The original proposal, issued in 2005, specified that the unit should use Microsoft Windows but left open the option for the Marines to change it at a later date.

While the G/ATOR units are being assembled in Linthicum, most of the work on the new operating software will be performed in Syracuse, New York.

Northrop Grumman developed G/ATOR as a short- to medium-range radar system that can pick out rockets, artillery, mortars, cruise missiles, unmanned aircraft systems and other targets. The company is already testing it for possible shipboard use by the Navy.

Northrop Grumman, with some 7,000 employees in Linthicum, is Anne Arundel County’s largest private employer.

Microsoft has patched a critical 19-year-old data manipulation vulnerability that’s been lurking in every version of Windows — both server and client operating systems — since Windows 95 (MS14-066). Microsoft has not released a patch for the now-unsupported Windows XP.

This critical bug in Windows SChannel, Microsoft’s implementation of SSL/TLS, is remotely executable and could be used to run malicious code on vulnerable systems by sending specially crafted packets to a Windows server. It has been rated a 9.3 on the CVSS scale. The vulnerability, called “Winshock” by some, is next on the list of bugs exposing SSL/TLS installations — like OpenSSL’s Heartbleed (for which Microsoft did release an XP patch after support officially ended) and the vulnerability in Apple Secure Transport released in the spring.

“Is WinShock as bad as ShellShock and Heartbleed?” asks Gavin Millard, EMEA technical director at Tenable Network Security. “At the moment, due to the lack of details and proof of concept code it’s hard to say, but a remote code execution vulnerability affecting all versions of Windows server on a common component like SChannel is up there with the worst of them.”

So far, no exploits of Winshock have been reported in the wild.

Winshock was discovered and privately disclosed by researchers at IBM X-Force in May. As Robert Freeman, manager of IBM X-Force Research, explained in a blog post:

This complex vulnerability is a rare, “unicorn-like” bug found in code that [Internet Explorer] relies on but doesn’t necessarily belong to. The bug can be used by an attacker for drive-by attacks to reliably run code remotely and take over the user’s machine — even sidestepping the Enhanced Protected Mode (EPM) sandbox in IE 11 as well as the highly regarded Enhanced Mitigation Experience Toolkit (EMET) anti-exploitation tool Microsoft offers for free.

Freeman acknowledges that exploitation is “tricky.” He describes how the vulnerability, which originates in “some very old code within the OleAut32 library,” can be exploited remotely via the Visual Basic Script present in all versions of Internet Explorer since IE 3.0.

This bug is significant, he says, because it shows that critical vulnerabilities can be overlooked for nearly 20 years. “It indicates that there may be other bugs still to be discovered that relate more to arbitrary data manipulation than more conventional vulnerabilities such as buffer overflows and use-after-free issues.”

Bromium Labs security researcher Jared DeMott tells us:

One of the interesting bits in this story is that Microsoft is not really saying exactly how bad this bug is for the client. The vulnerability bulletin provided calls out servers as the potential victims, but the SSL/TLS stack is used every time your browser connects to a secure website, which most are these days. And it would be straightforward for an attacker with details of this vulnerability to host a malicious site that offers “security” via the bogus SSL/TLS packets. Could a malicious website exploit IE with this bug? Until someone reverse engineers the patch, we’ll have to wait to hear about how bad it is.
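DeMott’s point, that the TLS stack is exercised every time a browser opens a secure connection, is easy to see from any TLS client. The sketch below uses Python’s standard `ssl` module (OpenSSL-backed, rather than the Windows-only SChannel) to show the client-side machinery every `https://` connection goes through; it illustrates the attack surface only, not the vulnerability itself.

```python
import ssl

# A browser (or any TLS client) drives its handshakes through the
# platform's TLS library: SChannel on Windows, OpenSSL here.
# Inspecting a default client context shows the settings applied to
# every "https://" connection.
ctx = ssl.create_default_context()

# Certificate validation is on by default, so a malicious site must
# present a certificate chaining to a trusted root. However, the
# client parses the server's handshake records *before* that check
# completes, which is why a flaw in record handling (like MS14-066)
# is reachable from any web page the client visits.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.minimum_version)  # lowest protocol version the stack will negotiate
```

The same logic explains DeMott’s scenario: an attacker does not need to break certificate validation if the bug sits in the code that parses the handshake in the first place.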

Millard says that “no proof of concept code has surfaced yet, due to Microsoft thankfully being tightlipped on the exact details of the vulnerability.” Nevertheless, “it won’t be long until one does which could be disastrous for any admin that hasn’t updated. It is of critical importance that all versions of Windows are updated due to the ability of attackers to execute code on the server remotely, allowing them to gain privileged access to the network and lead to further exploitation such as infect hosts with malware or rootkits and the exfiltration of sensitive data.”

Joe Barrett, senior security consultant at Foreground Security, says that Winshock “will most likely be the first true ‘forever-day’ vulnerability for Windows NT, Windows 2000, and Windows XP. As Microsoft has ceased all support and publicly stated they will no longer release security patches, enterprises who still have Windows 2000 and Windows XP machines will find themselves in the uncomfortable situation of having an exploitable-but-unpatchable system on their network. Security researchers and blackhats alike are most likely racing to get the first workable exploit against this vulnerability, and the bad guys will begin immediately using it to compromise as much as they can. As a result, enterprises need to immediately deploy the patch to every system they can and also begin isolating and removing the unpatchable systems to prevent serious compromise of their networks.”
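Barrett’s advice to deploy the patch everywhere implies an inventory step: checking which hosts already have the MS14-066 update (shipped as KB2992611) installed. Below is a minimal sketch, assuming a Windows host with the standard `wmic` tool available; on other platforms, or where `wmic` is missing, it simply reports the update as absent rather than raising.

```python
import subprocess

def hotfix_installed(kb_id: str) -> bool:
    """Return True if the given KB update appears in the installed-hotfix list.

    Queries Windows' `wmic qfe` command. If wmic is unavailable (e.g. on
    a non-Windows host), the update is reported as not installed.
    """
    try:
        out = subprocess.run(
            ["wmic", "qfe", "get", "HotFixID"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return False
    return kb_id.upper() in out.upper()

# KB2992611 is the update that ships the MS14-066 SChannel fix.
print(hotfix_installed("KB2992611"))
```

In practice an enterprise would run a check like this (or the equivalent query through its patch-management tooling) fleet-wide, then isolate the hosts that can never receive the fix, as Barrett recommends.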

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, and cybersecurity law.

Originally posted on opensource.com. About this time last year, I laid out some trends I saw for the coming year in government uptake of open source software. Looking back now, it appears those trends are not only here to stay, they are accelerating and are more important than ever.

In particular, I wrote that “open source will continue to be the ‘go to’ approach for governments around the world” and that “increasingly, governments are wrestling with the ‘how tos’ of open source choices; not whether to use it.”

Recent developments in the United States highlight these points.

First, the White House (via OMB and the Federal CIO) has issued a Digital Services Playbook—described in some quarters as “something of a marvel for an official government policy: it’s elegantly designed, has clear navigation, and is responsive to any device you choose to view it upon.” It is well worth a read.

At its core, the Playbook is about more agile use of reusable software and processes that focus on the customer. Central to that approach is its emphasis on open source. The final ‘play’ in the Playbook captures the notion of ‘Default to Open’. Play 8 encourages agencies to ‘Choose a Modern Technology Stack’. “In particular, digital services teams should consider using open source, cloud based, and commodity solutions across the technology stack, as these solutions have seen widespread adoption and support by the most successful private-sector consumer and enterprise software technology companies.” It clearly states, “Consider open source software solutions at all layers of the stack.”

Of course, none of this is entirely new. One can find echoes of all these points in earlier Administration policy statements. For example, its ‘Shared Services’ strategy clearly calls for use of open standards in data and information exchange and states clearly the technology principle that “open-source software solutions should be included in alternatives analyses.” (If there is one concern I have with the Digital Services Playbook, it is that there is an ‘old school’ statement that “open source solutions are [to be] evaluated alongside commercial solutions when technology choices are made,” a throwback to the days when there was confusion on this front. In fact, the US government has long recognized that open source software is, in fact, commercial software.)

The Digital Services Playbook bears a strong resemblance to the principles driving the United Kingdom’s (UK) Government Digital Service (GDS), announced in 2013. As Mike Bracken, the head of the UK’s GDS, said in an interview, “The principles by which we work are nothing more than applied common sense in the Internet age. If they make sense, use them: they’re for everybody.” The same can be said for the US government’s Playbook.

Second, the Administration also announced two other initiatives this summer. One was the creation of 18F, which will be housed at the US General Services Administration (GSA). Also known as “Digital Services Delivery,” 18F is a self-described ‘open source team’ that encompasses the Presidential Innovation Fellows and an “in house digital delivery team.” 18F has published a policy which clearly states as its mantra to:

Use Free and Open Source Software (FOSS) in our projects and to contribute back to the open source community

Create an environment where any project can be developed in the open

Publish all source code created or modified by 18F publicly

And, on August 11, the White House announced a new U.S. Digital Service, which it described as “a small team made up of our country’s brightest digital talent that will work with agencies to remove barriers to exceptional service delivery and help remake the digital experience that people and businesses have with their government.” It is the Administration’s intention that the two groups “will collaborate closely.” The U.S. Digital Service will, as far as I can tell, be the proverbial shepherd herding the cats.

My colleague, Gunnar Hellekson, Red Hat’s North American Public Sector Chief Technology Strategist, has posted a thoughtful blog: U.S. Digital Service is Born. It is well worth a read, as it highlights both the challenges and opportunities facing these recent initiatives. As he says, “the questions of talent, agency appetite for change, procurement reform, and the bureaucratic home are all implementation details.” Yes, it’s about the how of open source software (and IT reform generally); it’s not about the whether.

These initiatives, particularly 18F and the U.S. Digital Service, are just getting started. By any measure they are works in progress. While there are some lessons from the UK experience to draw on, as one report indicates, “unlike the United Kingdom’s Digital Government Service, the United States has not created a singular new entity with a large budget and spending authority. Nor has it hired dozens and dozens of top technologists at high pay grades who then set about building core digital services for the country, although 18F merits comparison. Instead, the USDS will work with federal agencies as they create or upgrade services and products.”

The question for the US, however, is not merely staff size or budget, per se. Rather, it is identifying the ‘gap’ or problem where it can make a difference, and making sure that the lessons from prior US government efforts to develop open source software are not lost.

As I laid out in my post last year, I assessed government’s growing use of open source software and observed, “If government IT professionals rely solely on ad hoc rules or seat-of-the pants judgment, this exposes government agencies to significant risk that is not, at present, properly documented or understood.” I identified at least three areas where the ‘how to’ of open source needs to be considered:

There are distinct risks associated with choosing a freebie/insourced model for use of open source software. In particular, community/freebie projects or insourced projects are likely to lack key security certifications, regular updates, support from third-party vendors, and interoperability with your critical applications.

Relying on freebie/insourced open source software effectively means a strategy of relying on internal support for critical missions, which is unknown territory and potentially expensive, given the difficulty of obtaining and retaining qualified IT and management personnel.

We could see a repeat of the failures and long-term costs associated with ‘government-off-the-shelf’ (GOTS) solutions. Although the projects may be, technically, commercial items as generally understood by governments, they present the same risks and economic liabilities as government-off-the-shelf software.

In my interview with David A. Wheeler, the long-time recognized leader in advising and working with the US government on issues related to open source software, he elaborated on the last point. “Project forking is still a big problem. … Government employees who are officially managing the project may be smart in general, but they often know little about software. Obviously, managers who don’t understand what they’re managing are often easily fooled. For example, government managers often don’t realize that most software costs are in maintenance and typically do not understand that maintenance costs can be greatly reduced (through sharing) if changes are released back to a larger community. … Part of the problem is that in most agencies, the easy thing to do is to create project-special forks, even though it is almost always the highest-cost and highest-risk approach for maintenance.”

As one step to mitigate that risk, Wheeler pointed to the open source software policy created by the Consumer Financial Protection Bureau (CFPB). In the CFPB approach, software developed using government funds must be released as open source software unless a special waiver is granted.

To their credit, 18F has built on that example and established as a key operating principle that it will publish all source code created or modified by 18F publicly. And the Digital Services Playbook in its ‘Default to Open’ play suggests for agencies to, “when appropriate, publish source code of projects or components online… and share your development process and progress publicly.”

Notably, this key Play advises agencies to “ensure that we maintain the rights to all data developed by third parties in such a manner that is releasable and reusable at no cost to the public… [and] that we maintain contractual rights to all custom software developed by third parties in such a manner that is publishable and reusable at no cost.”

In the end, 18F and the U.S. Digital Service will be successful if they lead by example and guide US agencies on the ‘how to’ of open source software. They need to focus on instilling best practices across government as they work to implement this key tenet of IT reform, centered on agility, reusability, and default to open.

This measure of achievement may be as important as, if not more important than, any specific application or tool that emerges from their efforts.