Next steps for continuous network monitoring

A blend of new guidance, increased oversight and expected legislative reforms is collectively elevating the importance of continuous monitoring of government networks, a panel of security experts said Monday at the 30th annual Management of Change conference held by the American Council for Technology and Industry Advisory Council.

All three developments reflect the conclusion that agencies must monitor their networks continuously and manage security risks more effectively — and move beyond current requirements to file what many agree are outdated security compliance reports under the Federal Information Security Management Act.

As the National Institute of Standards and Technology now sees it, and spells out in upcoming new guidance, agencies should take a three-tiered approach to continuous network monitoring, said panelist Marianne Swanson, senior advisor for information system security at NIST. Continuous monitoring needs to occur at the organization level, mission level and system level, she said.

Most of NIST’s network security guidance over the past 10 years has been at the system level and focused on certification and accreditation, she said. What’s needed now is assessment and authorization using a risk management framework.

Over the next few weeks, NIST plans to publish guidance on security strategy, performance metrics, risk tolerance and the frequency and types of monitoring controls agencies should consider using on an organizationwide basis, Swanson said. NIST also plans to provide recommendations for managing, configuring, gathering and reporting monitoring results at the mission level and approaches for implementing monitoring tools and assessing automated security controls at the system level.

Swanson stressed that, although automation is important, “continuous monitoring shouldn’t be all about automated controls,” arguing that some resources must still go toward management, personnel and physical security.

NIST also plans to release two other documents next month: One is a contingency planning guide; the other focuses on managing supply-chain risks. The latter will include recommended practices agencies should follow in procuring software and hardware products, which industry can also use in developing supply chain practices that meet government requirements.

She said NIST won’t define when systems may be too compromised to continue operating. “NIST guidance will always say that the senior official makes the risk decision,” she said. But “they need the evidence on how the system truly is operating, so they can make a judgment. Right now, they don’t have that.”

How well agencies manage their overall network security risks was once the job of the Office of Management and Budget. That job now belongs to the Homeland Security Department, said Matt Coose, director of federal network security at DHS, who also spoke on the panel.

DHS is concentrating its efforts in four areas, he said: assessing threats, through the U.S. Computer Emergency Readiness Team; influencing security policy; enabling agencies to execute those policies, through best-of-breed examples; and measuring how agencies are doing, to discern what’s working.

That effort is still only beginning to get traction, said Karen Evans, former OMB administrator for e-government and IT, who chaired the discussion. That’s in part because DHS is still getting up to speed with that responsibility and because agencies are still only obliged to follow current FISMA rules.

Efforts to reform the FISMA rules took an important step forward last week when the House Oversight and Government Reform Committee approved a bill requiring continuous automated cybersecurity monitoring.

But according to panelist Erik Hopkins, a professional staff member on the Senate subcommittee that examines government information technology, it will likely be the end of the year before work on the FISMA Act of 2010 is completed.

In discussing Congress’s perspective on FISMA, he said, “It is protection of the information we care most about. But you have to define what the boundaries are…and what is a system.” By some definitions, he said, a spreadsheet could be considered a system.

“We want to know what the American public is getting back for the $10 billion (annually) spent on security,” he said.

Bob Dix, vice president of U.S. government affairs and critical infrastructure for Juniper Networks, voiced concern, however, that even with all the progress that has been made because of FISMA, many agencies are still operating aging software systems that were never designed for the kinds of security demands being imposed on them today. And many other government systems are owned and managed by commercial parties, he said, putting them outside FISMA's full reach.

About the Author

Wyatt Kash served as chief editor of GCN (October 2004 to August 2010) and also of Defense Systems (January 2009 to August 2010). He currently serves as Content Director and Editor at Large of 1105 Media.


Reader comments

Thu, Jun 3, 2010
Scott Smith

Security monitoring is ONLY effective when an organization has established guidelines for what traffic/use "SHOULD" be on the network. By using a "white list" approach (mapping approved applications to perimeter ports, shutting off non-essential access, and monitoring usage and communications), management is placed in a much stronger position than if it tries to fend off the millions and millions of threat actions in the wild. Put another way, monitoring 12 approved business applications and 500 users is a lot easier than defending against and watching out for the millions of things you want to keep out. Current alarm-response methods only perpetuate the madness and the failed methods way too many still embrace.
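The "white list" idea the commenter describes can be sketched in a few lines: define the approved application-to-port map up front, and flag anything observed outside it. This is an illustrative sketch only; the application names and port numbers are hypothetical, not a real policy.

```python
# Sketch of a "white list" monitoring check: anything not explicitly
# approved is flagged for review. The mapping below is illustrative.
APPROVED = {
    "email": {25, 143},
    "web": {80, 443},
    "dns": {53},
}
ALLOWED_PORTS = set().union(*APPROVED.values())

def audit(observed_ports):
    """Return the set of observed ports not tied to any approved application."""
    return set(observed_ports) - ALLOWED_PORTS

# Port 6667 is not in the approved map, so it is the one flagged.
print(sorted(audit([443, 53, 6667])))  # [6667]
```

The point of the design matches the comment: the reviewer only reasons about a dozen approved entries, not an open-ended catalog of threats.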

Wed, May 26, 2010
Kevin
Dayton

Both pre-deployment independent certification/accreditation and real-time monitoring are important, but both only look for the 'known' risks per a checklist. (The alternative, noting all changes, isn't realistic given today's diverse and complex systems.) Only professional hackers (Red and Black Teams) can find the 'unknown' weaknesses. One way to make automated monitoring easier is to force resets back to a baseline -- essentially removing the persistence. There are many ways to do this: boot from CD/.iso, revert a VM to a previous image, etc. By analyzing repeated deltas from the baseline, trends and real anomalies become significantly more apparent.
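The baseline-delta analysis this commenter suggests can be sketched as a comparison of content hashes against a stored baseline snapshot. A minimal sketch, assuming in-memory file contents; the paths and data are hypothetical examples, not real system state.

```python
# Sketch of baseline-delta monitoring: hash a set of files, then report
# which paths differ from (or are missing in) the stored baseline.
import hashlib

def snapshot(files):
    """Map each path to a SHA-256 digest of its contents (bytes)."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}

def delta(baseline, current):
    """Return paths whose digests differ from, or are absent in, the baseline."""
    return {p for p, h in current.items() if baseline.get(p) != h}

baseline = snapshot({"/etc/hosts": b"127.0.0.1 localhost\n"})
current = snapshot({"/etc/hosts": b"127.0.0.1 localhost\n10.0.0.5 rogue\n"})
print(delta(baseline, current))  # {'/etc/hosts'}
```

Repeating this after each forced reset, as the comment proposes, turns monitoring into a comparison against a known-good state rather than a search through arbitrary changes.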
