When system patches collide

Next year will be the tenth anniversary of the darkest day for AT&T's frame relay network. In April 1998, the network went down for 26 hours, infuriating high-value customers who relied on the system for their businesses. The cause? A routine software patch, applied to network infrastructure equipment, which triggered a pre-existing software flaw. The flaw caused the network to flood itself with packets in what amounted to a self-inflicted denial of service attack.

As systems have become more interoperable, most IT infrastructures have grown more heterogeneous, exacerbating the dangers of poor patch management. Yet according to Donal Casey, principal consultant at IT consulting group Morse, eight out of ten companies he meets wave software patches through without any testing at all.

No wonder resource-starved techies opt for an easy life. Testing and managing system patches is one of the most challenging configuration management problems facing today's IT departments. "The general idea is to apply the patches singly to a test environment and traditionally run it for two to three days to ensure that you account for every scenario, and do power-downs and backups and all of the usual things," says Casey. The testing is cumulative, with patches added to the mix over time, because everything needs to be tested together. If multiple vendors release multiple patches in close succession, that means a lot of work - and a longer wait before patches can be deployed with confidence.

Testing versus productivity

Potentially long testing cycles are at odds with company needs. Companies want to install security patches as soon as possible to reduce their exposure, explains Adrian Davis, senior research consultant at the Information Security Forum. By the time a vendor produces a patch to resolve a known vulnerability, the customer is already a step behind the criminals working on exploits. Testing puts them even further behind. "Patching is a game of catch-up," he says. "There is a window of vulnerability for organisations."

Consequently, patch testing cycles for those companies that do test at all are getting shorter. The average time taken to deploy a patch after release used to be 30 days, according to Chris Andrew, vice president of security technologies at patch management software vendor Patchlink. Last year the average period dropped to two weeks. "That's really not the 72 hours that's recommended," he says. "Most companies are doing a poor job of getting to that goal."

Things aren't that simple in real-world environments, where companies are likely to prioritise patches based on their criticality. "A critical patch may forgo testing completely or go through a very streamlined process to get the patch installed in anywhere between 28 and 48 hours," says Felicia Wetter, senior principal consultant at INS, an IT consultancy currently being acquired by BT. Patches of medium urgency could take between 48 hours and a week. A routine system patch that doesn't address a security issue could wait longer, possibly even until the next iteration of the product, she points out.
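To make that concrete, a policy of this kind could be captured as a simple severity-to-deadline lookup. The sketch below is illustrative only: the severity labels, deployment windows and function names are assumptions that roughly mirror the timescales Wetter quotes, not part of any product mentioned here.

```python
from datetime import datetime, timedelta

# Hypothetical deployment windows, loosely mirroring the timescales quoted above.
DEPLOYMENT_WINDOWS = {
    "critical": timedelta(hours=48),  # streamlined or no testing
    "medium":   timedelta(days=7),    # 48 hours to a week
    "routine":  timedelta(days=90),   # can wait, possibly for the next product release
}

def deployment_deadline(severity: str, released: datetime) -> datetime:
    """Return the latest time by which a patch of this severity should be deployed."""
    return released + DEPLOYMENT_WINDOWS[severity]

# Example: a critical patch released now should be live within two days.
print(deployment_deadline("critical", datetime.now()))
```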

Outsourcing patching

One way to redress the balance between urgency and quality control is to use a professional patching service, available directly from operating system and application vendors and from third parties. As part of its move towards subscription-based software, Sun purchased patch management software firm Aduva last year and rolled its product into the Sun Connect service, which manages and delivers pre-tested patches for Solaris, SUSE and Red Hat.

For wider support (including selected application vendors), independent patch tool provider Patchlink provides a server product that connects to its multi-vendor patch subscription service. Companies might still find themselves having to roll in patches from elsewhere unless the service happens to support their entire supplier base, however.

In addition to providing pre-baked installation scripts, Patchlink tests patches as best it can, according to Andrew. "But for best practice, we still recommend that customers test in their own environment," he admits. The company can't account for every potential IT configuration.

The arrival of Vista is unlikely to improve matters. Installed in early February, it went on perfectly and then, as Windows does, scuttled off to the internet to look for updates. In addition to Windows Defender signature updates, it downloaded four other patches, resolving everything from application compatibility issues through to the performance of the phishing filter. Yet the ink was barely dry on the DVD.

"With any new piece of software, when a new version is released there's always that burn-in time where a lot of issues are identified that have been overlooked," says Wetter. Things will hopefully settle down after six months, she adds, but advises against immediate installation until the release of the first service pack.

"I would wait until service pack 1 came out, because I'd like to let an early piece of software get through that stage and get the kinks worked out before I install it on my system," Wetter says.

Microsoft now advises customers to "take a defence in depth approach" to system patches, testing software updates before deployment, and protecting themselves in the meantime using additional security measures. But what interim measures should companies be taking?

Companies like Symantec have taken steps to include pre-emptive security technologies in their products, designed to spot behavioural traits rather than match specific malware signatures. Davis agrees that tools such as intrusion prevention systems can be a useful means of plugging the gap, but warns that companies shouldn't rely solely on them. IPS may also scare off some small to mid-range customers, as in-line intrusion scanning systems can be complex and expensive to implement.

Effective patch management

How can companies make patch testing and analysis as efficient as possible, no matter what their size and level of resource? Wetter advises them to roll the process into a larger configuration and change management strategy. When patches arrive, they can be graded according to their severity and the potential impact within the firm's infrastructure.

Oracle might patch a severe vulnerability in one of its products, for example, but if you're only running one instance of that product in a non-critical area of your business, that will lower the score you give it. On the other hand, a patch for a milder security flaw might affect a large proportion of the systems in your organisation, including perhaps your ecommerce servers, which could increase its urgency. Once scored, patches can then be prioritised and placed in a change management schedule for testing and deployment, says Wetter.
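A minimal sketch of that kind of scoring might weight the vendor's severity rating by how widely, and how critically, the affected product is deployed locally. The field names and weightings below are illustrative assumptions rather than any standard formula.

```python
from dataclasses import dataclass

@dataclass
class Patch:
    name: str
    vendor_severity: int      # e.g. 1 (low) to 5 (critical), as rated by the vendor
    affected_systems: int     # how many of your systems run the affected product
    business_critical: bool   # does it touch ecommerce servers or similar?

def priority_score(patch: Patch, total_systems: int) -> float:
    """Weight the vendor rating by local exposure; higher scores get patched first.

    Illustrative only: real change management schemes use their own scales.
    """
    exposure = patch.affected_systems / max(total_systems, 1)
    weight = 2.0 if patch.business_critical else 1.0
    return patch.vendor_severity * exposure * weight

patches = [
    Patch("oracle-db-critical", vendor_severity=5, affected_systems=1, business_critical=False),
    Patch("web-server-moderate", vendor_severity=3, affected_systems=120, business_critical=True),
]

# A moderate flaw across the ecommerce estate can outrank a critical one
# on a single non-critical box, as the article suggests.
for p in sorted(patches, key=lambda p: priority_score(p, total_systems=150), reverse=True):
    print(p.name, round(priority_score(p, total_systems=150), 2))
```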

Testing can be optimised even in environments where a duplicate set of systems is not available. Virtualised environments using tools like VMware can help you to duplicate systems on a single box and make the testing and staging process more manageable. Another important asset is standardised builds, which will help to remove complexity when you're testing patches.
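One way to exploit that, assuming VMware's vmrun command-line tool is available on the test host, is to snapshot a clean standardised build before each patch test and revert to it afterwards, so every patch is tried against the same baseline. The paths, snapshot names and patch script in this sketch are hypothetical placeholders.

```python
import subprocess

VMX = "/vms/standard-build/standard-build.vmx"  # hypothetical standardised build image

def run(*args):
    """Thin wrapper around VMware's vmrun CLI (assumed to be on the PATH)."""
    subprocess.run(["vmrun", *args], check=True)

def test_patch(apply_patch_script: str):
    run("snapshot", VMX, "pre-patch")              # capture the clean baseline
    run("start", VMX, "nogui")
    try:
        # Placeholder: push the patch into the guest and run your own checks
        # (reboots, backups, application smoke tests) against it.
        subprocess.run([apply_patch_script, VMX], check=True)
    finally:
        run("stop", VMX, "hard")
        run("revertToSnapshot", VMX, "pre-patch")  # back to the baseline for the next patch
        run("deleteSnapshot", VMX, "pre-patch")
```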

Businesses can also make patch rollouts more cautious, says Patchlink's Andrew, perhaps trying out tested patches on a low-risk group before extending them to the wider IT infrastructure. "Maybe you'll push it out to a couple of file servers before you put it on your Exchange server and take down your email, for example," he says.
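Expressed as a sketch, that staged approach amounts to an ordered list of deployment rings, with the patch promoted to the next ring only after the previous one has run cleanly for a soak period. The ring names, soak time and deploy/health-check stubs below are hypothetical stand-ins for whatever management tooling a company actually uses.

```python
import time

# Hypothetical deployment rings, lowest risk first.
ROLLOUT_RINGS = [
    ["fileserver-01", "fileserver-02"],        # low-risk pilot group
    ["print-01", "intranet-web-01"],           # wider but still non-critical
    ["exchange-01", "ecommerce-web-cluster"],  # business-critical systems last
]

SOAK_SECONDS = 24 * 60 * 60  # let each ring run for a day before moving on

def deploy(host: str, patch: str) -> None:
    """Placeholder: push the patch to one host via your management tooling."""
    print(f"deploying {patch} to {host}")

def healthy(host: str) -> bool:
    """Placeholder: check monitoring and event logs for regressions after patching."""
    return True

def staged_rollout(patch: str) -> None:
    for ring in ROLLOUT_RINGS:
        for host in ring:
            deploy(host, patch)
        time.sleep(SOAK_SECONDS)  # soak period before judging the ring
        if not all(healthy(h) for h in ring):
            raise RuntimeError(f"{patch} caused problems in ring {ring}; halting rollout")
```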

It's unlikely that customers will ever strike the perfect balance between urgency and quality assurance, but judicious planning and sensible configuration management will get them nearer. Those companies without the money to consolidate their diverse hardware and applications portfolios will need to make up for that complexity with a pragmatic and structured approach to patching - or risk unfortunate consequences.

Danny Bradbury

Danny Bradbury has been a print journalist specialising in technology since 1989 and a freelance writer since 1994. He has written for national publications on both sides of the Atlantic and has won awards for his investigative cybersecurity journalism work and his arts and culture writing. 

Danny writes about many different technology issues for audiences ranging from consumers through to software developers and CIOs. He also ghostwrites articles for many C-suite business executives in the technology sector and has worked as a presenter for multiple webinars and podcasts.