Marketing dept to blame for website crashes: official

Nice to be proven right, eh?

It is as you have always suspected: the marketing department is responsible for around a quarter of the overloads and crashes your poor company website suffers.

Website testing firm SciVisum spoke to marketing types in 100 UK-based companies, and found that 26 per cent never mention planned online promotions to the guys and gals in the tech boiler room.

More than half admit they forget to provide a warning at least some of the time, and nearly two thirds of marketing bods confess to having no idea how many user transactions their website can support, despite an average transaction value of £50 to £100.

The consequence of this communications gap is not surprising: 73 per cent of companies reported website failures during marketing campaigns. Presumably the surviving few include the 22 per cent of companies who say they always talk to the tech team about such things.

Deri Jones, SciVisum CEO, says that while some of the gap can be attributed to the two groups traditionally not liking each other much, he thinks the problem really starts because marketing and IT approach things from such different angles.

"Marketing people have a tendency to blame the tech department when a campaign doesn't go as well as it should, but from a technology perspective, if the server hasn't crashed, everything is working well. IT measures server load, while marketing is looking for completed transactions."

Jones advises companies to consider the so-called user journey through the site, and says that it is essential that marketing and IT come together at the planning stages of any campaign to map out what this journey is likely to look like.

"The IT department needs to be able to plot this against the website design to ensure there are no hidden barriers to performance. Often, with knowledge of the journeys and the likely load levels, sensible code refactoring and configuration tweaking can give order-of-magnitude throughput gains at the critical bottlenecks," he says.

He argues that a site can appear to be working just fine under normal conditions, but oddities in the back end can trip users up when the site is under pressure. "If you look at the journey a user makes through a site, and try to follow that journey while simulating a high traffic load, you can find log jams in unexpected places," he concludes. ®
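The journey-under-load idea Jones describes can be sketched in a few lines. This is a minimal toy simulation, not a real load-testing tool: the step names, timings, and the single-lock "checkout" bottleneck are all invented for illustration. In practice you would drive HTTP requests against the live site; here, one serialised back-end step stands in for the kind of hidden bottleneck that only shows up when many users follow the same journey at once.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical user journey through a retail site.
STEPS = ["landing", "product", "basket", "checkout"]

# Invented bottleneck: one serialised back-end resource the checkout
# step must acquire (think: a stock table behind a coarse lock).
checkout_lock = threading.Lock()

def run_step(step):
    if step == "checkout":
        with checkout_lock:      # requests queue here under load
            time.sleep(0.02)
    else:
        time.sleep(0.01)         # steps that scale with concurrency

def user_journey(timings):
    # Walk the whole journey, recording how long each step took.
    for step in STEPS:
        start = time.perf_counter()
        run_step(step)
        # list.append is thread-safe under CPython's GIL
        timings[step].append(time.perf_counter() - start)

def load_test(concurrent_users):
    timings = {step: [] for step in STEPS}
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for _ in range(concurrent_users):
            pool.submit(user_journey, timings)
    # Worst observed latency per step shows where the logjam forms.
    return {step: max(samples) for step, samples in timings.items()}

quiet = load_test(1)
busy = load_test(20)
for step in STEPS:
    print(f"{step:10s}  1 user: {quiet[step]:.3f}s   20 users: {busy[step]:.3f}s")
```

With one user every step looks fine; at 20 concurrent users the checkout step's worst-case latency balloons while the others barely move, which is exactly the "site seems fine until it's under pressure" effect described above.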