One of my favorite domains to review in existing applications, because it tends to be so error-ridden, is … error-handling. Too many programmers regard a language’s exception-handling syntax as a solution rather than just a mechanism, so error-handling tends to be misguided or at least neglected. A little more attention in this area often pays off with far greater end-user satisfaction.

Perhaps the hardest part of handling errors is simply to remember that it is programming. I encounter many coders who appear to believe that it’s someone else’s job. In fact handling errors should be a routine part of definition and fulfillment of requirements. Here’s a parable about what often happens with even a single line of code:

True-life rework

An application needs to read a configuration file:

fp = open(CONF, "r")

While this is Python, what happens next is equally likely in Java, JavaScript (maybe with cookies), Java instrumentation, Perl, C#, or other common languages. At this point, the application “works”, and attention moves on from this particular line to more pressing matters …

… until the day CONF goes missing, and an end-user sees a traceback on her screen. That is clearly not acceptable, and someone quickly rushes

try:
    fp = open(CONF, "r")
except:
    pass

into production while hunting down CONF. It turns out that the user had launched the application from a bookmark no one had considered (or disabled cookies, or had customized the installation in an unexpected way, or done any of the other things end-users do). Folklore within the organization concluded that the error was “fixed”, and someone elsewhere coded in protection against the bookmarking …

… until the next time an end-user was clever enough to re-create a similar situation. This time, instead of a traceback appearing, a distant part of the application broke down. Eventually, after too much debugging effort, the code in the vicinity of CONF was upgraded to

try:
    fp = open(CONF, "r")
except:
    alert_user()
    return

Business returns to normal …

… until the day an end-user sees a warning on his screen about bookmarks (or cookies, or missing initialization), and is more frustrated than ever, because he already did what the warning advises. After more difficult debugging, someone discovers there’s a rare possibility that CONF hasn’t been properly assigned. The coders begin to realize the hazard of a “naked except”, and qualify their handler more carefully:
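A first, minimally qualified version might look something like this sketch (alert_user here stands in for whatever notification routine the application actually uses; the original listing isn't shown in the article):

```python
def alert_user(message):
    # Stand-in for the application's real user-notification routine.
    print("Warning:", message)

def read_conf(path):
    # Qualify the handler: catch only I/O failures, not every exception.
    try:
        return open(path, "r")
    except IOError:
        alert_user("cannot read configuration file " + path)
        return None
```

A bare except would also have swallowed NameError, KeyboardInterrupt, and everything else; qualifying with IOError lets unrelated bugs surface as they should.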

… until the day a sysad rationalizes networking in the back-office, and a critical file-share ends up with unexpected permissions. An end-user sees a warning about a condition that has nothing to do with firewalls, and is utterly frustrated until someone recognizes that IOError covers a multitude of causes. Soon our CONF reader looks something like
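One plausible version distinguishes the errno values that a single IOError lumps together (a sketch: alert_user and the exact messages are stand-ins, not the original code):

```python
import errno

def alert_user(message):
    # Stand-in for the application's real user-notification routine.
    print("Warning:", message)

def read_conf(path):
    try:
        return open(path, "r")
    except IOError as exc:
        # IOError covers a multitude of causes; branch on errno
        # so the user sees a warning that matches the real problem.
        if exc.errno == errno.ENOENT:
            alert_user("configuration file %s is missing; "
                       "check your bookmark or installation" % path)
        elif exc.errno in (errno.EACCES, errno.EPERM):
            alert_user("no permission to read %s; "
                       "the file-share may be misconfigured" % path)
        else:
            alert_user("cannot read %s: %s" % (path, exc.strerror))
        return None
```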

It’s still not done. The last iteration above of what started as a single line would eventually toss up at least two more as-yet-undiagnosed problems.

Something is clearly wrong. To reach this point involved multiple upset end-users and too many late-night debugging sessions, and the “hot spot” of the initial open still is not “bullet-proof”.

This is the point in a tale where I like to present a solution with almost miraculous powers. For this problem, though, there isn’t one; in fact, “error-handling” is so thorny that I’ve already collected a book’s-worth of material on the subject and its remedies. While there are plenty of tips along the way–no bare excepts, for instance–and articles like “Robust exception handling” do a good job of explaining the basics, the general problem simply lacks a magical solution. IT organizations need to recognize that “error-handling” demands its own analysis, requirements definition, testing, and maintenance. Customers pay for positive features, of course, not for nicely-handled errors. Features-and-functionality need to come first; still, a majority of the time, or at least the attention, in any particular session with an application can lie in its error-handling. Improvements in error-handling represent a great opportunity to eliminate distractions so that users can appreciate functionality. Often, the best way to help users see the value of the features in your programs is to make sure errors are handled professionally.

The short answer is that this was the best I knew in March 2011, when I wrote these words; the protocol definition behind the preceding hyperlink, for instance, was only finalized nine months later, in December of that year. Several of the other facts about WebSockets are different now. It’s worth taking a few minutes to get those facts right.

HTML5 complications

WebSockets are a Web standard for full duplex–simultaneous “push” and “pull”–communications between browser and server which “… slash the latency inherent in XMLHttpRequest …”, as I mentioned this summer in “Three main choices for advanced communications in HTML5”. Whole classes of applications, including performance-intensive games and demanding interactive medical programs, were restricted to the desktop until recently. WebSockets make these practical within a standard browser.

When 2011 began, most end users relied on browsers that did not support WebSocket. Worse, quite a few serious analysts suspected WebSocket compromised the security of the browsers that claimed to support it. While that didn’t necessarily make it wrong to use WebSocket at the time, a decision to rely on it certainly wasn’t automatic.

Sites need to explain clearly and accurately when individual write-ups were written or published or posted. “Application Monitor”, the Real User Monitoring blog, does this, of course.

HTML5 is big and sprawling and full of activity. Don’t ever fall into the trap of believing that “Browser X supports HTML5” is meaningful. You need to think always in terms such as, “which parts of HTML5 matter to me?” and “how can I gracefully retreat if this specific implementation turns out to be impaired?”

IT industry trends this year are all aiming towards one goal: accessibility. We’ve all heard the term ‘cloud’ tossed around a few hundred times this year, and ‘private cloud’ has only recently stepped onto the scene. However, the newer trends surrounding these terms are anything but dull or regurgitated.

2013 is proving to be one of the biggest and most exciting years in IT innovation and here we discuss some of the most prominent topics. Without further delay, here are a few of the biggest trends to hit the IT scene this year.

The Ever-Changing Cloud
2013 is all about cloud computing. In fact, Forbes magazine predicts that almost 60 percent of all enterprises will be using some form of cloud computing before the year is over. Enterprises are changing so quickly that traditional VPNs can no longer keep up, so cloud computing is a must.

The prediction is that more businesses will use a hybrid version of the cloud that allows them to switch seamlessly between public and private clouds, which gives them a fair lead on their competition. For this new hybrid cloud to be as useful as predicted, there is a need for better security and a total revamp of current cloud-based apps. In addition, it is important to decipher the various interdependencies of these apps and the systems with which they interact.

BYOD is Back
Mobile devices are dominating technology and more companies are adopting the BYOD trend. Why is this? Because it is getting harder to get employees to put their devices away. So, instead of punishing employees for using their devices at work, companies are encouraging workers to bring those devices in and use them to increase work productivity.

Companies that follow the BYOD trend create personal clouds that allow employees to access work-related apps. Not only this, but employees are able to access data from anywhere with an internet connection. This is great news for those in sales or IT. No more lugging around laptops to access your work database; it can all be accessed securely using your tablet or smartphone. However, it also raises a key question: How can those in charge of an organization’s IT systems ensure the consistent performance of their applications?

Security Anyone?
What about security? 2013 has that covered too. Forbes has dubbed the recent rush for security the “new arms race”, and rightfully so. It is, after all, the one thing that could make all of the above trends succeed or fail.

Companies are pushing for identity security with two-factor authentication methods. These may include passwords plus fingerprints, or retinal scans and voice access. Whatever these companies choose as their ‘beefed-up’ security protocol, cyber criminals and unauthorized persons will have a hard time accessing any sensitive information. IT Operations professionals must make sure that these extra security measures do not slow down performance, however.

It is clear that these trends introduce new challenges for IT Operations teams within the enterprise. Some of these challenges are already being addressed, while others still require a solution. Those teams who are proactive in analyzing their individual needs as well as reacting to these trends will see the most success.

Simple demonstration

Start with a peek at this example. Most will see a little digital clock which reports the time on one of our servers, updated every second; the background color of the display cycles red-yellow-green as the display updates.

What good is that?! In isolation, it doesn’t look like much; it was (barely) possible to create similar effects with browsers fifteen years ago, and there are surely already enough clocks in the world. As a model for applications that help in the datacenter, though, this small demonstration has a lot going for it:

Server-side requirements are as minimal as can be. The supporting executable in this case is twelve lines of bash, and, at the cost of a little obfuscation, it could be reduced to five. No particular language or library is required; SSE is another one of the interesting features fully available to simple-minded CGI coders using twenty-year-old technology.

It’s also light-weight on the client side, requiring only a single JavaScript function definition.

Transport is by way of good-old HTTP, so SSE travels well through typical enterprise networks.

With one notable exception, which I explain below, the demonstration performs well in all sorts of environments, including smartphones, tablets, and a range of old and new laptops in our office.

SSE’s “push” is just the right kind of programming for many cases that interest me. I often need Web applications with the intelligence to display useful information, but also update that information as new results–a misconfiguration of a customer account, an overfull filesystem, congestion on a subnet, or so on–arrive.
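The server side of such a clock really is that small. The article's twelve-line original is bash and isn't reproduced here; a rough Python equivalent of that kind of CGI script (the function names below are illustrative) might look like:

```python
#!/usr/bin/env python3
# Sketch of an SSE-emitting CGI script. An SSE response is the
# text/event-stream header, a blank line, then one "data:" line per
# message, with each message terminated by a blank line.
import time

def sse_stream(out, events):
    # Write the SSE header, then push each event as it becomes available.
    out.write("Content-Type: text/event-stream\n\n")
    for event in events:
        out.write("data: %s\n\n" % event)
        out.flush()

def clock_events(count, delay=1.0):
    # Yield the server's current time `count` times, one per `delay`
    # seconds; a real monitor would loop until the client disconnects.
    for _ in range(count):
        yield time.strftime("%H:%M:%S")
        time.sleep(delay)
```

Installed under cgi-bin and driven by something like sse_stream(sys.stdout, clock_events(60)), the script pushes the server's clock to the browser, where a single EventSource handler updates the display as each event arrives.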

What’s the catch?

With all this going for it, why didn’t I include SSE in my earlier list of established HTML5 communication methods? Internet Explorer (IE) doesn’t support SSE, and I’ve found no one who claims to know of a definite plan for a future version of IE to do so. In the context of conventional Web application development, that’s enough of a reason on its own to decide against even a documented W3C standard.

SSE has other blemishes. It’s had a reputation for memory leaks, which I’ve never made the time to analyze for myself. SSE is a unidirectional communication, and many of today’s sexiest applications seem to fit a two-way “chat” better.

Easy to start

Still, plenty of shims are available to extend SSE support to IE, memory leaks are correctable, SSE’s light resource demands are appealing, its minimalism is a good fit for all the devops who aren’t full-time programmers, devops staffs don’t require IE compatibility, and a simple push is the right approach for a lot of what we do in the datacenter. Try SSE for yourself; it should take only a few minutes to put together a “Hello, World!” that makes sense in your environment. With that in hand, you’ll be in a position to judge what SSE can do for your next projects.

“Real User Monitoring” has probably convinced you by now that application performance is both important and, too often, inadequate; so what do you do about it? Accurate measurement and identification of a problem is the first step, of course. When you’re ready for the second step, here are three quite different approaches you can apply with proven records for delivering both quick results, and enough depth to reward you for months to come:

Low-hanging fruit: HTTP payload basics

Tammy Everts, for example, often teaches that images can be compressed, delivered in appropriate formats, and sensibly sized. That’s a great place to start any performance-optimization project.

Take advantage of what’s right in front of you: browser tools

Most front-end developers have a sense that browser debuggers are potent, and have improved a lot in recent years. They’re essentially indispensable for making the most of the functionality of HTML5, JSON, CSS3, and all the other technologies likely to appear in modern applications. Not so well-known is that nearly all of these debuggers provide rich performance introspection. With Chrome DevTools, for instance, a favorite of many colleagues, you’ll want to learn at least the Timeline, Audits, and heap profiling. Other browsers’ tools also go a long way in analyzing your code, leading to specific suggestions likely to improve the performance of your application.

Network modelling

Last week I mentioned “Network emulation for the cloud”. It’s worth a few more words now. Shrinking downloads and easing browsers’ chores are two traditional but distinct ways to improve the performance of Web applications. Separate from these in both means and ends is engineering modelling of network traffic. With increasingly sophisticated and complex networking between providers and end users, however, prudent network modelling has become essential. You can read for yourself why author Nick Hardiman expects that “[i]n a few years … IT testers will wonder how they managed without it.”

Conclusion

You don’t have to put up with mediocre performance. Whatever the current shape and substance of your applications, it’s almost certain that you can take at least a couple of steps today to improve it enough to please your users.

Have a question about how the author performed his statistical reductions, or exactly what was in his datasets? It’s all there. You can review or extend the calculations yourself. So can the five, or fifty thousand, other people in the world qualified to judge the analysis. This is how science deserves to be done.

I recognize every generation imagines its own shufflings to be unimagined leaps. In this case, perhaps, the conceit is justified. The science of recent decades has been deeply plagued by faulty and improperly-analyzed data. The most certain fix for this defect is well-organized, transparent teamwork or collective action: “open-source science”. IPython, a fascinating project of research neuroscientist Fernando Pérez, already has a strong record of achievement in encouraging scientists and engineers to share not just their ideas, but the details of their data techniques.

IPython isn’t alone; such other open-source products as Sage, TeX, R, the Open Genomics Engine, and arXiv have great stories to tell, and a few proprietary products, including SAS and MATLAB, have nurtured scientifically-significant ecosystems of shared add-ons.

What does this matter for working DevOps? Perhaps not much, at least not immediately. It does suggest a few tips, though:

There’s really no excuse for static presentations. I recognize conventional corporate culture puts a premium on colorful and visually-rounded charts. Long-term value comes, though, when live data are liberated and visualized constructively, often in ways their original custodians didn’t imagine.

I should make clear a bit of my own background first: Unix is my natural home. When I think of alternatives to Linux, Solaris and the *BSDs come first to mind. A fully competitive Windows, though, makes for a healthier engineering milieu. Even if you’re more committed to Linux than I am, you’ll want to know what “the other side” offers.

At the rack level, Windows is hard to manage, or, more precisely, its management is hard to scale. Bruce Payette, Principal Software Design Engineer with Microsoft, makes it utterly clear in his UCMS ’13 presentation, “DevOps, Desired State, and Microsoft Windows”: Unix bases configuration on declarative documents, and Windows historically has configured through imperative application programming interface (API) requests. The natural way to provision a Linux instance is for someone to copy appropriate *.conf and similar to where they belong; on Windows, the canonical way to configure involves someone pushing buttons on a graphical user interface (GUI). “Managed” in the Unix world generally focuses on text files–versionable, first-class objects in the filesystem, diffable, and human-readable–while Windows management tools do things like replicate opaque system images across a pool. That difference is the big reason in the datacenter that Linux sysads regularly handle a multiple of the number of servers Windows administrators are assigned.

PowerShell changes those terms. It has changed them already, and Version 4 promises to do even more. Payette’s video, mentioned above, explains this in detail.

What you should know about PowerShell

From a DevOps-Agile-Lean perspective, PowerShell plays within Windows much the role that sh does for Unix. While language fans argue the syntax of Ruby vs. Go vs. Haskell vs. …, PowerShell also has a number of interesting and useful characteristics, particularly in comparison to its sh relatives: it builds in remote execution; it natively has the ability to invoke not only external executables, but also libraries; it’s richer in typing than sh, and even facilitates object-oriented coding and piping; and, with Version 4, it builds in “orchestration” and configuration as a workflow-oriented keyword. These advantages reverse the “terms of trade” within the datacenter: with all this available out-of-the-box on every Windows Server host, the latter become easier to script and configure than the Linux workhorses that have built up the cloud over the last decade.

One additional aspect of PowerShell that I see rarely mentioned bears on the nimbleness that DevOps accentuates. In the absence of PowerShell, the way to accomplish chores on Windows servers is to write “console applications”–perhaps in C#, often in Visual Basic, sometimes in Python or other languages. PowerShell makes it easy for sysads to script solutions together without firing up Visual Studio or Eclipse or a comparably “heavy” project-oriented interactive development environment (IDE). While DevOps is supposed to be a unified team, there will be variations in style within it. PowerShell quite nicely fits the circumstances of the DevOps members who want to consume libraries of organization-specific services, without having to invest in everything involved in constructing those services for themselves.

A final milepost of PowerShell’s progress is that Version 4 facilitates integration with Puppet and related DevOps tools. While I generally find it a challenge to gain much from video presentations, I strongly urge you to watch Payette’s; you’ll see that it makes Windows Server a far more interesting choice for the scalable, manageable DevOps host in the datacenter of the future.

HTML5 vs native: an important, but not obvious, choice
Tue, 09 Jul 2013, by Cameron Laird

Andrew McHugh is right to contrast the protagonists in […]

seamless, continuous updating of Web applications is a crucial part of the HTML5 experience in many markets; and

browsers are vastly better than they were even a year ago.

Too much of his comparison, however, looks backward.

Detour through storage

While McHugh, Technical Content Manager at SoftBear Software, correctly emphasizes the priorities above, parts of his comparison only confuse matters. His second paragraph begins, “HTML5 was developed as a solution to storage issues.” This is at best an extreme and misleading claim, something like finding the origin of the US Civil War in Kansas-based terrorism while neglecting mention of slavery, federalism, or trade policy. HTML5 does provide useful definitions for Web Storage, Offline Storage, and Indexed Database, but WebForms 2.0, canvas, and the multimedia specifications were far more compelling technical problems than client-side storage in the 2006-2009 time-frame he presumably has in mind.

Emphasis on HTML5 storage is particularly unfortunate in that it’s only a distraction from the true issue at hand, which is to compare HTML5 and native bases for application development now. Whatever their histories, both HTML5 and native now have good handset storage.

Far more than McHugh, I see convergence between the two approaches in their engineering capabilities. For McHugh and other commentators, HTML5 suffers because it doesn’t allow for monetization, it’s incapable of disconnected operation, and is far less secure than native apps, while the latter are expensive to develop and can be updated only with difficulty. Again, while differences in these domains were important just a few years ago, they’ve nearly vanished of late. Alexander Krug, CEO & Founder of SOFTGAMES Mobile Entertainment Services GmbH, even argues, for example, that “… HTML5 Apps are Easy to Monetize”. On the other side, advanced toolkits should largely solve difficulties in updating natively-developed applications.

Details matter

My own perspective is that both HTML5 and native technologies have improved so much that a selection between them depends crucially on engineering details lost in broad overviews. I personally often advocate for HTML5; at the same time, I know that a wise choice for or against it requires careful analysis of the specific situation of an organization. How does the background and culture of the development team mesh with the requirements of particular app stores? Can the devteam recruit effectively in the HTML5 or native communities? What marketing and distribution consequences are there for one or the other approach?

The fight for the future of mobile development has largely moved beyond bits and bytes; now it’s cultural.

You’re monitoring a particular server for particular behavior–maybe patterns of filesystem usage, or the appearance in database server logs of specific faults, or cycles of CPU (central processing unit) usage. You’ve already scripted a little bash or PowerShell report that helps make sense of the events. It’s mildly tedious, though, to have to log in to the server every time you want to check in on progress–especially so because there are actually three servers that deserve oversight, and there are at least two other people on your DevOps team, and one manager, who also need or want to know what’s going on.

Longer term, it’s obvious what the formal “right thing” to do is: you define a Nagios plugin, or a Splunk filter, or an SNMP MIB, and integrate the specific diagnostic under consideration into your larger monitoring framework.

Not too cold, not too hot

Those alternatives are uninviting, though: as refined as those technologies are, it takes expertise and care to code such extensions accurately. You have better things to do with the half-day that is the realistic minimum to get them right.

Web monitors are ideal, of course; it’s almost the definition of a modern sysad tool that it offers a Web view, because the latter are so easy to share (just pass on a URL) and demand no installation (what client display doesn’t already have a Web browser?). The trouble with a Web application is that you don’t have the time to learn PHP or Ruby or C# or whatever language your organization uses for Web pages.

The secret in all this is that you already know a Web language or two. Whether your servers run Linux or Windows, they almost certainly already have Apache or Internet Information Server or a close variant installed, configured with Common Gateway Interface (CGI) or perhaps its FastCGI or SCGI enhancements. You can leverage CGI to create small but useful Web pages with active content in about the same amount of time as it takes to read this article. The report you scripted to run on one server at a time instantly becomes available anywhere on the ‘Net that can reach that server.

I know perfectly well that CGI is obsolete; in fact, fifteen years ago I turned down a large project based on CGI because I was certain that the technology was diving rapidly to end-of-life. That it’s obsolete doesn’t mean it isn’t still useful, however. CGI can be a great choice for little one-off projects.

Plenty of Web tutorials are available to start you off at a “Hello, world!” level in your CGI career. The hardest part is generally to get the first CGI script to run satisfactorily, because so much about Web service is configurable. In a typical Apache installation, for instance, you might create a file called /var/www/cgi-bin/my_first.cgi with content
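along these lines. This is a sketch in Python–the CGI protocol itself is language-agnostic, so any language the server can execute will do–showing the essentials: an HTTP header block, one blank line, and then the page body:

```python
#!/usr/bin/env python3
# my_first.cgi: the smallest useful CGI response is an HTTP header
# block, a blank line, and then the page body.

def render():
    # CGI headers end with a blank line (CRLF CRLF); everything after
    # it is the document the browser displays.
    header = "Content-Type: text/html\r\n\r\n"
    body = ("<title>Hello from CGI</title>\n"
            "<p>Hello, world! This page came from a CGI script.</p>\n")
    return header + body

print(render(), end="")
```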

Exercise this page with a URL such as http://$SERVERNAME/cgi-bin/my_first.cgi. Make sure that you have set ownership and permissions appropriately for your configuration; in particular, you’ll probably need to chmod +x my_first.cgi.

Support the latest front-ends with the back-end scripting you know

Do you see what opportunities a simple approach like this opens for your DevOps team? Anything that you can script as a command-line or console application immediately becomes available as a Web application. This also means that you can marry your sysad scripting with all the functionality of modern HTML5. Not only can your CGI-based Web pages take advantage of basic HTML formatting such as <table> arrangements or colors, but you can use AJAX, geolocation, client-side programming, Web forms, and everything else HTML5 brings. It’s important to be clear on this: even though CGI is quite old and outmoded, it’s perfectly happy to serve up payloads from the latest standards. CGI lets you prototype “responsive” Web applications designed to be viewed on the trendiest mobile handsets. If your little monitors or reports need to be reworked some day for performance reasons into a more modern framework that you don’t know, that will be possible. In the meantime, though, you have benefited from extra weeks or months of CGI-based availability in a language you use every day.

By “overwhelming”, I don’t mean “poorly-designed” or “unusable”; I just mean that Web developers and network operators have relatively little experience in the full range of facilities modern browsers embed, and it’s easy for all the novelties to disorient all but the specialist. More programmers appear to have used AJAX, for instance, than can explain when it’s appropriate. This makes it a good time for a quick, high-level, and focused catalogue.

Architecturally, the Web’s original “pull” demand-response dialogue, which Tim Berners-Lee defined more than two decades ago, remains the most important HTTP channel. It was only rather recently, during 2000-2006, that the major browsers standardized even XMLHttpRequest (the foundation of so-called AJAX) as a basis for “push” and other communications models. Before this, the only effective alternative to retrieve more information without a page refresh was custom coding in Java.

HTML5 dramatically alters the landscape. This is in part because HTML5 itself is so sprawling: it has many parts, and several of them involve programmable networking.

Web Sockets slash the latency inherent in XMLHttpRequest; game authors have healthy appetites to try Web Sockets because, at their best, they’re far faster than XMLHttpRequest-based codings. Web Sockets aren’t as widely supported yet, however; that’s the main impediment to their use.

Cross-site scripting has been, of course, a considerable security hazard for most of the history of the Web. The principal defense against it is browsers’ enforcement of “same-origin” policies. HTTP, XMLHttpRequest, and Web Sockets all default to same-origin restrictions. These policies ensure that Web pages only receive information presumed to be under the control of the authors of the Web pages. Web Messaging doesn’t default to a same-origin policy; instead, its security comes from co-operation. Both sender and receiver must name each other.

There will eventually be minor alternatives to these; the HTML5 File API might, for instance, be combined in specialized circumstances with external filesystem mounts, as a communication method. The Messaging API or Web SQL definitions have similar potential for communication with a special-purpose transceiver. XMLHttpRequest, Web Messaging, and Web Sockets, though, will form the foundation of communications in applications for years to come.

The most interesting of these applications will, of course, have major communication components. Whether your own role is in design, coding, or operations, keep in mind the important generalizations: Web Sockets are fast, Web Messages operate between domains, and XMLHttpRequest is best supported by tools.