Wednesday, January 28, 2015

I think I forgot to mention a lot of features in my CodeSite review, but there's one that just made me so happy. I actually smiled. While coding. I know eh?

Imagine this. I'm persisting some complex structure as XML and writing that to the DB. Writing that big mess out to a plain text log file quickly gets you into "drinking from the firehose" territory. But what if your viewer just showed a quick summary like "Variable = <?xml version .....". Then you click it, and you can inspect that row "zoomed in", using the Inspector. Here is a sample:

Now, instead of feeling like I'm drinking from the firehose, I feel like the guy who can pick exactly the rows he wants, and find the items inside he needs. You really need to try sending giant blobs of text to CodeSite, as ONE CodeSite.Send, and using the Inspector pane, to realize, hey, there is NO reason to use anything else for logging.

It's that good.

I suggest that you define separate, specialized sets of IFDEFs and enable these giant dumps to CodeSite only when you actually need them, like this:

{$ifdef TROUBLESHOOT_PERSISTENCE_XML}
CodeSite.Send( 'LoadSomething', AXMLString );
{$endif}

Delphi 20th Birthday Party Coming Up

Hey folks, did you realize that Delphi is going to be 20 in a few weeks? You know what would be a good idea to celebrate? Find your local Delphi user group or meet-up, and join them for a beverage, at some point in February 2015. I plan to do just that in Toronto.

Sunday, January 18, 2015

I'm going to talk about the importance of logging, and in particular, why I like to use CodeSite in my Delphi applications. Before I do that I want to place logging within a context. It is a tool, among other tools, available to you, to give you insight into your running code, so that you can observe the behaviour of your application, determine the correctness of your application, and even, give you some ability to reason about how to fix it, just from the output of the logs.

Fixing performance problems, validation problems, exception mis-handling (catching things you shouldn't catch, or not catching things you should), detecting and fixing thread deadlocks, problems with control painting, or high-level user interface behaviours like persistence of user customizations to the application's grids or toolbars: these are things I've never found any way to wrap up in unit tests and then fix. Attempts to do it that way have just wasted time and money.

First I'll say that I wish I found and fixed all my bugs by wrapping all my application code in unit tests, and that TDD is so wonderful that nobody needs logging, and nobody ever needs the Debugger. Ha. What a laugh. I'll bet TDD people are keen on TDD precisely because they have, and still do, spend a lot of precious time in the Edit-Compile-Debug cycle.

Logging can be a way to reduce the amount of time you spend in Edit-Compile-Debug.

So, let's take it for granted that any bug that can be found by TDD practices probably should be found that way, and perhaps has been found that way already. That leaves behind the bugs that don't get caught that way. For those bugs, there are three main debugging techniques that I rely on:

1. Making effective use of the Debugger, especially the advanced breakpoint properties. Some people seem to think this is the only way to find and fix problems. Set a breakpoint. Inspect a variable value. Step into and out of a complex code path. Yes, using a debugger is a critical part of being an effective developer, but sometimes you can spend a long time chasing bugs, or even just finding where a problem is. There are a lot of features in the debugger that, if you use them correctly, can reduce the time it takes to find your problems. One of them is the set of advanced features in the Delphi breakpoint settings window: conditional breakpoints, and even non-breaking breakpoints, which can be used to generate log messages without modifying your code:

2. Code Inspection. Read your code. Without running any code, and without even much searching, it is sometimes possible to locate the code that is malfunctioning and find the repair that is necessary, without even running the debugger, by a combination of good code reading skills and a good working knowledge of where in the code to go looking. The question is: how do you find the code that was running when the problem you experienced was happening? I find inspection useful only after my code is instrumented with trace logging, or when the failure happens in an area where I have just been working and made changes. My mental process is usually to enumerate all the cases that I have to handle, and check that all the states are detected properly, and then correctly handled. Many bugs can be found by determining that the code to handle a particular scenario either doesn't exist, or can't possibly run (even if it exists) because the code that detects the scenario is not correct. I find in custom-control work, for example, that states like Disabled and Hot (the less common states for a control) are where a lot of the bugs in my custom controls lurk. Another kind of inspection that I like to do is reviewing the version control logs and looking at what's changed today (especially when I'm part of a team working on the code). Sometimes, by reviewing the version control logs and source code diffs, I can quickly zero in on a change that caused a problem.

3. Trace Logging, also known as "Debugging by Print Statements". There are a wide variety of techniques for logging, and CodeSite, my topic today, is just one of the available ways to do it. A classic one, one I've used for a long time, is the Win32 API function OutputDebugString. You can see these messages right in your IDE, or in the free SysInternals DebugView tool, which you can download from Microsoft. Another way is to roll your own little disk-based logging library. I think we all believe logging libraries are not very difficult to write properly, until we've written one. Open a file, append a line of text. If you don't have to worry about multi-threaded logging and multi-process logging, you might make do with a pretty simple logging system; it probably has a global critical section that blocks every thread in your app for the 300 microseconds it takes to write a log message, and that's probably fine for your needs. Most Delphi developers who have been working for a while probably have their own favourite logging system. I have my own:
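To make the "open a file, append a line of text" description concrete, here is a minimal sketch of that kind of home-brew logger. All the names here (SimpleLog, WriteLog, GLock) are my own invention for illustration, not from any real library:

```delphi
unit SimpleLog;
// A minimal sketch of a home-brew file logger: one global critical
// section serializes every thread for the duration of each append.

interface

procedure WriteLog(const AMessage: string);

implementation

uses
  System.SysUtils, System.SyncObjs, System.IOUtils;

var
  GLock: TCriticalSection;

procedure WriteLog(const AMessage: string);
var
  LFileName: string;
  LWriter: TStreamWriter;
begin
  // This is exactly the "blocks every thread for 300 microseconds"
  // trade-off described above; fine for modest needs.
  GLock.Acquire;
  try
    LFileName := TPath.Combine(TPath.GetDirectoryName(ParamStr(0)),
      FormatDateTime('yyyy-mm-dd', Now) + '.log');
    LWriter := TStreamWriter.Create(LFileName, True { append });
    try
      LWriter.WriteLine(
        FormatDateTime('yyyy-mm-dd hh:nn:ss.zzz', Now) + ' ' + AMessage);
    finally
      LWriter.Free;
    end;
  finally
    GLock.Release;
  end;
end;

initialization
  GLock := TCriticalSection.Create;
finalization
  GLock.Free;
end.
```

This does not handle multi-process logging (two processes appending to the same file), which is one of the things that makes a "proper" logging library harder than it looks.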

I like one file per day, named in the form YYYY-MM-DD.log and I like logs to auto-clean after some sensible period like 30 days. That way if I turn on logging at a client machine, it's never ever going to fill the hard disk with my logging messages.
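The auto-clean part is only a few lines. A hedged sketch, with invented names, of the kind of pruning routine I mean, called once at application startup:

```delphi
uses
  System.SysUtils, System.IOUtils;

// Hypothetical sketch: delete log files older than AMaxAgeDays from the
// log folder, so logging left on at a client site never fills the disk.
procedure CleanOldLogs(const ALogDir: string; AMaxAgeDays: Integer);
var
  LFile: string;
begin
  for LFile in TDirectory.GetFiles(ALogDir, '*.log') do
    if TFile.GetLastWriteTime(LFile) < Now - AMaxAgeDays then
      TFile.Delete(LFile);
end;
```

Because the files are named YYYY-MM-DD.log, you could equally parse the date out of the file name instead of trusting the file's last-write time.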

I also like to have multiple levels of logging, at least three: Information, Warning, and Error. I like the message to have the timestamp first, and the severity indicator second, followed by some kind of area or scope name, and then the message. Something like this:

2015-01-16 08:03:02.123 [I] [NETLISTENER] Listening on port 8081

Typically my logging library has a simple API that I don't have to think about. My own home-brew logging system is built around a function called Log:

Log( lvInfo, 'Listening on port %d', [ FServer.Port ] );
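For concreteness, the declaration behind a call like that might look roughly like this. This is a sketch of my own, not the actual library; WriteLogLine is a hypothetical sink that does the file append:

```delphi
type
  TLogLevel = (lvInfo, lvWarning, lvError);

procedure Log(ALevel: TLogLevel; const AFormat: string;
  const AArgs: array of const);
const
  // Severity tags matching the sample line above.
  LevelTags: array[TLogLevel] of string = ('[I]', '[W]', '[E]');
begin
  // WriteLogLine is a hypothetical sink that appends one line to
  // today's YYYY-MM-DD.log file.
  WriteLogLine(FormatDateTime('yyyy-mm-dd hh:nn:ss.zzz', Now) + ' ' +
    LevelTags[ALevel] + ' ' + Format(AFormat, AArgs));
end;
```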

If I'm in a module I might have a Log method that is private to the implementation section, which adds the facility (section) name of the application, the 'NETLISTENER' above, and invokes a global callback function, if it's assigned:
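The original post showed a screenshot at this point. As a hedged sketch, a module-private wrapper along those lines might look like this; the OnLogMessage callback and all the names are my own invention:

```delphi
// In a shared logging unit, a hypothetical global hook:
var
  OnLogMessage: procedure(ALevel: TLogLevel;
    const AFacility, AMessage: string);

// In the implementation section of the network listener unit, a private
// Log that stamps in the facility name and forwards to the hook:
procedure Log(ALevel: TLogLevel; const AFormat: string;
  const AArgs: array of const);
begin
  if Assigned(OnLogMessage) then
    OnLogMessage(ALevel, 'NETLISTENER', Format(AFormat, AArgs));
end;
```

The payoff is that every call site in the unit stays short, and the application decides centrally, at the hook, where the messages actually go.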

What I wish is that I didn't have to wrap log messages I only want sometimes in an explicit {$ifdef}. I'd love it if I could declare inline functions as conditionally "no-op", so that I could compile out all the logging calls in my application in a non-logging build, just like Assert can be. If I were using C++, this is a place where Log() would be a macro instead of a function. Although I don't much like having to do it this way, I often find myself going back and conditionally defining my log messages, like this:
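In my code that conditional wrapping ends up looking something like the following; VERBOSE_LOGGING is a hypothetical conditional symbol standing in for whatever the original screenshot showed:

```delphi
{$IFDEF VERBOSE_LOGGING}
Log(lvInfo, 'Repainting cell %d,%d', [ARow, ACol]);
{$ENDIF}
```

Without the symbol defined, the call (and the cost of building its argument array) disappears from the build entirely, which is the behaviour Assert gets for free.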

So I have my own logging library, and you have yours. Is Logging like Lasagna, something you should always make yourself, and not go for a ready-made? No. I think there's a time and a place for both things in your tool-chest.

Sometimes I will use my own home-brew logging system, and sometimes CodeSite, and sometimes both, even in one codebase. Already having a logging system in place, of sorts, isn't a good reason to ignore CodeSite. You might find your existing ad-hoc system and CodeSite actually combine pretty seamlessly.

Why CodeSite?

If logging to a file is a few lines of your own code, and that doesn't require CodeSite, why do I find I like CodeSite so much?

Well, the first thing I do in any codebase that has its own logging functions is wire up CodeSite to my own internal logging system's handler function.
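A sketch of that wiring, assuming a global callback like the one in my home-brew system; the handler name and the callback are my own invention, and the exact CodeSite method overloads vary by version, so check the CodeSiteLogging unit:

```delphi
uses
  CodeSiteLogging;

// Hypothetical handler assigned to the home-brew system's global hook;
// every home-brew log message also lands in the CodeSite live viewer.
procedure SendToCodeSite(ALevel: TLogLevel;
  const AFacility, AMessage: string);
begin
  case ALevel of
    lvError:   CodeSite.SendError(AFacility + ': ' + AMessage);
    lvWarning: CodeSite.SendWarning(AFacility + ': ' + AMessage);
  else
    CodeSite.Send(AFacility + ': ' + AMessage);
  end;
end;
```

Assign it once at startup (OnLogMessage := SendToCodeSite;) and the existing system keeps working untouched.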

Just add CodeSiteLogging to the uses clause and call CodeSite.Send, and you've got the lovely CodeSite live viewer application up on your screen. Live viewing rocks. But then, OutputDebugString can feed the "live viewer" built into your debugger. So why is CodeSite's live viewer better than just adding an OutputDebugString call to your logging handler function? It's easier to read the live viewer window's output, and easier to find the stuff I care about. The one area where homegrown systems and the IDE's built-in OutputDebugString view tend to fall apart, in my experience, is in finding the messages I care about and ignoring the ones I don't.

Another less common problem is having to read several logs separately and combine them manually in my head. CodeSite has a dispatching (routing) system that can combine messages from multiple applications and multiple machines into a single log file. It also has ways to help me find the needles in the haystacks. I can use CodeSite to arrange live logging scenarios that help me see exactly what I need to fix a particular problem in a distributed multi-application, multi-machine scenario. If I have to manually collate and combine results from multiple log files, that decreases the overall efficiency of the log file as a tool. I could have a server log and two client-side application logs, and have to go find a message at 11:32:17 AM in several different logs. Then I need to check whether all my computers' clocks are synchronized correctly, so that I know if 11:32:17 is slightly ahead or behind for computer 1 or computer 2.

Another area where CodeSite really shines is as an IDE wizard for adding logging messages quickly to your application, for when I need to quickly add a lot of instrumentation (new trace messages) to a huge area of code as an exploratory step, and then quickly take it all away. The CodeSite MethodTracer will automatically instrument an entire area of your code, including all the calling functions in a call tree, and even insert {$IFDEF CODESITE_LOGGING}..{$ENDIF} blocks around the generated CodeSite.TraceMethod calls. Seeing what functions call what functions, in a structured way, can be very enlightening. Adding the execution times for functions, even more so. Here goes, a few clicks...

Hey, presto, millisecond time-elapsed values, just what I wanted.

The more complicated the data types you work with, the more a tool like CodeSite helps. You can send a whole record or object off to CodeSite, and worry later about finding the values you need inside the object. When I'm working with a graphical control, this kind of time-series inspection is exactly the ticket. With a debugger you get a current value. With a logging tool of any kind, even the roll-your-own kind, you get a series of points of information over time. The question for me is: do I want to spend a lot of time implementing logging features, or do I want a tool that adapts quickly to a variety of ad-hoc logging tasks over time?

Here are some of the tasks that I find I use CodeSite for instinctively, due to the efficiency of working with the live-viewer, and the variety of tools and facilities in CodeSite:

1. Anything I install into the IDE itself. I prefer to add logging and instrumentation, then visually watch my log output; when this stuff misbehaves, I can see why it's wrong right there on the logging screen. Seeing your messages INSTANTLY on screen is essential here. For IDE wizards or component debug logging, opening a file and searching in it is a waste of my time, so a live-viewer logging solution is the only sensible choice.

2. Any graphical drawing code, whether part of a component I plug into the IDE, or custom paint code inside an application (whether the custom-draw events of a control, or a control like TPaintBox that is all about owner-draw). It's just useless to inspect this stuff with a debugger, and OutputDebugString techniques are not good enough for my purposes. The ability to send a TRect or a TPoint to CodeSite and see it visually is very useful:

3. Any time I want an expandable, collapsible tree view of log messages, giving me better organization than a flat log viewed in Notepad. In the screenshot for the previous point, did you notice the little icons before each of the log messages? There's a little icon for entering a function, and another little icon for exiting a function. Next there's my Origin = message. I know that somebody set Origin, and it's AFTER the exit of HasGlyphFor, but it must be inside some other context that doesn't have logging.

4. Any time I have a system with multiple programs, parts, pieces, or services, or one that runs on more than one computer. Honestly, I rarely co-ordinate logging from multiple sources, but when I do, much like when I need the remote debugger, it can be a life saver.

5. Any time I need filtering, or organization by categories. Particularly nice is the Auto Organize feature, if you make use of the Category field in your log messages, something I urge you to do.

6. Any time I want to quickly instrument (add a lot of logging) that I might later revert out of my working copy. I use the CodeSite Method Tracer for that.

And now, I hope I have convinced you to try adding some more logging to your application, or to try out CodeSite, which you may already have (at least the Express edition) if you have a modern version of Delphi. CodeSite Express was first bundled with Delphi XE, and is still bundled today in Delphi XE7.

Now that I've stated why I love logging, I would like to discuss why logging can become less useful than it could be, no matter what tool I use, whether it's my roll-my-own file-based loggers or a tool like CodeSite. Yes folks, there are more and less effective ways to do logging.

The Cardinal Sins of Logging

Logging is not a panacea. Making good use of logging tools is harder than it looks.

You can make your life easier by doing the right kinds of logging. I advocate a restrained, controlled approach, with a retrospective phase where you look at how you log and modify your approach if the approach is not paying rich rewards.

1. Logging too much. A log that contains 100 messages per second, or a log that contains 1000 useless messages that convey nothing, with one message in there that might tell me something interesting, is the most common way that logging fails. Curating your log messages is a skill, and removing useless log messages is something you should be even more ruthless about than removing dead (commented-out) code or useless and wrong comments. If you cannot use your logging facilities to see problems, because of all the noise, you need to reduce your logging. In logging, often less is more. If 99% of your log is noise, you may as well rip it all out and start over.

2. Not having the ability to filter your logging. Maybe you added 500 log messages to your system last year because of a bug you had that day. Now those messages are useless to you. The two ways I filter my logging are by category and by severity. Category is a concept in my head that is coarser than unit scope and finer than binary-module scope. In a scientific application that deals with some kind of hardware communications, I might split the scopes up into Communications, Analytics, Database, and then various user-interface Category values for parts of my GUI. The category concept is about grouping your log information to keep irrelevant details apart. The second axis is severity. Sometimes Information messages should be suppressed, so instead of just turning logging on and off, it could be set to Off, All Messages (Verbose), Normal (Warnings and Errors), or Quiet (Errors only).

3. Logging only to disk and never directly to screen. I find locating logs on disk, opening them in an editor, and finding the rows that are recent and relevant a complete waste of my time. In fact, I find I ignore logs completely if I don't have a live viewer. Even if you don't want to use CodeSite, at least adding an OutputDebugString line to your own in-house file logging code is friendlier than having to stop what you're doing, browse in Windows Explorer to find a log, and then find the content in it.

4. Slowing down your end user's experience, or otherwise degrading the application's primary role, due to logging. Your logging solutions, whatever they are, should be possible to turn off, and should not consume memory or slow the user down when turned off. If they do, and you can't fix it any other way, you should IFDEF the logging OUT of the code. In some systems, logging is not worth doing to disk until a serious fault occurs. Then, and only then, perhaps some context may need to be written. In high performance applications, I have designed a circular-buffer-based trace logging system that logs only to memory. Then, when some fault occurs that justifies the expense of persisting the log content, I save an incident file with the recent information and warning messages from the circular buffer. Or I append the recent in-memory records to an exception traceback (which I might generate using JclDebug or MadExcept) for purposes of automated bug reporting. This approach is more appropriate to some application domains than a simple disk-based logging system.
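The circular buffer idea can be sketched in a few lines. This is my own illustrative version, not the original system; all names are invented, and it is deliberately not thread-safe (a real one would guard the buffer with a lock or use per-thread buffers):

```delphi
const
  BufferSize = 1024;
var
  GBuffer: array[0..BufferSize - 1] of string;
  GNext: Integer = 0;   // next slot to overwrite
  GCount: Integer = 0;  // messages currently held

// Logging costs only a string copy; nothing touches the disk here.
procedure TraceToBuffer(const AMessage: string);
begin
  GBuffer[GNext] := FormatDateTime('hh:nn:ss.zzz', Now) + ' ' + AMessage;
  GNext := (GNext + 1) mod BufferSize;
  if GCount < BufferSize then
    Inc(GCount);
end;

// Called only when a fault justifies persisting the recent history.
procedure DumpBufferToIncidentFile(const AFileName: string);
var
  i, LIndex: Integer;
  LLines: TStringList;
begin
  LLines := TStringList.Create;
  try
    // Emit oldest message first by starting just behind the cursor.
    for i := 0 to GCount - 1 do
    begin
      LIndex := (GNext - GCount + i + BufferSize) mod BufferSize;
      LLines.Add(GBuffer[LIndex]);
    end;
    LLines.SaveToFile(AFileName);
  finally
    LLines.Free;
  end;
end;
```

The same dump routine can feed the buffer's contents into a JclDebug or MadExcept report instead of a standalone file.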

5. Not cleaning your logs up or having a good log rotation and log naming system. I have seen log systems that are a pain to leave turned on for any length of time because they fill the user's disk up with gigabytes of noise, and they don't neatly switch files per calendar date, making it hard to keep a recent amount (say the last 7 days) of log data around, while wiping out data that is so old as to not matter any more (say, greater than 7 days).

6. Omitting Log headers. My favorite practice is to have daily files with the date in the file name, and the first line in the file should be the executable name (with full executable path) and the version. Something like this:

*** C:\Program Files\MyApp\MyApp.Exe Version 1.2.3.4
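Writing that header is a one-liner at the point where the daily file is first created. A sketch, where GetFileVersionString is a hypothetical helper that reads the VERSIONINFO resource of the running executable:

```delphi
// Emit the header as the first line of each new daily log file.
procedure WriteLogHeader(AWriter: TStreamWriter);
begin
  // GetFileVersionString is a hypothetical helper, not a standard RTL call.
  AWriter.WriteLine(Format('*** %s Version %s',
    [ParamStr(0), GetFileVersionString(ParamStr(0))]));
end;
```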

Going to Production - CodeSite works there too.

CodeSite is fully capable of working in a production environment; I just haven't used it much for that yet, as I also have my own, or my employer's, production file-based logging code. Whatever logging system you use in production, remember, before you ship, to add a way to configure your logging subsystem, and then provide some configuration user interface. At a minimum, you will probably want options to turn logging off completely, or to set a logging level (Verbose, Normal, Quiet).

Going to production doesn't cost anything either, since the tools, including the dispatcher, live-viewer and file-viewer are freely re-distributable. That's awesome.

Notwithstanding my advice above about logging too much being bad for you, there have been times when I wanted to generate a giant pile of log messages and then filter the data later. CodeSite is the best tool for that job that I have found yet, as well. I have found it's best to turn off the auto-scrolling feature in that case. What I wish I had is a technique that lets me see only new errors as they occur: leave auto-scroll turned ON, showing new error messages, and hide all messages of severity less than error, until I pause the logging and want to "zoom in". With auto-scroll on, and a high enough message send rate, I feel a lot like the kid in the movie clip above. Michael Richards (and Weird Al) should have got an Oscar for UHF. There is no justice in Hollywood.

Express versus full CodeSite Studio

CodeSite Express, which comes bundled with the XE through XE7 versions of Delphi, has enough to get you started adding CodeSite.Send() calls to your code, and includes enough live tracing functionality that you really should be using it.

The ability to send much more than just strings, including whole objects or collections in one shot, is worth the upgrade alone. The high-precision timing features and the TraceMethod technique are only available in the full Studio edition. There are even a few cool things in the full-featured product list that I haven't used yet, like conditional sending (SendIf).

Monday, January 12, 2015

I apologize for the lack of blog activity for the last few months. I've been in Crunch Mode at work, and various things are going on in my family life right now that have made it harder to blog about Delphi.

I remain an active Delphi guy, all day at work, and I'm happily using XE7 in production. I remain confident in the product's stability curve. XE6 was better and more stable than XE5, and XE7 has continued to be stable.

Here are some things I'm planning to write about soon:

1. The sad state of dUnit and the plan to resuscitate it. This plan is currently underway by the current maintainer of dUnit, Jeroen Pluimers. He plans to move the project from SourceForge to a public Git-based hosting system, abandoning Subversion for Git. I plan to be one of the contributors to that newly resuscitated dUnit, and I'll have my own branch (which I will use to submit pull requests, using GitFlow). I'm quite excited to see dUnit come back again. I will probably also include some notes about WHY I'm not interested in DUnit/X right now, and yet I am quite interested in seeing the changes that went into DUnit2 merged up to the DUnit trunk.

2. A plan to relaunch the WANT program, also a Juancarlo Anez original, in a new form. I will probably be hosting the code at bitbucket, and I plan to use Mercurial to host it. I need to discuss the plan with my employers as I want to take some of our own in-house customizations and changes and external tools and build them right into WANT. Why? Because CI (continuous integration via build servers), IDE package installation, and unit and integration testing frameworks (like DUnit and DUnit/x) should be integrated.

3. I'm planning to write a blog post on how I use logging tools, especially CodeSite, to help me with my debugging. I think that will be the next one.

Sorry for the quiet months on the blog, but this blog is back, and I've got some stuff to talk about coming up January 2015. Today is, by the way, my 45th birthday, so I might write a post a bit later this month about the issue of Getting Old, from the perspective of a Software Development professional.