Archive for the Category: 'Software'

A quick ol’ post to recommend this excellent piece of software. For the longest time I just used the built-in WordPress editor to write, save, and edit posts. I tried some of the blog writer programs back in 2010 or so, but ended up going back to working directly in WordPress. I found that more and more I was writing blog posts in EditPad Pro, saving the images for them in the same folder, and then rebuilding them via copy, paste, and the media upload tools in WordPress. So I surveyed the landscape for applications that let a local program edit and “arrange” a post, then publish it- sort of like a FrontPage for blogs (the negative implications of that comparison notwithstanding, of course…). Open Live Writer is the program I found as a result of that search, and I’ve been quite happy with it so far. While it doesn’t quite seem to understand my blog theme, it still makes editing and arranging posts quite easy, particularly since I can practically edit a document like I would a Word document- insert pictures and all that- and it will upload them to the WordPress media library automatically. Immensely useful. You can also open drafts and even older posts for editing from within the program. It’s quite flexible, and I now keep it in my taskbar, so when a topic pops into my head I can write about it immediately- like this one, for example.

The only downside I’ve noticed is that since it doesn’t recognize the theme, it also doesn’t seem to “get” the Crayon syntax highlighter I use, so when editing, the code tags are rather boring. Then again, they didn’t look any better in the WordPress editor, so it’s not like I’ve lost anything, either.

I implemented a new “Windows 10 Style” menu renderer. This is the default when the program is installed on Windows 10, and it will use the Windows 10 accent color by default (no blur-behind). It’s not precise- it’s more a stylistic imitation- but it fits better with Windows 10 than the other renderers (IMO).

As usual, the latest source can always be found on GitHub, and the installer for 1.1 can be found here.

Unit Testing. Books have been written about it. Many, many books. About unit testing; about testing methodologies, about unit testing approaches, about unit test frameworks, and so on. I won’t attempt to duplicate that work here, as I haven’t the expertise to approach the subject at anywhere near that depth. But I felt like writing about it- perhaps because I’m writing unit test code.

Currently, what I am doing is writing unit tests, as mentioned. These are written in C# (since they are testing C#, so it makes sense), and they make use of the built-in Visual Studio/.NET unit testing features.

In my mind there are some important ‘cornerstones’ of unit tests that make them useful. Again, I’m no expert with unit tests; this is simply what I’ve reasoned as important through my reading and interpretation of the subject so far.
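To ground the discussion, here is what a minimal unit test looks like. It’s sketched with Python’s built-in unittest module for brevity rather than the Visual Studio framework my actual tests use, and the function under test (sort_letters) is a made-up example:

```python
import unittest

def sort_letters(word):
    """Return the letters of a word, sorted alphabetically."""
    return "".join(sorted(word))

class SortLettersTests(unittest.TestCase):
    def test_simple_word(self):
        self.assertEqual(sort_letters("mead"), "adem")

    def test_empty_string(self):
        # Corner case: an empty input should come back unchanged.
        self.assertEqual(sort_letters(""), "")

# Run the tests programmatically so the result can be inspected.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(SortLettersTests))
```

The shape is the same in any framework: small, isolated assertions, including the corner cases, that can be run automatically.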

Coverage

Making sure tests engage as much of your codebase as possible applies to any type of testing, and unit tests are no exception to that rule. It is important to make sure that unit tests execute as much of the code being tested as possible and- as with any test- make an effort to cover corner cases as well.

Maintainability

When it comes to large software projects, one of the more difficult things to do is encourage the updating and creation of unit tests so that coverage remains high. With looming deadlines, required quick fixes, and frequent “emergency fixes”, it is entirely possible for unit testing code to quickly get out of date. This can cause working code to fail a unit test, or failing code to pass one, because of changing requirements or redesigns- or because the code simply isn’t being tested at all.

Automation

While this partly fits into the maintainability aspect, it pertains more to the automation of build processes and the like. In particular, with a Continuous Integration product such as Jenkins or TeamCity, any change to source control can trigger a build- and even deploy the software, automatically, into a testing environment. Such a Continuous Integration product can also run unit tests against the source code or resulting executables, marking the build as a failure if tests fail; that failure becomes a jumping-off point for investigating which recent changes caused the problem. This encourages maintenance (if a code change causes a failure, then either that code is wrong or the unit test is wrong and needs to be updated) and is certainly valuable for finding defects sooner rather than later, to minimize damage in terms of customer data and particularly in terms of customer trust (and company politics, I suppose).

I repeat once more that I am no unit test magician. I haven’t even created a completely working unit test framework that follows these principles yet, but I’m in the process of creating one. There are a number of books about unit testing- and many books covering software development processes include sections or numerous mentions of unit test methodologies- which will likely be from much more qualified individuals than myself. I just wanted to write something, and couldn’t write as much about Velociraptors as I originally thought.

Introspection into types, methods, and parameters is a very useful feature for the creation of highly dynamic programs. One use for this ability is to allow generic methods to be highly expansive.

Take, for example, a function that wants to read in an object and initialize the properties and fields of that object from data stored elsewhere- perhaps a database, or maybe a dictionary or some other data structure. At any rate, it wants to be able to create an instance and populate that instance without actually knowing the specific type at compile time.

The first step for such a method would be to instantiate the class. The most straightforward way is to acquire the parameterless constructor via reflection and invoke it- the result is the constructed object.
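For illustration, here is the same idea sketched in Python, where the dynamically-typed equivalent is much terser; the populate function and the Point class are hypothetical names of my own choosing, not anything from the C# code discussed here:

```python
def populate(cls, data):
    # Instantiate via the parameterless constructor, then set each
    # attribute from a data source (here, a plain dict), without
    # knowing the concrete type ahead of time.
    instance = cls()
    for name, value in data.items():
        setattr(instance, name, value)
    return instance

class Point:
    def __init__(self):
        self.x = 0
        self.y = 0

p = populate(Point, {"x": 3, "y": 4})
print(p.x, p.y)  # → 3 4
```

The C# equivalent has to go through the reflection API to find and invoke the constructor- which is exactly where the surprise below comes in.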

This approach does, however, have some undesirable curve-balls that can bite you. Observe this example:

What is going on here? Why is the class instantiable, but not the struct, even though they are exactly the same? Both can be instantiated using new SimpleClass() or new SimpleStruct() in code- why does reflection make claims that don’t seem to make sense?

The reason appears to be partly how unreliably the constructor would be called in certain circumstances by the CLR, and partly that defining a parameterless constructor in such a way pragmatically makes the struct “mutable”, which can cause issues with value-type assignment (which copies the struct, rather than copying a reference to it). To stay as consistent as possible, the C# compiler does not emit a parameterless constructor for value types and instead relies on the standard CLR struct initialization- and, as a result, you can’t define one yourself. Jon Skeet provides a far more in-depth look at the underpinnings that cause this particular result.

At any rate: the reason you don’t see a parameterless constructor when reflecting on a C# value type is that C# doesn’t emit constructors on value types.

Handling and dealing with errors can be tricky. When your program crashes, you want to fix it as soon as possible. One of the most valuable pieces of information- when the error comes from a .NET program- is the exception type, as well as the stack trace, which can be used to determine the cause of the problem.

More prudently, of course, it makes sense to also log data. My personal approach is a relatively simple class that writes log files to the Application Data folder and overwrites old logs. It is implemented as a TraceListener, and merely needs to be invoked to start capturing debug output to a file. The advantage here is that Debug statements are removed in a Release build; the downside is that Debug statements are removed from a Release build. Using the Trace class instead of the Debug class for writing debug output will allow that trace output to be captured by the “DebugLog” class, listed below.
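As an aside, the same overwrite-old-logs pattern can be sketched with Python’s standard logging module. The directory choice (APPDATA, falling back to the temp directory) and the “DebugLog” logger name here are illustrative assumptions of mine, not the .NET class mentioned above:

```python
import logging
import os
import tempfile

# Hypothetical log location: the platform's app-data folder if it
# exists, otherwise the temp directory as a fallback.
log_dir = os.getenv("APPDATA") or tempfile.gettempdir()
log_path = os.path.join(log_dir, "debuglog.txt")

# mode="w" overwrites the previous log on each run, mirroring the
# "overwrite old logs" behaviour described above.
handler = logging.FileHandler(log_path, mode="w")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("DebugLog")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.debug("application started")
handler.flush()
```

The idea is the same as attaching a TraceListener: route the normal debug/trace calls through a handler that persists them somewhere retrievable.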

The debug logs captured can be exceedingly useful- a fuller implementation of a debug handler, or unhandled exception handler, could very well zip up some of those files and e-mail them to an appropriate support address.

But what about exceptions that occur that you don’t catch? Sometimes you simply don’t catch all your exceptions- and that’s reasonable; having a catch-all exception handler can cause just as many unexpected issues as catching only those exceptions you know how to handle. Thankfully, all unhandled .NET exceptions are actually logged to the Event Viewer.

An Application that was simply too exceptional for this world… hyuk hyuk.

The information is therefore available, but guiding a client or customer through the process of wandering through the Event Log is a bit much. So it makes perfect sense to let an application do this job for you. Thankfully, .NET has rich capabilities for inspecting the Event Log. I created a quick application to demonstrate (includes source code).

The principles exercised in the project could easily be used to create a zip or other archive filled with useful diagnostic information in the case of an unexpected crash; that zip could even be automatically e-mailed to support (as I mentioned above). Such a wealth of information can eliminate a lot of time-wasting back-and-forth in determining the cause of a problem. If nothing else, something like this application would make finding the proper information slightly easier (with the appropriate tweaks to make the stack trace visible).

Through quite a number of versions, Windows has included a tool called “MSConfig” for configuring your software setup. There is a lot of confusion about this tool: some recommend using it only for diagnostic purposes, then reverting the changes made and using another startup tool.

I thought I would examine exactly what the tool does. For this investigation I will be using a Windows XP SP3 Virtual Machine.

I started with the quickest route: I chose selective startup and rebooted. I got this dialog when the system started:

This dialog appears after rebooting when using msconfig’s selective startup option.

Of interest is that this is actually just MSConfig saying “hey, I done disabled some things because you said to; you can start me up and fix it if you wish”. This is probably a UI addition to cover the case where a user uses MSConfig and then forgets about it. Looking into it, MSConfig simply inserts itself into the “Run” key and autoruns at boot, presenting this warning. The other changes are made permanently, in that services and other startup programs are not started; that is to say, entries in the Run registry key, the Startup folder, and the appropriate services registry settings are deleted.

So the question is: how does MSConfig magically recreate this data when you re-enable items? Pretty simply- it copies the old values to its own data key, HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\MSConfig\startupreg. This key stores the information it has “unchecked”, as well as current state information.

This is an interesting discovery. Most startup tools that let you check and uncheck elements have ‘backup’ mechanisms that simply move data keys around: when you uncheck an item it is moved out of the Run key (for example), and when you check it again it is moved back. The listing is then simply the aggregate of the two locations, with elements in the actual Run key shown as checked and those in the backup location as unchecked. AutoRuns does this, as do most “startup” control panel programs. This is what makes things a bit weird, as many technical types suggest that MSConfig be used only for diagnostic purposes, not permanent changes. Based on this research, such a tenet is not particularly helpful: most of the same functionality is in MSConfig, and it does not actually put Windows into any special startup mode. The only basis for that conclusion is that MSConfig starts with the system after it is used to make changes and presents a warning that changes were made. Tick the checkbox in that dialog and MSConfig will not show the message again, making the end result exactly the same as any other startup tool.

As such, my conclusion is that the notion that MSConfig is a “diagnostic” tool, and that other applications are needed for permanent changes, is based not on how the program actually works but rather on (mostly reasonable) suppositions drawn from the dialog it shows when you restart. Either way, the end result is quite easily the same.

I’ll be honest: I hate most tune-up utilities. As far as my experience is concerned, they are designed for no purpose other than to separate people from their money; they provide almost basic functionality but package it up in a shiny shell so it looks appealing and professional, while not actually providing even a quarter of the functionality you would expect.

Advanced SystemCare Free

This, IMO, is just that sort of product. To make things worse, the free version (I don’t know if this is the case for their paid product) is essentially chock-full of adware. Behold this page of the installation wizard for a good example:

Advanced SystemCare is totally caring for you when we offer to install 50 toolbars, change your default search provider, and help this adware infest your system.

The above page of the setup wizard suggests I install a good set of toolbars. These toolbars give nothing but ad revenue to IObit. Arguably a fair trade, given they offer the program for free- but it essentially defaults to acceptance, because most people will not expect this and might treat it like a EULA. After installation it started some intro thing that basically advertised some of its features. This was my favourite:

Oh yay I look forward to this.

The noted image is their advertisement of “Registry Fix”, which they appear to purport is a feature of their product. Turns out it’s not available in the free version, so they have been spared what would no doubt be a rather thorough takedown of the tool. They left me with things to work with, though.

I decided to run the tool. So I launched it.

Hurry buy the Pro version before you wise up!

I just started the application and it’s already telling me about a “special offer”. This reeks of the old marketing trick of “but wait, there’s more!”- where they claim the Ginsu knives normally sell for $99.99, but a limited-time special offer means you can get them for the insane value of $9.99 if you order now. I found their in-your-face nature amusing: I’ve not even used the free product, and they are already in my face about it. So I decided to run this otherwise completely clean install through its little scan Care thing. Navigating its menus and options felt sort of like trying to guide a blind man to the bathroom at an amusement park: there is so much to take in and enjoy, but you can’t, because you have a job to do. Which is my major beef with the overuse of skinning in applications like this; it makes functionality and capability harder to view at a glance, replacing functional capabilities with formative ones.

I then ran their little Quick Scan:

If a clean install isn’t “Excellent”, I’m not sure what would be.

We noticed you weren’t using some of our other products. This is a bad thing, so we’ll take advantage of people that trust us by admonishing them for not using them and causing a reduced rating.

That’s right- the reason the clean installation got a “Fair” health status rather than “Excellent” was that I wasn’t using two of IObit’s other products. Now, forgive me if this seems a bit uncouth, but are you kidding me? This would be like Microsoft’s Windows Experience Index dropping a few points because it detected you didn’t have Microsoft Word installed. Not to mention the noted “IObit Malware Fighter” has in the past been determined to directly infringe on the excellent Malwarebytes Anti-Malware tool, as well documented here. Given that, I think I’ll give them a wide berth. It’s not even saying the problem is the lack of an anti-malware or antivirus tool; it’s directly saying that my not using their disk defragmenter and their anti-malware tool is the problem.

I wasn’t able to run their Registry Cleaner, which is a shame. I’m sure that could have been good and fun to rip apart. I’ll have to settle for other Registry Cleaners instead.

Many developers and programmers may describe their experience with a language or technology as them having “mastered” it. But is this really the case?

Take me as an example: I cannot think of a single technology that I feel comfortable saying I have “mastered”; even those I’ve worked with extensively- the C# language itself, the .NET Framework, the Java base class library- I don’t feel safe saying I’ve “mastered”. The way I see it, “mastered” implies that for any question about that technology, you will have an answer. I cannot, personally, say that I’ve reached that stage; in fact, it’s debatable whether any person reaches that stage about anything. C#-wise, I’ve worked extensively with the framework, serialization, and dynamic dispatch (via C# 4.0/5.0’s “dynamic” keyword).

“Mastered” for me implies that there is nothing left to learn for that person in that field; but when you think about it, that’s sort of impossible, particularly in regards to software development, programming, and even just computers in general. There is always more to learn, there is always ground left uncovered, and as you uncover that ground you reveal just how much you don’t know. One of my favourite quotes, attributable to Dave Ward, is:

“the more you know, the more you realize just how much you don’t know. So paradoxically, the deeper down the rabbit hole you go, the more you might tend to fixate on the growing collection of unlearned peripheral concepts that you become conscious of along the way.”

This is from Scott Hanselman’s blog post on a similar subject, titled “I’m a phony. Are you?”. In many ways I find myself identifying with it- for example, I still often try to rationalize my reception of an MVP Award as “oh, it’s nothing” or “it must have been a mistake”; that is, like the post describes and quotes, people who feel like impostors. The MVP Award is not just given away to anybody, though. The best explanation I can come up with is that my blog posts regarding C# programming (and programming topics in general) were that good. Personally, I have a hard time really believing that.

Much like Scott Hanselman describes in the post, I try to learn new programming languages- even if I don’t use them for a large project, the exposure to new concepts and problem domains can really improve my work in the languages I’ve used longer, and holds off stagnation. But even so, very seldom do I get deep enough into those languages to truly grasp their idioms.

And yet, at the same time, when I push myself into a language, I eventually catch on. For example- and I’ve written about this before- I languished with Visual Basic 6 for a very long time. I thought it was good enough- that there was nothing better. This is what I told others, but who was I trying to convince: them, or me? The latter, I suspect. I don’t know what prompted me to do so, but I moved on to C#. In fairness, even then I did at least give other languages a shot, but VB6 was the only one that felt “familiar”- that is, one I could easily write code in without looking up documentation. At some point I’d had enough: Visual Basic 6 was an old, dead language, and so too would my experience be if I didn’t do something. I moved to C# instead of VB.NET mostly out of self-interest; I still had many projects in VB6, and I (possibly erroneously) thought that learning VB.NET might cloud my ability to work on those projects. That, and C# seemed to be the de rigueur standard.

My first applications were rather simple. I made an attempt to duplicate the functionality of HijackThis first- a learning experience, particularly since I was now able to use the OO concepts I knew about but was never able to leverage properly in VB6, which only had interface inheritance. I moved forward with a simple game, which was to become the first version of BCDodger. The major stumbling block there was the use of multiple threads, handling concurrency, and handling repainting. I eventually looked at my ancient ‘Poing’ game, an Arkanoid/Breakout clone I had written in Visual Basic 6. It was terrible: the drawing code was limited, everything was ugly, and there was very little “whizbang” to the presentation. I decided to make a variant of the game in C#, using a better design and a class hierarchy. This snowballed pretty quickly, and I now have a game that can dynamically load new components from scripts (which it compiles at startup) and has a fully-featured editor that can find and show any class that implements a specific interface or derives from a given class in portions of the interface; for example, the “add block” toolbar dropdown gets populated with every single block in the game, categorized based on attributes on the appropriate classes.

With freelancing, my results have met with an overwhelmingly positive response. In one case, where I was tasked with semantic cleanup of the output of a longest-common-subsequence algorithm, there was some back-and-forth over what the best output might be given what we were working with; the client eventually responded that they had done some research and were pretty sure it was impossible- by which time I had actually finished implementing it. Needless to say, the results were very positive. The same applies to a similar project where I created the product key registration and generation logic in a self-contained class.

The nice thing is that these projects exposed me to things I might not have dealt with otherwise. WPF was a technology I didn’t use until its appeal in a given situation made it very much preferable: full-screen, resolution-independent display was a lot easier with WPF than it would have been with Windows Forms- so now I have experience with WPF as well. Naturally, the overwhelmingly boring nature of semantic cleanup of the output of a longest-common-subsequence algorithm (I nearly fell asleep writing that…) meant I wouldn’t otherwise have explored it; it turned out to be very interesting, as well as mentally taxing. There was, of course, some confusion, but each time I was able to look at the problem with freshly rested eyes (apparently staying up for 48 hours working on the same problem clouds your judgement), I found some very obvious fix or addition to resolve whatever I was working on. I guess it boils down to this: if you just write what you want, you will always work on something interesting, but not always something useful. When working on an actual project- for somebody else, or as an otherwise professional undertaking- what you work on might not always be fun, but it will always be useful to somebody. So in some respects this rubs off: with my own projects I try to make them fun to work on, but might not pay the closest attention to how useful they are; in a professional capacity, whether they are useful is the most important goal. A program that isn’t useful is useless by definition.

This isn’t to say the experience isn’t occasionally frustrating. When you provide a product or service, I like to think there is some sort of promise of workmanship behind it. On more than one occasion, a product has stopped suiting its original purpose- because it was not as scalable as it needed to be, for example, or because a database corruption caused other problems. Whatever the case, since the product no longer works for the original intent, I could never charge for the service of getting it working again- that almost seems like holding them hostage.

At the same time, however, part of me doesn’t like releasing new products, simply because it feels like a lifetime contract to keep working on and improving them. This is part of why I’ve not released the vast majority of projects I’ve worked on- and by “vast majority” we’re talking about nearly a hundred projects. Some big, some small; most of them unfinished in some fashion, but most quite usable.

But I’m getting off the topic of this post. My point is that I like to think I’m fairly awesome, but at the same time, not too awesome. There is plenty out there to learn, and the pursuit of awesomeness demands that I seek it out, absorb it, and then continue. Like a sponge that comes alive and wants to suction up all the water in the world, my task will never be completed. But that doesn’t stop me from trying anyway.

This applies in a general sense as well; it’s not just programming that I’m very interested in, but tangential topics such as gaming and whatnot. I also enjoy science fiction- statistically, as a person who works in software development, that’s more likely than not- as well as simply general knowledge. A good example is a tab I have open in my browser, where I was researching eggplant. Why? I haven’t a clue. Just something I wanted to absorb and learn about, I suppose.

Conclusion: learning about Eggplants is unlikely to make you a better programmer. But it never hurts to stay versed on vegetables.

In order to compare various languages, I will be implementing an anagram search program in a variety of languages, then testing and analyzing the resulting code. This entry describes the C# entrant in that discussion. A naive anagram search program will typically have dismal performance, because the “obvious” brute-force approach- comparing every word to every other word- is a dismal one. The more accepted algorithm is to use a Dictionary or Hashmap that maps each sorted version to the list of words which sort to it, so ‘mead’ and ‘made’ both sort to ‘adem’ and will both be present in the list in the Hashmap under that key. A pseudocode representation:

Loop through all words. For each word:

create a new word by sorting the letters of the word.

use the sorted word as a key into a dictionary/hashmap structure, which indexes Lists. Add this word (the normal unsorted one) to that list.

When complete, anagrams are stored in the hashmap structure, indexed by their sorted version. Each value is a list, and those lists containing more than one element hold words that are anagrams of one another.
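As a quick illustration, the pseudocode above maps almost directly onto a few lines of Python (the helper name find_anagrams is my own, not part of the C# entrant):

```python
from collections import defaultdict

def find_anagrams(words):
    # Map each word's sorted letters to the list of words sharing them.
    groups = defaultdict(list)
    for word in words:
        groups["".join(sorted(word))].append(word)
    # Any list with more than one entry is a set of anagrams.
    return [g for g in groups.values() if len(g) > 1]

print(find_anagrams(["mead", "made", "dame", "cat", "act", "dog"]))
# → [['mead', 'made', 'dame'], ['cat', 'act']]
```

Each language's entrant in the series is essentially this same loop, dressed in that language's idioms.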

The basic idea is to have a program that isn’t too simple, but also not too complicated. I’ll also admit that I chose a problem that could easily leverage the functional constructs of many languages. Each entry in this series will cover an implementation in a different language and an analysis of it.

One of the contributing factors to the amount of code here is the creation of a method to sort the characters in a string, as well as the work needed to use foreach() over a stream. Much of the size (compared to Python) can also be attributed to the language’s static typing.

In Visual Studio’s default Debug mode, all optimizations are disabled and the IDE hooks much of the program’s activity. My initial run took over 20 seconds to complete. Disabling the Visual Studio hosting process and switching to the Release build, as well as changing the target platform from x86 to “Any CPU”, reduced the time from 20 seconds to around 10; changing the iterator method to read the entire file at once rather than one line at a time reduced it to ~0.8 seconds.

This might actually provide some insight into why so many detractors have such an easy time thinking C# is slow. They start up Visual Studio, make a quick test program, run it, find it very slow, and write off the language almost dismissively- without realizing that the language itself is not slow, but that debugging a program tends to put a damper on things, what with the lack of optimizations needed to allow better RTTI for the debugger.

In order to compare various languages, I will be implementing an anagram search program in a variety of languages, then testing and analyzing the resulting code. This entry describes the Python entrant in that discussion. A naive anagram search program will typically have dismal performance, because the “obvious” brute-force approach- comparing every word to every other word- is a dismal one. The more accepted algorithm is to use a Dictionary or Hashmap that maps each sorted version to the list of words which sort to it, so ‘mead’ and ‘made’ both sort to ‘adem’ and will both be present in the list in the Hashmap under that key. A pseudocode representation:

Loop through all words. For each word:

create a new word by sorting the letters of the word.

use the sorted word as a key into a dictionary/hashmap structure, which indexes Lists. Add this word (the normal unsorted one) to that list.

When complete, anagrams are stored in the hashmap structure, indexed by their sorted version. Each value is a list, and those lists containing more than one element hold words that are anagrams of one another.

The basic idea is to have a program that isn’t too simple, but also not too complicated. I’ll also admit that I chose a problem that could easily leverage the functional constructs of many languages. Each entry in this series will cover an implementation in a different language and an analysis of it.

Implemented in python, the solution is surprisingly short:

Python

import time

# create empty Hashmap.
anag_dict = {}

# start the timer
startc = time.clock()

# iterate through all the lines in the file.
with open(r"D:\dict.txt", "rt") as words_fp:
    for line in words_fp:
        # lowercase the word.
        word = line.lower()
        # create the key by sorting the letters in the word.
        key = "".join(sorted(word))
        # if the given key is not present, initialize it with a new empty list.
        if key not in anag_dict:
            anag_dict[key] = []
        # add this word to the appropriate list.
        anag_dict[key].append(word)

# print the execution time.
endc = time.clock()
print("time:", endc - startc)

This solution is nice and short (as I noted) because of the first-class dictionary and list support in Python, as well as its dynamic typing. The resulting program was run without the shown comments, to prevent any parsing overhead (if applicable). Over the course of several dozen runs, this solution averaged around 1.2 seconds on my machine, running via Python 3.2.3. The Python language provides a number of powerful constructs that fold functional programming elements into a traditional imperative language design. I often hear the claim that Python is “Object-Oriented”, but I have never really figured out why that is the emphasis; “join” here is technically a method on a string, but the program itself is plain module-level code that doesn’t belong to any object. Instead I typify Python as one language among many that provides the tools for a programmer to use any number of programming styles without being forced into a specific one. That is something that should be noted and even lauded- not the fact that it happens to support Object-Oriented programming.
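To underline that multi-paradigm point, the same grouping can be expressed in a more functional style- this sketch of mine uses itertools.groupby over a sorted sequence instead of an explicit dictionary-building loop:

```python
from itertools import groupby

def sort_key(word):
    # Anagrams share the same letters, so they share the same sorted form.
    return "".join(sorted(word))

words = ["mead", "made", "dame", "cat", "act", "dog"]

# Sort by the key, then group adjacent equal keys: the same algorithm,
# expressed as a pipeline rather than mutation of a shared dict.
grouped = groupby(sorted(words, key=sort_key), key=sort_key)
result = [list(g) for _, g in grouped]
result = [g for g in result if len(g) > 1]
print(result)
```

Note that groupby only merges adjacent equal keys, which is why the sort comes first; neither style is “more Pythonic” than the other- the language simply permits both.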