Posts tagged as 'C#'

One of the fun parts of personal projects is, well, you can do whatever you want. Come up with a silly or even dumb idea and you can implement it if you want. That is effectively how I’ve approached BASeBlock. It’s sort of depressing to play now- held back by older technologies like Windows Forms and GDI+, and higher-resolution screens make it look quite awful too. Even so, when I fire it up I can’t help but be happy with what I did. Anyway, I had a lot of pretty crazy ideas for things to add to BASeBlock; some fit and were even rather fun to play- like adding a “Snake” boss that was effectively made out of bricks- others were sort of, well, strange, like my Pac-Man boss which attempts to eat the ball. At some point, I decided that the paddle being able to shoot lightning Palpatine-style wasn’t totally ridiculous.

Which naturally led to the question- how can we implement lightning in a way that sort of kind of looks believable in a mostly low-resolution way, such that if you squint at the right angle you go “yeah, I can sort of see that possibly being lightning”? For that, I basically considered the recursive “tree drawing” concept. One of the common examples of recursion is drawing a tree: first you draw the trunk, then you draw some branches coming out of the trunk, and then branches from those branches, and so on. For lightning, I adopted the same idea. The essential algorithm I came up with was thus:

From the starting point, draw a line in the specified direction at the specified “velocity”.

From that end point, choose a random number of forks. For each fork, pick an angle up to 45 degrees of difference from the angle between the starting point and the second point, and take the specified velocity and randomly add or subtract up to a maximum of 25% of it.

If any of the generated forks now have a velocity of 0 or less, ignore them.

Otherwise, recursively call this same routine to start another “lightning” from the fork position, at that fork’s angle and velocity.

Proceed until there are no forks to draw or a specified maximum number of recursions has been reached.

Of course, as I mentioned, this is a very crude approximation; real lightning doesn’t just randomly strike and stop short of the ground, and this doesn’t really seek out a path to ground or anything along those lines. Again- a crude approximation to at least mimic lightning. The result in BASeBlock looked something like this:

Now, there are a number of other details in the actual implementation- first, it is written against the game engine, so it “draws” using the game’s particle system, and it also uses other engine features to, for example, stop short on blocks and do “damage” to blocks that are impacted, and there are short delays between each fork (which, again, is totally not how lightning works, but I’m taking creative license). The result does look far more like a tree when you look at it, but the animation and how quickly it disappears (paired with the sound effect) is enough, I think, to at least make it “passably” lightning.

But all this talk and no code, huh? Well, since this starts from the somewhat typical “draw a shrub” concept, applied recursively and with some randomization, let’s just build that- the rest, as they say, will come on its own. And by that, I suppose they mean you can adjust and tweak it as needed until you get the desired effect. Or maybe you want to draw a shrubbery; I’m not judging. With that in mind, here’s a quick little method that does this against a GDI+ Graphics object. Why a GDI+ Graphics object? Well, there isn’t really any other way of doing standard Bitmap drawing on a Canvas-type object as far as I know. Also, as usual, I just sort of threw this together, so I didn’t have time to paint it and it might not be to scale or whatever.
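A sketch of that routine might look like the following. The 45-degree spread and the ±25% velocity jitter come straight from the steps above; the recursion-depth cap and the choice of two-to-three forks per segment are knobs I’ve picked arbitrarily here:

```csharp
using System;
using System.Drawing;

public static class ShrubDrawer
{
    private static readonly Random rnd = new Random();

    //recursively draws a branching "shrub" (or, if you squint, lightning) starting at pPosition,
    //heading at pAngle (radians) with pVelocity pixels per segment.
    public static void DrawShrub(Graphics g, Pen usePen, PointF pPosition,
        double pAngle, double pVelocity, int pRecursionsLeft)
    {
        //a fork with no velocity left, or hitting the recursion cap, ends this branch.
        if (pVelocity <= 0 || pRecursionsLeft <= 0) return;
        //draw a line in the specified direction at the specified "velocity".
        PointF endPoint = new PointF(
            (float)(pPosition.X + Math.Cos(pAngle) * pVelocity),
            (float)(pPosition.Y + Math.Sin(pAngle) * pVelocity));
        g.DrawLine(usePen, pPosition, endPoint);
        //choose a random number of forks; each fork deviates up to 45 degrees
        //from the current angle and gains or loses up to 25% of the current velocity.
        int forkCount = rnd.Next(2, 4);
        for (int i = 0; i < forkCount; i++)
        {
            double forkAngle = pAngle + (rnd.NextDouble() * 2 - 1) * (Math.PI / 4);
            double forkVelocity = pVelocity + (rnd.NextDouble() * 2 - 1) * (pVelocity * 0.25);
            DrawShrub(g, usePen, endPoint, forkAngle, forkVelocity, pRecursionsLeft - 1);
        }
    }
}
```

Calling it with something like `DrawShrub(g, Pens.Blue, new PointF(200, 300), -Math.PI / 2, 40, 7)` grows a shrub upward from the bottom of a 400-pixel-wide canvas.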

What amazing beautiful output do we get from this? Whatever this is supposed to be:

It does sort of look like a shrubbery I suppose. I mean, aside from it being blue, that is. It looks nothing like lightning, mind you. Though in my defense if electricity tunnels through certain things it often leaves tree-like patterns like this. Yeah, so it’s totally electricity related.

This is all rather unfulfilling, so as a bonus- how about making lightning in Photoshop:

Step 1: Start with a Gradient like so

Next, Apply the “Difference Clouds” filter with White and Black selected as Main and Secondary Colours.

Invert the image colours, then Adjust the levels to get a more pronounced “Beam” as shown.

Finally, add a layer on top and use the Overlay filter to add a vibrant hue- Yellow, Red, Orange, Pink, whatever. I’m not your mom. I made it cyan for some reason here.

BASeCamp Network Menu, which I wrote about previously, was a handy little tool for connecting to my VPN networks. It had one disadvantage, however- it was clearly out of place, with a style that looked “outdated” next to the OS theme:

BCNetMenu displaying available VPN connections.

As we can see above, the style it uses is more like Office 2003 or Windows XP. Since the program is intended for Windows 10, that was a bit of a style issue, I think. None of the other Renderers really fit the bill, so I set about writing my own “Win10”-style menu foldout ToolStrip Renderer; since the intent was merely to draw this one menu, I’ve skipped certain features to make it a bit easier.

Windows 10 uses an overwhelmingly “flat” style. This worked in my favour, since that makes it fairly easy to draw. Windows Forms- and thus the ContextMenuStrip one attaches to the NotifyIcon- allows overriding the standard drawing logic with a ToolStripRenderer implementation, so the first step was to create a class derived from ToolStripSystemRenderer. It attempts to mimic the appearance of many Windows 10 foldouts by first drawing a dark background, then drawing a colour over top. However, the colour over top is where things were less clear: we want to use the Accent Color defined in the Windows Display Properties. How do we find that?
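In skeleton form, that override amounts to something like this. A sketch only: the dark base colour and the 128 alpha for the accent wash are guesses on my part, and the accent colour is passed in here, whereas the real renderer retrieves it from DWM as described next:

```csharp
using System.Drawing;
using System.Windows.Forms;

//sketch of a "Win10"-style renderer: an opaque dark base with the accent colour washed over top.
public class Win10MenuRenderer : ToolStripSystemRenderer
{
    private readonly Color _accentWash;

    public Win10MenuRenderer(Color accentColor)
    {
        //half-transparent wash of the accent colour; 128 is an arbitrary pick.
        _accentWash = Color.FromArgb(128, accentColor);
    }

    protected override void OnRenderToolStripBackground(ToolStripRenderEventArgs e)
    {
        //first, the dark background...
        using (var darkBrush = new SolidBrush(Color.FromArgb(36, 36, 36)))
            e.Graphics.FillRectangle(darkBrush, e.AffectedBounds);
        //...then the accent colour over top.
        using (var accentBrush = new SolidBrush(_accentWash))
            e.Graphics.FillRectangle(accentBrush, e.AffectedBounds);
    }
}
```

Assigning an instance to the ContextMenuStrip’s Renderer property is all it takes to swap the drawing logic in.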

As it happens, dwmapi.dll has us covered. However, it bears warning that this is currently an undocumented function- we need to reference it by ordinal, and since it’s undocumented, it could be problematic when it comes to future compatibility. It’s very much a “use at your own risk” function:
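The declaration looks roughly like this; ordinal 127 is where DwmGetColorizationParameters has been observed to live, but since it’s undocumented, that placement carries no guarantee (the DWMCOLORIZATIONPARAMS struct it references is defined next):

```csharp
using System.Runtime.InteropServices;

internal static class DWMNativeMethods
{
    //undocumented: DwmGetColorizationParameters is referenced from dwmapi.dll by ordinal.
    //127 is the ordinal it has been observed at; being undocumented, this could change.
    [DllImport("dwmapi.dll", EntryPoint = "#127")]
    internal static extern void DwmGetColorizationParameters(ref DWMCOLORIZATIONPARAMS parms);
}
```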

This function uses DWMCOLORIZATIONPARAMS, which, of course, we need to define:

public struct DWMCOLORIZATIONPARAMS
{
    public uint ColorizationColor,
        ColorizationAfterglow,
        ColorizationColorBalance,
        ColorizationAfterglowBalance,
        ColorizationBlurBalance,
        ColorizationGlassReflectionIntensity,
        ColorizationOpaqueBlend;
}

Once defined, we can now create a helper method that will give us a straight-up color value:

private static Color GetWindowColorizationColor(bool opaque)
{
    DWMCOLORIZATIONPARAMS parms = new DWMCOLORIZATIONPARAMS();
    DWMNativeMethods.DwmGetColorizationParameters(ref parms);
    return Color.FromArgb(
        (byte)(opaque ? 255 : (parms.ColorizationColor >> 24) / 2),
        (byte)(parms.ColorizationColor >> 16),
        (byte)(parms.ColorizationColor >> 8),
        (byte)parms.ColorizationColor
    );
}

We allow for an “Opaque” parameter to specify whether the caller wants the Alpha value or not; of course, the caller could always do this itself, but the entire point of functions is to reduce code, so we may as well put it in this way. It takes the 32-bit integer representing the colour and splits it into its appropriate byte-sized components through shift operators, and uses those to construct an appropriate Color to return.

Using this colour to paint over an opaque dark background (the colour used with the Taskbar right-click menu, for example) gives the following menu, using the new Windows 10 Renderer I created:

Not a bad representation, if I say so myself! Not perfect, mind you, but it certainly fits better than the Professional ToolStrip Renderer, so I don’t think calling it a success would be entirely out of band. A more interesting problem presents itself, however- when configured in the display properties to have transparency effects, the default Windows 10 Network foldout has a “Blur” effect. How can we do the same thing?

After unsuccessful experiments with DwmExtendFrameIntoClientArea and related functions, I eventually stumbled on SetWindowCompositionAttribute(). This can be used to set an accent on a window directly- including setting Blur Behind. Of course, as with any P/Invoke, one needs to prepare oneself for the magical journey with some declarations:
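The declarations end up looking roughly like this. SetWindowCompositionAttribute and its structures are undocumented, so the layouts and accent-state values below come from observation rather than an official reference, and the EnableBlur helper is a sketch of the shape such a wrapper takes:

```csharp
using System;
using System.Runtime.InteropServices;

internal enum AccentState
{
    ACCENT_DISABLED = 0,
    ACCENT_ENABLE_GRADIENT = 1,
    ACCENT_ENABLE_TRANSPARENTGRADIENT = 2,
    ACCENT_ENABLE_BLURBEHIND = 3
}

[StructLayout(LayoutKind.Sequential)]
internal struct AccentPolicy
{
    public AccentState AccentState;
    public int AccentFlags;
    public int GradientColor;
    public int AnimationId;
}

[StructLayout(LayoutKind.Sequential)]
internal struct WindowCompositionAttributeData
{
    public int Attribute; //19 = WCA_ACCENT_POLICY
    public IntPtr Data;
    public int SizeOfData;
}

internal static class User32NativeMethods
{
    [DllImport("user32.dll")]
    internal static extern int SetWindowCompositionAttribute(IntPtr hwnd,
        ref WindowCompositionAttributeData data);

    //enables (or disables) blur-behind on the given window handle.
    internal static void EnableBlur(IntPtr hwnd, bool enable)
    {
        var accent = new AccentPolicy
        {
            AccentState = enable ? AccentState.ACCENT_ENABLE_BLURBEHIND
                                 : AccentState.ACCENT_DISABLED
        };
        IntPtr accentPtr = Marshal.AllocHGlobal(Marshal.SizeOf(accent));
        try
        {
            Marshal.StructureToPtr(accent, accentPtr, false);
            var data = new WindowCompositionAttributeData
            {
                Attribute = 19, //WCA_ACCENT_POLICY
                Data = accentPtr,
                SizeOfData = Marshal.SizeOf(accent)
            };
            SetWindowCompositionAttribute(hwnd, ref data);
        }
        finally
        {
            Marshal.FreeHGlobal(accentPtr);
        }
    }
}
```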

If the Blur setting is enabled, the EnableBlur function is called to enable blur; otherwise, it is called to disable it. In both cases, it is passed the Handle of the ToolStrip that is opening, which, apparently, is the Window handle of the actual menu’s “Window”, so it works as intended:

I also found that darker colours being drawn seemed to be “more” transparent. The best I could determine was that there is some kind of translucency key; the closer to black, the more “clear” the glass appears. References I found suggest that SetLayeredWindowAttributes() could be used to adjust the colour key, but I wasn’t able to get it to work as I intended; since the main effect is that the “Disabled” text, which is gray, appears like more “clear” glass within the coloured blurred menu, I found it to be fine.

It would still be ideal to write additional custom draw routines to make checked/selected items in the listing more apparent. As it stands, the default “Check” draw routine appears more like an overlay on the top left of the icon, and it’s easy to miss; it would be better to custom draw the items entirely, and instead of a checkmark perhaps highlight the Icon in some fashion to indicate selection.

Yet another new feature introduced in C# 6 is Dictionary Initializers. These are another “syntax sugar” feature that shortens code and makes it more readable- or, arguably, less readable if you aren’t familiar with the feature.

Let’s say we have a Dictionary of countries, indexed by an abbreviation. We might create it like so:

Dictionary<String, Country> countries = new Dictionary<String, Country>()
{
    {"CAN", new Country(Name: "Canada", Population: 35160000, Area: 9985000)},
    {"USA", new Country(Name: "United States of America", Population: 318900000, Area: 9834000)},
    {"MEX", new Country(Name: "Mexico", Population: 122300000, Area: 1973000)}
};

This is the standard approach to initializing dictionaries as used in previous versions- at least, when you want to populate them at the point of creation. C# 6 adds “dictionary initializers”, which attempt to simplify this:

Dictionary<String, Country> countries = new Dictionary<String, Country>()
{
    ["CAN"] = new Country(Name: "Canada", Population: 35160000, Area: 9985000),
    ["USA"] = new Country(Name: "United States of America", Population: 318900000, Area: 9834000),
    ["MEX"] = new Country(Name: "Mexico", Population: 122300000, Area: 1973000)
};

Here we see what is effectively a series of assignments to the standard this[] indexer. It’s usually called a Dictionary Initializer, but realistically it can be used to initialize any class that has an indexed property like this. For example, it can be used to construct “sparse” lists which have many empty entries, without a bunch of commas (one caveat: the indexer has to accept the index- List<T>’s own indexer throws for indices at or beyond Count, so an example like the following really calls for a collection type whose indexer can grow it):

List<Country> countries = new List<Country>()
{
    [0] = new Country(Name: "Canada", Population: 35160000, Area: 9985000),
    [300] = new Country(Name: "Sweden", Population: 9593000, Area: 450295)
};

The “Dictionary Initializer” which seems more aptly referred to as the Indexing initializer, is a very useful and delicious syntax sugar that can help make code easier to understand and read.

It should not come as any sort of surprise that verifying that software works properly relies quite heavily on testing. While in an academic environment it can be possible to prove that an algorithm will always work in a certain way, it is much more difficult to prove the same assertions about a piece of code implementing that algorithm; even more so when we consider the possibility of user interaction, which adds an arbitrary variable to the equation.

When dealing with large or complicated projects, testing needs to be done in layers. In addition to testing by using the same interface as the user and ensuring it all acts as expected, it is also reasonable to add code-level tests. These tests will run functions and methods or use classes within the program(s) and verify that they work as intended. This can be used quite fruitfully in combination with normal testing to identify regressions more easily.

Some languages, such as D, include built-in, compiler/language-level support for Unit Tests. In D you can define a Unit Test within the block that you want to test, usually it is tied to a Class instance, as shown in the documentation examples.

Different languages, platforms, and frameworks will tend to do tests differently. Most languages and platforms don’t have the sort of Unit Test functionality we see in D, but as the industry has matured, and particularly with design concepts like test-driven development being pushed to the forefront, most development platforms have either had Unit Test functionality added to them by third parties, or had it added as an integral component.

Unit Test Projects

In Visual Studio, Unit Test Projects can be added to a solution. Test Classes and Test Methods are added to that project, and they can reference your other projects in order to test them. This is a particularly good approach for testing libraries, such as BASeParser. BASeParser is a relatively straightforward recursive-descent expression parser/evaluator- one of my first C# projects, based on a VB6 library of the same function and name. This is actually a very straightforward thing to test: a test method can simply take a set of expressions paired with expected results, and verify that evaluating each expression produces the expected value:
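Such a test can look something like the following. To be clear, this is an illustrative sketch: the expressions and expected values are arbitrary, and ExpressionEvaluator.Evaluate is a stand-in name rather than BASeParser’s actual entry point:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ExpressionTests
{
    [TestMethod]
    public void TestExpressionResults()
    {
        //pairs of expression and expected result (values chosen for illustration).
        var testItems = new Dictionary<String, double>()
        {
            {"5+3", 8},
            {"5+3*2", 11},
            {"(5+3)*2", 16},
            {"2^10", 1024}
        };
        foreach (var testItem in testItems)
        {
            //hypothetical evaluation call; the real BASeParser entry point may differ.
            double result = ExpressionEvaluator.Evaluate(testItem.Key);
            Assert.AreEqual(testItem.Value, result, 0.0001,
                "Expression \"" + testItem.Key + "\" gave the wrong result.");
        }
    }
}
```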

By running a test- or integrating a test into your build process- you can have the new version of the program verified to provide correct results. If you rewrite a critical part of the program to improve performance, you don’t want to make the results wrong- having these sorts of checks and balances in place can let you know right away if a commit you made has just caused regressions or broken expected behaviour. You can run these tests directly from the IDE, which is a very useful feature. A small marker appears next to the Method header, and you can click it to get test options- run the tests, run all tests in a class, debug tests, etc. The Debug capability is particularly valuable, as it can help you step into a test and determine why something is failing. For example, the test shown above helped me to trace down a problem with operator precedence and function evaluation. And these are very basic tests, as well.

I’m also working on Unit Test related features and capabilities, which includes integration into Jenkins, but for obvious reasons I can’t be going and showing you that, now can I? 😉

C# has a number of useful comparison interfaces that we can use and implement, so this would seem to be a redundant post, wouldn’t it? We can use IComparable, IComparable<T>, or even the .Equals() method and compare two objects relatively sufficiently.

However, in the context of something like a Unit Test, if we want to compare/assert that two objects are equal, we would ideally be able to output what is different between them- if we have two objects with many properties and merely compare them, then the developer will have to debug to figure out which properties actually were different, whereas we pretty much have access to that information in the Unit Test. At the same time, we don’t want to have to write sophisticated comparison routines for every object type. Instead, it might be reasonable to try a more generic approach: if we want to compare two instances of objects, we could merely compare their public, readable properties. While this won’t get everything, it means we can store the actual differences, which can be expressed as part of debugging output or the output of a unit test.

An Array of issues

The major issue I encountered was the handling of Arrays. I previously wrote about the task of serializing Arrays, and the tricky part of dealing with Arrays is the same here: how we manage the Rank. Another issue in this situation is how we compare Arrays in a meaningful way if they have different ranks or dimensions. I cannot really think of a good way to do so, so what I’ve done is merely ignore properties that are arrays. This means that, technically, the solution is wrong (since the arrays between two instances of an object may differ), but since I’m actually going to be using this code in a Unit Test and would rather not spend dozens of hours merely working on a way to compare objects, I think it will work to get started.

Effectively, we can use reflection to grab each property, then compare the values of the two properties in the two instances being compared. If they are different we can add the property to a list of Strings to return, indicating the properties that differ; otherwise, we don’t. Once returned, we can use another helper function to use the list to construct a more useful set of the differences between the two elements.


public class ComparisonHelper
{
    //compares the public readable properties of two instances and returns
    //the names of those properties whose values differ.
    public static List<String> CompareProperties<T>(T FirstItem, T SecondItem, IEnumerable<String> pIgnoreProperties)
    {
        List<String> CompareResults = new List<String>();
        var IgnoreProps = new HashSet<String>(pIgnoreProperties ?? Enumerable.Empty<String>());
        foreach (var lookproperty in typeof(T).GetProperties())
        {
            try
            {
                //arrays are skipped entirely, as discussed above.
                if (lookproperty.PropertyType.IsArray) continue;
                //don't compare these properties if the property name is in our ignore list.
                if (!IgnoreProps.Contains(lookproperty.Name))
                {
                    Object firstValue = lookproperty.GetValue(FirstItem, new Object[] { });
                    Object secondValue = lookproperty.GetValue(SecondItem, new Object[] { });
                    bool different = false;
                    if (firstValue is IComparable<T>)
                    {
                        different = ((IComparable<T>)firstValue).CompareTo((T)secondValue) != 0;
                    }
                    else if (firstValue is IComparable)
                    {
                        different = ((IComparable)firstValue).CompareTo(secondValue) != 0;
                    }
                    else
                    {
                        different = !Object.Equals(firstValue, secondValue); //null-safe equality.
                    }
                    if (different)
                    {
                        CompareResults.Add(lookproperty.Name);
                    }
                }
            }
            catch (Exception)
            {
                //likely a property with indexers, or maybe write-only or somesuch, so we'll ignore it.
            }
        }
        return CompareResults;
    }

    public static bool CompareObjects<T>(T FirstItem, T SecondItem)
    {
        return CompareProperties(FirstItem, SecondItem, null).Count == 0; //equal if no differing properties.
    }

    /// <summary>
    /// Accepts a result List of differences from the CompareProperties routine, and creates a nicely (hopefully) formatted listing of the values of the properties that differ between the two objects.
    /// </summary>
    /// <typeparam name="T">Type of the objects which were compared.</typeparam>

    /// <summary>
    /// compares two FixedArrays.
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <param name="FirstArray">First Array</param>
    /// <param name="SecondArray">Second Array</param>
    /// <returns>An int list indicating indices that are different. If the two arrays are different sizes, then the result will be all the indices of the larger FixedArray that are larger than the size of the smaller FixedArray.</returns>

Unit Testing. Books have been written about it. Many, many books. About Unit testing; about Testing methodologies, about Unit Testing approaches, about Unit Test Frameworks, etc. I won’t attempt to duplicate that work here, as I haven’t the expertise to approach the subject to anywhere near that depth. But I felt like writing about it. Perhaps because I’m writing Unit Test code.

Currently what I am doing is writing Unit tests, as mentioned. These are written using C# (since they are testing C#, so it makes sense), and they make use of the built-in Visual Studio/NET Unit Testing features.

In my mind there are some important ‘cornerstones’ of Unit Tests which make them useful. Again, I’m no expert with Unit tests but this is simply what I’ve reasoned as important through my reading and interpretations on the subject so far.

Coverage

Making sure tests engage as much of your codebase as possible applies to any type of testing of that code, and Unit Tests are no exception to that rule. It is important to make sure that Unit Tests execute as much of the code being tested as possible, and- as with any test- they should make efforts to cover corner cases as well.

Maintainability

When it comes to large software projects, one of the more difficult things to do is encourage the updating and creation of new Unit Tests so that Coverage remains high. With looming deadlines, required quick fixes, and frequent “emergency fixes”, it is entirely possible for Unit testing code to quickly get out of date. This can cause working code to fail a Unit Test, or failing code to pass one, because of changing requirements or redesigns- or because code simply isn’t being tested at all.

Automation

While this in part fits into the Maintainability aspect, this pertains more to automation of build processes and the like. In particular, with a Continuous Integration product such as Jenkins or TeamCity, it is possible for any change to Source Control to trigger a Build Process- and even deploy the software, automatically, into a testing environment. Such a Continuous Integration product could also run Unit Tests on the source code or resulting executables and verify operation, marking the build a failure if tests fail, which can be used as a jumping-off point to investigate what recent changes caused the problem. This can encourage maintenance (if a code change causes a failure, then either that code is wrong or the unit test is wrong and needs to be updated) and is certainly valuable for trying to find defects sooner rather than later, to minimize damage in terms of Customer Data and particularly in terms of Customer Trust (and company politics, I suppose).

I repeat once more that I am no Unit Test magician. I haven’t even created a completely working Unit Test Framework that follows these principles yet, but I’m in the process of creating one. There are a number of Books about Unit Tests- and many books that cover Software Development Processes will include sections or numerous mentions of Unit Test methodologies- which will likely be from much more qualified individuals than myself. I just wanted to write something and couldn’t write as much about Velociraptors as I originally thought.

Windows 10 has been out for a few weeks now. I’ve upgraded one of my backup computers from 8.1 to Windows 10 (the Build I discussed a while back).

I’ve tried to like it. Really. I have. But I don’t, and I don’t see myself switching my main systems any time soon. Most of the new features I don’t like, and many of them cannot be shut off very easily. Others are QOL impacts: not being able to customize my Title bar colour, for example, and the severe removal of customization options in general, I cannot get behind. I am not a fan of the Start Menu, nor do I like how they changed the Start screen to mimic the new Start Menu. I understand why these changes were made- primarily due to all the Windows 8 naysayers- but that doesn’t mean I need to like them.

Windows 10 also introduces the new Universal App Framework. This is designed to allow the creation of programs that run across Windows platforms. “Universal Windows Application” referring to the application being able to run on any system that is running Windows 10.

If I said “I really love the Windows 8 Modern UI design and Application design”, I would be lying. Because I don’t. This is likely because I dislike Mobile apps in general and find that style of application, not only brought to the desktop but bringing along the same type of limitations, distasteful. I tried to create a quick Win8-style program based on one of our existing Winforms programs but hit a brick wall, because I would have had to extract all of our libraries, turn them into a web service, and have it running in the background of the program itself. I wasn’t able to find a way to say “I want a Windows 8 style with XAML, but I want to be able to have the same access as a desktop program”. It appeared that this might have been rectified with the Windows 10 framework, as it is possible to target a Universal app and make it, errm- not universal- by setting it to be a Desktop Application. I hoped so, though I have as yet been unable to determine whether that is possible, and it is looking more and more like it isn’t. This makes my use case- to provide a Modern UI ‘App’ that makes use of my company’s established .NET Class Libraries- impossible, because for security reasons you cannot reference standard .NET Assemblies that are not in the GAC. I was thinking they might work if they were signed in some fashion, but I wasn’t able to find anything that would indicate that is the case.

The basic model, as I understand it, mimics how typical Smartphone “apps” work. Typically they have restricted local access and will access remote web services in order to perform more advanced features. This is fairly sensible, since most smartphone apps are based off of web services. Of course, the issue is that this means porting any libraries that use those sorts of features to portable libraries which will access a web service for the required task. (For a desktop program, I imagine you could have the service running locally.)

I’m more partial to desktop development. Heck right now, my work involves Windows Forms (beats the mainframe terminal systems the software replaces!) and even moving to WPF would be a significant engineering effort, so I keep my work with WPF and new Modern UI applications ‘off-the-clock’.

Regardless of my feelings regarding smartphone ‘apps’ or how it seems desktop has been taking a backseat or even being replaced (it’s not, it’s just not headline worthy), Microsoft has continued to provide excellent SDKs and Developer tools and documentation, and is always working to improve both. And even if there is a focus on the Universal Framework and Universal Applications, their current development tools still provide for the creation of Windows Forms applications, allowing the use of the latest C# features for those who might not have the luxury of simply jumping to new frameworks and technologies willy-nilly.

For those interested in keeping up to date who also have the luxury of time to do so (sigh!) The new Windows development Tools are available for free. One can also read about What’s New within the Windows development ecosystem with Windows 10, And there are also Online courses regarding Windows 10 at the Microsoft Virtual Academy, as well as videos and tutorials on Channel9.

Computer/Programming books. I have a shelf full of them. When I was a teenager my computer time was limited, so I would often read the computer books I had or borrowed. Recently I decided to expand my collection and add some classics: I bought “Clean Code” and “Code Complete”, both well-regarded books about software development practices. I’ve barely cracked open either of them, sadly. The only time I can think of doing so was when I had a power outage!

Following this proud tradition, I decided to add more books to my collection. In particular- the biggest one was the entire four-volume set of Donald Knuth’s The Art of Computer Programming. I also got “Programming F# 3.0” and “Programming C# 5.0”. And the overwhelming question to me is “why”…. When will I read them? I’ve hardly even touched the two I bought over 2 years ago!

Books versus Internet

Programming/Computer Books do have a rather hefty competition in the form of the Internet. I think there is an argument to be made for books, however, even compared to eBook devices such as Kindles. There is an ineffable quality to a good book, and I think it being physical makes reading it more “personal” in some way.

The aforementioned titles have since arrived, and I’ve tried to get started with them. I’ve pushed through the first volume of “The Art of Computer Programming” as best I can. I can see why it has become something of a Programmer’s Bible, and why Knuth is so well-regarded in the industry.

One of the biggest advantages of Donald Knuth’s “Art of Computer Programming” is that its content is effectively timeless- he mentions therein that it is designed to be accessible, and from what I’ve read so far (not much, admittedly!) he holds true to that very well. This is in contrast to what could be called more timely titles. For example, “Programming F# 3.0” will eventually be “outdated” in the sense that there will be future versions of F# released; same with “Programming C# 5.0”. Both of these are excellent titles, but will they still be useful in fifty years? Arguably, it doesn’t really matter, since there will be updated versions, and a good software developer tries to keep up to date on the most recent technologies and platforms. But there is something to me -unsavoury- about content, like books, that eventually becomes out of date.

On the other hand, an argument could be made that, regardless of content, books have a “timeless” quality to them. For example, I still have books covering Windows 3.1 and Visual Basic 2.0, as well as an ancient college textbook covering BASIC. I find that these books are far more valuable for information about the topics they cover than even the Internet; while what they cover may be somewhat outdated, information about older software is much more difficult to find online than more up-to-date information, so there is certainly some value there.

For some time I have been working on and off on an attempt to create a useful, powerful, and easy to use library to help with serializing and deserializing data instances to and from XML. Without repeating too much, the core concept is to have an IXmlPersistable interface implemented by classes, or to have an IXmlPersistableProvider implementation made available that the library can associate with that type to save/load instances to and from an XElement node.

For the most part, it has gone quite well. However, I hit some interesting snags with regards to Generic types- particularly, Generic types that I want to write IXmlPersistableProvider implementations for. For example, let’s say we want to support serializing and deserializing a List<T>. We obviously cannot implement every single permutation of IXmlPersistableProvider<List<T>>, since implementing an interface requires a concrete class definition. What this means is that if I wanted to support saving/reading a List via a PersistableProvider, I would need an implementation of IXmlPersistableProvider<List<String>> and so on for each possible type- which means, of course, that types I don’t know about at compile time could never be included. Unfortunately the logic is too complex to embody via generics- really, I’d want an implementation of IXmlPersistableProvider<List<T>> where T was any type that itself either implemented IXmlPersistable or for which there was an available IXmlPersistableProvider<T>. So clearly I needed to find an alternative approach.

My first consideration/implementation was to not have any sort of Provider at all; instead, I created a SaveList<T> method and a corresponding ReadList<T> method, which would attempt to save each element T of the List by using the generic SaveElement<T> method I created:

public static XElement SaveList<T>(List<T> SourceData, String pNodeName)
{
    //without a Func as in the overload, we'll try to "build" our own function.
    //the function we build will close over an instance of IXMLSerializationProvider (or the type's GetXMLData method if it implements IXMLSerializable).
    //basically we want to create the function to handle the loading of classes that implement the interface or have a defined provider.
    Func<T, XElement> buildfunc = (elem) => SaveElement(elem, pNodeName);
    return SaveList<T>(buildfunc, pNodeName, SourceData);
}
/// <summary>
/// Saves a list to an XElement.
/// </summary>
/// <typeparam name="T">Type of the list to save.</typeparam>
/// <param name="SourceTransform">Function that takes the Type T item and saves it to an XElement.</param>

However, I soon discovered that there was a bit of a caveat to that approach. In my case I discovered it after expanding to create a similar construct for Dictionaries. If there was a Dictionary where either the key or the value type was a List, then it wouldn’t work properly- as there is no implementation of the providers or interface to save/read a List! So it was back to the drawing board. It was then that I decided that while I couldn’t implement an all-encompassing IXmlPersistableProvider for the List&lt;T&gt; type, I could have an implementation that covered IList and IDictionary, which are interfaces implemented by the List and Dictionary generic types respectively. There is still a caveat in that a Dictionary&lt;String,List&lt;String&gt;&gt; will save properly, but calling ReadElement will instead return a Dictionary&lt;String,IList&gt;; for the moment I cannot determine a reasonable way to do otherwise within the framework I have created. For now I’m thinking that can be a sort of advisory for the actual persisting code to keep in mind if it ever needs to save/load nested generic types.
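A sketch of that non-generic workaround, and why the element type gets lost on the way back (method names and the element layout are my assumptions; the real library dispatches to its SaveElement/ReadElement machinery rather than storing raw values):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Xml.Linq;

// One provider keyed on the non-generic IList interface rather than List<T>,
// so it covers List<string>, List<int>, and lists of unknown types alike.
public static class ListPersistence
{
    public static XElement SaveList(IList source, string pNodeName)
    {
        var node = new XElement(pNodeName);
        foreach (var item in source)
            node.Add(new XElement("Item", item)); // real code would dispatch per-item
        return node;
    }

    // On read, the original element type is gone: the XML does not know it was
    // a List<string>, so all we can promise the caller is an IList.
    public static IList ReadList(XElement node)
    {
        var result = new List<object>();
        foreach (var child in node.Elements("Item"))
            result.Add(child.Value);
        return result;
    }
}
```

This is exactly the asymmetry described above: a Dictionary&lt;String,List&lt;String&gt;&gt; saves fine, but the read side can only hand back IList values.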

Null

This will be a bit of a tangent, but I thought I’d cover null itself. Most of us, I expect, have an understanding of what “Null” is (or rather, what it isn’t). Nonetheless, the idea of null has some controversy surrounding it. Many programming languages use a concept of null to represent nothing; Visual Basic’s equivalent to Null is in fact called Nothing. There is a “movement” of sorts which claims that programming would be less trouble-prone if we got rid of Null entirely. Generally, the idea is that many programming exceptions- Null Reference Exceptions, null pointer errors- stem from null; therefore, the logic goes, by removing the possibility of null we remove the possibility of those errors, and so eliminate the problem.

I think that eliminating nulls to simplify programming is like trying to eliminate 0 to simplify math; we created the concept because it made it easier to express abstract thinking. In eliminating nulls, we simply end up with other bugs. The only real alternative would be to force no instance to ever be null by substituting a blank “default” instance instead. But that just raises further issues, both in terms of how that default gets set (what is the default state of a Socket?) as well as what problems it would actually solve- instead of a method call on an erroneously-null instance throwing a NullReferenceException, that instance will be a default instance, and the call will not fail there but may cause other errors deeper within the method. It is trading one set of errors for another set entirely, and the new set will likely be even harder to trace than the NullReferenceException was.
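A contrived sketch of that trade-off (the Config type and its members are invented for illustration): with null, the failure happens at the first bad access and points at the real problem; with a blank default instance, the call “succeeds” and the bad value travels further before anything notices.

```csharp
using System;

// Hypothetical type using the "blank default instance" approach.
public class Config
{
    public static readonly Config Empty = new Config { ConnectionString = "" };
    public string ConnectionString { get; set; }
}

public static class Demo
{
    public static string Describe(Config c)
    {
        // With null: c.ConnectionString would throw NullReferenceException right
        // here, at the site of the mistake. With Config.Empty substituted: we
        // quietly hand an empty connection string to code much further away.
        return (c ?? Config.Empty).ConnectionString;
    }
}
```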

Now that I’ve got that out of my system, we still don’t want to deal with NullReferenceExceptions if we can avoid it. Thankfully, it is fairly easy to program defensively if you suspect an instance variable might be null:

public void RunPlugin(Plugin p)
{
    if (p == null) throw new ArgumentNullException("p");
    ICorePlugin cp = p as ICorePlugin;
    if (cp != null)
    {
        CorePlugins[cp] = p.Name;
        cp.Initialize(this);
    }
    p.Load(this);
    LoadedPlugins.Add(p);
}

In this instance we have a Plugin abstract class, but it is also possible that a Plugin implements an extended interface, ICorePlugin. If we find it is a core plugin we need to take extra steps- for obvious reasons we cannot simply index into the dictionary or call a method on the instance if it doesn’t implement the interface, since the as cast will return null in that case. With the null-conditional operator, we can replace the explicit null checks with- well, the operator:

public void RunPlugin(Plugin p)
{
    if (p == null) throw new ArgumentNullException("p");
    ICorePlugin cp = p as ICorePlugin;
    String initname = CorePlugins?[cp];
    cp?.Initialize(this);
    p.Load(this);
    LoadedPlugins.Add(p);
}

As we can see here, after casting to ICorePlugin with as, we simply access the possibly-null value via the null-conditional operator, both for the member call and for the indexer access. Using a question mark for this operator makes semantic sense given that the null-coalescing operator is ??.

The ?. usage has become known to some as the “Elvis operator”- the stroke of the question mark, I’m told, looks like his hair. Presumably this naming scheme takes after the “spaceship” operator <=> in that it is named after what it looks like rather than what it does. Not that that is a bad thing, but I’m of the mind that it makes sense to name operators by what they do; then again, once you get past the jargon such a name becomes far less amusing, so I suppose the funny names keep things interesting.

One interesting consideration is that it is really the ? that is the operator on its own, and it can be applied in a few places for implicit null checks. ?. gives you an implicit null check when accessing an instance member, and something similar works for index access (as shown above with ?[ ]).
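Both forms compose naturally with the ?? operator mentioned earlier. A small self-contained example (the method and its name are mine, purely for illustration):

```csharp
using System;
using System.Collections.Generic;

public static class NullOps
{
    public static int FirstLength(List<string> items)
    {
        // items?[0] yields null instead of throwing when items is null;
        // ?.Length then short-circuits on a null element, and ?? -1
        // supplies a fallback when the whole chain came up null.
        return items?[0]?.Length ?? -1;
    }
}
```

Note that ?[ ] only guards the list reference itself being null; an out-of-range index on a non-null list still throws as usual.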

The flip-side

Adding a lot of operators with interesting capabilities definitely expands the language, but there is an argument to be made about language complexity. Furthermore, while it can be argued that if we do not like a new feature we can simply not use it, that realistically only applies to code we write. If we are working in a team, then decisions need to be made about how things are done, and these sorts of operators might be considered “off-limits” simply to reduce the complexity of the codebase. This is hardly an argument against adding these features, but more complex languages tend to become less accessible to newer developers, and adding complex syntax and parsing rules can result in confusion. Thankfully these operators are effectively syntactic sugar, and can be learned after learning the “long way” of explicit null-checking.