Windows 10 introduced a new software development platform: the Universal Windows Platform, or UWP. In some respects it builds upon the earlier Windows Runtime introduced with Windows 8. One interesting aspect of the platform is that, properly used, it allows software to be built once and deployed to a number of platforms running Microsoft operating systems, such as the XBox One.

I’ve fiddled a bit with UWP, but honestly I found it tricky to determine what it’s for; as it stands, its API, as well as the set of third-party portable libraries, simply isn’t anywhere near what a typical application targeting the desktop via WPF or even Windows Forms has available. But I think that is intentional; these aren’t built towards the same purpose. Instead, the main advantage of UWP appears to be the ability to deploy to multiple Windows platforms. Unfortunately, that is an advantage that I don’t think I can really utilize. However, I expect it will be well used for future applications- it has already been put to good use for games like Forza Horizon 3, which uses it for “Play Anywhere” so it can be played not only on the XBox console but on any capable Windows 10 system. Forza 7 will be using it to much the same effect.

Even if I won’t utilize it, it probably makes a lot of sense to cover it. My recent coding-related posts always seem to involve Windows Forms. Perhaps I should work instead to learn UWP and then cover that learning experience within new posts? If I am encountering these hurdles, then I don’t think it is entirely unreasonable to think others are as well.

I’ve also got to thinking that perhaps I have become stuck in my ways, as I’m not partial to the approach that appears to bring web technologies to the desktop; even today I find web applications and UI designed around the web to have a “feel” that lags behind a traditional desktop application in usability. That said, I’m also not about to quit my job just because it involves “legacy” frameworks; we’re talking about quite an old codebase- bringing it forward with library and platform upgrades would leave no time for adding new features that customers actually want. That, and the upgrade path is incredibly murky and unclear, with about 50 different approaches for every 50 different problems we might encounter, not to mention decisions like which Framework versions and editions to target.

I know I was stuck in my ways previously, so it’s something worth considering- I stuck with VB6 for far too long, figured it was fine, and thought these newfangled .NET things were unnecessary and complicated. As it happens, I was wrong about that. So it’s possible I am wrong about UWP; and if so, then a lot of the negative discussion about UWP may come from the same attitude and thinking. Is it that UWP is something rather large and imposing that I would need to learn, and that is what leads me to perceive it so poorly? I think that is very likely.

Which is not to suggest, of course, that UWP is perfect and I am simply wrong for not recognizing it; rather, it is the potential of UWP as a platform that I may have failed to assess. While there are many shortcomings, future revisions and additions to the platform are likely to resolve those problems as long as enough developers hop on board. And it does make sense for there to be a reasonable trust model where you “know” what information an application actually uses or requests, rather than it being pretty much either limited user accounts or Administrator accounts with no visibility into exactly what is being used.

It may be time to come up with a project idea and implement it as a UWP application to start that learning experience. I did it for C# and Windows Forms, I did it for WPF, and I don’t see why the same approach couldn’t work for UWP. Unless it’s impossible to learn new stuff after turning 30, which I’m pretty sure is not the case! If there is a way to execute other programs from UWP, perhaps the Repeater program I’m working on could be adapted. That is a fairly straightforward program.

QuickBASIC is an out-of-place choice compared to most of the other languages I’ve written in for this series. Why would I jump so far backwards to QuickBASIC?

There are actually a number of reasons. The first is that QuickBASIC imposes a number of limitations. Aside from the more limited programming language compared to, say, C#, it also means any solution needs to appropriately contend with issues such as memory usage and open file handles on MS-DOS. At the same time, a lot of the development task is actually simpler; one doesn’t need to fiddle with designers, or property pages, or configuration tabs, or anything of that sort. You open a text file and start writing the program.

The first task is to determine an algorithm. Of course, we know the algorithm- it’s been described previously. However, in this instance, we don’t have hashmaps available; furthermore, even if we wanted to implement one ourselves, we cannot keep all the information in memory. As a result, one compromise is to instead keep an array of index information in memory; that array can contain the sorted word as well as a record index into another random-access file. So, to start, we have these two TYPE structures:


TYPE SORTINDEX
    SORTED AS STRING * 28
    OFFSET AS LONG
END TYPE

TYPE SORTRECORD
    WORDCOUNT AS INTEGER
    SORTWORDS(1 TO 20) AS STRING * 28
END TYPE

By writing and reading directly from a scratch file when we need to add a new word to the “hash”, we can avoid having any of the SORTRECORD structures in memory except the one we are working with. This drastically reduces our memory usage, as does determining that the longest word in the dictionary is 28 characters/bytes, which lets the SORTED field be a fixed-length string. The algorithm thus remains similar: given a word, we sort the word’s letters and then consult the array of SORTINDEX types. If we find one with the sorted word, we take the OFFSET, read in the SORTRECORD at that offset, increment WORDCOUNT, add the word to the SORTWORDS array, then PUT it back into the scratch file. If it isn’t found in the SORTINDEX, we create a new entry- saving a new record with the word to the scratch file and recording the offset and sorted text in the index for that record.

Of course, this does have several inefficiencies that I won’t address. The first is that the search for the matching sorted word is effectively a sequential search. Ideally, the in-memory index would be kept sorted and searches could use a binary search. I guess if somebody is interested, I’ve “left it as an exercise for the reader”.

Otherwise, all seems well. But not so fast- the dict.txt file has 45402 words. Our index type is 32 bytes, which means storing all words in the index would require 1,452,864 bytes, far beyond the conventional memory limits we are under. So we need to drastically reduce the memory usage of our algorithm. And we had something so promising! Seems like it’s back to the drawing board.

Or is it? Instead of trying to reduce how much our algorithm uses, we could reduce how much data it works with at a time. We can split the original dictionary file into chunks, and, since words of different lengths cannot be anagrams of each other, we can simply split the file into separate files organized by length. Then we perform the earlier algorithm on each of those files and output the resulting anagram list of each to one file. That would give us one file listing all anagrams without exceeding memory limitations!

Before we get too excited, let’s make sure that the largest “chunk” would be small enough. Using another QuickBASIC program (because, what the hell, right?) I checked the counts of words of particular lengths. In this case, the chunk with the most entries is words 7 letters in length, of which there are 7371 in the test dictionary file. This would require 235,872 bytes of index storage, which is well within our 640K conventional memory limit.

Of course, there is a minor caveat: we do need to start QuickBASIC with certain command line arguments, as, by default, the dynamic array maximum is 64K. We do this by launching it with the /AH command line parameter. Otherwise, we might find it encounters “Subscript out of range” errors once we get beyond around the 2000 mark for our 32-byte index record type.

Another consideration I encountered was open files. I had it opening all the dictionary output files at once, but it maxed out at 16 file handles, so I had to refactor it to a much slower approach: reading a line, determining and opening the appropriate file, writing the line, and then closing the file. Again, there may be a better technique here to increase performance. For reference, I wasn’t able to find how to increase the limit, either (adjusting config.sys didn’t help).

After that, it worked a treat- the primary algorithm runs on each length subset, and writes the results to an output file.

Without further ado- the full source of this “solution”:


DECLARE SUB SPLITDICT ()
DECLARE SUB ADDWORD (WORDADD AS STRING)
DECLARE FUNCTION GETOFFSET% (SORTEDWORD AS STRING)
DECLARE FUNCTION GETBYTEOFFSET% (SORTEDWORD AS STRING)
DECLARE FUNCTION SORTCHARS$ (ST AS STRING)

TYPE SORTINDEX
    SORTED AS STRING * 28
    OFFSET AS LONG
END TYPE

TYPE SORTRECORD
    WORDCOUNT AS INTEGER
    SORTWORDS(1 TO 20) AS STRING * 28
END TYPE

DIM x AS INTEGER
COMMON SHARED INDEXDATA() AS SORTINDEX
COMMON SHARED INDEXFILE AS STRING
COMMON SHARED INDEXCOUNT AS INTEGER
COMMON SHARED INDEXHANDLE AS INTEGER
DIM USEFILE AS INTEGER
DIM READWORD AS STRING
DIM READINDEX AS LONG
DIM ANAGRAMOUT AS INTEGER

SPLITDICT
ANAGRAMOUT = FREEFILE
OPEN "C:\ANAGRAMS.TXT" FOR OUTPUT AS ANAGRAMOUT
DIM LETTERCOUNT AS INTEGER
FOR LETTERCOUNT = 2 TO 32
    INDEXFILE = "C:\DICT.IDX"
    INDEXHANDLE = FREEFILE
    INDEXCOUNT = 0
    KILL INDEXFILE
    OPEN INDEXFILE FOR RANDOM AS INDEXHANDLE LEN = 562
    ERASE INDEXDATA
    DIM DICTFILE AS STRING
    DICTFILE = "C:\DICT\DICT" + RTRIM$(LTRIM$(STR$(LETTERCOUNT))) + ".TXT"
    USEFILE = FREEFILE
    IF DIR$(DICTFILE) <> "" THEN
        OPEN DICTFILE FOR INPUT AS USEFILE
        PRINT USEFILE
        PRINT INDEXHANDLE

'each line is a word. However, as we have limited memory
'compared to, C# and other languages we need to come up with a clever workaround.
'that workaround? we use the file system. Instead of using a hashmap, we instead create a
'C:\DICT\DICT7.TXT would contain the words that have 7 letters, for example.
DIM FILENAMES(2 TO 28) AS STRING
DIM TOTALCOUNT(2 TO 28) AS LONG
DIM LCOUNT AS INTEGER
DIM USEOUTFILE AS STRING
DIM DICTREADER AS INTEGER
DIM LINEREAD AS STRING
DIM CHECKLEN AS INTEGER
DIM FSORT AS INTEGER
FOR LCOUNT = 2 TO 28
    USEOUTFILE = "C:\DICT\DICT" + RTRIM$(LTRIM$(STR$(LCOUNT))) + ".TXT"
    FILENAMES(LCOUNT) = USEOUTFILE
    FSORT = FREEFILE
    OPEN USEOUTFILE FOR OUTPUT AS #FSORT
    CLOSE #FSORT
NEXT LCOUNT
DICTREADER = FREEFILE
OPEN "C:\DICT\DICT.TXT" FOR INPUT AS #DICTREADER
PRINT "Reading Dictionary File Data..."
DO WHILE NOT EOF(DICTREADER)
    LINE INPUT #DICTREADER, LINEREAD
    CHECKLEN = LEN(LINEREAD)
    TOTALCOUNT(CHECKLEN) = TOTALCOUNT(CHECKLEN) + 1
    PRINT LINEREAD
    FSORT = FREEFILE
    OPEN FILENAMES(CHECKLEN) FOR APPEND AS FSORT
    PRINT #FSORT, LINEREAD
    CLOSE #FSORT
LOOP
END SUB

And there you have it: an anagram search program written in QuickBASIC. Of course, it is rather basic and a bit picky about preconditions (hard-coded for a specific file, for example), but it was largely written against my test VM.

One of the fun parts of personal projects is, well, you can do whatever you want. Come up with a silly or even dumb idea and you can implement it if you want. That is effectively how I’ve approached BASeBlock. It’s sort of depressing to play- held back by older technologies like Windows Forms and GDI+, and higher resolution screens make it look quite awful too. Even so, when I fire it up I can’t help but be happy with what I did. Anyway, I had a lot of pretty crazy ideas for things to add into BASeBlock; some fit and were even rather fun to play- like adding a “Snake” boss that was effectively made out of bricks- others were sort of- well, strange, like my Pac-Man boss which attempts to eat the ball. At some point, I decided that the paddle being able to shoot lightning Palpatine-style wasn’t totally ridiculous.

Which naturally led to the question- how can we implement lightning in a way that sort of, kind of, looks believable in a mostly low-resolution way, such that if you squint at the right angle you go “yeah, I can sort of see that possibly being lightning”? For that, I basically considered the recursive “tree drawing” concept. One of the common examples of recursion is drawing a tree; first you draw the trunk, then you draw some branches coming out of the trunk, and then branches from those branches, and so on. For lightning, I adopted the same idea. The essential algorithm I came up with was thus:

From the starting point, draw a line in the specified direction at the specified “velocity”.

From that end point, choose a random number of forks. For each fork, pick an angle up to 45 degrees of difference from the angle between the starting point and the second point, and take the specified velocity and randomly add or subtract up to a maximum of 25% of it.

If any of the generated forks now have a velocity of 0 or less, ignore them

Otherwise, recursively call this same routine to start another “lightning” branch from the fork position, using that fork’s angle and velocity.

Proceed until there are no forks to draw or a specified maximum number of recursions has been reached

Of course, as I mentioned, this is a very crude approximation; lightning doesn’t just randomly strike and stop short of the ground, and this doesn’t really seek out a path to ground or anything along those lines. Again, it’s a crude approximation to at least mimic lightning. The result in BASeBlock looked something like this:

Now, there are a number of other details in the actual implementation- first, it is written against the game engine, so it “draws” using the game’s particle system, and it also uses other engine features to, for example, stop short on blocks and do “damage” to blocks that are impacted, and there are short delays between each fork (which, again, is totally not how lightning works, but I’m taking creative license). The result does look far more like a tree when you look at it, but the animation and how quickly it disappears (paired with the sound effect) is enough, I think, to at least make it “passably” lightning.

But all this talk, and no code, huh? Well, since this starts from the somewhat typical “draw a shrub” concept applied recursively and with some randomization, let’s just build that- the rest, as they say, will come on their own. And by that, I suppose they mean you can adjust and tweak it as needed until you get the desired effect. Or maybe you want to draw a shrubbery, I’m not judging. With that in mind, here’s a quick little method that does this against a GDI+ Graphics object. Why a GDI+ Graphics object? Well, there isn’t really any other way of doing standard bitmap drawing on a Canvas-type object as far as I know. Also, as usual, I just sort of threw this together, so I didn’t have time to paint it and it might not be to scale or whatever.
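What follows is a minimal sketch of that recursive branch-drawing idea; the method name DrawBranch, the fork counts, and the constants here are illustrative rather than lifted from BASeBlock:

using System;
using System.Drawing;

static class BranchDrawer
{
    static readonly Random rnd = new Random();

    // pAngle is in radians; pVelocity is the length of this segment in pixels.
    public static void DrawBranch(Graphics g, Pen usePen, PointF start, double pAngle, double pVelocity, int recursionsLeft)
    {
        // Stop when a fork has "run out" of velocity or we hit the recursion cap.
        if (pVelocity <= 0 || recursionsLeft <= 0) return;

        // Draw a line from the start point in the given direction at the given velocity.
        PointF endPoint = new PointF(
            (float)(start.X + Math.Cos(pAngle) * pVelocity),
            (float)(start.Y + Math.Sin(pAngle) * pVelocity));
        g.DrawLine(usePen, start, endPoint);

        // From that end point, choose a random number of forks...
        int forkCount = rnd.Next(1, 4);
        for (int i = 0; i < forkCount; i++)
        {
            // ...each up to 45 degrees away from the parent angle...
            double forkAngle = pAngle + ((rnd.NextDouble() * 2) - 1) * (Math.PI / 4);
            // ...with up to 25% of the velocity randomly added or subtracted.
            double forkVelocity = pVelocity * (0.75 + rnd.NextDouble() * 0.5);
            DrawBranch(g, usePen, endPoint, forkAngle, forkVelocity, recursionsLeft - 1);
        }
    }
}

Calling it from a Paint handler with something like BranchDrawer.DrawBranch(e.Graphics, Pens.Blue, new PointF(200, 350), -Math.PI / 2, 50, 6) grows the “shrub” upward from near the bottom of the drawing surface.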

What amazing beautiful output do we get from this? Whatever this is supposed to be:

It does sort of look like a shrubbery I suppose. I mean, aside from it being blue, that is. It looks nothing like lightning, mind you. Though in my defense if electricity tunnels through certain things it often leaves tree-like patterns like this. Yeah, so it’s totally electricity related.

This is all rather unfulfilling, so as a bonus- how about making lightning in Photoshop:

Step 1: Start with a Gradient like so

Next, apply the “Difference Clouds” filter with white and black selected as the foreground and background colours.

Invert the image colours, then adjust the levels to get a more pronounced “beam”, as shown.

Finally, add a layer on top and use the Overlay blend mode to add a vibrant hue- yellow, red, orange, pink, whatever. I’m not your mom. I made it cyan for some reason here.

With a few decades behind it, electronics now has an established “history”. This has resulted in a rather curious change in how “aftermarket” revisions to the hardware are regarded by some.

A good example would be the labels on Video game cartridges. If for example a label is torn or ripped, a person might decide to replace it. It is possible to make nearly perfect replicas of the original labels. The problem arises however in that there are people who find this behaviour unethical; in their opinion, these “reproduction” labels should be labelled as such, because it is not part of the original.

To me that argument makes far more sense when discussing things like reproduction ROMs, where the actual game “Card” and contents of the cartridge differ from the original. In particular, in that case the reproduction is effectively created afterwards, and typically those who make them and sell them aim to reproduce wildly popular and expensive titles in order to try to “cash in” on the rising demand for titles that have a limited supply.

But I do not think that extends to cosmetic considerations. If you have a copy of Bubble Bobble with a label that has ripped off, you aren’t “destroying history” by cleaning off the old label and affixing a freshly printed one. You are restoring your copy of the game. That such things could then be sold and mistaken for a good-condition original is irrelevant, because the market that values good-condition labels was built entirely around conditions where labels could not be fixed in this manner. Rather than deny or question those who create and affix reproduction labels to fix their games, collectors and those interested in purchasing these things should simply be aware that a good-condition label may not be original.

If I own a game with a damaged label, it is not my responsibility to adhere to some invented set of rules about what I’m “allowed” to do with it. I own the physical object, I can do anything I want with it, including replacing the damaged label however I see fit. The same applies to any piece of electronics, collectible or not. There is no unspoken responsibility for an owner of, say an Apple II, to keep it in factory condition; installing or using modern alternatives for things like Hard Drives (SD Card adapters, for example) does not magically make them a traitor against humanity or whatever wild accusations many people seem to often make against those who make aftermarket changes or restoration to their hardware.

The Industry is still relatively young but it appears we have reached a point where collectors – and speculators – take themselves as seriously as, say, collectors of old coins. There is a big difference between an original Spanish piece-of-eight from the 1500’s and a Video game cartridge from 20 years ago, both in terms of value as well as cultural and historical significance, and I think considering them equal heavily inflates the importance of Video games and the associated hardware. The people that made and were responsible for these are largely still alive. We may as well suggest that former presidents who are still alive be encased in plastic to preserve their historical significance.

Over time a lot of game franchises have appeared and many of them have many installments. Sometimes, you’ll try a much later release in the franchise and find you quite enjoy it, and decide to hop backwards- and see what you missed in previous ones.

As one might already guess, that’s effectively what I did and what I aim to discuss here. Specifically, a few years ago, for some reason, I wanted a new “thing” and decided to get an XBox One. Being interested in a more complete racing game experience than Project CARS or Assetto Corsa seemed to provide- both of which still felt like alpha-quality software, particularly in terms of their menu interfaces- I decided to get the Forza Motorsport 6 Bundle/Edition. I actually hemmed and hawed on the decision for a while, because it was an awful lot to throw down on something I had difficulty actually justifying, particularly as I had never even heard of the series before; but I tossed my chips in anyway.

It comes as no surprise given what I’ve written here that I ended up quite liking the game. Which actually was a surprise to me since I’m not generally very car savvy or interested in cars. It’s worked out so far to about a dollar an hour of total playtime just with that one game.

Eventually the “6” got me thinking about the predecessor titles. So, following my typical style, I went overboard and got them all:

So far I don’t have any regrets; I’ve been playing through the first installment and I think it offers enough of its own uniqueness that it’s worth playing. In particular, having never owned the original XBox system before, I was intrigued with the system as a whole, and in particular the ability to have custom soundtracks. 2, 3, and 4 are for the XBox 360- which I’ve never owned either, so I once again went all in and have one on the way for when I get through the first installment.

It is rather interesting how much variation there is between games that really were designed with the same intent. Ignoring, for example, graphical enhancements, you have to consider vehicle and even track licensing. As an example, the Opel Speedster is available in the original Forza Motorsport title but isn’t in Forza Motorsport 6; meanwhile, settings like the New York and Tokyo tracks (among many others) simply are not available in the latest game, which itself has tracks not available in earlier titles. My hope is that, as I slowly make my way through each title, each game provides a diverse enough experience that it isn’t too much of the same thing over and over.

Storing, calculating, and working with currency data types seems like one of the few things that still provides a mixed bag of emotions. In C#, on the one hand you have the decimal data type, but on the other, you have pretty much no framework functions which actually accept a decimal data type or return one.

As the most suitable type available, decimal is largely what is recommended for financial calculations, and that makes the most sense. And while, as mentioned, there are many functions and calls you might need to make which will result in casts back and forth from other data types like double or even float, if you design your software and data from the ground up to deal with it, you can usually accommodate these issues.

The problems arise, as usual, when we start looking at existing systems. For example, your old product might be working reasonably well, with only a few problems, for a customer who has loads of data. They aren’t going to be as likely to hop aboard your new system if it means they have to re-enter a load of data, and they aren’t going to like seeing errors from their current data appear in the new system. It was working before, after all. These old systems might be using ISAM databases and may have used floating point internally for calculations; even if the old software doesn’t use its own math routines, you’ve got to consider that whatever programming environment was used for it might not follow the same mathematical rules as the new system, and so you have to decide how to proceed. Something as simple as a rounding function dealing with a corner case differently could result in massive amounts of manual data entry for either the customer or even yourself. On the other hand, using floating point types and writing wrappers to mimic the foibles of the old functions is effectively building technical debt into the product. The compromise solution- some sort of configuration option which either sets the product to a floating point compatibility mode or to a decimal-native mode- would involve a lot of ground-up architecture to implement. Database schemas will differ, and you can’t just willy-nilly swap the option either.
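To make that rounding point concrete- this little example is mine, not from any particular legacy product- the same nominal value can round differently depending on whether it travels through double or decimal, because 2.675 has no exact binary floating point representation:

using System;

class RoundingCornerCase
{
    static void Main()
    {
        double d = 2.675;   // actually stored as roughly 2.67499999999999982
        decimal m = 2.675m; // stored exactly

        Console.WriteLine(Math.Round(d, 2)); // prints 2.67
        Console.WriteLine(Math.Round(m, 2)); // prints 2.68
    }
}

A penny-level difference like that, multiplied across years of historical transactions, is exactly the sort of thing that turns a migration into a manual data-entry exercise.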

It’s the sort of problem that doesn’t seem to get covered in academia, but it comes up often, and each decision must be made carefully in order to avoid alienating customer bases while avoiding unnecessary technical debt- particularly since technical debt is often why new systems are implemented to begin with; bringing forward technical debt from the replaced system sort of defeats the purpose.

Upgrading library components across an entire suite of applications is not without its risks, as one may shortly learn when upgrading from Npgsql 2 to Npgsql 3. This applies between any pair of versions, really- who knows what bugs might be added, or what bugs might be fixed that you relied upon previously, either intentionally or without even being aware that the behaviour you relied on was in fact unintended.

As it happens, Npgsql 3 has such particulars when it comes to upgrading from Npgsql 2. On some sites, and with some workstations, we were receiving reports of exceptions and errors after the upgrade was deployed. These were IOExceptions thrown from within the Npgsql library when we ran a query, which failed because “a connection was forcibly closed by the remote host”. Even adding retry loops didn’t resolve the issue; it would hit its retry limit, as if at some point it simply refused to work across that NpgsqlConnection.

After some investigation, it turned out to be a change in how Npgsql 3 manages pooling. In this case, a connection in the pool was being closed by the Postgres server. This was masked with previous versions because, with Npgsql 2, pooled connections would be re-opened if they were found to be closed. Npgsql 3 changed this, both for performance reasons and to be consistent with other data providers. This change meant that our code was creating a connection and opening it- but that connection was actually a remotely closed connection from the pool, so attempts to query against that connection would throw exceptions.

Furthermore, because of the nature of the problem, there was no clear workaround. We could trap the exception, but at that point, how do we actually handle it? If we tried to re-open the connection we’d just get the same closed connection back. It would be possible to disable pooling to get the connection open in that case, but there isn’t much reason to have that take place only when that specific exception occurs, and it would mean adding handling everywhere we perform an SQL query- and that exception might not specifically be caused by the pooling issue, either. The fix we used was to add new global configuration options to our software which add parameters to the connection string to disable pooling and/or enable Npgsql’s keepalive feature. The former sidesteps the issue by not using pooled connections at all, and the latter prevents the connections from being closed remotely (except when there is a serious problem, of course, in which case we want the exception to take place). So far it has been successful in resolving the problem on affected systems.
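For reference, the workaround amounts to appending Npgsql’s “Pooling” and “Keepalive” connection string parameters depending on those configuration options; the host and credential values here are placeholders rather than anything from our actual configuration:

// Disable pooling entirely, so every NpgsqlConnection open is a genuinely new connection:
string noPooling = "Host=dbserver;Database=appdb;Username=appuser;Password=secret;Pooling=false";

// Or keep pooling, but have Npgsql send a keepalive (here, every 30 seconds) so idle
// pooled connections aren't silently dropped by the server or anything in between:
string withKeepalive = "Host=dbserver;Database=appdb;Username=appuser;Password=secret;Keepalive=30";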

Now, this is obviously audio related, but where does the Volume Slapper program fit in here? Well, to record, I must turn down the volume of all programs I do not want to record (for example, system sounds, Skype notifications, sound in web browsers) and switch the sound card to headphone mode. I also need to disable the Aux input, as it causes feedback (the tape deck outputs a low-level signal of its own as well when recording). Now, for the most part these are pretty simple to do, but adjusting audio levels is slightly annoying- especially if the current levels were carefully crafted over a period of time to suit what I was doing. My thinking towards Volume Slapper was to make it easy to restore the audio levels I was using before I had a “recording session”. Disabling sound devices and flipping hardware relays (the headphone setting of the card) are outside the scope of the program, IMO.

It also seems more widely applicable. It could be useful to save the volume settings of active programs so you can restore them later for any number of reasons. Maybe you achieved a perfect balance between your browser being used for video playback on one monitor and the audio of your game being played on your other screen, for example.

Now that that is out of the way, the actual implementation is quite simple. We just need to handle the new save and load features, obviously. In order to simplify my own usage, I had it default to a “quick.xml” file saved to appdata if a file isn’t specified. The file itself- as indicated by the filename- is an XML file. It is built using the standard XElement capabilities of the .NET Framework. Since the usage is so simple here, I didn’t reference Elementizer. It just saves the session names and volumes to the XML file, or loads them from an XML file. Of course, since the sessions can differ between saving and loading, it currently ignores new sessions or sessions that didn’t exist when the data was saved. Saving the volumes, starting Word, and then loading the volume file that was created won’t affect Word’s audio volume, for example.
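The save and load paths are roughly the following shape; the element and attribute names here are just illustrative, and the code assumes the session name/volume pairs have already been read from (and will be applied back through) the audio session API elsewhere:

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

static class VolumeFile
{
    // Write each audio session's display name and volume (0.0 to 1.0) as an element.
    public static void Save(string path, IDictionary<string, float> sessions)
    {
        var root = new XElement("Sessions",
            sessions.Select(s => new XElement("Session",
                new XAttribute("Name", s.Key),
                new XAttribute("Volume", s.Value))));
        root.Save(path);
    }

    // Read the pairs back; the caller applies them only to sessions that still exist.
    public static Dictionary<string, float> Load(string path)
    {
        return XElement.Load(path).Elements("Session")
            .ToDictionary(
                e => (string)e.Attribute("Name"),
                e => (float)e.Attribute("Volume"));
    }
}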

VolumeSlapper, including these recent modifications, can be found on GitHub.

As an interesting aside I’ve started working on a sort of silly “task” project which basically acts as a strict task scheduler that runs precisely and shows the time before each task is going to be run next.

For a while now Windows 10 has had a “Game Mode” feature. I’m rather mixed on the feature myself, but generally find it strange.

I’ve never been a fan of the “Game Booster” software phenotype; it seems like it is largely snake-oil fakery, and where it does have an effect, it is really just a result of the same sort of adjustments that can be made manually via services or other configuration options. Game Mode does have advantages here; the first is that it sort of puts those applications “out of business”, and, being built into the OS, it is a much safer implementation with less extreme goals. On the other hand, it does sort of legitimize the concept, which I’ve always found crazy, that such applications are in any way worth using.

I tend not to use the feature, however I can see it having benefits for some users and some systems. To me, overlay features such as the Game Bar that are used in this case feel like a sort of “chaff”; It is better than older approaches like the “Games for Windows Live” featureset, and better implemented as well, but I’ve found that- at least for now- it’s not really for me. This may be partly because I’m not a particularly heavy gamer, though- I seldom play games on my PC- nowhere near what I expected.

I also tend to enjoy older titles. Interestingly, I’ve found many older games- even going back to Win98-era titles- run surprisingly well on Windows 10. Most issues I’ve encountered with older titles tend to be a result of either a lack of 16-bit compatibility (with much older titles) or of the hardware being far in excess of what the game ever expected; a lot of older titles don’t support resolutions as high as 2560×1440, for example, and require minor patches. Windows 10 is surprisingly backwards compatible in this regard- even better than previous post-Vista Windows releases, including Windows 7, which had an interesting Explorer palette realization issue that tended to cause problems with games that used 256-color modes.

I’m writing this, right now, on a computer from 1999: a Rev C iMac G3 system that I got off Craigslist for $20. This system has 64MB of memory and a 333MHz processor, and I’m using Microsoft Word 2001 running under Mac OS 9.2.2.

Considering the advances we’ve seen in tech, you would expect this system to be entirely unusable; and yet here I am using a relatively full-featured productivity application with seemingly the same responsive behaviour and capability as a more modern system.

This is leading inexorably to a discussion regarding bloat. As computers grow faster, the software that runs on them expands to use the additional capabilities. I’ve had modern text editors that type out what I wrote in slow motion- updating a character every second or so- on a 4GHz quad-core system with 32GB of memory that isn’t otherwise being taxed. There is very little excuse for this, and yet it happens.

As computers have moved forward, we find that extra capability is, oftentimes, absorbed by developers. That is, a faster processor means they can get the same speed by using C instead of Assembly, or they can write the software in a higher-level language like Java or C# instead of writing it in C. Those are entirely reasonable trade-offs, as, in a way, they eventually reduce the cost of software to the consumer. Nowadays the same trade-off shows up with web applications. We have many pieces of software that are themselves written in JavaScript, for example, which puts a heavy load on the interpreter under which they run. As an interpreted language the performance is reduced even further, but it is considered acceptable because faster systems are the norm and your typical system is capable of running it at speed.

But in many respects, we’ve been upgrading our hardware but running in place. While many aspects have certainly improved- entertainment/game software, for example, has higher resolutions, more colours, higher-res textures and more polygons than ever before – a lot of otherwise basic tasks have not greatly improved.

But at the same time, one is often left wondering exactly what features we have gained in this inexorable forward march. As I type this, the system I’m writing on is not connected to the Internet; I’m not receiving any notifications or tips, I’m not seeing advertisements for cloud storage or nag screens about installing the latest update to the system software. I can install applications and use them, and in many cases I can use them with a lot of the same effectiveness and even performance as corresponding modern software. Accounting for the exception, of course, of web browsers.

Web browsers are an interesting case in looking at how computing has changed. Your typical system from the late nineties would have had perhaps 64MB of RAM, like the iMac G3 I’m using right now. I can run Internet Explorer and open local web pages (I’m too lazy to move the system to connect it via Ethernet, since it naturally has no wireless capability of its own), and Internet Explorer only consumes 10MB of memory. Compared to my main desktop system, the proportions are similar- oftentimes I find Firefox or Chrome consuming upwards of 1GB of memory! It is easy to blame this on software bloat- that browsers have merely started using more memory because nothing stops them- and obviously a web browser using upwards of 1GB of memory couldn’t have existed at all in 1999, yet it runs without issue on most modern systems, particularly now that 4GB is becoming a “bare minimum” to run a system with. But blaming it all on bloat would be an oversimplification, as the task of browsing has ballooned since the time when it could be done with that much RAM; browsers now need to support not only more complicated HTML structures and features such as stylesheets, but they are effectively becoming a platform of their own, with web applications running inside them- like a HyperCard stack, to put it in terms relative to the older Mac systems I’ve been fiddling with lately. As a result, saying it is entirely due to bloat would certainly be unfair.

Perhaps, then, we need a better benchmark for comparison. I’m writing this in Microsoft Word 2001 for Mac, so perhaps Microsoft Word is a better comparison? As I write this, Microsoft Word is using 10MB of Memory. Launching Microsoft Word 2013, and opening a blank document, I find that, to begin with, it is using 55MB of memory.

Now, compared to the total amount of memory on each system, Word 2013 is actually using a much smaller percentage; 10MB is about 15% of the total memory on this Mac, but 55MB is only about 0.32% of the 16GB of memory on the laptop I started Word 2013 on; so in that sense I suppose we could argue that memory usage of applications has shrunk relative to the available hardware. But in absolute terms the story is different, and a blank document is using over 5 times as much memory as it takes for this older release on an older computer to maintain and display a multiple-page document.

There are a number of reasons for this sort of difference. For one thing, excessive memory usage by certain components might not come up in testing on more recent machines; as long as it runs, excess memory usage might not be detected, and even if 55MB is higher than it is on this older system, as established, the smaller usage of total physical memory on most any modern system is going to result in it not being considered an issue. Another reason is that sometimes with additional capabilities, Software gets added effects. Features like Aero Glass and the drawing of features like the modern Office Ribbon, for example. Also to be considered are modern features like font smoothing, which were less prevalent and advanced in 1999.

Nonetheless, it is still somewhat humourous that a basic word processor has managed to start using that much more memory for what is effectively the same task! The actual word processing capabilities are largely equivalent between the two releases of the software, which is not something we can argue with browsers.

Perhaps it is not something that is too much of a problem. In many respects, it would seem that application needs eventually dictate what people consider a “bare minimum” of RAM, meanwhile, we can see many different productivity tasks remained largely the same and contain similar feature sets and capabilities as those requirements rise. Early versions of Microsoft Word or Excel, for example, generally contain the bulk of features that people make use of in the latest release of the software, while using a relatively infinitesimal amount of system memory in doing so. This does lead to what I find cringeworthy proclamations such as “How can you possibly do anything with only 2GB of Memory?” which make sense in a certain context but when applied broadly can be pretty silly; We managed to do many of the same things we are doing nowadays with desktop computers 20 or 30 years ago, with far less memory and processing power, after all. Additionally one could easily imagine bringing somebody from, say, 1994 forward in time to hear such a statement and have them be in awe at how such an unimaginably large amount of memory – an amount still unheard of even for Hard Disk sizes to them – was being dismissed as far too little RAM for many of the same sort of tasks they had performed without issue.

Personally, I’ve found popping onto an old computer- like this iMac G3, to be a helpful experience. These are computers that, many years ago, were top of the line, the sort of systems some people would drool over. And now they are relegated to $20 Craigslist ads, which are the only thing between them and the dump. Meanwhile, the Operating Systems are not only responsive but are designed in such a way that they are quite easy to use and even, dare I say it, fun to use! Mac OS 9.2.2 has little audio flairs and clicks and pops and swoosh sounds associated with UI interactions that actually had me missing it when using my more recent systems. Which is not to suggest that I think it wouldn’t become annoying fairly quickly with regular usage.

Unfortunately — or I suppose in some ways, fortunately — the systems are relics of a bygone era. Computers are commonplace enough that we have them in a form that we can keep in our pocket. We have become so accustomed to the devices that they are now a part of daily life, as are the networked components, perhaps even more so. People are constantly updating their Facebook feed, checking other people’s posts, reading their Twitter feeds or their Instagram, sending people text messages, arguing about politics with some brother of a friend of a friend of a friend on Facebook who they’ve never met, etc. We are living in what could effectively be described as a “Fantasy World” by people in the 90’s and yet here we are living it everyday to the point where it is not only mundane, but where things considered unimaginable conveniences only a decade ago are now unacceptable.

This is not intended to suggest we should not strive for or even demand progress, just that maybe we should lay off the hyperbole in describing how a lack of such progress is detrimental to our lives. A web portal missing a minor convenience feature is not something to throw a fuss over; software being released at a price point you disagree with is not a reason to go on the warpath against its developer; and just because you have a Tumblr blog with 5 readers doesn’t make you a social media influencer that developers- or anybody, for that matter- should cater to in any way.

There is an argument that something ineffable has been lost with the rise and ubiquity of the Internet. While nowadays “research” is loading up Google in a new tab and running a few searches, it used to consist of going to the library and looking up index cards and reference material. Where dealing with, say, a programming conundrum or trying to use a program feature you weren’t familiar with once meant looking it up in the hard-copy manual- or, in the former case, actually working out your own solution based on what you needed- now you just Google for it and go directly to the answer on sites like Stack Overflow; you copy-paste the code and use it, or you mindlessly follow the steps outlined to use the program feature you want. Neither way is, of course, strictly better than the other; it’s just that the Internet really is the ultimate enabler.

I have about a half dozen books that I’ve barely even cracked open that, if I had owned them a decade ago, I would have read cover to cover several times over by now. I’ve had project ideas squashed before I even started them by a quick Google search revealing that there were already programs that performed the function, and that they did it better than I had even imagined. Before, I would have pursued such projects anyway- not knowing there was anything already done- and ended up learning as a result.

As much as the ubiquity of the Internet has helped us, it has also acted as the ever-present enabler of our addictions. It feeds our addiction to information, our addiction to instant gratification, and our ever-present curiosity, but it does so with what could almost be described as empty calories. It’s like eating hard candies when you are hungry. So it leaves many unsatisfied and seeking more, wondering what is missing.

It was the hunt for the information, the trail that you blazed while doing research or cross-referencing in a Dewey-decimal index. It was the excitement of finding a nice thick book on a subject you were interested in, knowing it would keep your information “mouth” fed for weeks to come as you read and reread it, sucking in all the information it had to offer. Even the bits you couldn’t use. I read “Applied Structured BASIC” from cover to cover multiple times despite it covering ancient BASIC dialects that had long since stopped mattering.

Now, I find that there is a place for the phrase “information overload”. No bookshelf, no matter how full, can possibly compete with the World Wide Web in terms of ease of access to information, the accuracy- and inaccuracy- of that information, and the sheer amount of it, to the point where one could almost argue there is too much. Perhaps the skill in using the Internet for information is having a proper “tunnel vision”: getting the information you want and “getting out” when you are looking for something specific. The alternative, of course, is to go looking up how to create a Jump List in Windows 7 and later, and suddenly find yourself reading about how strawberries are harvested.