Byte Cruft

Wednesday, March 11, 2015

Coming from a background in desktop and embedded software development, I am frequently stymied by web application development technologies. I get "the good parts" of Javascript, but I often feel like Javascript makes difficult what I've come to consider good practice in application architecture. The loose typing of Javascript appeals to many, but it can make a very large code base hard to maintain.

Consider:

Javascript:

var foo = function (payload) {
    var timestamp = payload.getTimestamp();
    // some code that does something with timestamp, etc...
}

There is an inherent interface requirement on anything passed into the function foo: it must have a member function named getTimestamp, and that function must return a type suitable for the expected operations.

There are essentially 4 ways in Javascript that a developer who wishes to use function foo can discover this implicit requirement:

Inspect the internals of the function foo, to learn all of its expectations of the passed-in type

Read documentation bequeathed by the author(s) of foo

Run a lint-type tool for static analysis to catch improper usage

Run the code and debug the exception that gets thrown when you pass it an invalid object

All of these options, except maybe the lint option, require manual human action, and all of them, including the lint option, are quite fallible. I did not include a 5th option, unit testing: unit testing is a good thing, but it does not even begin to help this particular problem. For starters, the unit test will not replicate real-world usage. Where was the payload object created? It might come from a callback within a callback within a callback, using an object stored from some earlier callback. Similarly, lint tools are a good thing, but they are far from perfect as well, and more cumbersome to use than a compiler.
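To make that last discovery method concrete, here is a minimal sketch (object names are made up) showing how the contract only surfaces at the instant of the call, deep inside foo:

```javascript
var foo = function (payload) {
    // The implicit contract: payload must have a getTimestamp() method.
    return payload.getTimestamp();
};

// Any object with a getTimestamp method satisfies the "contract":
var good = { getTimestamp: function () { return new Date(0); } };
foo(good); // works fine

// An object without it fails only at runtime, at the point of the call:
var bad = {};
try {
    foo(bad);
} catch (e) {
    console.log(e instanceof TypeError); // true: "getTimestamp is not a function"
}
```

Nothing warned us about `bad` until the code actually ran down that path.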

Compare the Javascript with the equivalent C#:

interface IPayload
{
    DateTime getTimestamp();
}

void Foo(IPayload payload)
{
    var timestamp = payload.getTimestamp();
    // do some stuff with timestamp
}

Now this example is trite, but the point is that as the size of your application grows, and you pull in more and more 3rd party code, the occurrence of problems surrounding the "hidden contracts" in dynamic languages like Javascript grows too. In the C# version (or just about any statically typed, compiled language), you know immediately, and so does the compiler, what is expected of the payload object passed to Foo; its contract is clear and rigid.

There is a lot appealing about how Javascript and dynamic languages work. Due to the nature of the web, deployment has always been more of a challenge than with desktop applications, and the ability to swap out a single file has its appeal. (Server-side Java tried to alleviate this with the servlet system and WAR files.) Javascript is also more terse, and superficially more forgiving: any object could be a "payload" in the Javascript version so long as it has access to a getTimestamp function.

There is also the appealing perception that a change in a third party dependency won't break your application. The problem with dynamic scripting languages is that your relationship to your dependencies becomes a "gentleman's agreement", not an enforceable contract. Say in the next version of the foo library, the authors made the following changes to foo:

var foo = function (payload) {
    var timestamp = payload.getTimestamp();
    payload.resetTimeStamp();
    // some code that does something with timestamp, etc...
}

Maybe this was documented; no big deal, right? If your code was such that you could find everywhere that payloads were created, great. But what if it's not that easy? Again, this is trite, but illustrative of a general problem; use your imagination. Fixing it could involve a lot of debugging, text searching, inspecting logs, and so on. Then when you have to integrate your code into another department's application, you get to do it all over again.
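One partial mitigation (a hand-rolled guard, not anything from a standard library) is to assert the contract explicitly at the boundary where payloads enter your code, so a dependency's new requirement fails loudly and immediately instead of deep inside a callback chain:

```javascript
// Hypothetical guard: verify the payload honors the contract foo now expects,
// so a changed dependency fails at the boundary, not somewhere mysterious.
function assertPayloadContract(payload) {
    ['getTimestamp', 'resetTimeStamp'].forEach(function (name) {
        if (typeof payload[name] !== 'function') {
            throw new TypeError('payload is missing required method: ' + name);
        }
    });
    return payload;
}

var payload = {
    getTimestamp: function () { return new Date(); }
    // note: no resetTimeStamp, so the new version of foo would break on this
};

try {
    assertPayloadContract(payload);
} catch (e) {
    console.log(e.message); // "payload is missing required method: resetTimeStamp"
}
```

It is still a gentleman's agreement, of course; you have to remember to update the guard when the library changes, which is exactly the chore a compiler does for free.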

In C#, the compiler would immediately find every place where the 3rd party library broke you.

Ok, that was a rather long-winded way of saying "dynamic languages pose extra challenges for large applications". This is also true of Python. That isn't to say I don't like Javascript or Python; they can do many things that would be hard or even impossible in other languages. The asynchronous nature of Javascript lets you do a lot without multithreading and the evil, evil shared-memory synchronization patterns you must use with it. At the same time, I miss the comfort of knowing my object is an apple, and not having to debug to figure out I was suddenly passed an orange.

One of the great things about Javascript is the ecosystem, and the sheer massive amount of effort and work going into it. There are more Javascript libraries, frameworks, and tools than one could ever learn in a lifetime. Javascript is ubiquitous.

The second aspect of web development that makes me marvel at its popularity is the way in which user interfaces are created: the DOM. Creating user interfaces in HTML is, for the self-taught, an exercise in cooking "div soup". Go to any major modern website (Facebook, Twitter, CNN) and open the page in the dev tools of your browser of choice. Invariably, it's built with nested div upon div upon div. The CSS is typically where the magic happens that makes those divs actually look and behave like the site you're visiting. But getting it right is a dark art that comes with experience, and not necessarily the good kind. It takes years to learn the arcane quirks like "oh, you need to wrap that in a div with float:left" or "oh, Firefox needs explicit widths on those elements". Compared to most of the desktop UI frameworks I've worked with, traditional web UI development feels... irrational.

There is hope, though; I think things are getting better. HTML 5 has enabled the tools needed to make vast improvements. A lot of HTML 5 is not implemented yet, and even more is still experimental or bleeding edge. Web Components offer a path to maybe one day doing away with the div soup and making a more satisfying meal. I have been experimenting with Google Polymer lately; it's still very new, but it has me excited that I might soon be able to build a browser app with markup that makes sense for a UI.

Though I never used Silverlight much, I've used its big sibling, WPF, extensively. It had its warts for sure, but creating a UI was so much more intuitive than the div soup and CSS hacks of HTML we see today. It makes me sad that Silverlight met an untimely end. I feel its demise was due more to the shift toward mobile devices and away from browser plugins than to its technical merits. There is a project I ran across in my feeds recently, Fayde, which is a re-imagining of Silverlight in pure Javascript. I was able to get a basic UI rendering in minutes, despite never having used it before. It is super-neat-o.

For crufty programmers such as myself, with a slight allergy to new-fangled dynamic languages, there are a lot of tools that let you build Javascript applications in the language of your choice. Tools like Google Web Toolkit (Java), Emscripten (C/C++), TypeScript, Dart, and JSIL (C#/VB) all let you shield yourself a little from the idiosyncrasies of Javascript and its very loosely-typed nature. It used to be that "strongly typed" was generally considered a virtue in a programming language; it seems the rise of browser-based development has flipped the consensus. I predict it will come full circle once developers of really large "SPA" web or Node.js applications start feeling the pain of maintaining multi-million-line, aged, legacy applications that were built on gentleman's agreements.

--P

[Note: I struggled to find the right title for this post; no offense to millennial engineers is meant by it. I think if you learned how to program before Google was a thing, you cut your teeth under a different set of common wisdoms than today. Some are now obsolete, but others got lost in the noise of the new shiny. The merit of strong, explicit contracts seems like one of those lost wisdoms these days. I'm always keeping up with the new shiny, while trying not to lose touch with my roots. Millennials are crazy-smart and talented. Now get off my lawn!]

Thursday, January 8, 2015

The Scenario:

Sometimes, libraries, native DLLs, or the environment in your Windows application's running process can interfere with each other. "Pure" .Net managed code is much less prone to conflict thanks to .Net's fantastic versioning, type system, strong naming, and separation via application domains. In real-world applications, though, you often need to interact with a lot of native code or APIs where you have no such protection. This blog post is about a simple way to keep native dependencies separate for different code within a single .Net desktop application.

Background:

I have some software I'm working on in C# that requires the use of an embedded web browser. I have a web application in the works that is HTML5 and Javascript heavy, and I'm writing a C# application that hosts the web app and interacts with it to do some "heavy lifting" things that are not possible inside a browser without plug-ins. All I'll say about it right now is that it's robotics based, I'm going to open-source it, and the web app part will be available to run from Anibit, but I'll also have an offline Windows application that will be easy-peasy to use and have some fancier features than the pure browser-based version. It will generate code, and the offline version will also compile and upload the result to your device.

The Problem:

I found that when I had the browser loaded (via the GeckoFx library, a managed shim for Mozilla's "xulrunner" embedded Firefox), spawning the compiler as a child process would fail with a lot of cryptic errors. If I did not load the browser component, the spawned compiler worked fine. I'm pretty sure that some DLLs, environment settings, or something in the process's memory were not playing well between xulrunner and gcc. Rather than spend forever tracking down the exact problem, which would probably have required building my own xulrunner or gcc binaries to fix (yuck), I came up with a nice work-around that gives me the best of both worlds.

The solution:

On start-up in my application, before I have done anything, I launch a second instance of the application with special command-line parameters. The parameters tell the second instance that it should run in a "remote execution server" mode, and I also pass the Windows process ID of the parent to the server/child. The child process periodically checks to see if the parent process is still running, and exits if not.
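For illustration only (the original is C#, and these names and intervals are my own), the child-side "is my parent still alive?" watchdog amounts to something like this Node.js sketch:

```javascript
// Sketch of the child-side watchdog: the parent passes its PID on the
// command line; the child polls it and exits when the parent disappears.
function isProcessAlive(pid) {
    try {
        // Signal 0 performs an existence/permission check without
        // actually sending a signal to the process.
        process.kill(pid, 0);
        return true;
    } catch (e) {
        return false;
    }
}

function watchParent(parentPid, intervalMs) {
    return setInterval(function () {
        if (!isProcessAlive(parentPid)) {
            process.exit(0); // orphaned: shut the server down
        }
    }, intervalMs);
}

// Hypothetical usage in the child: parent PID arrives as the first argument.
// var timer = watchParent(Number(process.argv[2]), 1000);
```

The C# version does the same dance with Process.GetProcessById-style checks, but the shape of the loop is identical.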

The child process starts a server for a ".Net remoting" object. .Net remoting is one of the lesser-known and lesser-understood technologies of .Net, but it's fantastic if you're on a pure Microsoft technology stack. It makes remote procedure calls across applications, or even machines, super simple. Essentially, with some small configuration files and a little bit of support code, you can create a class whose functions automatically get executed in the child process. The calls can be synchronous, and parameters and return values are magically handled by the CLR. (Note that Microsoft advises against using the "legacy" remoting APIs, and instead recommends "Windows Communication Foundation". I find that for really simple situations, .Net remoting is a bit simpler to set up and use, and the remote interface is dynamically generated, so there's no endpoint API to maintain.)

Rather than make a lot of diagrams or post code to this blog, I just put a small demonstration C# project on Github. If this sounds like it could help you, feel free to use it for whatever you want. You can find it here.

Tuesday, December 23, 2014

I've adapted my "Automated CAD design" system to go a few steps further and have it generate all the data and metadata needed for an interactive website that lets you explore a canned (for now) set of design permutations.

I've got a lot of polish to do on the html itself, but I finally got it to a minimally viable web app.

Friday, December 12, 2014

I attend monthly meetings at Triembed, a hobby electronics enthusiast group in Central North Carolina.

The group holds two-hour meetings, and in the first hour, people are invited to present things they've worked on. If you're convenient to the Raleigh area, and can make it to NC State's campus on the second Monday evening of the month, you should check it out! It's a great group of people.

At last week's meeting, I gave a brief talk about my "Automated CAD design" post. Pete Soper, one of the founders and organizers of the group, was kind enough to edit and post the video of my talk to Youtube. If you've read my post, I don't have much new information in the talk, but I do say "um" a lot. (I have a secret podcast I've been working on, and I probably would have published it by now if I could stop saying the u-word so much.)

Sunday, December 7, 2014

I wrote a tool that I am open-sourcing to keep track of website server performance. It is very un-fancy. It tries to do one job and do it well: log and display page load times. (Wait, is that two jobs?)

Background

Over the past several months, I've spent way more time than I had anticipated stressing over, testing, and trying to fix the abominable performance I had on anibit.com. I was using a low-end hosting service intended for personal websites, which I had expected to be on the slower side of things, but it got so bad that page loads could take up to 45 seconds. That was even at times when I was the only user, other than the failed spam-bot logins once a minute or so. I would have been satisfied with 6 second loads, even though that is generally considered a poor user experience. Something had to give. Anibit was on a shared server with probably dozens of other sites (my hosting service doesn't reveal, that I'm aware of, how many websites share your host). Any of them could misbehave and bring the server to its knees until the hosting service brought the hammer down on them, which it frequently needed to do.

Friday, November 21, 2014

I received a lot of positive feedback on my last post, and some of the ideas posted on the sites that picked it up gave me the inspiration to write this follow-up. There were some good ideas and some questions posed, so I thought I'd try to make a few addenda and clarifications.

Choice of toolchain.

I chose OpenSCAD mostly as the first thing that popped into my head. The nightly version (2014.03 as I write) is fairly stable. The output it generated was well received by most of the other tools I used with it, which included Blender3D for rendering the preview images, MeshLab for debugging the geometry when I goofed something up, and LibreCAD for loading the 2D DXF files to send to my laser cutter. I even used Elmer to do some rudimentary (and probably naive) stress sanity checks.

I did lament that some of the really cool features in the bleeding-edge version of OpenSCAD were not available, but the generated output of the newer version had some erroneous geometry. I've read on the OpenSCAD forums that there may be some versions for download that don't have those bugs.

I had worked with OpenSCAD before, so I knew what its capabilities and limitations were. I wasn't really tied to OpenSCAD; there are some alternative tools that work very similarly. Ultimately, it came down to the devil I knew, and the fact that it supported all of the features that were critical to me: easy to set up, mature and stable, bug-free STL and DXF output, and allowing me to programmatically override the skeleton designs by emitting generated design code.

Wednesday, November 19, 2014

First, I should get something off my chest: the title is probably a little misleading. "Heavily parameterized 3D case design" might have been a more accurate, but duller, title.

I recently kicked off laser cutting services on Anibit, and I plan to augment that service with a lot of specific product designs that I create. I'm a nut for automation and flexibility, and I'm deficient in intrinsic artistic talent. I determined that, as much as possible, I would design the physical aspects of my mechatronics creations in a way that I could easily make changes large and small, and not have to re-do much work.

My first area of focus was an automated, heavily parameterized system of scripts for creating laser-cut acrylic project cases. This was stupid fun to work on, and my blog post frequency has been anemic this year, so grab some snacks and settle in for a read.

Automatically rendered preview image.

My weapon of choice in this case is OpenSCAD, a text-based parametric modeling program. Don't be intimidated, OpenSCAD is one of the easiest modeling packages I've ever used. Software developers especially will feel right at home:

Wednesday, October 8, 2014

....figuratively speaking, of course. (I'm not crazy! Well maybe a little)

Last week, I ended my employment at my "day job", to work full time on Anibit.com and building a consulting business. It's very exciting, and I'll admit, a little scary. I'm going to be adding a lot of services to Anibit.com (and I'll add a "hire me" sidebar pane to this blog soon).

It's been a dream of mine to run my own company for a long time. I'm not getting any younger, and I plan to die with no regret for something I never got around to. The timing for me personally is as close to perfect as it gets, which is to say not really perfect, but pretty good.

I've also always wanted to do podcasting and video production. I've started a daily podcast, but I'm in "practice mode" right now, because I need to work on my "radio voice" (I say 'um' in between every other sentence). I've also played around with producing some screencast tutorials, also not ready for prime time. And I'm now in "fail fast" mode on a Kickstarter project that I've been cultivating for six months in what little free time I had. I'll go public with those details soon.

Wish me luck in these exciting times, expect to see more frequent updates here and on Anibit's website, and if you have any Windows desktop, Android, or AVR/ARM Cortex applications you would like developed, give me a shout at anibit.technology[at symbol]gmail[dot symbol]com

Monday, September 29, 2014

One thing I love about AVR chips is how electrically hardy they are compared to most ARM devices. Most can run at 5 volts, and source tens of milliamps. Most also run well at 3.3 volts, which is especially good when interfacing with an ARM.

One thing to watch out for is that the maximum stable clock speed of an AVR is reduced when running at lower voltages. ATtiny85s cannot run reliably at the internal PLL'd clock speed of 16 MHz when powered at 3.3 volts. I've cried myself to sleep over this, so I offer this cautionary tale. Read your datasheets!

Tuesday, September 23, 2014

I'm having some issues with my new domain for this blog, so I've temporarily reverted it to the old bytecruft.blogspot.com domain. Atom feeds may or may not be working, and please disregard any weird redirect warnings over the next few days while I work it out.

I've been trying to do some real work on a Raspberry Pi, and it's cramped my style a little bit to cannibalize a monitor and slap an extra keyboard and mouse on my desk.

Working in Raspbian Wheezy, you have a lot of Debian Linux at your disposal, so I thought, "I'll just ssh in." That works great, from either a Linux VM or using Putty on Windows. But I needed to run graphical applications and spawn terminals at will, so I really wanted the full desktop experience.

X11, the base graphical interface run by virtually all Unix-like operating systems, supports a feature known as display redirection. I used this back in the '90s when I tried to make use of a boat-anchor DECstation from my Slackware Linux box. It still pretty much works the same way it did 20 years ago. Before I get too far into how it works, I'm just going to stop and mention that if all you want is to remotely run graphical programs on your Raspberry Pi, stop right there! There is a much easier way! X11's server-client model is very powerful and flexible, and is unique in a class of technologies that has stood the test of time, but it's not very "get'er done" user friendly.

Linux machines support Microsoft's "Remote Desktop" protocol with two programs: xrdp and rdesktop.

xrdp is the "remote server". This runs on the machine that you want to remotely log into. Note that this is backward from X11, where the "server" is the machine with the physical display, and the client is the (remote) application that generates content to display.

To install xrdp on your Pi (if using Raspbian or another Debian Linux derivative), type:

sudo apt-get update

then

sudo apt-get install xrdp

Your pal apt will download, set up, and launch the xrdp daemon, which starts listening for connection requests. If you're paranoid, reboot your Pi to make sure.

Tuesday, August 26, 2014

I've run across some bad information on the Internet more than a few times lately, and it prompted me to try to counter this misinformation about optimizing workstation computers with gobs of RAM.

The tl;dr version is: leave your RAM alone, let the operating system manage it. Teams of people smarter than you and I have figured this problem out.

The Problem:

The advice is basically that emulating a hard drive/SSD with system RAM will speed up your system, and that moving your paging file to the RAM disk can improve performance. That is just wrong. Why does this idea never seem to die the death it deserves? I think there is a lot of misunderstanding of what a page file is. There seems to be this idea that the reason a PC is running slow is that it's spending too much time writing to the paging file.

How paging files work.

All modern desktop operating systems support the idea of a paging file. In short, a page file is the operating system's way of allowing programs to ask for more RAM than actually exists. In the old days, when RAM was used up, the operating system simply returned an error when a program asked for more memory. The purpose of a page file is not to speed up a computer; it is to allow your drivers and applications to have access to more memory than is actually available on the system. Indirectly, a paging scheme can make more RAM available for other uses, which does lead to higher performance.

Mental experiment: a femto computer:

Monday, August 18, 2014

I'm trying out Google's new Domains service. It seems Blogger is a little paranoid and throws up a ton of "redirect" warnings. I'm hoping that The Goog figures out that both services are its own, and that it's the same person linking this blog to www.bytecruft.com. Either that, or I've got something horribly configured.

I swear by Odin, I shall return to this blog in full strength.

In the meantime, enjoy the fruits of my recent tinkering with Javascript. I'm certain that I probably made a lot of js faux pas, but hey, it's Javascript, the business-casual programming language. I want to build a lot of "nano" tools and interactive demos, and using the browser is the best way to get the widest reach. Once I have a little more command of it, I think I'll switch to something like TypeScript or Dart.

I'm pretty sure you'll run into issues if you use anything but the latest browsers; I didn't do any testing of old browsers, and will probably always target the latest stable ones.

I wrote a calculator tool for determining the target address of an AVR microcontroller relative jump instruction. The motivation was personal: I've spent a lot of time recently staring at hex dumps from ATtiny devices. I wrote a bootloader (more on that in a future post), and debugging it involved memory dumps of a lot of dynamically generated rjmp instructions. (ATtiny devices do not have hardware support for bootloaders, so you have to fake it in software.) I got tired of calculating the targets by hand, so I wrote a spreadsheet, and thought, "If I wrote it as a web app, it would be there forever, and I'd always have access to it."
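For the curious, the core arithmetic is small. Under the AVR convention that rjmp encodes a signed 12-bit offset in flash words, and that the program counter advances before the offset is applied (target = PC + k + 1), a sketch of the calculation looks like:

```javascript
// Decode an rjmp opcode (0b1100_kkkk_kkkk_kkkk) and compute its target.
// pc is the word address of the rjmp instruction itself.
function rjmpTarget(pc, opcode) {
    var k = opcode & 0x0FFF;   // 12-bit offset field
    if (k & 0x0800) {          // sign-extend two's complement
        k -= 0x1000;
    }
    return pc + k + 1;         // AVR: PC <- PC + k + 1, in word addresses
}

// rjmp .+0 encodes as 0xC000: the target is simply the next instruction.
console.log(rjmpTarget(0x0010, 0xC000).toString(16)); // "11"
```

Note the addresses are word addresses; double them if you're cross-referencing byte offsets in a hex dump.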

Thursday, May 8, 2014

As the French like to say, Crikeys! It's been a long time since I posted. I thought once Anibit.com went live, I'd be able to post to ByteCruft more often. That was the plan, at least.

I've been pretty much head-down, working on things related to Anibit. I think some more of it will be public soon.

So the title of this post is my public service announcement about ATtinys. FMUL is Atmel's assembly instruction for "unsigned fractional multiply". Before you make grandiose plans that hinge on doing fast math in hardware, make sure your CPU supports it. Most, if not all, ATtinys do not support hardware multiplication.

Thursday, March 13, 2014

Whew. I've been spending a lot of time on my "sister site". I thought I would have had more time to blog about Fun StuffTM by now. There is so much work behind the scenes that you need to do to launch a site like the one I want. I enjoy the supporting work, but it's all still more of a means to an end; I'm enjoying the journey, but I'm looking forward to the destination.

I have to say, one stark lesson I've already learned is "just because you've built it doesn't mean they'll come". I have very little traffic to my site, and website traffic generation ranks substantially below marketing in requisite business development activities, as far as my interests go. I know there are thousands or tens of thousands of people out there who would be interested in the site, but how do I connect with them? Part of building a catalog of existing parts was to attract a regular audience interested in hobby electronics. That, and other content I have planned (in the next phase, starting soon), would make Anibit a bookmark/Feedly-subscription-worthy site.

But I get the feeling, given metrics to date, that there's more needed than just throwing content up and hoping for people to find it. Yet I don't know what that thing is. My traffic to Anibit so far is dismal, especially if you cull the traffic generated by my personal social networking friends (a lot of my hits cluster in places I or my family have lived). I suppose I should keep it in perspective (starting a business requires a healthy dose of that). Self-bootstrapping with cash on hand like I am doing requires a lot of patience, and the experiences I've had while doing it so far have almost all been rewarding and overwhelmingly positive.

Monday, February 3, 2014

I started a new project recently, a reflow oven. This time, I've started with the physical build, and the software side of it will lag behind. I've laser cut the case I designed in OpenSCAD, and I couldn't be more pleased with the results.

The case with the "human interface" components: a touch-screen LCD and a 1-watt speaker.

I have some projects in R&D for Anibit, and I'm going to need to whip up some PCBs. I need a reflow oven, and I'm going to recycle an old toaster oven. I was inspired by this project, which was in turn based on osPID. The touch screen was something I've had lying around, waiting for a project.

Sunday, January 19, 2014

I started this blog in part to document my hobby work, and hopefully provide ideas or inspiration to others.

A lot of work that I've done lately has been on something I wasn't ready to blog about... until now.

I'm starting a new website devoted to hobby electronics and programming, and I hope for it to become more than just a website. I'd like it to be a more substantial presence and a greater contributor to the community, by publishing code, tutorials, and designs, as well as hosting forums. And of course, I'd love to make my hobby self-funding, so it will feature a store where I'll sell my original products, as well as resell other products of interest to the makers and programmers out there.

Sunday, December 22, 2013

[Editor's note. This is a post that I started almost 3 years ago, I'm making an effort to purge or polish some of the unfinished or unworthy posts on Bytecruft]

This is a traffic light I made for my kids. It's got an ATtiny13 driving the LEDs. It also features a "soft" on-off circuit driven by a tactile momentary push button.

I know, another AVR LED-based project. They're so easy. I swear I work on more than LEDs, and with chips other than AVRs. It's just that the AVR ones tend to get 100% finished. Part of my hope for this blog is to provide motivation to finish a couple of the unfinished projects I have. Actually, looking back, it's not so much a problem of motivation as of time. I usually "multithread" my projects, working on several at once, to keep from getting burned out or over-obsessed with one. Sometimes, before I know it, a project hasn't been touched in a couple of months.

This was another one of those "in between" projects: my goal was to make something small and lightweight after working on something big.

Present-day action shot, I finally replaced the 3+ year old batteries
during the teardown/photo shoot for this.

The real neat trick with this project, in my opinion, was the power management.