I had a friend ask me a question concerning his engineering homework some time back, and it got me thinking. At first glance, I looked at it as an engineering issue, being such is the world wherein I live… and all of a sudden, it's like, hey wait a second, this is just homework.

This isn't a real circuit, wherein Mr. Murphy is lurking around every corner, just waiting to holler out gotcha should you overlook some seemingly minor detail. This isn't a production issue, wherein an error can equate to hundreds of man-hours in rework, or an analysis to determine whether rework, or throwing the entire batch away and starting over, is the more cost-effective approach.

I thought back to my days at uni in circuit analysis class, wherein we'd pound on complex resistor networks, using Norton and Thévenin equivalents to solve equations. We'd look at shortcuts wherein we could throw this and that sort of characteristic away, since its contribution to the circuit as a whole was minuscule. The classes back then served me well for years, and continue to do so.

And yet, there are many times wherein that resistor isn't really a resistor; it's a complex device, with potentially a capacitive or inductive component, or both. It may have some weird impedance curves as a function of frequency. It will have thermal dependencies due to ambient temperature, but it may also have issues with self-heating, or heating from nearby components. And then, to add insult to injury, it may have its own internal voltage sources with both AC and DC components as well. All of the above could change throughout its life cycle, or could even change due to mechanical stresses, either as one-time things due to assembly issues, or as lifelong concerns due to stresses on the PCB itself. It could also be influenced by environmental factors such as circuit board pollution, crystalline growth on the terminations, or even water or chemical ingress through its packaging.
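A quick sketch of this in code: the model below treats a resistor as R shunted by a parasitic capacitance, in series with a lead inductance. The parasitic values are illustrative assumptions, not figures for any particular part.

```python
import math

def nonideal_resistor_z(r, l_series, c_parallel, freq_hz):
    """Impedance magnitude of a simple non-ideal resistor model:
    R in parallel with a shunt capacitance, in series with lead inductance."""
    w = 2 * math.pi * freq_hz
    z_rc = r / (1 + 1j * w * r * c_parallel)  # R in parallel with C
    return abs(z_rc + 1j * w * l_series)      # plus the series L

# Assumed parasitics for a 10 kOhm chip resistor: 1 nH lead, 0.2 pF shunt
r, l, c = 10e3, 1e-9, 0.2e-12
for f in (1e3, 1e6, 100e6, 1e9):
    print(f"{f:10.0e} Hz: |Z| = {nonideal_resistor_z(r, l, c, f):10.0f} ohms")
```

At 1 kHz the part looks like an ideal 10 kOhm; by 100 MHz the shunt capacitance has already pulled the magnitude down noticeably, which is why the "minuscule" call has to be made per application.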

Fortunately, most resistors are applied in such a way that they are merely resistors; the factors above end up falling into the minuscule category and can be discarded as too small to have much of an effect on the circuit as a whole.

As such, the primary drivers of resistor selection end up being the idealized value and its associated tolerance, the power and voltage ratings, its physical size/footprint, and whether it currently exists in inventory. Application-specific concerns may require a specific material spec: carbon comp, metal film, and wirewound, to name a few. Temperature coefficient and/or reliability specifications may also be driving factors in some designs.
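As a minimal sketch of the selection math (the 50% derating figure is a common rule of thumb I'm assuming here, not a universal spec):

```python
def resistor_ok(r_ohms, v_applied, p_rated, v_rated, derate=0.5):
    """Steady-state check of a resistor against its ratings.
    derate=0.5 keeps dissipation at or below half the power rating."""
    p_dissipated = v_applied ** 2 / r_ohms   # P = V^2 / R
    return p_dissipated <= derate * p_rated and v_applied <= v_rated

# 12 V across 1 kOhm dissipates 144 mW: over budget for a derated 1/4 W part
print(resistor_ok(1e3, 12.0, p_rated=0.25, v_rated=200))   # False
print(resistor_ok(1e3, 12.0, p_rated=0.5, v_rated=200))    # True
```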

A Few Positives

Customization

How many times have you run into a product that's 90% there? It may be an environmental chamber, a piece of test gear, or even a production tool. That remaining 10% can be a real annoyance at times. With open source, customers can have a product modified to get that last 10% solved… with closed source, some pretty weird hacks are often necessary, as nothing else comes close. I've seen production equipment hacked to death at times in order to make it do something reliably. Had prints been available, a third party might have been able to step in, or perhaps even the customer themselves could make a minor modification to a less-than-obvious function, and be right back in business for less cost, and tons less aggravation.

Orphanization/EOL

The product life cycle can be considered sort of an inverted bathtub curve. In the beginning, volumes are low, and unexpected challenges are often pretty rampant. In the middle, the product is rock solid, volumes are up, and costs are down. As a product reaches its end of life, it is likely no longer cost-effective for a manufacturer to produce. That may be due to production equipment wearing out, a lack of institutional knowledge to support said product, or even individual components reaching end-of-life status. The end result, of course, is that the product is no longer made, and if you see an ongoing need, you may need to capture a last-time buy and put spares in storage, or even search out used ones on eBay. Granted, in a majority of cases, ongoing need ends up being a non-issue. Most inkjet printers, for example, have a finite life, and new models come out every year or two. On the other hand, other products may fall into a niche industry. For example, some Epson printers are used for cake decorating, i.e., food-grade printable inks… and if Epson were to EOL a printer line without a replacement, a whole industry of food-grade printing could be affected. Granted, a printer would be a bear to open source due to its complexity, but the idea of EOL issues affecting more than just a single product is similar. With open source, one is no longer locked into proprietary formats, or the whims of the marketing guys at a sole manufacturer. (Obviously there is good and bad about such a concept… marketing guys like to be sole source and proprietary, as that's where the maximum revenue lies… and it may be that price targets in the open source arena can never compete with closed source, due to limited volumes and investment opportunities.) Either way, open source reduces some of the risk of the EOL and orphanization issues that commonly occur.

Funding

Occasionally potential clients will want a project designed, but lack the financial resources to do so. One workaround for this situation is to fund an existing open source project, whereupon I can provide customization services at much less cost than a full-blown effort. Another possibility is to rally a number of contributors and build a related open source project, such that only modification is needed, rather than a full-blown development effort. Granted, there is an IP risk in doing so, but in reality, the largest risk in new product development is marketing, followed by a lack of funding to complete a project. IP risk, while very real, is much less of an issue than marketing or funding failures. Open source is one possible way to work around funding shortfalls, and because of lower time commitments, may enable one to get customers involved earlier rather than later, and thus minimize marketing risk as well.

Time to Market/Risk Mitigation

Time to market is often a critical factor in new product development. It may be an issue with a market window, i.e., seasonal or trade related, or it may be an adjunct product that needs to be available concurrently with another product. Since open source designs most likely have a solid framework, it's no longer necessary to reinvent all the wheels. It may well turn out that only incremental and/or cosmetic changes are needed, trimming months, or in some cases years, off the development cycle.

A Few Major Concerns

Business Models

Existing business models built on loss leaders based upon proprietary design/integration won't work. E.g., sell the printer or game console at cost or at a loss, and make it up on the consumables.

Service and support are anathema to most existing business models, whereas they are the lifeblood of open source revenue streams. Of course, there is the question of whether customers will pay for said service and support.

Economy of scale

The economy of scale may not be there to meet the customer's pricing demands. E.g., when a single manufacturer tools up to build 100K units, there is the potential for significant cost savings, in contrast with 100 manufacturers each building 1,000 units a year.

Investors

Investors may be wary of giving away the farm, i.e., the huge growth possible in a narrow channel ends up distributed across many players. It will require a different approach.

Open Source Project Verification and Support

I'm a firm believer in open source software as well as hardware, yet I see a real need for qualification of the projects, especially as it concerns hardware. There are just too many half-complete open source projects out there, or worse, 3D-rendered vaporware that looks so good it's very difficult to tell if it's real or not.

I have hundreds of thousands of units in the field where I used an op amp as a comparator, but it's not a cakewalk by any means… in fact, as a friend says, it's challenging Mr. Murphy. I do not recommend doing so… but marketing cost targets and PCB real estate limitations, combined with a leftover op amp in a quad package, may end up making it worth considering. Thus, let's look at a few issues.

Op Amps are dog slow compared to comparators

The most apparent issue is that op amps are dog slow compared to comparators. Now, if your signals are pretty slow, speed is likely a non-issue.
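To put rough numbers on it (both figures are assumptions for illustration: a garden-variety op amp slewing at 0.5 V/µs versus a dedicated comparator with a propagation delay in the tens of nanoseconds):

```python
def transition_time_us(swing_v, slew_rate_v_per_us):
    """Time for a slew-limited output to traverse a full output swing."""
    return swing_v / slew_rate_v_per_us

# Assumed: a 10 V swing at 0.5 V/us takes 20 us just to slew, versus a
# dedicated comparator that switches in well under a microsecond.
print(transition_time_us(10.0, 0.5))
```

And slew rate is only part of it: an op amp driven to its rails also needs time to recover from output saturation, a delay the datasheet rarely specs.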

Be wary of an op amp's output topology and power supply

Next is the matter of the op amp's output. I.e., if you are using bipolar power supplies for the op amps, and the op amp/comparator feeds a micro, you need to play a few games, and that will add some cost and real estate. Or perhaps you are able to run the op amp on a unipolar supply, it has a rail-to-rail output, and you will use the same power supply for the micro… expect Mr. Murphy to arrive, as you chase op amp instability due to unintentional positive feedback (mostly due to common mode effects). Such scenarios can be a real bear to deal with, requiring an untold number of PCB revs to make things happy and stable… and what if the PCB gets dirty with age? Throw in a little leakage to create a positive feedback path, and now you have field failures left and right.

If you are tempted to try something like this, add a second analog supply, separate the grounds, use Faraday shields, and consider conformal coating. Then, once you are all done, go hammer on the input and look for any signs of instability… hammer it hard; you may be surprised that you still have work to do. With such a topology, you are likely asking to see Mr. Murphy at every turn, so put up lots of stop signs, and once installed, hammer them to make sure they are solid.

Phase Inversion, oh noes!

Then comes phase inversion… yes, the term op amp vendors don't like to talk about. Comparators are designed to handle a substantial differential input voltage, and the resulting currents to some extent as well; op amps, on the other hand, are applied where the differential input voltage is theoretically zero, same with input current (Vos, Ib, and layout issues obviously preclude it from being truly zero, but you get the idea). If you go outside of the maximum spec'd differential, expect that you might see phase inversion. It happens with a lot of common op amps, perhaps less so with today's designs than years ago… but no one likes to talk about it. DO NOT EXPECT SPICE MODELS to show this; in fact, don't expect SPICE models to show much of any real-world behavior… Also be aware, each time the op amp gets whacked with an out-of-spec differential, it may be degraded permanently… Not a good spot to be in.

Internal Protection Diodes may bite

And speaking of damage… some op amps have internal protection diodes, so if you go outside of the differential input specs, you turn them on… expect all sorts of bizarre and unexplained behavior. This could include thermal issues on the die adjacent to the internal ground, making for all sorts of fun scenarios long after the op amp signals have returned to a nominal state.

Apart from damage, pretty much anytime you get very far from zero volts differential on an op amp input, performance specs can get very dodgy. Sometimes manufacturers will spec out how performance degrades… often, since such use is a misapplication of an op amp, they leave that information off the datasheet. And yes, SPICE models as a general rule won't tell you either, even more so if temperature varies.

Don't use op amps as comparators, but if you must

So… what to do… don't try it, but if you must, go over the datasheets with a fine-toothed comb. Look for gotchas in the input and output specs. Be very careful to avoid unintentional positive feedback paths. Give the apps guys at the factory a call, and ask 'em straight out: Can I do this? They will tell you NO, but they may offer particular suggestions which might help. They know folks misapply their parts all the time… They also know that some op amps plain and simple will not work as comparators, no matter how much tweaking one does. In other cases, they must admit some models can do well in such a topology, provided the designer does their homework ahead of time.

But the problem is getting there, and I think accounting procedures for many businesses are likely the key to either successful innovation, or the potential to go under in a huge way.

In good economic times, it's easy to play accounting games and shift expenses from a pet project to a mission-critical area. E.g., the new whizbang has tons of overhead, so the solution is to shift the accounting for that overhead to existing cost centers, where it likely will remain hidden. Then multiply this by tons of pet projects, and all of a sudden, rather than a pet project having real costs associated with it… it seems a no-brainer. That's fine, until the cost centers end up bloated, and prime targets for budget reductions. The end result is, many core functions end up taking a hit, all the while the pet project looks good, at least for a while.

As budgets continue to shrink, the core function cuts will go too far, and that will hopefully force a restructuring of accounting games, such that the costs for pet projects become much more accurate.

Slashing key infrastructure only goes so far before an entity can no longer support its existing customers and ends up going under. Hopefully the downturn will foster more accurate accounting in time to put real numbers behind ALL functions. Not numbers good enough to continue funding pet projects for the next quarter, all the while letting core functionality go south, as all too often happens in large organizations.

Yes, I know it's odd for the tech guy to put accounting on a pedestal, but cash flow and its proper allocation become really critical in a downturn… and if it uncovers inefficiencies and bad allocations of resources, it gets everyone on the road to recovery and innovation a ton faster, as contrasted with riding the fake numbers as the ship goes down.

Conceptually, these types of devices are fairly easy to understand; everyone is familiar with the term red hot. In the radiation thermometry arena, all we do is apply numerical values corresponding to temperature readings, based upon the radiation which is emitted. With a disappearing filament optical pyrometer, one visually looks at the color of a hot surface and compares that to a thin heated wire, wherein the temperature of the heated wire is a known value. When the heated wire is at the same temperature as the surface one is looking at, it effectively blends into the background.
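The physics behind the blend point is Planck's law: when filament and target sit at the same temperature (and similar emissivity), they emit the same radiance at the viewing wavelength, so the filament vanishes. A small sketch using the textbook formula, with 650 nm assumed as a typical red-filter viewing wavelength:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance per Planck's law (W / (m^2 * sr * m))."""
    a = 2 * H * C ** 2 / wavelength_m ** 5
    x = H * C / (wavelength_m * K * temp_k)
    return a / math.expm1(x)

wl = 650e-9  # red: the usual disappearing-filament viewing wavelength
for t in (1000, 1500, 2000):
    print(f"{t} K: {planck_radiance(wl, t):.3e}")
```

Radiance at 650 nm climbs extremely steeply with temperature, which is why a trained eye can discriminate as well as it does.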

In fact, those skilled in the field can even come pretty close (within a few hundred degrees F) to estimating the temperature of a known material just by viewing it with the naked eye. I blew away the Minardi F1 guys by telling them their brake rotor temperatures via observation, back when I worked with them years ago.

Granted, instrument design, calibration, and accurate temperature measurements are a lot more complex than this simple explanation, but as an initial post on the subject, I thought it best to start off with a very simplified layman’s approach.

For a more detailed explanation, the guys over at Spectrodyne have a nifty graphic. Way back when, I did a lot of work with them; they are one of the ultimate calibration houses out there, albeit their focus is limited to a few specific manufacturers. They also repair and calibrate Radiamatic sensors, which played a large part in my life for a number of years.

Granted, these types of instruments have pretty much all disappeared, save the retrofit market and the Spectrodyne model. In part, since they require an operator to visually make a call, wide variances can exist from operator to operator; but also, different materials require different spectral regions for measurement, and since the human eye is limited to the visual spectrum, significant error can be introduced. These factors, combined with a need for tighter and tighter measurement accuracy, really limit their application. Yet, for ease of explanation, the basic operation is something most everyone can relate to.

In the software world, scaling comprises two parts: the technical aspect of whether the application will scale as users grow, and the marketing aspect. The technical part is that it won't require an exponential increase in server/computing capability, and ideally, such costs per user would drop as more users are added. The marketing part is that the marketing overhead per user drops as the application grows… always a tricky part with any type of business, but with open source, perhaps even more critical.

In the hardware world, economy of scale also comprises two distinct parts. First, raw materials/component prices drop as the line-item purchases become larger. E.g., if I want to buy one RCA jack, it's $2 at Radio Shack; if I want 1,000,000 of them, I can even get them customized for a fraction of that cost. It's even more dramatic with enclosures: getting 5 thermoformed enclosures might end up costing $500 each, whereas getting 500,000 injection-molded enclosures might drop the cost down to $0.50 or less.

Then there is manufacturing overhead… everyone would like to build small quantities economically, but when it comes to electronics, oftentimes the setup costs are tens if not hundreds of times the individual piece-part costs. E.g., it might take 20 seconds to populate a large, dense circuit board, but it takes 4-8 hours to program, load, and test the assembly equipment the first time to make it so. A similar deal exists in test engineering… e.g., a board-level test fixture costs $2500, and whether you run 10 or 100,000 units through it (assuming one already has a LabVIEW-style master test console), the fixed costs remain the same. Lastly, there is the knowledge base of the line technicians… a 10-piece run does not develop a knowledge base to allow fast rework/repair or troubleshooting, whereas a 10K run pretty much means the line techs are fully up to speed and ready to roll.
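The arithmetic is simple but brutal. A sketch using the $2500 fixture figure above plus an assumed $1500 of line programming/setup and $12 of parts and labor per board:

```python
def unit_cost(piece_cost, setup_cost, qty):
    """Per-unit cost with the fixed setup amortized across the run."""
    return piece_cost + setup_cost / qty

# Assumed: $12/board variable cost; $2500 fixture + $1500 setup = $4000 fixed
for qty in (10, 100, 1000, 10000):
    print(f"{qty:>6} units: ${unit_cost(12.0, 4000.0, qty):,.2f} each")
```

A 10-piece run carries $400 of overhead per board; at 10,000 pieces, the very same overhead amounts to 40 cents.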

All of these factors taken together make small production runs of open source hardware problematic. Granted, if the margin is there, either by uniqueness or customization opportunities, it's much less of a problem, but for low-margin products, it's a real challenge.

Some of the ways to mitigate this are to choose parts which keep the BOM cost at a minimum to start with. E.g., avoid $25 highly specific parts, even though the price drops like a rock with volume. Another solution would be to include test fixture designs with the design, such that test engineering overhead is minimized. And of course, using production notes to get line techs up to speed well before they have run 10,000-plus parts. The use of a collaborative wiki, where all manufacturers can chime in with ideas, problems, and fixes, may also be of great help in keeping the economy of scale manageable for low-volume production runs.

Within the open source hardware domain, there is a wide range of approaches: everything from conceptual designs not far from the lunchtime napkin, all the way to production ready. Granted, a full-blown design with Gerbers, BOM, AVL, mechanicals, and production notes, including pick/place targets, is easy to spot, just as scans of ideas off napkins or notebooks are; it's really the projects in the middle that are hard to make the call upon.

Granted, if one is going to build 1, or perhaps a hundred, pick/place targets are likely not of great value, but production notes often are… and oftentimes, they are the most critical. E.g., things like ferrite beads and the key role proper temperature profiles play, or perhaps issues like potting and how to prevent it from migrating into the connectors, etc.

And production is really where the rubber meets the road, so to speak. Back in my contract manufacturing days, it was often said that most anyone can build one; the challenge is building volume, and indeed that is all too true. It could be production tooling, calibration, test selects, final test, qualification, rework, common failure modes, or any number of factors. A few pages of notes can make the difference between great success, or huge frustration and potential failure.

Thus, as I start posting designs, I will be sure to include production notes, even for things that should be obvious; the ferrite bead issue is just one of many.

Based upon the product's specification, we will develop a prototype qualification plan. This plan will include provisions for Alpha and Beta testing of the prototypes, prior to entering the next stage, which is manufacturing preparation. We have access to labs all across the US, with a multitude of specialties ranging from product safety and environmental to FCC and EMC testing. Our experience has been that even Alpha prototypes should go through a minimum series of tests before being released to highly qualified end users for testing. Time and money spent in the lab can save many thousands in the field. This is even more critical for Beta test units, in that potential customers may be involved in the Beta stage. One day in the EMC lab can save numerous flights and field service calls, to say nothing of saving face in front of one's Beta testers. The key, however, is a well-written test plan and qualification documents, to try and catch as many potential failure modes as possible in the lab, rather than at the Alpha and Beta test sites.

There is a tendency to want to sell Alpha and Beta units. We do not condone such practices, as such products are not production ready, have not been thoroughly tested on production gear, and just by their nature may require another iteration or two to meet the design specification. As such, the final stages of manufacturing preparation and product release should be completed before anything other than pre-release intent-to-purchase commitments are made.

By the same token, it can be advisable to require some financial commitment on the part of Alpha and Beta testers, such that they take the testing process seriously. I recommend taking deposits as a requirement for Alpha and Beta testing, with the deposit refunded upon completion of the test and the return of the Alpha and Beta units.

Non-Disclosure Agreement

I had a fairly generic bilateral non-disclosure agreement created some time back. It's primarily for my new customers to use; however, it could also serve as a template for your own needs. That said, IANAL, and since these types of things do vary, use it at your own risk. At a bare minimum, it would be especially prudent to have a local attorney check it out before using it for something exceedingly critical.

The NDA is in Word 2000 format. If you prefer another format, please use my contact form, and I can email it to you in any number of formats.

One of the things I've learned in my SEO travels is that search engine rules evolve over time, and are becoming closer and closer to what a user wants, as contrasted with what the search engine finds pragmatic.

As such, a meaningful page title makes a lot of sense. For example, in this case, the title is SEO and Title Pages [Ron Amundson]. While it's not super friendly, it does serve to identify what this page is truly about. In the ideal case, my blog software would allow me to set the page title in a more readable fashion. However, be that as it may, at least if a user looks at the title, they have something readable that makes sense.

Oftentimes, page titles are either computer generated, and thus not terribly intelligible, or in other cases, they are a default setting spread across a whole web site. Both of these scenarios are not the greatest for users, nor are they conducive to a search engine.

The other thing to keep in mind is that the title of a page should indicate what is on that page. I know it seems obvious, but many in the web community miss the fact that web pages are read by humans. No matter how much you game the search engines into sending visitors to your page, if a user shows up and then immediately hits the back button, you never had a chance to tell them about your content anyhow.

There are cool tools available to measure keyword density, such that you can verify your title indeed reflects the content of your page. Here is one such super cool analyzer: http://www.ranks.nl/tools/spider.html.
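If you'd rather do a quick check yourself, a naive density count is only a few lines. This sketch splits on letters only, and skips the stop-word filtering and title/heading weighting that real analyzers like the one above perform:

```python
import re
from collections import Counter

def keyword_density(text, top=5):
    """Share of total words that each of the most common words represents."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words)
    return [(word, n / total) for word, n in counts.most_common(top)]

sample = "open source hardware needs open documentation and open tooling"
for word, density in keyword_density(sample, top=3):
    print(f"{word}: {density:.0%}")
```

Run your page text and your title through it; if the top terms don't overlap, the title probably isn't describing the page.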