After migrating to a new Mac, I found that a ton of my website passwords were gone or out of date (I haven’t been using iCloud passwords or any other password app). Migration Assistant seems to have problems under certain conditions. While I haven’t completely figured out the underlying issue, the only viable solutions were either to copy the passwords to a new keychain or to use iCloud passwords, temporarily or permanently.

On newer versions of macOS, password storage centers on iCloud, complemented by a local cache and local system passwords. You can find the password cache in the keychain files and folders in ~/Library/Keychains. If you deactivate iCloud passwords, the cache is referred to as “Local Items” (if you decide to keep it). Every Mac gets its own UUID-named folder, so if the migration succeeded you will find both your old Mac’s folder and a new one for your new Mac. Yet even with the files present on the drive, my passwords were still missing; simply copying those files around won’t help.
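To see which machine-specific keychain folders ended up on a Mac, you can list the UUID-named directories yourself. Here is a minimal sketch in Python, based only on the path and folder-naming convention described above (run it on the Mac in question):

```python
from pathlib import Path
import re

# Each Mac keeps its "Local Items" password cache in a folder named after
# the machine's hardware UUID inside ~/Library/Keychains.
keychains = Path.home() / "Library" / "Keychains"
uuid_re = re.compile(
    r"^[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-"
    r"[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}$"
)

if keychains.is_dir():
    for entry in sorted(keychains.iterdir()):
        # After a migration you would expect two such folders:
        # the old Mac's UUID and the new Mac's.
        if entry.is_dir() and uuid_re.match(entry.name):
            print(entry.name)
```

Seeing both UUID folders only confirms the files made it over; as noted above, their presence does not mean the passwords are usable.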

After investing a bit of time I decided to activate iCloud passwords on my old Mac and then on my new Mac. The files are synced online and then to the new Mac (sure enough, during the experimentation phase this syncing deleted at least one of my very new passwords…). Since this did the trick, I am now even considering keeping iCloud passwords active, which would simplify my future migrations.

If you don’t want to store passwords on the iCloud servers permanently, just deactivate iCloud passwords on ALL devices (not that easy if you use a ton ;). Apple actually suggests there is a way to skip iCloud password storage entirely by not creating an iCloud Security Code. I haven’t tried this and am not even sure whether it still works in Sierra.

Today I opened a milk carton printed with a thank-you note: by buying this brand of milk, we are supporting hill dairy farmers. Germany is home to some of the toughest discounters (and other retailers), who, acting in the interest of the consumer, use their economies of scale to dictate the purchasing cost of food and other goods. This process has driven retail prices in Germany down to the lowest in the EU. However, it also jeopardizes farmers and other producers.

What if there were a better way to price goods?

First I would like to look at why consumers push for such low prices, and then propose an alternative pricing model. While retailers act in the interest of consumers to lower the cost of goods, they also want to maximize their own return, which adds a level of complexity to pricing. Competing retailers advertise their prices and consumers can compare the offers. Commodities such as milk are not differentiated, so the comparison is easy. Rational consumers opt for the cheaper offer, since they need to maximize the utility of their income.

However, I think there is a second interest that consumers pursue, or at least I do. There is a considerable retail margin on top of the cost of the goods, and it is this margin we are trying to optimize away. Retailers have long lost their personal touch and their expertise, but still try to earn the same margin. For example, I know more about the BMW or the camera I am pursuing than my dealers do. All the information is available on the internet, and as a buyer I am more interested in the product than the dealer is. With all this price transparency, would you buy more at a more expensive retailer if there is zero benefit to it?

In addition, I think that consumers want to support the producers of goods. Some of my colleagues are also supportive of retailers (much more so than me).

New pricing model

Discounters, which often offer only one type of a commodity, should provide two prices for that good: one calculated hard as it is today, and one a few cents more expensive. The additional margin would be passed on transparently and in full (100%) directly to the producer of the good, without ever appearing on the retailer’s profit and loss account. Technology should make this easy to implement.

One could extend this strategy to other products and retailers, too.

A consumer can then decide whether they want to, and can afford to, support the producer directly without supporting the retailer in any way (other than purchasing at the store). This would increase fairness towards producers. With two prices for a single product, consumers can decide for themselves (not through the proxy of the discounter) whether prices are too high. It would still be good for the retailer, who would initially differentiate themselves from the competition by providing the offer, and later still earn that solid margin on the base good.

It is also fair to consumers, since not all of them are equal. Wealthy consumers could easily afford the higher prices, while poorer ones could pay the lower prices, knowing that the pricing model supports the producers. Both could decide to do the opposite if they so wish. This model would signal the consumer’s interest directly to the producer without being proxied through the retailer.

Why not just buy the milk brand described in the introduction? How fair is that really? Who gets the margin? Is it fair to the consumer? Where can I even get that brand? Transparency, fairness and availability are all reasons to prefer the proposed strategy over that model.

Posted in Uncategorized | Comments Off on Pricing Strategies for a Fairer Trade

Docker and containers in general are revolutionising the way we think about application development and operations. As a new packaging technology, Docker in my view perfectly defines the interface between an application and the host it will be running on. To better understand why, I have looked at how this interface captures application dependencies and what it leaves out, because many different packaging technologies have come before it. Here’s a historical list of those I have used:

C64 BASIC source code (actually it is bytecode, but it can be seen as equivalent)

Breakthrough innovations are changing IT at ever faster speeds. New tools, languages, libraries, virtualisation technologies and workflows appear and get adopted in ever shorter cycles. Docker is a perfect example of this.

Keeping pace in that kind of environment requires a structured methodology. Here is what I do when preparing for a new technology, keeping in mind that no matter how cool it is, I will probably be far from using it in my current paid projects.

Read a lot on sites such as InfoQ, where practitioners report.

Go to general DevOps conferences such as QCon or Velocity, or watch the talks on YouTube.

Key people will start to emerge from what you read and hear. Follow their publications and talks.

Pick a topic that matches your preferences and that will keep you focused over the next months and hopefully years.

Try it out, start with a small startup scenario, prototype it. Ask for help.

Operational aspects of software systems are often treated as second-class citizens. Availability, for example, is simply expected to be a given. FRs (functional requirements) are a differentiator and NFRs (non-functional requirements) are not. So why should anyone focus on NFRs?

In a world of mobile apps and the sharing economy, NFRs are becoming a necessity for the consumer (enterprise IT people need to understand the difference between an internal customer and a consumer). However, due to complexity, internal structure and ignorance, some of the enterprises creating those new products won’t get Ops right.

Consumers will find themselves in a world of imperfection. A focus on Ops will lead to differentiation and therefore competitive advantage. Or at least, ineffective Ops might devastate the unprepared!

Posted in DevOps | Tagged DevOps, NFR, Ops | Comments Off on Ops as a competitive differentiator?

Barbara Liskov, professor at MIT, presented an IT-history keynote at QCon called “The Power of Abstraction”. It was both fun and painful to be reminded of the IT topics of the 70s:

Gotos

Top-down structural design

Modules

Abstract data types

Algol, Simula, CLU

Specifically, her work on CLU, a programming language used mainly for research, with its concepts of data abstraction, parametric polymorphism, iterators, multiple return values, explicit type casting and exception handling, has influenced the development of OOP and of popular languages such as Java and Python, amongst others. Graham Lee has collected all documents referenced in the talk in his post.

ThoughtWorks’s Vladimir Sneblic held the excellent Continuous Delivery course at QCon 2013 today, together with his colleague. Expanding on the well-known must-read book by Jez Humble, the tutorial included anecdotal stories, case examples and professional materials.

In a nutshell, Continuous Delivery proposes to improve the software delivery process from development to operations (painful in most larger companies I know of) by increasing delivery/deployment frequency. There are multiple reasons to do this. Massively increasing delivery frequency:

decreases the increment size, reducing the risk resulting from the scope of the change,

forces to focus on the delivery process and drives automation efforts,

uncovers the necessity to cooperate especially between Dev and Ops,

shows the urgency to improve the structural deficiency in Ops,

and ultimately reminds of the importance of Ops in the context of software development.

However, implementing Continuous Delivery is a major change effort. The following is a list of things to implement, assuming that agile software development is already in place:

Continuous integration

Trunk is always production-ready

Automated testing

DB migration tools

Agile infrastructure

Comprehensive configuration management (everything is in version control)

DevOps

In larger corporations this amounts to a major organizational transformation that needs to be pitched at the level of the corporate CIO.

Today Dan North presented his “Three Ages” core pattern (or, as I would say, business model) at QCon 2013. I like how succinctly it categorizes the phases of, say, the adoption of a methodology in an organization.

Business is often in explore; however, it requires IT to be in stabilize.
Dan recounted the story of an IT ops guy who is constantly driving for commoditization. I would love to have that guy on my team!

In a project I know of, the teams are trying to reach stabilize. Business, however, tries to force IT to maximize efficiency. That destabilizes the teams back into explore.

There is no shortcut to the three steps. Follow the steps in the given order and create a culture of continuous improvement.