The half-life of knowledge, or half-life of facts, is the time that has to elapse before half of the knowledge or facts in a particular area is superseded or shown to be untrue. These coined terms belong to the field of quantitative analysis of science known as scientometrics.
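The decay that this definition describes can be sketched with the standard half-life formula. A minimal sketch, assuming exponential decay; the ten-year half-life below is an arbitrary illustration, not a measured value for any field:

```python
def knowledge_remaining(years, half_life):
    """Fraction of a field's facts still considered valid after `years`,
    assuming exponential decay with the given half-life (in years)."""
    return 0.5 ** (years / half_life)

# Illustrative half-life of 10 years (not a measured value).
print(knowledge_remaining(10, 10))            # 0.5: half superseded after one half-life
print(round(knowledge_remaining(20, 10), 2))  # 0.25 after two half-lives
```

The point of the formula is the compounding: after two half-lives only a quarter of what you once knew still holds.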

Here are a few things to think about:

What is the half-life of startup knowledge? Can we take lessons from the past and use them today?

What is the half-life of knowledge about software architecture and design?

What is the half-life of knowledge about sales and marketing techniques?

Some knowledge has a shorter half-life than the rest. To stay relevant in your industry, you need to figure out how much of your knowledge is still useful.

The term “Product Management” has evolved over time. For me, it was a confusing term: Product Managers (aka PMs) fit neatly into neither development nor marketing roles. I list three good articles that demystify Product Management.

I was talking to a student who is fascinated with a robot that cleans pipes. He had built a prototype and won some awards, and he wanted to discuss it.

We sat down and brainstormed many design ideas at a very high level. I encouraged him to think about a different cleaning robot – one that cleans water tanks. Our discussion lasted half an hour, and it was one of the most rewarding exercises I did today.

Thinking through the design of products is fun. When you do it as a small, passionate group, it is even more fun. That is one of the reasons I hang out with a lot of engineering students.

Most of my school and college life was spent learning lots of facts. I also learned principles and concepts, but not in any coherent manner. I was not sure why I was learning what I was learning. Our teachers (if they knew) forgot to tell us the “whys”. Some of this learning was fun, enjoyable, and reasonably effortless, but some of it was not.

When I started working, I started learning by doing. This was way more fun since I had a context for why I had to learn certain things. I retained my knowledge better since I was using it. When you learn by doing, or learn so that you can use it, the style is very different. You learn on demand, and if some of what you are learning does not make sense, you dig deeper and try to find out why something works the way it does. I will call this exploratory learning, and it certainly is a lot more effective.

I think people will learn better if:

They know why they are learning (learning by understanding the larger context)

They are allowed to explore (learning by exploring and discovering)

They are challenged by tasks that require learning (learning by doing)

They have the freedom to learn in their own ways (Seven freedoms of Learning)

If you use Google or Bing Search, if you get book or other product recommendations from Amazon, or if you get hints for the next word to type on a mobile keyboard, you are already using Machine Learning.

Artificial Intelligence (aka AI) will have a deep impact on our lives – both positive and negative. Like any other tool or technology, a lot depends on how we use it. I often get asked these questions:

What is AI?

What is good about it?

Will it destroy jobs?

Will it take over humanity?

What do we need to do to leverage AI?

AI traditionally refers to an artificial creation of human-like intelligence that can learn, reason, plan, perceive, or process natural language. These traits allow AI to bring immense socioeconomic opportunities, while also posing ethical and socioeconomic challenges.

Right now the opportunities are in research, technology development, skill development and business application development.

The technologies that power AI – neural networks, Bayesian probability, statistical machine learning – have been around for several decades (some date back to the late 1950s). The availability of Big Data is now bringing AI applications to life.

There are concerns about the misuse of AI and a worry that it may proliferate uncontrolled, killing jobs in its wake. Other worries include unethical uses and unintended biases. It is too early to take one side or the other.

She likes everything in brief – ideally 100 words. Me, I like to pontificate, take my time (in words, I mean), and ramble a bit.

There are three reasons why I think of AI as Augmenting Human Intelligence:

Humans have to be in the loop to teach AI. In supervised learning, they design the training sets, do feature engineering, and make other tweaks. In reinforcement learning, they provide the feedback in the form of reinforcement signals.

Humans will figure out where to apply AI, how to apply it, and how to interpret and improve the results.

There may be some situations where the AI is autonomous – as with space robots, or in hazardous situations where humans cannot get involved in real time.

As AI learns more and discovers new insights, humans will use those insights to move to the next level. In my opinion, humans and AI co-evolve. This is the process of Augmenting Human Intelligence.
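The human-in-the-loop point about supervised learning can be made concrete with a tiny sketch: a person supplies the labeled examples and chooses the features, and the algorithm (a plain perceptron here) only fits itself to them. The feature values and labels below are made up for illustration:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label in {0, 1}.
    Returns learned weights and bias for a linear classifier."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # the human-provided label drives the update
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Human-designed training set; features are [hours_of_practice, prior_courses].
data = [([0.0, 0.0], 0), ([0.1, 0.2], 0), ([0.9, 0.8], 1), ([1.0, 0.6], 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print(predict([0.95, 0.7]))  # → 1, classified like the positive examples
```

Every design choice in that snippet – which features exist, which labels are right – came from a person; that is the sense in which humans stay in the loop.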

A few links on Machine Learning and Software Engineering. The first one talks about how to explain machine learning to a software engineer and why software professionals need to pay attention to ML. It is both a tool and a bit of a threat.

The second article compares the way we build software and how it differs from building ML applications.

Software engineering is about developing programs or tools to automate tasks. Instead of “doing things manually,” we write programs; a program is basically just a machine-readable set of instructions that can be executed by a computer.

Now, machine learning is all about automating automation! Instead of coming up with the rules to automate a task such as e-mail spam filtering ourselves, we feed data to a machine learning algorithm, which figures out these rules all by itself. In this context, “data” should be a representative sample of the problem we want to solve – for example, a set of spam and non-spam e-mails so that the machine learning algorithm can “learn from experience.”
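The contrast between the two styles can be shown side by side. A minimal sketch, with a made-up toy corpus: in the software-engineering style a human writes the spam rule; in the ML style the telltale words are derived from labeled examples.

```python
from collections import Counter

def rule_based_is_spam(text):
    # Software-engineering style: a human writes the rule explicitly.
    return "free money" in text.lower()

def learn_spam_words(spam_msgs, ham_msgs):
    # ML style: derive telltale words from labeled examples by comparing
    # how often each word appears in spam versus non-spam.
    spam_counts = Counter(w for msg in spam_msgs for w in msg.lower().split())
    ham_counts = Counter(w for msg in ham_msgs for w in msg.lower().split())
    return {w for w in spam_counts if spam_counts[w] > ham_counts[w]}

# Illustrative labeled "corpus" -- a real one would be far larger.
spam = ["win free money now", "free money offer"]
ham = ["lunch money reminder", "project offer review"]
telltale = learn_spam_words(spam, ham)

def learned_is_spam(text):
    # Flag a message when it contains at least two learned spam words.
    return len(set(text.lower().split()) & telltale) >= 2

print(learned_is_spam("claim your free money"))  # → True
```

The hand-written rule misses any phrasing its author did not anticipate; the learned word set shifts automatically whenever the training data changes – which is the "learning from experience" the article describes.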

Not all core concepts from software engineering translate into the machine learning universe. Here are some differences I’ve noticed.

A few thoughts:

ML and software development will co-evolve. Software will be used to build tools for building ML. ML will automate automation. Since software is the current tool for automation, ML will replace many software activities. Does this pose a threat to the software profession?

Do we need a different mindset for building ML apps, compared to building software? What principles of software development can be reused while building ML apps?

Can ML help us build better software by improving the building process?

The software industry is one of the heaviest users of tools for automating its own work. Various low-level (assembly), high-level (Java, C++, C#), and very high-level (Python, Ruby) languages and their associated toolchains have simplified building applications. Now we have tools not only for building software, but also for debugging, profiling, optimizing, and managing it. Is ML going to be another one of these tools? Will this new class of ML apps take software as input and produce better software as output?