Observing lives lost and trauma from preventable tragedies is among the most frustrating experiences of my career. However, whatever frustration we feel pales in comparison to the pain victims and their family members experience.

Prevention of human-caused catastrophes has long been a top priority of our R&D. We have a desire and an obligation to provide insight into what can be done to prevent school shootings and other similar human-caused catastrophes. I sincerely hope this modest attempt to help will assist in taking specific pragmatic actions to save lives.

Step 1: Understanding Prevention

Investing well in prevention provides the highest possible return on investment, in every respect: human lives, trauma avoidance, and financial ROI.

The great majority of human-caused crises are now preventable.

In today's information-rich environment, most human-caused catastrophes are preceded by multiple warnings.

Effective prevention is primarily a combination of good science and engineering, which tend to work best independent of political and financial conflicts.

Although state-of-the-art systems are the most effective for capturing preventions, low-cost, manual local efforts are better than what exists today in most communities.

The main components in effective prevention include:

A functional system to identify early warning signs.

Ability to connect dots from various disparate sources.

Methodology to qualify risk factors with accuracy.

Integrated reporting to appropriate authorities.

Follow-up to confirm necessary action was taken.

Foolproof governance to prevent abuses and failures.
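The components listed above can be sketched as a minimal pipeline. This is a toy illustration under assumed names and scoring (`WarningSign`, `RiskCase`, `connect_dots`, and `escalate` are all hypothetical), not the Kyield system:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class WarningSign:
    source: str          # e.g. "school", "tip-line", "social-post"
    subject: str         # anonymized subject identifier
    description: str
    observed_at: datetime

@dataclass
class RiskCase:
    subject: str
    signs: list = field(default_factory=list)

    def add(self, sign: WarningSign) -> None:
        self.signs.append(sign)

    def risk_score(self) -> float:
        # Toy heuristic: more signs from more independent sources => higher score.
        sources = {s.source for s in self.signs}
        return min(1.0, 0.2 * len(self.signs) + 0.2 * len(sources))

def connect_dots(signs):
    """Group warning signs from disparate sources by subject."""
    cases = {}
    for sign in signs:
        cases.setdefault(sign.subject, RiskCase(sign.subject)).add(sign)
    return cases

def escalate(case: RiskCase, threshold: float = 0.6) -> bool:
    """Flag a case for reporting to authorities once qualified risk crosses a threshold."""
    return case.risk_score() >= threshold
```

In practice, qualifying risk factors and governing the system would require far more than a threshold heuristic; the sketch only shows how reports from disparate sources might be joined per subject and escalated for follow-up.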

The ability of social networks to assist is limited.

Large social networks have significant resources and strong data science teams. Like the federal government, they have the entire world to be concerned with, not just one community or school. Social networks also have business models that often directly conflict with the needs of individuals and communities. People use many different types of apps that come and go.

For these reasons communities need their own networked security systems that encourage participation and are free from external conflicts. Community efforts need to coordinate with schools and law enforcement, which in turn integrate with federal agencies.

The most effective prevention requires advanced systems engineering.

Since the Phoenix memo was revealed following the 9/11 terrorist attacks, an enormous amount of time and of intellectual and financial capital has been invested in prevention methods and systems. The best systems are very good but expensive. Independent systems with the capacity to effectively manage the complexity and scale of public interaction are not yet available in an affordable, turnkey manner.[i] However, low-cost methods do exist that all communities can and should initiate to significantly lower the risk of school shootings and similar tragedies.

Step 2: Short-term Action Plan

Create a small, high-level task force of operational managers from local organizations, including law enforcement and IT experts.

The goal should be to reduce the probability of catastrophic events to the lowest level possible given the physical, financial and regulatory constraints.

Step 3: Long-term Plan

Unified artificial intelligence (AI) systems with human augmentation distributed over fully interoperable networks should be a high priority for the long-term. Carefully designed networked systems are not only optimal for physical safety, but also health, learning, productivity and economic competitiveness.

Distributed AI systems are conceptually similar to Moore's law, which accurately forecast that the number of transistors in a dense integrated circuit would double approximately every two years. Performing many functions well in one system can be much more efficient and easier to justify from a financial-investment perspective. Distributed AI systems offer the potential for a multiplier effect that can pay dividends for many years.
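The efficiency argument can be made concrete with a toy calculation (the doubling period, function names, and figures here are illustrative assumptions, not forecasts or Kyield data):

```python
def compounded_capability(base: float, years: float, doubling_period: float = 2.0) -> float:
    """Moore's-law-style growth: capability doubles every doubling_period years."""
    return base * 2 ** (years / doubling_period)

def cost_per_function(total_cost: float, functions: int) -> float:
    """A unified system amortizes one investment across many functions."""
    return total_cost / functions
```

For example, `compounded_capability(1.0, 10)` returns 32.0 (five doublings over a decade), and a single $10M system performing five functions costs $2M per function, versus five separate single-purpose systems each carrying full cost.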

A Novel Financing Program for Prevention

One of the obstacles is that investment in effective prevention of low-probability events at any particular location or entity is difficult to justify, particularly in tight budget environments. We developed a new model last year, called HumCat, to help overcome the problem of financing. The HumCat program (short for human-caused catastrophes) bundles insurance coverage and bonding with unified AI systems, and can be extended to bonds that finance projects.

The economic goal is to demonstrate improved risk profiles for communities over time, which can then also improve ratings and reduce insurance and borrowing costs. The intent of the HumCat program is not only to cover the cost of the system installation and monitoring, but also to pay for itself many times over in the form of several types of captured preventions that save lives and treasure.[ii]
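The economic argument can be sketched as an expected-loss calculation. All function names and figures below are hypothetical illustrations of the reasoning, not HumCat's actual model:

```python
def expected_annual_loss(probability: float, severity: float) -> float:
    """Expected yearly loss from an event with a given annual probability and cost."""
    return probability * severity

def net_annual_benefit(p_before: float, p_after: float, severity: float,
                       system_cost: float, premium_reduction: float = 0.0,
                       borrowing_savings: float = 0.0) -> float:
    """Avoided expected loss plus insurance/borrowing savings, minus system cost."""
    avoided = (expected_annual_loss(p_before, severity)
               - expected_annual_loss(p_after, severity))
    return avoided + premium_reduction + borrowing_savings - system_cost

# Illustrative numbers: a 1-in-1,000 annual event with $1B severity, reduced
# five-fold by prevention, a $300k/yr system, and $100k/yr in premium reductions.
benefit = net_annual_benefit(0.001, 0.0002, 1_000_000_000,
                             300_000, premium_reduction=100_000)
```

Even for low-probability events, severity can dominate: in the toy numbers, the avoided expected loss alone ($800k/yr) exceeds the assumed system cost before any insurance or borrowing savings are counted.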

Mark Montgomery is the founder and CEO of Kyield, originator of the theorem ‘yield management of knowledge’, and inventor of the patented AI system that serves as the foundation for Kyield: ‘Modular System for Optimizing Knowledge Yield in the Digital Workplace’. He can be reached at markm@kyield.com.

[i] In the case of our Kyield OS, the cost is in the tens of millions to develop into a turnkey hybrid-cloud format, in the highly specific architecture required to provide scalability in an independent manner for prevention with optimal governance, monitoring, and security. The system includes governance for the entire network, the ability to continuously learn and improve automatically with a simple-to-use natural language interface, and downloadable apps for all individuals on any standard device.

[ii] HumCat is an abbreviation for prevention of human-caused catastrophes roughly patterned after ‘nat cat’ (natural catastrophes): “A Million In Prevention Can Be Worth Billions of Cure with Distributed AI Systems” http://www.kyield.com/technology/humcat.html

Since we offer an organizational and network operating system (technically defined as a modular artificial intelligence system), we usually deal with the most important strategic and operational issues facing organizations. This is most obvious in our new HumCat offering, which provides advanced technology for the prevention of human-caused catastrophes. Short of preventing an asteroid or comet collision with Earth, this is among the most important work that is executable today. Please keep that in mind while reading.

In our efforts to assist organizations, we perform an informal review of our process with the goal of improving the experience for all concerned. In cases where we invest a considerable amount of time, energy, and money, the review is more formal and extensive, including SWOT analysis, security checks and reviews, and deep scenario plans that can become extremely detailed, down to the molecular level.

We are still a small company, and I am the founder who performed the bulk of the R&D, so by necessity I'm still involved in each case. Our current process has been refined over the last decade in many dozens of engagements with senior teams on strategic issues. In so doing, we see patterns develop over time, which we learn from and which I share with senior executives when their behavior causes them problems. This is still relatively new territory as we carefully craft the AI-assisted economy.

I just sent another such lessons-learned note to the CEO of a public company this morning. Usually this is done confidentially, to very few, and never revealed otherwise, but I wanted to share a general pattern that is negatively impacting organizations, in part due to the compounding effect it has on the broader economy. Essentially, it can be reduced to misapplying the company's playbook in dealing with advanced technology.

The playbook in this instance, for lack of a better word, can be described as 'industry tech', as in fintech or insurtech. While new to some in banking and insurance, this basic model has been applied for decades with limited, mixed results. The current version has swapped the name 'incubators' for 'accelerators', innovation shops now take the place of R&D and/or product development, and corporate VC is still good old corporate VC. Generally speaking, this playbook can be a good thing in industries like banking and insurance, where the bulk of R&D came from a small group of companies that delivered very little innovation over decades, or worse, as we saw in the financial crisis, delivered toxic innovation. Eventually the lack of innovation can cause macroeconomic problems and place an entire industry at risk.

A highly refined AI system like ours is considered by many today to be among the most important and valuable of any type, hence the fear, however unjustified in our case. Anyone expecting to receive our ideas through open innovation on these issues is irresponsible and dangerous to themselves and others, including your company. That is the same as bank employees handing out money for free, or insurance claims adjusters knowingly paying fraudulent claims at scale. Don't treat intellectual capital and property lightly, including trade secrets, or it will damage your organization badly.

The most common mistake I see in practice is relying solely on ‘the innovation playbook’. CEOs especially should always be open to opportunity and on the lookout for threats, particularly any that have the capacity to make or save the company. Most of the critical issues companies face will not come from or fit within the innovation playbook. Accelerators, internal innovation shops and corporate VC arms are good tools when used appropriately, but if you rely solely on them you will almost certainly fail. None of the most successful CEOs that come to mind rely only on fixed playbooks.

These are a few of the more common specific suggestions I've made to CEOs of leading companies dealing with AI systems, based in part on working with several of the most successful tech companies at a similar stage, and in part on engaging with hundreds of other organizations from our perspective. As you can see, I've learned to be a bit more blunt.

Don’t attempt to force a system like Kyield into your innovation playbook. Attempting to do so will only increase risk for your company and do nothing at all for mine but waste time and money. Google succeeded in doing so with DeepMind, but it came at a high price and they needed to be flexible. Very few will be able to engage similarly, which is one reason why the Kyield OS is still available to customer organizations.

With very few exceptions, primarily industry-specific R&D, we are far more experienced in AI systems than your team or consultants. With regard to the specific issues, functionality, and use cases within Kyield that we offer, no one I am aware of has more experience and intel. We simply haven't shared it. Many are expert in specific algorithmics, but not in distributed AI systems, which is what is needed.

A few companies have now wasted billions of USD attempting to replicate the wheel we will provide at a small fraction of that cost. A small number of CEOs have lost their jobs due to events costing tens of billions that I have good reason to believe our HumCat could have prevented. It therefore makes no sense at all not to adopt.

The process must be treated with the respect and priority it deserves. Take it seriously. Lead it personally. Any such effort requires a team but can’t be entirely delegated.

Our HumCat program requires training and certification of senior officers for good reasons. If it weren't necessary, it wouldn't be required.

Don’t fear us, but don’t attempt to steal our work. Some of the best in the world have tried and failed. It’s not a good idea to try with us or any of our peers.

Resist the temptation to customize universal issues that have already been refined. We call this the open checkbook to AI-system hell. It has ended several CEO careers already, and it's still early.

Since these are the most important issues facing any organization, it’s really wise to let us help. Despite the broad characterization, not all of us are trying to kill your organization or put your people out of work. Quite the contrary in our case or we would have gone down a different path long ago.

We are among the best allies to have in this war.

It is war.

