Most of OpenAI’s future work will be done under the name OpenAI LP. But never fear: its intentions are pure, says the release, citing the fact that OpenAI LP will be governed by the OpenAI nonprofit board, only some of whose members will be allowed to hold a financial interest. Maybe we should call this “partial inurement”?

OpenAI Nonprofit governs OpenAI LP, runs educational programs such as Scholars and Fellows, and hosts policy initiatives. OpenAI LP is continuing (at increased pace and scale) the development roadmap started at OpenAI Nonprofit, which has yielded breakthroughs in reinforcement learning, robotics, and language.

The cited reason for the change is a need to raise “billions of dollars” and attract the best and the brightest with massive signing bonuses. But, any basic divergence in motivation aside, OpenAI says it’s still following the same mission and will limit how much investors and workers can make from it. (That’s the aforementioned “cap.”) In fact, investors in the first round are “only” allowed to earn up to 100 times their initial investment.

The group put its need for capital down to the massive computing resources needed to run its data-crunching algorithms, as well as a desire to build its own AI supercomputers. In a demonstration of how sheer computing brawn can bring big advances, OpenAI last month demonstrated a language-producing system it had built that can construct coherent-sounding text from any prompt. The system works by analyzing mountains of text and then guessing which word is most likely to come next in any situation, turning writing into a statistical guessing game.
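That “statistical guessing game” can be illustrated with a minimal sketch. The toy corpus and functions below are hypothetical, and real systems like OpenAI’s use neural networks trained on vastly more text, but the core idea — tally which words follow which, then pick the likeliest next word — is the same:

```python
# A toy next-word predictor: count word bigrams in a tiny corpus,
# then predict the most likely word to follow a given prompt word.
# (Hypothetical example corpus; real models learn from billions of words.)
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Tally how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "the" is most often followed by "cat" here
print(predict_next("sat"))  # "sat" is always followed by "on" here
```

Chaining such predictions word by word yields fluent-sounding text, which is why sheer scale — more text and more compute — translates directly into more coherent output.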

Here is what ValueWalk has to say about one reaction to OpenAI’s humanitarian work:

With artificial intelligence taking a major leap in recent years and estimates showing it will likely grow even more, we have seen many discoveries which could do more harm than good. One such example is the text generator developed by OpenAI. The machine learning algorithm can turn just a small snippet of text into lengthy and convincing paragraphs. Now MIT has collaborated with IBM’s Watson AI Lab to develop a machine learning algorithm to fight AI-generated text like that produced by OpenAI’s algorithm.

Language models have now improved dramatically, leaving plenty of room for manipulation. In other words, people with malicious intent could use text generators to spread propaganda or false information.

Ruth is Editor in Chief of the Nonprofit Quarterly. Her background includes forty-five years of experience in nonprofits, primarily in organizations that mix grassroots community work with policy change. Beginning in the mid-1980s, Ruth spent a decade at the Boston Foundation, developing and implementing capacity building programs and advocating for grantmaking attention to constituent involvement.