Will Oligarchs Own the Future of AI?

Here we go again.  We were set back a bit this morning (two weeks ago now) by a recent TechCrunch article about Anthropic, perhaps the most inspiring company touting safe AI.  It seeks to raise billions to compete with the not-so-open OpenAI.  Before commenting on the article, how about a little context?

OpenAI, the company, was founded in 2015 precisely to, as stated on Wikipedia today, “freely collaborate” […and make] its patents and research open to the public.  Things began to change in 2019, when OpenAI transitioned to a “capped-profit” structure.  This year, OpenAI has written that it will no longer “share [any] details for commercial and other reasons”.

Last year, on a podcast with Reid Hoffman, OpenAI CEO Sam Altman suggested that only a handful of companies could provide the foundational AI models on which everyone else will build “the middle layer”.  That would make foundational AI simply part of “Big Tech”, raising familiar questions of who owns the future.

OpenAI has been stingy about openness since it initially refused to share the GPT-2 model in 2019, eventually releasing it, arguably due to pressure from the AI community.  OpenAI has not shared the weights of any subsequent GPT model.

Open AI (with a little ‘o’: the concept, not the company) does not mean commercial AI becomes impractical.  The intent of open AI is to keep fundamentals of nature, like math, electricity, and fire (including nuclear power), from becoming private property at the expense of society.  Making a living by harnessing them and applying them innovatively should remain fair game.

Our entire tech industry is built on shared intellectual property, most notably the open-source software movement.  Without open source, much of modern life would be stuck decades in the past.  The progress in machine learning over the last few decades would have been impractical without open-source operating systems and programming languages, such as Linux and Python.

AI models are a little different.  They have two critical parts.  The first is the source code that implements them.  Typically, this is Python code that runs on tens to thousands of GPUs, which are massively parallel matrix-manipulating machines.  Essentially, given data, the algorithms written in Python adjust the matrices until the error in predicting things about the training data is minimized (or nearly so).
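To make that loop concrete, here is a toy sketch in plain Python, fitting a single weight by gradient descent (the function and data are invented for illustration; real models do the same thing for billions of weights on GPUs):

```python
# Toy illustration of what training does: adjust parameters (here, a single
# weight) to minimize prediction error on the training data.

def train(data, steps=200, lr=0.01):
    w = 0.0  # the "model weights" start at some initial value
    for _ in range(steps):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # nudge the weight to reduce the error
    return w

# Training data generated by the "true" relationship y = 3x
data = [(x, 3.0 * x) for x in range(1, 6)]
w = train(data)
print(round(w, 3))  # converges toward 3.0
```

The resulting value of `w` plays the role of the model weights: it is not code, just numbers produced by running the code against data.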

The second part, the resulting contents of the matrices after training (a.k.a. the model weights), is where the controversy of open AI started with GPT-2.  Through GPT-3, OpenAI was quite good about publishing details of the algorithms used in its models.  The AI community easily replicated such models, with various modifications.  Fair enough; that’s one degree of open AI.  Many believe it’s not enough.

Better is the general open-source attitude among AI researchers, including many with commercial affiliations, and especially the Hugging Face community.  But having the source code of a model is not “democratic” enough, where democratic means practically available to anyone and everyone.  Practical availability requires both the source code of a model and the weights resulting from its training.

Just a few details on the models and their weights.  The transformer architecture has been refined significantly but remains basically unchanged over the last 5 years.  The source code for producing the state of the art is widely available and gradually evolving as techniques improve.  In order to produce a state-of-the-art model, massive amounts of data are needed.  Whether training a language model from text or a multi-modal model with text, images, etc., we have enough readily available data to approach the state-of-the-art results democratically.  Where it becomes less democratic is the cost of computing the model weights given the training data.

The amount of computation required to train a model is (naively) proportional to the number of training iterations times the size of the model, where size means the number of parameters.  The weights are simply the values of those parameters after optimizing the model by training it on the data.
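A common rule of thumb from the scaling-law literature (not a figure from this article) estimates training compute as roughly 6 floating point operations per parameter per training token.  A quick sketch:

```python
def train_flops(params, tokens):
    """Estimate training compute with the common ~6 * N * D rule of thumb."""
    return 6 * params * tokens

# A Chinchilla-like 70B-parameter model trained on 1.4 trillion tokens:
print(f"{train_flops(70e9, 1.4e12):.1e}")  # about 5.9e+23
```

Note how the estimate depends symmetrically on model size and data: either one alone says little about cost.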

Table stakes for a good language model, which generally requires over 10 billion parameters, is on the order of 10^23 floating point operations.  For example, DeepMind’s Chinchilla proved a 70B-parameter model superior in many regards to models many times its size.  Meta’s more recent LLaMA benefits from additional improvements.  A Chinchilla-scale model was trained on 2,048 A100 GPUs over more than 1 trillion tokens of text in 21 days.  At arm’s-length, on-demand prices last year, this would cost roughly $1 million.
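Those cluster figures can be sanity-checked with simple arithmetic; the $1-per-A100-hour rate below is an assumption for illustration, not a quoted price:

```python
# Sanity check of the cluster figures: 2,048 GPUs running for 21 days.
gpus, days = 2048, 21
gpu_hours = gpus * days * 24
print(f"{gpu_hours:,} GPU-hours")  # 1,032,192 GPU-hours

# At an assumed on-demand rate of ~$1 per A100 GPU-hour:
cost = gpu_hours * 1.00
print(f"${cost:,.0f}")  # lands right around $1 million
```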

Commercial OpenAI would have us believe that this is just the tip of the iceberg, that $1 million today will be $1 billion tomorrow.  OpenAI would have us believe that we can’t afford to keep up as it builds models 10 to 100 times larger.  Well, the jury is out.  There have already been models with 3 to 10 times as many parameters as GPT-3 that fizzled quickly.  But the intent is clear.

Unfortunately, according to the article, the inspiring Anthropic now aspires to be one of the oligarchs of AI.  The article states that Anthropic’s investor pitch deck claims, “We believe that companies that train the best 2025/26 models will be too far ahead for anyone to catch up in subsequent cycles.”  The deck goes on to assert that AI will automate large swaths of the economy within a very few years.  Such hyperbole may further inflame unfortunate calls for a moratorium on AI.

We like Anthropic’s approach to Constitutional AI, and we are big fans of continuous, self-supervised learning, as well as reinforcement learning from human feedback.  These techniques have materially advanced the safety, instruction-following, and conversational abilities of language models recently.  And they do so with an order of magnitude less compute than the brute-force pre-training discussed above.

Anthropic thinks building a model with “tens of thousands” of GPUs will produce magic.  We’ll see.  There are a few stubborn facts in the way.  One is that the size of a model and the amount of its training data must be scaled in balance: loosely speaking, one cannot just double the number of parameters without also doubling the training data.  A problem with doubling the training data is that we are running out of data.  Another is that we are already approaching the asymptotes of what more parameters and more data can deliver.
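That balance can be sketched with Chinchilla’s compute-optimal heuristic of roughly 20 training tokens per parameter (the helper function is ours, for illustration):

```python
# Chinchilla's compute-optimal recipe pairs roughly 20 training tokens with
# each parameter, so parameters and data must grow together.
TOKENS_PER_PARAM = 20  # approximate ratio from the Chinchilla paper

def optimal_tokens(params):
    """Tokens needed to train a model of the given size compute-optimally."""
    return TOKENS_PER_PARAM * params

for params in (70e9, 140e9, 280e9):
    print(f"{params / 1e9:.0f}B params -> {optimal_tokens(params) / 1e12:.1f}T tokens")
```

Doubling the parameter count doubles the required training data, which is why running out of text is a binding constraint.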

The basic shape of the learning curves for models of more than a few billion parameters is that the point of diminishing returns is passed quickly, somewhere between 10 and 100 billion tokens of training data.  After “just” a few hundred billion training tokens, a model with 30 to 120 billion parameters begins to look asymptotically close to “fitting” the training data.  And the 30-billion-parameter model fits the data over 95% as well as a model four times its size.
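One way to see this asymptotic behavior is a Chinchilla-style scaling law for loss, L(N, D) = E + A/N^α + B/D^β, here using approximately the constants fitted by Hoffmann et al. (2022); this is a sketch, not a reproduction of the figures above:

```python
# Chinchilla-style loss curve L(N, D) = E + A/N**alpha + B/D**beta, with
# constants approximately as fitted by Hoffmann et al. (2022).
E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(params, tokens):
    return E + A / params**ALPHA + B / tokens**BETA

tokens = 300e9  # "a few hundred billion" training tokens
small = loss(30e9, tokens)
large = loss(120e9, tokens)
print(f"30B: {small:.3f}, 120B: {large:.3f}")
# The 30B model's loss lands within a few percent of the 4x-larger model's.
```

The irreducible term E dominates both results, which is exactly what diminishing returns from scale look like.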

Whether or not size matters, other innovations are coming into focus now that we have sufficient scale.  Hopefully, we can avoid Big Tech, including OpenAI and Anthropic, owning our future by more openly sharing models, including their weights.  If not, we can expect advances in AI to slow as proprietary interests stifle the exchange and experimentation that produced staggering progress over the last decade.  Either way, if the limits of scale alone are indeed near, it’s not the end of the world.