GPT under $100,000?

For the last several years, we’ve been hearing about how much it costs to build ever larger language models.  Today, a state-of-the-art language model requires approaching a trillion-trillion (10^24) arithmetic operations involving hundreds of billions of parameters.  Doing the math, assuming a decent if older GPU such as an A100, you can work out how many years this computation would take on a single device, and then how many GPUs you need given how many days you have to complete it.  For example, Meta recently published that training a 65 billion parameter version of the LLaMA model on over a trillion tokens of text took approximately 21 days on roughly 2,000 such GPUs.  That’s almost exactly 1 million hours of GPU time, which can be had for less than $1,000,000.
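To make that arithmetic concrete, here is a quick back-of-the-envelope sketch in Python; the dollars-per-GPU-hour figure is my illustrative assumption, not a quoted price:

# Back-of-the-envelope training budget, assuming roughly $1/hour for an A100
# (cloud prices vary; this is an illustrative assumption, not a quote).
gpus = 2048            # roughly 2,000 A100s, per Meta's published LLaMA numbers
days = 21              # reported training time for the 65B model
dollars_per_gpu_hour = 1.00

gpu_hours = gpus * days * 24
cost = gpu_hours * dollars_per_gpu_hour

print(f"{gpu_hours:,} GPU-hours")   # 1,032,192 GPU-hours -- about 1 million
print(f"${cost:,.0f}")              # about $1,000,000 at $1/GPU-hour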

So, for $1 million and a few decent machine learning folks, you could replicate a state-of-the-art language model or build your own, tweaked to perform better in your domain, as Bloomberg has done for financial markets.  Expect to see much more of this from many corners, especially in healthcare, the life sciences, and elsewhere.

I would like to save the $1 million and start with the 30 or 65 billion parameter LLaMA model rather than train it from scratch.  Unfortunately, Meta is not forthcoming with model weights for LLaMA beyond 13 billion parameters.  The 13 billion parameter model is impressive enough.  The 7B model is not capable enough for me. The 65 billion parameter model would be better, but not twice as good.  The 30B parameter model is in the sweet spot.

Note that if you’re fine with a smaller pre-trained language model, you could try Stability AI’s language models. These are the folks who brought you Stable Diffusion. They promise to eventually release, for any use including commercial, language models up to 65B parameters. When available, the 15B model may be a good option. For now, I’d like to stick with LLaMA because of its significant algorithmic improvements.

Although available, even the 13 billion parameter model is not openly licensed.  As is frustratingly common, the model weights are licensed only for research, not for commercial purposes.  Meta invites commercial inquiry but, regrettably and based on my experience, is not eager to respond.  So, wanting to use LLaMA commercially, you may have to train your own model.  Let’s talk budgets.

Training a 13 billion parameter model costs about 20% of training the 65 billion parameter version, so less than $200,000.  You might be able to cut that cost in half, maybe more.  It’s a little dicey, but you can cut back on the training data.  DeepMind’s excellent Chinchilla results teach us to balance model size with the amount of training data.  Still, the truth is that you can get over 90% of a language model’s final performance with less than half a trillion tokens of training data.

If you can afford it, you can avoid cutting back on the training data by taking your time.  Your language model will be pretty good in 30% of the time Meta took, and you can let it keep improving from there.  That is, start using the model and keep training it, replacing the one you’re using every once in a while.  This is viable even if you perform fine-tuning (and even reinforcement learning), because the relative costs of such tuning are quite small versus pre-training.

The bottom line here is that you can build your own 13 billion parameter LLaMA for less than $100,000.  If you’re going to do millions of transactions, you might not be able to afford not to go in this direction!

LLaMA is essentially an improved version of Open AI’s GPT.  LLaMA benefits from various algorithmic improvements made since GPT-3 was released a couple of years ago.  More recently, Open AI introduced Instruct GPT, which follows instructions, and Chat GPT, which holds conversations.  And GPT has advanced to version 4.

LLaMA is GPT without the instruction-following or conversational abilities.  These are easy, and inexpensive, to add, however.  Consider instruction following, for example.  Researchers from Stanford generated tens of thousands of simple instructions and results using one of Open AI’s GPT models (text-davinci-003) and fine-tuned the 7 billion parameter version of LLaMA with them.  The dataset is relatively simple, and I thought it weak, but it was remarkably effective.  I was quite surprised how well the result follows instructions given only that simple, synthetic dataset.

On the other hand, it’s not all that surprising, given how much transfer of learned representations we have seen in vision.  The ease of improvement here is simply because any decent generative language model will quickly adapt its representation to new linguistic sequences, such as those involving instructions.  It doesn’t have to construct much new representation to do so.

The resulting language model is dubbed Alpaca, a cute play on words.  Well, now there is Vicuna!  Vicuna takes the 13 billion parameter LLaMA even closer to Open AI’s state-of-the-art performance, according to the researchers from CMU, Stanford, and the University of California at Berkeley and San Diego.

Look them up.  It’s stunning how easily they compete with Google and approach Open AI.  And the training cost to improve LLaMA to “within 10%” of GPT was less than $1,000.  More and more is happening on this front.  For more, see Microsoft’s DeepSpeed-Chat (which may seem odd given Microsoft’s investment in OpenAI, the company).
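For the curious, the instruction fine-tuning described above looks roughly like the following sketch using the Hugging Face transformers library; the checkpoint path, data file, and hyperparameters are placeholders for illustration, not the Stanford team’s actual recipe:

# Minimal instruction fine-tuning sketch with Hugging Face transformers.
# The model path, data file, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_path = "path/to/llama-7b"          # hypothetical local checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers often lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_path)

# Each record holds "instruction" and "output" fields, Alpaca-style.
data = load_dataset("json", data_files="instructions.json")["train"]

def to_text(example):
    return {"text": f"Instruction: {example['instruction']}\nResponse: {example['output']}"}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

data = data.map(to_text)
data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama-7b-instruct",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-5,
    ),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()

The point is not the particulars but the scale: a few epochs over tens of thousands of short examples is a tiny fraction of the compute spent on pre-training.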

Imposing Our Constitution on AI

In the face of calls for moratoria on AI research, it is clear that society wants to govern AI.  In the United States, we are governed by The Constitution.  It lays out founding principles and inalienable rights and has since been amended 27 times, beginning with the Bill of Rights.  It structures our government into three branches and defines how they are governed by the people.  Thus, we govern ourselves, democratically.  How shall we govern AI?  (Presumably, we don’t want to give AI the vote!)

Recent advances in AI have shocked our social fabric.  There is fear of economic upheaval, of sudden and significant unemployment, and of deep fakes and toxic bots swarming social media.  Fear is easy, and understandable.  Thank you, Darwin, God, et al.  Flight commonly follows.

The rational, educated, adult reaction to a fear that is not immediately threatening is to think, without fear, before acting.  Is the banning of AI by various social media sites, school districts, universities, and the like rational?

We think “yes” in some cases, such as trying to keep coding bots from overwhelming Stack Overflow with misinformation.  AI is not yet smart enough to provide reliable answers there, and don’t hold your breath waiting for that to change.  The same is true for social media, where deep fakes and misinformation sown by Russian bots can be overwhelming, but the more practical solution seems to be banning anonymity more broadly.  Once you know “who” a user is, you can label or prohibit AI; otherwise you can’t tell which is which.  To put that another way, unless you validate who a social media user is, a ban is nothing but preening.

We think “no” in education, generally speaking.  In particular, trying to stop students from using Chat GPT to help with homework or write assigned essays seems like a fool’s errand.  It’s like banning calculators.  First, it’s impractical.  The student will use the calculator (or AI) at home and teachers will only find out when in-class behavior differs.  Second, the days of Luddites are past.  Today, calculators do much more than arithmetic and their use is permitted even in standardized testing, such as the SATs.  Beyond calculators, we don’t have penmanship or typing classes anymore.  The same will happen with AI in education.  It will fundamentally change what and how we teach and why it matters.  It may take as long as it did for calculators to be accepted, for printing to dominate cursive, or for typewriters to disappear, but it’s inevitable.

Various moratoria on AI development have been proposed.  For the most part, these are also fearful reactions.  They seem more rational, however, because they propose less action.  They seek to freeze a moment in time in order to gain time to think.  We think that’s counter-productive, but it’s not unreasonable for society to make such a decision.  It just can’t work for long, and it will be costly.

We think the rational path is to lay out a constitution governing AI.  Just as the Founding Fathers did not put society on hold while they debated and drafted the constitution in Independence Hall, we prefer not to impose a moratorium on those who are making a living advancing and applying AI towards staggering improvements in world-wide economics, education, healthcare, and more.

Our corner of the AI community has adopted the notion of “constitutional AI” promoted by Anthropic.  This notion requires a provider of AI to define the constitution by which its AI will abide.  The dominant aspects of such constitutions today are that the AI will be helpful, harmless, and honest.  However the provider defines the constitution, it represents to users that the AI will substantially conform to that constitution and that the provider is committed to improving such conformance and ameliorating any non-conformance.

That’s all well and good, and we could go into the details of the typical constitution at length, but it’s not good enough for society.  Society needs all such constitutions to be governed by a constitution that we, the people, accept.  This shall be the constitution for AI.

The debate by which this constitution shall be defined and ratified belongs, as all matters of law, in the bodies of our legislatures, influenced as always by the public (and, of course, special interests).  It should, as with all matters of legislation, be out in the open and deliberate enough to allow for public discussion and protest before enactment, as may be appropriate.

For decades, experts foretold superhuman AI as a decade away.  Today, “the singularity”, where AI becomes completely general, capable of superhuman performance in all regards, is again, supposedly, a decade or so away.  Don’t believe the prognosticators.  Past performance is indicative of future returns.

Today, our legislators have little understanding of AI.  They, like most of us, can only imagine what AI is and what it will become capable of.  None of us can distinguish fantasy from reality when projecting AI forward.

The concern is that quick legislation governing AI, passed inside the six-month moratorium some are calling for, will be more fearful than thoughtful.  It will not allow time for many of us, especially our legislators and eventually regulators, to learn the technical capabilities and limitations of the technology well enough to govern it wisely.

How and whether other nations govern AI is not within our control, however.  And that is perhaps the only rational argument needed to defeat calls for any moratorium.  The reasons are somewhat obvious.  In any case, given history, international consensus is ever elusive, implying that any moratorium will be either ineffective or perpetual.  The former is catastrophic.  The latter is impossible.

Will Oligarchs Own the Future of AI?

Here we go again.  We were set back a bit this morning (two weeks ago now) by a recent TechCrunch article about Anthropic, perhaps the most inspiring company touting safe AI.  They seek to raise billions to compete with not-so-Open AI.  Before commenting on the article, how about a little context?

Open AI, the company, was founded in 2015 precisely to, as stated on Wikipedia today, “freely collaborate” [and make] its patents and research open to the public.  Things began to change in 2019 as Open AI transitioned to a for-profit structure.  Ultimately, this year, Open AI has written that it will no longer “share [any] details for commercial and other reasons”.

Last year, in a podcast with Reid Hoffman, Sam Altman, the CEO of Open AI, suggested that only a handful of companies could provide the foundational AI models on which everyone else will build “the middle layer”.  This suggests that foundational AI will simply become part of “Big Tech”, raising familiar questions of who owns the future.

Open AI has been stodgy about openness since it initially refused to share the GPT-2 model in 2019, eventually doing so, arguably due to pressure from the AI community.  Open AI has not shared any subsequent model.

Open AI (little ‘o’; not the company) does not mean commercial AI becomes impractical.  The intent of open AI is to keep fundamentals of nature, like math, electricity, and fire (including nuclear power), from becoming private property at the expense of society.  Making a living harnessing them and applying them innovatively should remain fair game.

Our entire tech industry is built on shared intellectual property, most notably the open-source software movement.  Without open-source, much of modern life would be stuck decades in the past.  All the progress in machine learning over the last few decades would have been impractical without open-source operating systems and programming languages, such as Linux and Python, in particular.

AI models are a little different.  They have two critical parts.  One is the source code that implements them.  Typically, this is Python code which runs on tens to thousands of GPUs, which are massively parallel matrix manipulating machines.  Essentially, given data, the algorithms written in Python adjust the matrices until the error in predicting things about the training data is minimized (or nearly so).

This second part, the resulting contents of the matrices after training (a.k.a. the model weights), is where the controversy over open AI started, with GPT-2.  Through GPT-3, Open AI was quite good about publishing details of the algorithms used in its models.  The AI community readily replicated such models, with various modifications.  Fair enough; that’s one degree of open AI.  Many believe it’s not enough.
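To make the two-part distinction concrete, here is a toy sketch (not any particular model’s code): the Python below is the first part, and the tensor file it saves at the end is the second part, the weights:

# Toy illustration of "source code" versus "weights".
# The code below is the first part; the file it writes is the second.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in training data; a real language model streams trillions of tokens.
x, y = torch.randn(1024, 16), torch.randn(1024, 1)

for step in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # error predicting the training data
    loss.backward()               # gradients with respect to every matrix entry
    optimizer.step()              # adjust the matrices to reduce the error

# The "model weights": the contents of the matrices after training.
torch.save(model.state_dict(), "weights.pt")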

Better is the general open-source attitude among AI researchers, including many with commercial affiliations, and especially the Hugging Face community.  But having the source code of a model is not “democratic” enough, where democratic means practically available to anyone and everyone.  That requires both the source code of a model and the weights resulting from its training to be available.

Just a few details on the models and their weights.  The transformer architecture has been refined significantly but remains basically unchanged over the last 5 years.  The source code for producing the state of the art is widely available and gradually evolving as techniques improve.  In order to produce a state-of-the-art model, massive amounts of data are needed.  Whether training a language model from text or a multi-modal model with text, images, etc., we have enough readily available data to approach the state-of-the-art results democratically.  Where it becomes less democratic is the cost of computing the model weights given the training data.

The amount of computation required to train a model is (naively) proportional to the number of training iterations times the size of the model, and the size of the model is, for the most part, the number of its parameters.  The weights are simply the values of those parameters after optimizing the model by training it with the data.

Table stakes for a good language model, which generally requires over 10 billion parameters, is roughly 10^23 floating point operations.  For example, DeepMind’s Chinchilla proved a 70B parameter model superior in many regards to models many times its size.  Meta’s more recent LLaMA benefits from additional improvements.  A Chinchilla-scale model can be trained on 2,048 A100 GPUs over more than 1 trillion tokens of text in 21 days.  At arm’s-length, on-demand prices last year, this would cost roughly $1 million.
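That figure follows from a common rule of thumb of roughly 6 floating point operations per parameter per training token; the sketch below simply applies that approximation:

# Rough FLOP count using the ~6 * parameters * tokens rule of thumb.
params = 70e9          # Chinchilla-scale: 70 billion parameters
tokens = 1.4e12        # on the order of a trillion-plus training tokens
flops = 6 * params * tokens
print(f"{flops:.1e} FLOPs")   # ~5.9e+23, i.e. a few times 10^23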

Commercial Open AI would have us believe that this is just the tip of an iceberg, that $1 million today will be $1 billion tomorrow.  Open AI would have us believe that we can’t afford to keep up as they build models 10 to 100 times larger.  Well, the jury is out.  There have already been models with 3 to 10 times as many parameters as GPT-3 that have fizzled quickly.  But the intent is clear.

Unfortunately, according to the article, inspiring Anthropic now aspires to be one of the oligarchs of AI.  The article states that Anthropic’s investor pitch deck claims, “We believe that companies that train the best 2025/26 models will be too far ahead for anyone to catch up in subsequent cycles.”  It goes on to assert that AI will automate large swaths of the economy in very few years.  Such hyperbole may further inflame unfortunate calls for a moratorium on AI.

We like Anthropic’s approach to Constitutional AI and are big fans of continuous, self-supervised learning, as well as reinforcement learning from human feedback.  These aspects have materially advanced the safety and the instruction-following and conversational abilities of language models recently.  But they require an order of magnitude less compute than the brute-force pre-training discussed above.

Anthropic thinks building a model with “tens of thousands” of GPUs will produce magic.  We’ll see.  There are a few stubborn facts in the way.  One problem is that the size of a model and the amount of training data must be balanced; loosely speaking, one cannot just double the number of parameters without doubling the amount of training data.  One problem with doubling the amount of training data is that we are running out of data.  Another is that we are already approaching the asymptotes of what we can get from more parameters and more training data.

The basic shape of the learning curves for models of more than a few billion parameters is that the inflection point of diminishing returns is passed quickly, somewhere between 10 and 100 billion tokens of training data.  After “just” a few hundred billion training tokens, a model with 30 to 120 billion parameters begins to look asymptotically close to “fitting” the training data.  And the 30 billion parameter model fits the data over 95% as well as a model 4 times its size.
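To see why, here is a hedged sketch using the loss curve fitted in the Chinchilla paper; the constants below are that paper’s approximate published fit, and the point is only how quickly the curve flattens in both parameters and tokens:

# Approximate Chinchilla loss fit: L(N, D) = E + A / N**alpha + B / D**beta,
# with constants quoted approximately from the paper's published fit.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(params, tokens):
    return E + A / params**alpha + B / tokens**beta

for n in (30e9, 120e9):
    for d in (100e9, 500e9, 1.4e12):
        print(f"{n/1e9:>4.0f}B params, {d/1e9:>5.0f}B tokens -> loss {loss(n, d):.3f}")

Quadrupling the parameter count moves the curve far less than the first few hundred billion tokens do, which is the diminishing-returns story in a nutshell.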

Whether or not size matters, other innovations are coming into focus now that we have sufficient scale.  Hopefully, we can avoid Big Tech, including Open AI and Anthropic, owning our future by more openly sharing models, including their weights.  If not, we can expect the innovations and advances in AI to slow as proprietary interests slow the exchange and experimentation that has produced staggering advances in the last decade.  Either way, if the limits of scale alone are indeed near, it’s not the end of the world.

Democracy and AI

This is an earlier version of what became ‘Truly open AI: Meta’s LLaMA offers open-source foundation models’ published at Merlyn Mind. It was drafted before subsequent developments, which I aim to address shortly. I am solely responsible for the opinions expressed herein.

The abilities of large language models have made huge leaps with the launch of ChatGPT in late 2022 and now GPT-4, both from OpenAI (and, effectively, Microsoft). Unfortunately, the GPT family of language models is being held increasingly close to the chest. From GPT-3 on, the models are simply not available other than as hosted services.

Much has been written suggesting that AI will become concentrated in a few Big Tech firms because language modeling at scale has become prohibitively expensive. In particular, we have heard that democratic AI – whereby the state of the art is truly open and available to all – is impractical given the amount of compute needed to reach or surpass language models such as GPT-3, Google’s PaLM, and others.

Papers on GPT-3 have gone into some detail about the proprietary models, such as the number of layers and attention heads, as well as model width. This has allowed the AI community to glean some insights and compare the performance of other models against GPT. There are many papers comparing smaller models such as DeepMind’s Chinchilla against GPT-3, and it is not uncommon for the smaller models to outperform their larger sibling.

In mid-March, OpenAI published a paper describing GPT-4, but it gives few details of the model architecture. For example, the number of parameters is not even disclosed. The authors attribute GPT-4’s improvement on exam taking over GPT-3.5 to pre-training methodology, but they explicitly state that no details will be shared for competitive and other reasons. We simply don’t know whether size matters as much in GPT-4 as it did previously.

The promise of open source

Well, let’s step back to Meta’s late 2022 release of Galactica, a model 2/3 the size of GPT-3 that is trained not on arbitrary internet content but on scientific literature and data. As soon as it was made available, however, it was harshly criticized, even though it is superior to GPT in many regards. The criticism was mostly regarding its “toxicity,” which is a reasonable concern (though it would be hard to argue that it was more toxic than GPT has been).

Galactica was promising. The model could be obtained from Meta, unlike those of OpenAI, and it could be deployed on readily available and affordable hardware. And, again, it was superior to GPT-3 in various regards.

Well, Meta has upped the ante significantly with the release of LLaMA: Open and Efficient Foundation Language Models.

LLaMA models range up to 1/3 the size of GPT-3. They differ by leveraging architectural improvements from the many works of Google, DeepMind, Meta, and others. This allows LLaMA to be trained more efficiently and to perform better for any given training budget.  The largest LLaMA model competes handily with models three times its size (GPT-3) and eight times its size (PaLM).
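Among those improvements, per the LLaMA paper, are pre-normalization with RMSNorm, SwiGLU activations, and rotary positional embeddings. As one small example, here is a sketch of RMSNorm, which replaces LayerNorm’s mean-centering with a simpler root-mean-square rescaling:

# RMSNorm, one of the refinements LLaMA adopts (Zhang & Sennrich, 2019):
# rescale each vector by its root mean square, with a learned per-dimension gain.
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x / rms)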

Here at Merlyn Mind, we’ve given LLaMA a go, and it’s impressive. It’s truly open AI. 

But wait! There’s more …

InstructGPT and ChatGPT are much better at following instructions and chatting than language models that have not been fine-tuned to do so. There is a lot going on to train non-GPT models with such capabilities, but one effort in particular warrants kudos.

First, let’s give Anthropic an honorable mention for its work on Constitutional AI and the related data sets that will enable the community to address toxicity and harm in language models. 

Now …

Stanford’s work on Alpaca: A Strong, Replicable Instruction-Following Model is quite fun and helpful. The Stanford team took a small LLaMA model and taught it to follow instructions. The fun part is that they prompted one of OpenAI’s GPT models to generate the instructions! The result is Alpaca, a fine-tuned LLaMA. It’s worth your time to check it out!
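For a sense of just how simple the fine-tuning data is, the released examples are wrapped in a prompt template along roughly these lines (paraphrased; see the Stanford repository for the exact wording):

# Alpaca-style prompt template (paraphrased from the Stanford release).
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

example = {"instruction": "Give three tips for staying healthy."}
print(PROMPT.format(**example))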

Here we go again…

It has been over 4 years since I joined Merlyn Mind in the summer of 2018 to apply AI in education.  It has been fabulous working with great folks around the world and with the founding and extended team, many of whom came from IBM Research, including a bunch of folks who formerly worked on Watson, especially on applying Watson to education.

Merlyn has grown tremendously thanks to the founding team and Learn Capital, in particular.  The corporate culture is absolutely fantastic!  I am honored to have been designated a Distinguished Engineer, a title I had never heard of before.  Folks from IBM assure me it’s a good thing…  In a recent award, the company added another honor while calling me “agent provocateur extraordinary”, perhaps subtly provoking as well as complimenting me!  As a Distinguished Scientist colleague says, it’s “fair”.

That’s all for this note. I plan to add some new posts which will reflect on the advances in AI over the last several years, especially as they pertain to some of my favorite matters, such as natural language. Of course, there will be much regarding machine learning and various deep learning technologies, including multi-modal language models.

CPC and the Grandmother Neuron

A lot of recent work has advanced the learning of increasingly context-sensitive distributed representations (i.e., so-called ’embeddings’). In particular, DeepMind’s paper on “Contrastive Predictive Coding” (CPC) is especially interesting and advances on a number of fronts. For example, in wav2vec, Facebook AI Research (FAIR) uses CPC to obtain apparently superior acoustic modeling results to DeepSpeech’s connectionist temporal classification (CTC) approach. In the CPC paper, the following image is particularly striking, harkening back to the early notion of a Grandmother Cell.

grandmother cells resulting from CPC
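For reference, the heart of CPC is the InfoNCE objective: a context vector is trained to score its true future latent higher than negatives drawn from the rest of the batch. A minimal sketch, with the bilinear scoring simplified from the paper:

# Minimal InfoNCE sketch in the spirit of CPC: each context vector should
# score its own future latent higher than the other latents in the batch.
import torch
import torch.nn.functional as F

def info_nce(context, future, W):
    # context: (batch, c_dim), future: (batch, z_dim), W: (c_dim, z_dim)
    scores = context @ W @ future.t()        # (batch, batch) similarity scores
    targets = torch.arange(context.size(0))  # the diagonal holds the positives
    return F.cross_entropy(scores, targets)

batch, c_dim, z_dim = 32, 256, 128
loss = info_nce(torch.randn(batch, c_dim),
                torch.randn(batch, z_dim),
                torch.randn(c_dim, z_dim))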

Neural Logic Machines

This is an important paper in the development of neural reasoning capabilities, which should reduce the brittleness of purely symbolic approaches:  Neural Logic Machines

The potential reasoning capabilities, such as multi-step inference in problem solving and theorem proving, are most interesting, but there are important contemporary applications in machine learning and question answering.  I’ll just provide a few highlights from the paper on the latter and some more points and references on the former below.

Continue reading “Neural Logic Machines”

Entailment-driven Extracting and Editing for Conversational Machine Reading

When I wrote Are Vitamins Subject to Sales Tax, I was addressing the process of translating knowledge expressed in formal documents, like laws, regulations, and contracts, into logic suitable for inference using the Linguist.

Recently, one of my favorite researchers working in natural language processing and reasoning, Luke Zettlemoyer, is among the authors of Entailment-driven Extracting and Editing for Conversational Machine Reading.  This is a very nice turn towards knowledge extraction and inference that improves on superficial reasoning by textual entailment (RTE).

I recommend this paper, which relates to BERT, among my current favorites in deep learning for NL/QA.  Here is an image from the paper, FYI:

Entailment-driven Extracting and Editing for Conversational Machine Reading

Simon Says

Some folks use the term “automatic speech recognition”, ASR.  I don’t like the separation between recognition and understanding, but that’s where the technology stands.

The term ASR encourages thinking about spoken language at a technical level in which purely inductive techniques are used to generate text from an audio signal (which is hopefully some recorded speech!).

As you may know, I am very interested in what many in ASR consider “downstream” natural language tasks.  Nonetheless, I’ve been involved with speech since Carnegie Mellon in the eighties.  At Haley Systems, I hired one of the Sphinx fellows, who integrated Microsoft and IBM speech products with our natural language understanding software.  Now I’m working on spoken-language understanding again…

Most common approaches to ASR these days involve deep learning, such as Baidu’s DeepSpeech.  If your notion of deep learning means lots of matrix algebra more than necessarily neural networks, then KALDI is also in the running, but it dates to 2011.  KALDI is an evolution from the hidden Markov model toolkit, HTK (once owned by Microsoft).  Hidden Markov models (HMM) were the basis of most speech recognition systems dating back to the eighties or so, including Sphinx.  All of these are open source and freely licensed.

As everyone knows, ASR performance has improved dramatically in the last 10 years. The primary metric for ASR performance is “word error rate” (WER).  Most folks think of WER as the percentage of words incorrectly recognized, although it’s not that simple.  WER can be more than 1 (e.g., if you come up with a sentence given only noise!).  Here is a comparison published in 2011.
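For the record, WER is the word-level edit distance (substitutions, deletions, and insertions) divided by the number of words in the reference, which is why it can exceed 1. A small sketch:

# Word error rate: (substitutions + deletions + insertions) / reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat down"))        # 0.33 (one insertion)
print(wer("quiet please", "a loud noisy sentence"))  # 2.0 -- WER above 1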

Today, Google, Amazon, Microsoft, and others have WER under 10% in many cases. To get there, it takes some talent and thousands of hours of training data.  Google is best, Alexa is close, and Microsoft lags a bit in third place.  (See the article summarizing the Vocalize.io results.)

Continue reading “Simon Says”