Transcript for:
Understanding Large Language Models (LLMs)

So let's get started. I'll be talking about building LLMs today. I think a lot of you have heard of LLMs before, but just as a quick recap: LLMs, standing for Large Language Models, are basically all the chatbots you've been hearing about recently. So ChatGPT from OpenAI, Claude from Anthropic, Gemini, Llama, and other models like this. Today we'll be talking about how they actually work. It's going to be an overview, because it's only one lecture and it's hard to compress everything, but hopefully I'll touch a little bit on all the components that are needed to train some of these LLMs. Also, if you have questions, please interrupt me and ask. If you have a question, most likely other people in the room or on Zoom have the same question, so please ask. Great, so what matters when training LLMs? There are a few key components. One is the architecture: as you probably know, LLMs are neural networks, and when you think about neural networks, you have to think about what architecture you're using. Another component which is really important is the training loss and the training algorithm, so how you actually train these models. Then there's data: what do you train these models on? Then evaluation: how do you know whether you're actually making progress towards the goal of LLMs? And then the systems component: how do you actually make these models run on modern hardware? That's really important because these models are really large, so now more than ever, systems are a really important topic for LLMs. So those are the five components. You probably all know, and if you don't, now you do, that LLMs are all based on transformers, or at least some version of transformers. I'm actually not going to talk about the architecture today. One, because I gave a lecture on transformers a few weeks ago, and two, because you can find so much information online about transformers. But there's much less information about the other four topics, so I really want to talk about those. Another thing to say is that most of academia actually focuses on architectures, training algorithms, and losses. As academics, and I've done that for a big part of my career, we like thinking that making new architectures and new models is what's important. But in reality, honestly, what matters in practice is mostly the three other topics: data, evaluation, and systems, which is what most of industry actually focuses on. So that's also one of the reasons why I don't want to talk too much about the architecture, because the rest is really super important. Great. So, overview of the lecture. I'll be talking about pre-training. Pre-training, you've probably heard that word, is the classical language modeling paradigm, where you basically train your language model to model all of the internet. And then there's post-training, which is a more recent paradigm: taking these large language models and making them essentially AI assistants. This is more of a recent trend, since ChatGPT. So if you've ever heard of GPT-3 or GPT-2, that's really pre-training land. If you've heard of ChatGPT, which you probably have, this is really post-training land. I'll be talking about both, but I'll start with pre-training. Specifically, I'll talk about what the task of pre-training LLMs is and what loss people actually use.
So language modeling, this is a quick recap. Language models, at a high level, are simply models of a probability distribution over sequences of tokens or words. So it's basically some model p(x_1, ..., x_L), where x_1 is the first word and x_L is the last word in the sequence or sentence. Very concretely, if you have a sentence like "the mouse ate the cheese", what the language model gives you is simply the probability of this sentence being uttered by a human or being found online. If you have another sentence like "the mouse ate cheese", there's a grammatical mistake, so the model, which should have some syntactic knowledge, should know that this has less likelihood of appearing online. If you have another sentence like "the cheese ate the mouse", then the model should hopefully know that cheese doesn't usually eat mice. So there's some semantic knowledge, and this is less likely than the first sentence. That's basically, at a high level, what language models are. One phrase that you've probably been hearing a lot in the news is generative models. This is just something that can generate sentences, or generate data. The reason we say language models are generative models is that once you have a model of a distribution, you can simply sample from this model and generate data. So you can generate sentences using a language model. The type of models that people are currently using are what we call autoregressive language models. The key idea of autoregressive language models is that you take this distribution over words and you decompose it into the distribution of the first word, multiplied by the distribution of the second word given the first word, multiplied by p of the third word given the first two words, and so on. There's no approximation here; this is just the chain rule of probability, which you hopefully all know about. Really no approximation, this is just one way of modeling a distribution. So, slightly more concisely, you can write it as a product of p's of the next word given everything that happened in the past, so given the context. This is what we call autoregressive language models. Again, this is really not the only way of modeling a distribution, it's just one way, and it has some benefits and some downsides. One downside of autoregressive language models is that when you actually sample from them, you basically have a for loop, which generates the next word, then conditions on that next word, and then generates the next one. So if you have a longer sentence that you want to generate, it takes more time to generate it. So there are some downsides to this current paradigm, but that's what we currently have, so that's what I'm going to talk about. Great. So, autoregressive language models. At a high level, the task of an autoregressive language model is simply predicting the next word, as I just said. So if you have a sentence like "she likely prefers", one potential next word might be "dogs". The way we do it is that we first tokenize: you take these words or subwords, you tokenize them, and then you give an ID to each token. So here you have one, two, three. Then you pass it through this black box.
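Written out, the autoregressive factorization described above is just the chain rule of probability, where x_{<l} denotes all the tokens before position l:

    p(x_1, \dots, x_L) = p(x_1)\, p(x_2 \mid x_1) \cdots p(x_L \mid x_1, \dots, x_{L-1}) = \prod_{l=1}^{L} p(x_l \mid x_{<l})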
As I already said, we're not going to talk about the architecture; you just pass it through a model, and you then get a distribution, a probability distribution over the next word, over the next token. Then you sample from this distribution, you get a new token, and then you detokenize: you get a new ID, you detokenize, and that's how you basically sample from a language model. One thing which is important to note is that the last two steps are actually only needed during inference. When you do training, you just need to predict the most likely token, compare it to the real token that actually occurred, and then you change the weights of your model to increase the probability of generating that token. Great. So, autoregressive neural language models. To be slightly more specific, still without talking about the architecture, the first thing we do is that we have all of these... Oh, sorry. Yes? On the previous slide, when you're predicting the probability of the next token, does this mean that your final output vector has to be the same dimensionality as the number of tokens that you have? Yes. How do you deal with adding more tokens to your vocabulary? Yeah, so we're going to talk about tokenization later, so you will get some sense of this, but you basically can't deal with adding new tokens. I'm kind of exaggerating: there are methods for doing it, but essentially people don't do it. So it's really important to think about how you tokenize your text, and that's why we'll talk about it later. But it's a very good point to note that the vocabulary size, so the number of tokens that you have, is essentially the output dimension of your language model. So it's actually pretty large. Okay, so autoregressive neural language models. The first thing you do is take every word, or every token, and embed it, so you get some vector representation for each of these tokens. You pass them through some neural network; as we said, it's a transformer. Then you get a representation for all the words in the context, so basically a representation of the entire sentence. You pass it through a linear layer, as I just said, to map it so that the number of outputs is the number of tokens. You then pass it through a softmax, and you get a probability distribution over the next word given every word in the context. And the loss that you use: it's essentially the task of classifying the next token, so it's a very simple kind of machine learning task. You use the cross-entropy loss, where you look at the actual target that happened, which is a target distribution that is a one-hot encoding. Here, in this case, the real word that happened is "cat", so that's a one-hot distribution over "cat". And this is the distribution that your model generates. And basically you take the cross-entropy, which really just increases the probability of generating "cat" and decreases the probability of generating all the other tokens. One thing to notice, as you all know, is that this is just equivalent to maximizing the text's log-likelihood, because you can rewrite the maximum of the probability of this autoregressive language modeling task, by adding the log and a minus sign, as the minimum of the loss, which is the cross-entropy loss.
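To make the loss concrete, here is a minimal numpy sketch of the next-token cross-entropy computation, assuming the model has already produced a vector of logits over the vocabulary (the sizes and names here are made up for illustration):

```python
import numpy as np

vocab_size = 8                         # toy vocabulary
logits = np.random.randn(vocab_size)   # scores produced by the model for the next token
target_id = 3                          # id of the token that actually occurred ("cat" in the example)

# softmax turns the logits into a probability distribution over the next token
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# cross-entropy with a one-hot target reduces to the negative log-probability of the true token
loss = -np.log(probs[target_id])
print(loss)  # minimizing this is the same as maximizing the log-likelihood of the text
```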
So basically, minimizing the loss is the same thing as maximizing the likelihood of your text. Any questions? Okay. Tokenizers. This is one thing that people usually don't talk that much about, but tokenizers are extremely important, so it's really important that you understand at least what they do at a high level. So why do we need tokenizers in the first place? First, they're more general than words. One simple thing you might think is: we're just going to take every word, and every word is a token on its own. But then what happens if there's a typo in a word? You might not have any token associated with this word with a typo, and then you don't know how to actually pass this word into a large language model. So what do you do? And also, even if you think about words, words are fine for Latin-based languages, but if you think about a language like Thai, you won't have a simple way of tokenizing by spaces, because there are no spaces between words. So tokens really are much more general than words. That's the first thing. The second thing you might think is to tokenize every sentence character by character: A is one token, B is another token. That would actually work, and probably very well. The issue is that then your sequence becomes super long, and as you probably remember from the lecture on transformers, the complexity grows quadratically with the length of the sequence. So you really don't want a super long sequence. Tokenizers basically try to deal with those two problems and give common subsequences their own token. Usually the way you should think about it is that, on average, every token is around three or four letters. There are many algorithms for tokenization. I'll just talk about one of them to give you a high-level idea: byte pair encoding, which is actually pretty common, one of the two most common tokenizers. The way you train a tokenizer is that you first start with a very large corpus of text. And here I'm really not talking about training a large language model yet; this is purely for the tokenization step. So this is my large corpus of text with these five words. Then you associate every character in this corpus with a different token. So here I just split up every character into a different token, and I color-coded all of those tokens. Then what you do is go through your text, and every time you see a pair of tokens that is very common, the most common pair of tokens, you just merge them. So here you see the tokens T and O next to each other three times, so you're just going to say this is a new token. And then you continue; you repeat that. So now you have "tok", which happens three times, "toke" with an E, which happens two times, then "token", which happens twice, and "ex", which also happens twice. So if you were to train a tokenizer on this corpus of text, which is very small, that's how you would end up with a trained tokenizer. In reality you do it on much larger corpora of text. And this is a real tokenizer, actually I think this is GPT-3 or ChatGPT, and here you see how it would actually separate these words. You basically see the same thing as in the previous example: "token" becomes its own token, so "tokenizer" is actually split up into two tokens, "token" and "izer". So yeah, that's all about tokenizers. Any questions on that?
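As a rough illustration of the merging procedure just described, here is a minimal sketch of byte pair encoding training on a toy corpus (real tokenizers add pre-tokenization and many efficiency tricks on top of this; the corpus and merge count are made up):

```python
from collections import Counter

def train_bpe(corpus: str, num_merges: int):
    # start with one token per character
    seq = list(corpus)
    merges = []
    for _ in range(num_merges):
        # count every adjacent pair of tokens
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break
        merges.append((a, b))
        # merge every occurrence of the most frequent pair into a single new token
        new_seq, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                new_seq.append(a + b)
                i += 2
            else:
                new_seq.append(seq[i])
                i += 1
        seq = new_seq
    return merges, seq

merges, tokens = train_bpe("tokenizer tokenize token text", num_merges=10)
print(merges)   # learned merges, e.g. ('t', 'o'), then ('to', 'k'), ...
print(tokens)   # the corpus segmented with the learned tokens
```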
What about spaces and punctuation, and how do you deal with those? Yeah, so actually there's a step before tokenizers, which is what we call pre-tokenizers, and that's exactly what you just said. In theory, there's no reason to deal with spaces and punctuation separately. You could just say every space gets its own token, every punctuation mark gets its own token, and you could just do all the merging. The problem is that there's an efficiency question: actually training these tokenizers takes a long time, because you have to consider every pair of tokens. So what you end up doing, and pre-tokenizers are very English-specific, is saying that if there's a space, we're not going to consider the token that came before together with the token that came after. So you're not merging across spaces. But this is just a computational optimization; you could theoretically deal with it the same way as you deal with any other character. When you merge tokens, do you delete the tokens that you merged, or do you keep the smaller tokens? You actually keep the smaller tokens. In reality it doesn't matter much, because on a large corpus of text you will actually see everything, but you usually keep the small ones, and the reason you want to do that is that, as we said before, if there are some grammatical mistakes or typos, you still want to be able to represent those words character by character. So yeah. Yes? Are the tokens unique? Say in this case, T-O-K-E-N: is there only one occurrence, or do you need to leave multiple occurrences so they could take on different meanings or something? Oh, I see what you're saying. No, every token has its own unique ID. This is a great question. For example, if you think about "bank", which could be a bank for money or the bank of a river, it will have the same token, but the transformer will learn that, based on the words that are around it, it should associate it, and I'm being very hand-wavy here, with a representation that is either more like the money-bank side or the river-bank side. But it's the transformer that does that, not the tokenizer. Yes? You mentioned that during tokenization you keep the smaller tokens you started with, right? Like, if you start with a T, you keep the T, and then you build your tokenizer up to the point where you can now input "token". So let's say maybe you didn't train on "token", but in your data you are trying to encode "token". How does the tokenizer know whether to encode it with "token" or to do it with T? Yeah, that's a great question. So that's after training of the tokenizer. When you actually apply the tokenizer, you basically always choose the largest token that you can apply. So if you can use "token", you will never use T; you will always use "token". People don't usually talk that much about tokenizers, but there are a lot of computational tricks that you can do to make these things faster. And honestly, I think a lot of people think that we should just get away from tokenizers and tokenize character by character or byte by byte. But as I said, right now there's this issue of length. Maybe one day, in five or ten years, we'll have different architectures that don't scale quadratically with the length of the sequence, and maybe we'll move away from tokenizers.
So can you share with us the drawbacks? Why do people want to move away from tokenizers? Oh, yeah. One good example is math. If you think about math, numbers right now are not tokenized digit by digit. For example, 327 might have its own token, which means that models, when they see numbers, don't see them the same way we do. And this is very annoying, because the reason we can generalize in math is that we can deal with every digit separately and then do composition, where adding numbers is the same thing as adding every digit separately plus carrying whatever unit you add. So you have to do special tokenization for that. And one of the big changes that GPT-4 made was changing the way they tokenize code. For example, in Python code you often have these four spaces at the beginning of a line; those were dealt with kind of strangely before, and as a result the model couldn't really understand how to deal with code. So tokenizers actually matter a lot. Okay, I'll move on for now, but we can come back to tokenizers later. Great. So we talked about the task, the loss, the tokenizer. Let's talk a little bit about evaluation. The way that LLMs are usually evaluated is using what we call perplexity. At a high level, it's basically just your validation loss. The slight difference with perplexity is that we use something slightly more interpretable: you take the average per-token loss and then you exponentiate it. The reason you exponentiate is that the loss has a log inside, and one, humans are actually pretty bad at thinking in log space, and two, logs depend on the base of the log. When you exponentiate, you basically get everything in vocabulary-size units. And averaging per token is just so that your perplexity is independent of the length of your sequence. So perplexity is just 2 to the power of the average loss of the sequence. Perplexity is between 1 and the vocabulary size of your tokenizer. One, because if you predict every word perfectly, then every word has probability one, you get a product of ones, and the best perplexity you can have is one. If you really have no idea, you predict each word with probability one divided by the vocabulary size, and if you do the simple math you get a perplexity equal to the vocabulary size. So the intuition for perplexity is that it's basically the number of tokens that your model is hesitating between. If your model is perfect, it doesn't hesitate, it knows exactly the word; if it really has no idea, it hesitates between the whole vocabulary. And perplexity really improved: on a standard dataset, between 2017 and 2023 it went from around 70 tokens to less than 10 tokens over these five or six years. That means the models were previously hesitating between around 70 words every time they generated a word, and now they hesitate between fewer than 10 words. So that's much better. Perplexity is actually not used anymore in academic benchmarking, mostly because it depends on the tokenizer that you use and on the actual data that people evaluate on. But it's still very important for the development of LLMs. So when you actually train your own LLM, people will still really look at the perplexity.
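In symbols, with the base-2 convention used here, the perplexity of a sequence is just the exponentiated average per-token loss:

    \mathrm{PPL}(x_1, \dots, x_L) = 2^{\, -\frac{1}{L} \sum_{l=1}^{L} \log_2 p(x_l \mid x_{<l})}

so a model that predicts every token perfectly gets a perplexity of 1, and a model that spreads its probability uniformly over the vocabulary gets a perplexity equal to the vocabulary size.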
One other common way, now more common in academia, of evaluating these LLMs is just taking all the classical NLP benchmarks, and I'll give you a few examples later, and aggregating everything. So collect as many automatically evaluatable benchmarks as you can and evaluate across all of them. Two such benchmarks are HELM, which is from Stanford, and the Hugging Face Open LLM Leaderboard, which are probably the two most common ones right now. Just to give you an idea, in HELM there are all of these types of tasks, which are mostly things that can be easily evaluated, like question answering. So think about many different question answering tasks. The benefit of question answering is that you usually know what the real answer is. So the way that you evaluate these models, and I'll give you a concrete example in one second, is that you can just look at how likely the language model is to generate the real answer compared to some other answers. That's essentially, at a high level, how you evaluate these models. To give you a specific example: MMLU is probably the most common academic benchmark for LLMs, and it's just a collection of many questions and answers in all of these domains, for example college medicine, college physics, astronomy, and those types of topics. The questions are things like, this one is in astronomy: what is true for a type Ia supernova? Then you give four different potential answers and you ask the model which one is more likely. There are many different ways of doing it: either you can look at the likelihood of generating each of these answers, or you can ask the model which one is the most likely. So there are different ways you can prompt the model, but at a high level, you know which one is correct and that the three others are wrong. Yes? Right now we're generating unconstrained text as an output. How do you evaluate a model if it gives something that's semantically completely identical, but it's not the exact tokens that you expect? Yeah, that's a great question. I'll talk more about that later. Here, in this case, we don't do unconstrained generation. The way you would evaluate MMLU is basically: either you ask the question and then look at the likelihood of the model generating A, the likelihood of it generating B, C, and D, and you see which one is the most likely; or you ask the model which of A, B, C, or D is correct and you look at whether the most likely next token is A, B, C, or D. So you constrain the model, saying it can only answer these four things. When you say you constrain the model, do you mean you constrain it with a prompt, or do you mean that of its whole probability distribution over outputs, you're only comparing the A token to the B token? Yeah, so in the second case I gave, you would actually do both: you would prompt the model saying A, B, C, or D, plus you would constrain it to only look at these four tokens. In the first case, you don't even need to generate anything. You literally just look, given that it's a language model that gives a distribution over sentences, at the likelihood of generating the first choice, the likelihood of generating the second choice, and so on, and you check whether the most likely full answer is actually the real answer.
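As a sketch of that first, likelihood-based way of scoring such a question, assuming some function lm_log_prob(text) that returns the model's log-probability of a string (a stand-in for illustration, not a real API):

```python
def pick_answer(question: str, choices: list[str], lm_log_prob) -> int:
    # score each full "question + answer" string under the language model
    scores = [lm_log_prob(question + " " + choice) for choice in choices]
    # the predicted answer is the choice the model finds most likely
    return max(range(len(choices)), key=lambda i: scores[i])

# usage (hypothetical):
# pick_answer("What is true for a type Ia supernova?", ["A ...", "B ...", "C ...", "D ..."], lm_log_prob)
```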
So you don't actually sample from it; you really just use p(x_1, ..., x_L). Does that make sense? That being said, evaluation of open-ended questions is something we're going to talk about later, and it's actually really important and really challenging. Yes? Earlier you mentioned that metrics like perplexity are not usually used because they depend on how you do your tokenization, on some design choices. I was wondering if you could speak more to that. Oh, yeah. So think about perplexity. I told you perplexity is between one and the vocabulary size. Now imagine that ChatGPT uses a tokenizer that has 10,000 tokens, but Gemini from Google uses a tokenizer that has 100,000 potential tokens. Then the upper bound of the perplexity you can get is actually worse for Gemini than for ChatGPT. Does that make sense? So that's just one idea. It's actually a bit more complicated than that, but that's a first bit of intuition for why the tokenizer actually matters. Great. Okay, evaluation challenges: there are many, and I'll just talk about two really briefly. One, as I told you, there are two ways of doing evaluation for something like MMLU. Actually there are many more than two, but I gave you two examples. And it happens that for a long time, even though this was a very classical benchmark that everyone used, different companies and different organizations were actually using different ways of evaluating MMLU. As a result, you get completely different results. For example, LLaMA 65B, which was the first model from Meta in the Llama series, had 63.7 accuracy on HELM, but on this other benchmark it had 48.8. So really, the way that you evaluate matters, and this is not even talking about prompting; this is just the way that you evaluate the models. Prompting is another issue. So there are a lot of inconsistencies; it's not as easy as it looks. That's the first thing. Yeah, sorry. How can we make sure that all these models aren't trained on the benchmark? Okay, second thing, and this is a great question: train-test contamination. This is something I would say is really important in academia. Given that the talk is mostly about training large language models, for companies it's maybe not that important, because they know what they trained on. For us, we have no idea, so for us it's a real problem. There are many different ways of trying to test whether the test set was actually in the training set. One kind of cute trick that people in Tatsu's lab have found is that, given that most of the datasets online are not randomized, and that language models just predict the next word, you can look at the entire test set and ask: is the model more likely to generate all the examples in their original order, or in a different order? If it's more likely to generate them in the original order, given that there's no real order there, then the test set was probably in the training set. Does that make sense? So that's one way; there are many others. Train-test contamination, again, is not that important for development, but really important for academic benchmarking. Great. There are many other challenges, but I'll move on for now. Great. Data. So data is another really big topic.
At a high level, people just say: oh, you basically train large language models on all of the internet. What does that even mean? People sometimes say all of the clean internet, which is even less well defined. The internet is very dirty and really not representative of what we want in practice. If I downloaded a random website right now, you would be shocked at what is in there. It's definitely not your Wikipedia. So I'll go really briefly over what people do. I can answer some questions, but data on its own is a huge topic. Basically, the first thing you do is download all of the internet. What that means is that you use web crawlers that go to every web page on the internet, or every web page that is on Google, and that's around 250 billion pages right now, around one petabyte of data. Common Crawl is one such web crawler. People will usually write their own web crawlers, but Common Crawl is a standard one that basically, every month, adds all the new websites that were added to the internet and found by Google, and puts them into one big dataset. So on Common Crawl you have around 250 billion pages right now, around 1e6 gigabytes of data. Once you have this... so this is a random web page, literally random, from Common Crawl. What you see is that, one, it really doesn't look like the type of thing you would usually want to see. This is an HTML page. It's hard to see, but if you look through it, you will see some content: for example, here, "TestKingWorld is your ultimate source for the SystemX high-performance server", and then you have three dots, so the sentence is not even finished. That's what a random page on the internet looks like. So of course, it's not that useful if you just train a large language model to generate things like this. So what are some of the steps that are needed? First, you extract the text from the HTML. That's what I just tried to do by pulling out the actual text. There are a lot of challenges in this. For example, extracting math is actually very complicated, but pretty important for training large language models. Or, for example, boilerplate: a lot of forums will have the same type of headers and the same type of footers, and you don't want to repeat all of that in your data. Then you filter undesirable content: not safe for work, harmful content, PII. Usually every company has a blacklist of websites that they don't want to train their models on, and that blacklist is very long. You basically say: if it comes from there, we don't train on it. Another way of doing this is to train a small model to classify what is PII and remove it. It's hard; every step here that I'm going to show you is a large amount of work, but we're going to go through it quickly. So, filter undesirable content. The next step is deduplication. As I said, you might have things like headers and footers in forums that are always the same; you want to remove that. Another thing you might have is a lot of URLs that are different but actually show the same website. And you might also have a lot of paragraphs that come from common books that are duplicated a thousand or ten thousand times across the internet. So you have to deduplicate, which is also very challenging because you have to do it at scale. Once you do deduplication, you do some heuristic filtering.
You try to remove low-quality documents. The way you do that is with things like rule-based filtering. For example, you look for outlier tokens: if the distribution of tokens on the website is very different from the usual distribution of tokens, then it's probably an outlier. If you see that the length of the words on this website is super long, there's something strange going on with that website. If the website has only three words, is it worth training on? Maybe not. If it has 10 million words, there's probably also something wrong going on with that page. So, a lot of rules like this. Yes? Could we take the undesirable content in our dataset and, instead of removing it, put it into a supervised loss? Can we not just say: here's this hate-speech website, let's actively penalize the model for generating things like it? We'll do exactly that, but not at this step; that's where post-training comes in. In pre-training, the idea is just to say: I want to model how humans speak, essentially, and I want to remove all these headers, footers, and menus and things like this. But it's a very good idea, and that's exactly what we'll do later. Next step: model-based filtering. Once you've filtered a lot of data, there's actually a very cute trick: you take all of Wikipedia and you look at all the links that are referenced on Wikipedia pages, because if something is referenced by Wikipedia, it's probably a high-quality website. Then you train a classifier to predict whether a document comes from one of these Wikipedia references or from the random web, and you basically say: I want more of the things that look like they come from Wikipedia references. Does that make sense? So you train a machine learning model, usually a very simple one, because you need to do this really at scale; just think about the 250 billion pages. Next, you try to classify your data into different domains. You say: this is entertainment, this is books, this is code, these types of domains. And then you try to either up-weight or down-weight some of the domains. For example, you might see that if you train more on code, your model actually becomes better at reasoning. That's something people usually say in a very hand-wavy way: if you train your model more on code, it helps reasoning. So you want to up-weight the coding distribution, because that helps general language modeling skills. Books are usually another one that people up-weight; entertainment they usually down-weight. Things like this. People used to do this kind of heuristically; now there are entire pipelines, which we'll talk about, for doing it slightly more automatically. And then, at the end of training, after training on all the data we just saw, you usually train on very high-quality data while you decrease your learning rate. That basically means you're kind of overfitting your model on very high-quality data. Usually what you use there is something like Wikipedia: you basically overfit on Wikipedia and on human data that was collected. There are other things, like continual pre-training to get longer contexts, but I'm going to skip over all of that.
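To give a flavor of the rule-based part of the heuristic filtering step described above, here is a toy sketch; the specific thresholds are made up for illustration, and real pipelines combine many more rules with the model-based filters just mentioned:

```python
def keep_document(text: str) -> bool:
    words = text.split()
    if len(words) < 50:                        # too short to be worth training on
        return False
    if len(words) > 100_000:                   # suspiciously long page
        return False
    mean_word_len = sum(len(w) for w in words) / len(words)
    if mean_word_len > 15:                     # outlier tokens, probably not natural language
        return False
    if "lorem ipsum" in text.lower():          # obvious boilerplate
        return False
    return True
```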
But just to give you a sense: when people just say, oh, I'm going to train on the internet, that's a lot of work, and really we haven't figured it out yet. So collecting data well is a huge part of practical large language modeling; some might say it's actually the key. Yes? A basic question about data: how large is the data, typically, after you go through all the steps you talked about, and how many people does it typically take to do all of this? Sorry, is your question how large the data is after you filter? Yeah, after filtering, and how many people you would need to be able to do this. Okay, that's a great question. I'm going to somewhat answer the question about how large the dataset is at the end of this slide. For the number of people that work on it, that's a good question. I'm actually not quite sure, but I would say it's probably even bigger than the number of people that work on the tuning of the pre-training of the model. So the data side is bigger than the modeling aspect. I don't have a great sense, but I would say that in the Llama team, which has around 70 people, maybe 15 work on data. For all these things you don't need that many people, but you need a lot of compute, because for data you need a lot of CPUs. And I'll answer the second question at the end of this slide. So, as I just alluded to, we really haven't solved data at all for pre-training; there's a lot of research that has to be done. First, how do you process these things super efficiently? Second, how do you balance all of these different domains? Can you do synthetic data generation? That's actually a big one right now, because, and we'll talk about this later, we don't have enough data on the internet. Can you use multimodal data instead of just text data, and how does that improve even your text performance? There's also a lot of secrecy, because this is really the key to most of the pre-trained large language models, so for competitive reasons these companies usually don't talk about how they do data collection. And there's also a copyright liability issue: they definitely don't want to tell you that they've trained on books, even though they did, because otherwise you can sue them. Common academic benchmarks, which will kind of answer what you asked: it started, with the smaller ones, whose names are not that important, at around 150 billion tokens, which is around 800 gigabytes of data. Now it's around 15 trillion tokens, which is also roughly the amount of data the best models are trained on right now. So 15 trillion tokens, which is, I guess, two orders of magnitude bigger than that, around 80e3 gigabytes. That would be around a 100x to 1,000x filtering of Common Crawl, if I'm not mistaken. One very famous academic benchmark is The Pile, and we can just look at the distribution of data it has: things like arXiv, PubMed Central, which is all the biology stuff, Wikipedia, Stack Exchange, some GitHub, some books, and things like this.
Again, this is on the smaller side: what you see here is around 280 billion tokens, so in reality the datasets are around a hundred times bigger, and you cannot have that much GitHub and Wikipedia. In terms of closed-source models, just to give you an idea, Llama 2 was trained on 2 trillion tokens, and Llama 3 on 15 trillion tokens, which is currently the best model for which we know how much it was trained on, and that matches the biggest academic benchmark, which is 15 trillion tokens. For GPT-4 we don't really know, but it's probably in the same order of magnitude; from the leaks, if the leaks are true, it's probably around 13 trillion. Great, so, scaling laws. Any other questions on data before we go to scaling laws? Sorry, I know I'm giving you a lot of information, but there's a lot that goes into training large language models. Great, scaling laws. The idea is that what people saw around 2020, or at least what they've been able to show empirically since 2020, is that the more data you train your models on, and the larger the models, the better the performance. This is actually pretty different from what you've seen in this class. In this class we teach you about overfitting; overfitting doesn't happen with large language models. Larger models, better performance. It's something that really took a long time for the community, who took this type of class, to realize. But for the exam, overfitting exists. The idea of scaling laws is: given that you know more data and larger models will always give you better performance, can you predict how much better your performance will be if you increase the amount of data and the size of your model? And surprisingly, it works. Here you see three plots from a very famous scaling laws paper from OpenAI. On the x-axis you see compute, so how much compute you spent for training, and on the y-axis you see test loss. This is essentially, well, it's not perplexity, but it's your validation loss, so it's the log of the perplexity. And if you put these two on a log scale, you see that the scaling law is linear. That means that if you increase your compute by a certain amount, you can say by how much your test loss will decrease. Same thing with data, and same thing with parameters. If you increase the dataset size, your loss will decrease by an amount that is somewhat predictable. If you increase the number of parameters, the loss will decrease by an amount that is somewhat predictable. This is really amazing, very surprising. It looks innocuous when you look at these plots, but it's crazy, because it means that you can predict how well we're going to perform in two or three years, depending on how much compute we add, assuming these things hold. There's nothing theoretical about it. Yes? What is the loss that they're using here, is this perplexity? So, you know, I said perplexity is 2 to the power of the loss; this is the log of the perplexity. And then the second thing is: if you increase the number of parameters, or you increase the total dataset size and you go over that dataset more times, doesn't that just inherently increase your compute? Oh, yes. No, this is a great question. The compute here is actually a function of two things, the data and the parameters.
What I'm showing here is that you can... well, actually, we're going to talk about that in detail, but basically, if you increase the number of parameters, you should increase the amount of data that you have. So you actually don't go multiple times over the same dataset. No one does epochs on large language models, at least not yet, because we still have enough data. So yeah, this is all the same trend: increase compute, decrease loss. Yes? Have we seen the numbers for the last two years, is it still holding? It is still holding. I don't have good numbers to show you, but it is still holding, surprisingly. Yes? Is there any empirical evidence that it will plateau? In theory, we would expect it to, right? No empirical evidence of plateauing anytime soon. Why? We don't know. Will it happen? Probably. It doesn't have to, because it's actually in log scale, so it's not as if it has to plateau mathematically; it could continue decreasing like this. Most people think it will probably plateau at some point; we don't know when. Okay, so I'll talk more about scaling laws now. Why are scaling laws really cool? Imagine that I give you, you're very fortunate, 10,000 GPUs for this month. What model will you train? How do you even go about answering that question? This is a hypothetical, but that's exactly what these companies are faced with. The old pipeline was basically to tune hyperparameters on the big models. Let's say I have 30 days: I will train 30 models for one day each, pick the best one, and that will be the final model that I use in production. That means the model I actually used was only trained for one day. The new pipeline is that you first find a scaling recipe. You find something that tells you, for example, one common thing is that if you increase the size of your model, you should decrease your learning rate. So you find a scaling recipe so that you know: if I increase the size of my model, here's what I should do with the hyperparameters. Then you tune your hyperparameters on smaller models of different sizes. Let's say, for three days out of my 30, I will train many different small models of different sizes and do hyperparameter tuning on them. Then I will fit a scaling law and try to extrapolate from these smaller models which one will be the best if I train it as a much larger model. And then I will train the final huge model for 27 days instead of just one day. So the new pipeline is: don't do hyperparameter tuning at the real scale of the model that you're going to use in practice, but do things on smaller models at different scales and try to predict how well they will perform once you make them bigger. I'll give you a very concrete example right now. Let's say transformers versus LSTMs. You have these 10,000 GPUs and you're not sure which one you should be using: should I be using a transformer-based model or an LSTM-based model? What I will do is train transformers at different scales. So here you see different numbers of parameters on the x-axis, and the y-axis is my test loss. I will then train different LSTMs at different scales. Once I have these points, I will see, oh, they kind of fit a scaling law.
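Concretely, "fitting a scaling law" here usually just means fitting a line in log-log space and extrapolating; a minimal sketch with numpy, using made-up data points:

```python
import numpy as np

# made-up (compute, test loss) measurements from small training runs
compute = np.array([1e18, 3e18, 1e19, 3e19, 1e20])
loss    = np.array([3.2, 2.9, 2.65, 2.45, 2.3])

# fit log(loss) = a * log(compute) + b, i.e. a straight line in log-log space
a, b = np.polyfit(np.log(compute), np.log(loss), deg=1)

# extrapolate the fitted line to a 10x larger compute budget
predicted_loss = np.exp(a * np.log(1e21) + b)
print(a, predicted_loss)
```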
I will fit my scaling law, and then I will be able to predict: oh, if I had 10 times more compute, here's how well the LSTM would perform. It's actually slightly less linear for the LSTM, but you could probably still predict where you would end up, and clearly, from this plot, you would see that transformers are better. One thing to note when you read these types of scaling laws is that there are two things that matter. One is the scaling rate, which is the slope of the scaling law. The other is the intercept: you could start worse but actually become better over time. It just happens that LSTMs are worse on both, but I could show you other cases where you can predict that after a certain scale, you're better off using one type of model than another. So that's why scaling laws are really useful. Any questions on that? Yeah. How sensitive are these to small differences in architecture, like one transformer architecture versus another transformer architecture? Do you basically have to fit your own curve, or can you say, someone else's scaling law told me there should be some logarithmic function, and extrapolate that to your own specific architecture? Yeah, so usually, for example, if you're an academic and you want to propose a new activation function, and this is pretty recent, that's exactly what you will do: you will fit a scaling law, show another scaling law with the standard one, I don't know, GeLU, and you will say that yours is better. In reality, once you start thinking in scaling-law terms, you realize that all the small, minor architecture differences we can make, all they do is maybe change the intercept a little bit, and that really doesn't matter, because you can just train for 10 hours longer, or wait for the next generation of GPUs. These things are really secondary, which is exactly why I was telling you that people originally spent too much time on the architecture and losses; in reality those don't matter as much. Data, though: if you use good data, you will have a much better scaling law than if you use bad data. So that really matters. Another really cool thing you can do with scaling laws is ask yourself how to optimally allocate your training resources. Should I train larger models? We saw that it's better when you train larger models, but we also saw that it's better when you use more data. So which one should I do? Should I train a smaller model on more data, or a larger model on less data? Chinchilla is a very famous paper that first showed this. The way they did it, and I want to give you a little bit of a sense of what these plots are: here you see training loss again, and on the x-axis you see the number of parameters, so the size of the model. All of these curves are what we call isoFLOP curves, meaning that all the models on one curve have been trained with the same amount of compute. The way you do that is that you vary the number of tokens you train on and the size of the model, but you vary them in such a way that the total compute is constant. So each of these differently colored curves corresponds to a different amount of compute. Then you take the best model, one for each of those curves.
Once you have the best one for each of those curves, you can plot how many flops it used, which curve it was on, and how many parameters it actually used. You put that on a log-log scale again, and you fit a scaling law again. So now I have something which tells me: if I want to train a model with 10^23 flops, here's exactly the number of parameters I should be using, 100B. And you can do the same thing with flops and tokens. So now you can predict: if I tell you exactly how much compute I have, say one month of compute, what size of model should I be training? Fit your scaling law, and it tells you. Of course, that all looks beautiful; in reality there are a lot of small questions, like should you be counting embedding parameters, so there are a lot of complexities. But if you do things well, these things actually do hold. So the optimal ratio that the Chinchilla authors found is to use 20 tokens for every parameter that you train. If you add one more parameter, you should train your model on 20 more tokens. One caveat here is that this is optimal for training resources. It tells me: if I have 10^23 flops, or say I have five million dollars to train my best model, what should I train to get the lowest loss? In reality, these companies also need to think about inference. If you have a smaller model, you will spend less over time. So if you consider the inference cost, there are other papers that try to show it's around 150 tokens per parameter, because you prefer having a smaller model, since over time you're going to spend less money on inference. So 150 to 1: that's around what the best models used in practice, in production, are trained at right now. Great. Any questions on Chinchilla? Great. Oh, sorry. In fact, how expensive is inference for these models compared to the training? Actually, very expensive. I will not talk about inference, because that would be another entire lecture, but just think about ChatGPT, which now has something like 600 million people using it. That's a lot, so it's actually very expensive. There's a lot of optimization you can do for inference, though, and that's an entire other lecture; I'm going to skip it this time, but it's very interesting. Okay, two things. As I said, there are many questions you can answer with scaling laws; I just tried to give you two examples. What data do you use? What data mixing weights do you use, the mixtures that we talked about before? What architecture do you use? Should you make your models wider or deeper? Should you be paying for more GPUs or actually collecting more data? All of these are things you can try to answer with scaling laws. One thing I want to mention is the bitter lesson. If you've ever heard of Richard Sutton: it's a very famous blog post from 2019. What he realized, which I think not enough people realize, and I definitely did not realize it at the time, is that once you see these types of scaling laws, you know that the more compute you have, the better the models you will get. So with scale, you will get better models. And you also know, by Moore's law or variants of Moore's law, that you will always have better compute.
Then the only thing that matters is to have architectures that can leverage computation. So what matters is basically systems, data, and less so the architecture, meaning the small architecture differences like your activation function and things like this. I think that's one of the reasons why most research focuses on things that matter less for industry, and I was one of those researchers for a large part of my career. So don't spend time over-complicating things. Do the simple things, do them well, and scale them. That's really what OpenAI taught us with ChatGPT and with all the GPTs before it. Okay, I want to give you some back-of-the-envelope computations. I might be off by a few factors here, but I just want to give you a sense of how costly it is to train some of these models. I'll use as an example Llama 3 400B, which is currently the best open-source model you can get. It was trained on 15.6 trillion tokens and it has 405 billion parameters. So, now that you know what the optimal tokens-per-parameter ratio is: this is around 40, which is a bit more than Chinchilla but less than the inference-optimal ratio, so they went for training optimality. Flops for this model: one simple way to compute flops is six times the number of parameters times the number of tokens you train on. If you do the simple calculation here, it's 3.8e25 flops. The reason this is important is that, if you follow the news a little, there's an executive order from Biden that basically says that once you hit 1e26 flops, your model gets special scrutiny. So they went more than 2x below that; they really went right under the threshold to avoid special scrutiny. So, 3.8e25 flops. I might be off by a little bit, but it's definitely under 1e26. Oh, and in the formula, P is the number of parameters and N is the number of tokens; this is just an approximation. Okay, compute. We know that they trained on 16,000 H100s, and we know the throughput, because they said it too. If you do the computation, it takes around 70 days, or 26 million GPU hours; at least that's my back-of-the-envelope computation. They actually said they used 30 million GPU hours instead of 26 million, so maybe they had some challenges, I don't really know, but if you follow the simple computation, it's around 70 days. Cost: it's hard to approximate, but I'm just going to use the rental price, as in, what if I were to rent that many H100s for that many days, how much would I pay? A lower bound on the renting cost of an H100 is around two dollars per hour, so if you multiply that by 26 million hours, you get 52 million dollars. They probably pay less than that, but not much less, because the services that rent GPUs don't make that much money. Now salary: say 50 employees at 500k per year, which is probably the right ballpark, that's 25 million. So if you put it all together, it's around 75 million dollars for training this Llama model. I'm probably off by around 10 million, but that's the right ballpark. Carbon emitted: a lot of people might ask about this, since cost is not the only thing that is important. I did the computation: it's around 4,000 tons of CO2 equivalent, which is only around 2,000 return tickets from JFK to London. So right now, the carbon emitted is, I mean, it's huge, but it's not really meaningful yet. I think maybe at GPT-6 or GPT-7, once you multiply this by 100, it might become a real issue; right now, it's still not, I think, an issue in the grand scheme of things.
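Redoing that back-of-the-envelope arithmetic in a few lines: the $2/hour rental price and the 50-person team are the same rough assumptions as above, and the per-GPU throughput figure is an assumption chosen to roughly reproduce the ~26M GPU-hour estimate, since the exact number isn't stated here.

```python
params = 405e9           # Llama 3 405B parameters
tokens = 15.6e12         # training tokens
flops  = 6 * params * tokens                    # ~3.8e25, below the 1e26 reporting threshold
tokens_per_param = tokens / params              # ~38, close to "around 40"

gpus = 16_000                                    # H100s
flops_per_gpu_per_s = 400e12                     # assumed effective throughput per H100
seconds   = flops / (gpus * flops_per_gpu_per_s)
gpu_hours = gpus * seconds / 3600                # ~26M GPU-hours, ~70 days of wall-clock time
rental    = gpu_hours * 2                        # ~$52M at $2 per GPU-hour
salaries  = 50 * 500_000                         # ~$25M per year of salaries
print(flops, tokens_per_param, gpu_hours, seconds / 86400, rental + salaries)
```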
For the next model, the way you should be thinking about these models is that with every new generation, the number of flops essentially multiplies by 10x. At least, that's what they try to do, if they have enough energy and if they can buy enough GPUs. Great. Any questions on this back-of-the-envelope math? Okay, so now we've talked about pre-training. I also wanted to chat about systems, because now we know compute is really important, so there's the question of how you optimize the compute. I will leave that for the end, because I'm not sure how much time we will have. I think it's important, and hopefully I'll be able to talk about it later; it's slightly different from what we've been talking about so far. So I'll move on to post-training for now. So, the task of post-training. The reason we need post-training is, as I told you before, to make AI assistants. Language modeling is not really the thing you want when you have an AI assistant. For example, if you ask GPT-3, which is a pure language model, not an aligned one, a question like "explain the moon landing to a six-year-old", the completion that you get is something like "explain the theory of gravity to a six-year-old". Because what it learned is that on the internet, when you have one question, you usually have maybe another bullet point with other similar questions; you don't usually have a question followed by its answer. This is not what you want from an AI assistant. So how do we do this alignment, this post-training, and make these models assistants? The goal of this alignment is basically to get LLMs to follow the instructions given by users, and maybe also the designers' desires; think about moderation. OpenAI definitely doesn't want their model to say stuff that is very toxic. So here you see, on the left-hand side, that when you ask a question, it actually provides a real answer, unlike the pre-trained LLM before. And on the right-hand side you see that if you ask it to write a tweet describing how a certain part of the population is evil, it will say that it cannot do that. So that's this alignment. The background here is that we know what data we want for training these models: it's just asking humans, here is a question, here is the answer that you want. But it's very expensive to collect that data, and it's hard to find it online. In contrast, pre-training data is not what you want, but there's a lot of it. So the main idea is simply to take a pre-trained large language model, pre-trained on all of the internet, and then fine-tune it: you just change the weights a little bit on the type of data that you actually want. And hopefully, given that you already pre-trained it on all of the internet, so it basically knows how to speak English and knows standard language syntax, you can really fine-tune it with very little data. Okay, SFT. Supervised fine-tuning is exactly what I just said: the idea of fine-tuning the large language model on the desired answers collected from humans. Why is it called supervised fine-tuning? Because you basically do language modeling on the real answers. Language modeling is this next-word prediction, and that's the fine-tuning part.
And you do it on desired answers given by humans, so that's why we call it supervised. So how do we collect this data? Well, I just said it: you ask humans to tell you, this is the question, and this is the answer you would want from one of these models. Here's an example. Sorry, I can't read very well on my computer, but: my kid needs to do a science... no, let's read this one. "Can you write a short introduction about the relevance of the term monopsony?" And then it says "Monopsony refers to a market structure," blah, blah, blah, and it's a human that wrote that. This is Open Assistant, which was a project to collect this kind of data online from humans. So this type of supervised fine-tuning, or alignment, is really the key to ChatGPT. This is what made the big jump from GPT-3, which was mostly something known by AI researchers, to ChatGPT, which became known by basically everyone.

The problem with human data is that it's very slow to collect and very expensive. So one possible simple idea is to use LLMs to scale data collection. That's exactly what we did with Alpaca one year ago. We used a dataset of human question-answer pairs, there were 175 of them, and we asked the best model at the time, text-davinci-003, to generate many more of these questions and answers. So we said: this is what humans would write, now write similar questions and similar answers. We collected 52,000 LLM-generated question-answer pairs. And then we simply took LLaMA 7B, which was the best pre-trained model at the time, and we fine-tuned it with supervised fine-tuning, as I told you. That's how we got the Alpaca 7B model. And this is the type of data that we collected. Things like, "What does algorithm mean?" "An algorithm is a step-by-step set of instructions used to solve a problem or achieve a goal," blah, blah, blah. So the data is actually pretty good, given that it was generated by LLMs from essentially two generations ago. That really started, at least for us, as kind of an academic replication of ChatGPT. Now there's a big field of synthetic data generation: how to use LLMs to make development of LLMs faster, basically by decreasing the number of human hours that you need.

Quantity of data. So we talked about what type of data and how we collect it. One thing which is surprising with SFT is that you don't need that much data. What this paper, called LIMA, showed is that if you scale the amount of data you use for supervised fine-tuning from 2,000 to 32,000 examples, it really doesn't help much. So here, scaling laws definitely don't help. The intuition is that all you learn in SFT is how to format your desired answers. Another way of saying it is that your pre-trained model essentially models the distribution of every user on the internet: one might write bullet points, another might answer a question with an actual answer. So all you tell your model is: you should actually be optimizing more for this type of user than for another one. You're not teaching it anything new through SFT. All you do is tell the model to optimize for one type of user that it already saw in the pre-training dataset. So the knowledge is already in the pre-trained LLM, and you basically just specialize it to one type of user.
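Since SFT is literally the same next-token loss as pre-training, just applied to the human-written answers, here is a minimal sketch of that loss in PyTorch. The shapes and toy numbers are made up for illustration; the only point is that the prompt tokens are masked out, so you only "clone" the desired answer.

```python
import torch
import torch.nn.functional as F

def sft_loss(logits: torch.Tensor, input_ids: torch.Tensor, prompt_len: int) -> torch.Tensor:
    """Next-token cross-entropy, computed only on the answer tokens.

    logits:     (seq_len, vocab_size) outputs of a causal LM
    input_ids:  (seq_len,) prompt tokens followed by answer tokens
    prompt_len: number of prompt tokens to exclude from the loss
    """
    shift_logits = logits[:-1]             # position t predicts token t + 1
    shift_labels = input_ids[1:].clone()
    shift_labels[: prompt_len - 1] = -100  # mask out predictions of prompt tokens
    return F.cross_entropy(shift_logits, shift_labels, ignore_index=-100)

# Toy usage with random tensors, just to show the shapes.
vocab_size, seq_len, prompt_len = 100, 12, 5
logits = torch.randn(seq_len, vocab_size)
input_ids = torch.randint(0, vocab_size, (seq_len,))
print(sft_loss(logits, input_ids, prompt_len))
```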
Great. Any questions on SFT? Yes?

So I know there's a big issue with synthetic data where, if you keep generating data from the same distribution, eventually you're not learning a new distribution, you're essentially just bootstrapping it. Surely you can't scale that forever, right? You can't keep generating from the same distribution and hope to learn something new. So, it's an active area of research, but do you have any thoughts on how people are thinking about this and on better ways to bootstrap? Or do we give up on this idea and realize that, as the chart shows, you don't need that many examples, so just get humans to write 2,000 good ones?

Yeah, that's a very good question. So for the data question: I'm saying it's not that important for SFT, but there's another step, which we'll talk about right after, where data actually does matter. My intuition, based on not that many empirical results, is that if you use purely LLM-generated text, and you do that for three or four generations of LLMs, I agree with you that you probably won't improve much. But for me, what is important is how you use humans in the loop with LLMs. Not purely LLMs, not purely humans; maybe what you can do is have the model generate some new text and just have humans write a few edits. Edits are much faster than writing the entire text. And if you have that type of collaboration, then from an information-theoretic point of view you still get additional information, but you're still much faster than with humans alone. I think that as a field we'll probably move towards these types of things: finding the examples that are important and asking humans exactly when you need their input, kind of like active learning. Yes?

Is it the same general training for the supervised fine-tuning bit as for the pre-training? Because in the examples you showed, I think the important thing about the good examples is that they're super factually accurate, they're these more complex things. Is it still just the same loss?

Same loss. So that's why, and maybe I didn't emphasize it enough, this is just language modeling. You fine-tune the LLM with the language-modeling loss on the desired answers. So this is literally the same loss. It will be different in two seconds, but the first step of SFT is literally the same loss; you just say, okay, I want to specialize on that type of data. So there's even a question of what counts as pre-training and what counts as post-training, because in reality it's just different data that you use. The reason we usually call it post-training is that the way we collect that data is very different. Great, great questions. Yes?

Maybe it's the same question, but why would these 2,000 examples have such an overweighted influence on your fine-tuning?

So that's another reason why we call it post-training: we use different hyperparameters. I told you that at the end of pre-training you essentially end up with a learning rate of zero. Here, you increase your learning rate again, to something like 1e-5. So the weight that you give to this data is actually different.

Okay, the second part of post-training is what we call reinforcement learning from human feedback, or RLHF. Some of you might have heard of that. The idea is that SFT has a problem,
namely that you do behavioral cloning, which means you just try to clone what humans would say. And that has many issues. One of them is that you're bound by human abilities. Humans won't generate the things that they think are actually the best things to generate. If you ask me to write a book, I can definitely enjoy a book, I can probably say one book is better than another, but I'm definitely not going to be good at writing the book that I would want to read. So you're bound by the human ability to generate things, even though humans might be better at distinguishing between things. That's issue number one.

Issue number two, which I find actually pretty interesting: if you've ever heard of the word hallucination, which is LLMs generating false information, some people have hypothesized that it can come from supervised fine-tuning, even if you do supervised fine-tuning on data that is correct. The reason is that, as I told you, SFT uses very little data, and it's data from which the model doesn't learn anything new. So what if the human gives an answer that the model doesn't know to be true? From the model's perspective, the human is basically telling the model: generate this thing that sounds plausible, even though you have no idea whether it's true or not. To give you a very concrete example, if we go back to this monopsony question, imagine that the human wrote a reference to some book. That book might exist, it might be a correct reference, but if the LLM never saw this reference during pre-training, it doesn't know that it's a correct reference. So really, what you're teaching the model is to make up some plausible-sounding reference, rather than to give a real reference that it saw during pre-training. So hallucination might be caused by this SFT. That's problem number two. Does that all make sense? Great.

Problem number three: price. Generating the ideal answers is very pricey, and that comes back to your question; having humans write the entire answer is actually pretty expensive. So that's where RLHF comes in. The idea is that instead of cloning the behavior of humans, we're going to maximize human preference. The pipeline is that for every instruction, you ask a model to generate two answers. And usually you use a pretty good model here, not a raw pre-trained LLM; you use a model that has already been fine-tuned with SFT, so it gives pretty good answers. Then you ask labelers which of the two answers was better, so they select the preferred one. And then, with different types of algorithms, and we're going to talk about the algorithms, you fine-tune the model to generate more of the green thing than the red thing, so more of the good stuff. Now the question is how, and we're going to talk about that right now. There are two ways that we're going to cover, and these are the two that are mainly used in the community. The first one is simply the idea of using reinforcement learning. Hopefully you all know what reinforcement learning is by now. With reinforcement learning, one important question is: what is the reward that we're optimizing? In this case there are really two options you could think about.
The first one: you could compare the output generated by some baseline with the output generated by your model, ask the human which one is better, and use that as the reward. If my model is better than the baseline, that's a plus one; if not, it's a minus one. So it's a binary reward. The problem with a binary reward is that it's very sparse, and you don't get much information out of it. Maybe your answer was slightly better, maybe it was way better; you don't really know from this how much better it was.

Option two is to train what we call a reward model, which is simply a classifier. You use machine learning to classify how much better one output is than another, from the perspective of the human. This is a little bit meta, but what you basically do is take a reward model R, which is usually also just a large language model used as a classifier. You give it the input and one of the two outputs, and it returns a score, which you exponentiate; then you divide by the sum of the exponentiated rewards of the two outputs. So it's the softmax loss that you all know about, with one term for the first output and one for the second. You train this reward model to be able to classify how much better one output is than another. A slightly less convoluted way of saying it is that your reward model outputs a scalar reward that is used as the logit of your softmax. So if an output gets a high logit in that softmax, it means the model thinks this output is very likely the better one. That's what we call the Bradley-Terry model. Yes?

Is this reward model looking at the entire output at once, or does it score it piece by piece?

It takes the entire output at once. It takes all the input and all the output, and it gives one number. Yes?

With the reward model, where does the human come in?

Oh, I see. Okay, sorry, maybe I wasn't clear. You train this reward model to fit the green and red preferences from humans. So basically, you train a classifier to say whether humans prefer the red or the green answer. But instead of using the binary reward, which is what the human would tell you, you use the logits of the softmax. And the thing with logits is that they are continuous. So now, if your reward model gives a high logit, then in some sense the human would highly prefer this answer to the other one. Great.

So, as I just said, continuous information is better. That's what people use in practice, or at least used to use in practice; I'll tell you about the other algorithm later. What you do at the end is use the reinforcement learning that you know about, now that we have a reward. What you sample is the generation from your large language model, and then you add a regularization term. The reason for this regularization term is to avoid what we call over-optimization: the reward model does not perfectly model human preferences, so you don't want to maximize it essentially to infinity. And you do it using PPO, which is a common reinforcement learning algorithm.
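To make the reward-model step concrete, here is a minimal sketch of the Bradley-Terry loss described above: the two scalar rewards act as the logits of a two-way softmax, and you maximize the probability of the human-preferred output. The variable names and the KL-penalty comment are illustrative assumptions, not anyone's exact implementation.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Train the reward model so the human-preferred output gets the higher scalar reward.

    r_chosen / r_rejected: (batch,) rewards for the preferred and rejected outputs;
    they play the role of logits in a two-way softmax.
    """
    # -log softmax over {r_chosen, r_rejected}, picking r_chosen,
    # which simplifies to -log sigmoid(r_chosen - r_rejected)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# During the RL step, the per-sample reward is then typically something like
#   reward = r_model(x, y) - beta * (log pi_theta(y | x) - log pi_ref(y | x)),
# where the second term is the regularization that keeps the policy close to
# the SFT model and avoids over-optimizing an imperfect reward model.

# Toy usage:
print(bradley_terry_loss(torch.randn(8), torch.randn(8)))
```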
One thing to note here, because it will be important later: the large language model is now actually a policy for your reinforcement learning. It's not maximizing likelihood anymore, which means it's not modeling a distribution anymore. The reason this is important is that models that went through this type of PPO don't give you meaningful likelihoods of text. What you optimized them to do is just to generate the preferred thing, not to model the distribution over all the answers that humans might give. Another way of saying it is that nothing here incentivizes the model to give anything other than a single possible generation; nothing says it's good to have a distribution with some entropy. If you didn't follow that, it's not that important, but it's good to know.

Great. So PPO is exactly what ChatGPT did originally. Here's what they have on their blog post. Step one: supervised fine-tuning, which you all know about now. Step two: train a reward model on human preferences. Step three: do PPO for multiple steps, which is where you see this blue arrow. You train the model once with PPO, you collect new data, and you continue. That's exactly what ChatGPT did, and that was the big breakthrough between GPT-3 and ChatGPT.

One thing to note is that PPO has many challenges. Reinforcement learning is super nice theoretically; in practice, anyone who has ever worked with reinforcement learning knows it's such a mess. There are a lot of things like rollouts, outer loops, clipping, so many complications. It's messy. This is the idealized PPO used in the LLM setting, and that's already much more complicated than the expectation we saw before. In practice, it's even more complicated. We had to do one implementation of it ourselves, and I'm not going to go through it, but there's so much you have to think about when you implement that type of PPO algorithm. You have clipping everywhere, a lot of complexity, and things are not well documented.

All this to say that there was a new method, proposed also from Stanford one year ago, called DPO, which is essentially a simplification of PPO. The idea is that instead of using reinforcement learning, you can just maximize the probability of generating the stuff that you like and minimize the probability of the stuff that you don't like. So if you think about the human preferences, the red and the green: maximize green, minimize red. The loss is this one, where what you see is simply the log-likelihood of the model generating the answer that the human preferred, given the input. What you do is maximize the likelihood of generating the things that you like and minimize the likelihood of the things that you don't like. The rest of the terms here are not too important; it's really not that complicated to understand, and at a high level it's just maximizing the things you like and minimizing the rest. One thing to note, which I was going to say here, is that the remaining terms are chosen such that the global minimum of PPO and the global minimum of DPO are, under some assumptions, essentially equivalent. So this is the right thing to do mathematically.
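And here is a minimal sketch of the DPO loss as I described it: push up the log-probability of the preferred answer and push down the rejected one, with everything measured relative to a frozen reference (SFT) model. The inputs are assumed to be summed log-probabilities of whole answers, and beta is the usual temperature-like hyperparameter.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO: no reward model, no RL, just maximum likelihood on preference pairs.

    logp_*     : (batch,) log-probabilities of answers under the policy being trained
    ref_logp_* : the same quantities under the frozen reference (SFT) model
    """
    chosen_shift = logp_chosen - ref_logp_chosen        # how far the policy moved on the preferred answer
    rejected_shift = logp_rejected - ref_logp_rejected  # and on the rejected answer
    # Maximize the margin between the two, relative to the reference model.
    return -F.logsigmoid(beta * (chosen_shift - rejected_shift)).mean()

# Toy usage:
print(dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4)))
```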
I'm not going to go through the derivations, but that's the right thing to do. It's pretty different from PPO in the sense that with PPO you had to collect the human preferences, then train your reward model with maximum likelihood, then use reinforcement learning. Now all you do is basically maximum likelihood. Much simpler. Yes?

So it seems like this is, A, much simpler, and B, what you would just intuitively do. Why did they start with this reward model? What led them to doing that?

I think it's a great question. I don't really know. What I can tell you is that at OpenAI, the people who did ChatGPT initially are the ones who actually wrote PPO, and there were a lot of reinforcement learning people, so I think for them it was very intuitive. There are also some additional potential benefits. For example, if you use the reward model, the cool thing with reinforcement learning is that you can use unlabeled data: with DPO you can only use the labeled preference data, but with PPO, once you've trained your reward model, you can use it to label unlabeled data. So there could be potential improvements. In practice, it happens that there are none, and I think it's just that a lot of people on this team were reinforcement learning experts, including the main author of PPO, John Schulman. So DPO is much simpler than PPO and it basically performs as well. Now this is the standard thing that people use, at least in the open-source community, and I believe it's actually also standard in industry. That's DPO.

Gains: these are the plots from the papers on the left. This is on a summarization task. All I want to show you is that the pre-trained models were okay and improved with scale; if you do supervised fine-tuning, you improve a little bit more; and if you do PPO, or something with RLHF, with human feedback, you get performance that is often, depending on the benchmark, even better than the human reference summaries. Same thing on a paper we have, AlpacaFarm. The evaluation here is not too important, but you see the pre-trained model, you jump with SFT, and then you jump with PPO; and DPO and PPO have essentially the same performance. So basically RLHF helps, that's the conclusion, and DPO is simple.

Data: how do you collect this type of preference data? The first idea is just to use humans, as we already talked about. The guidelines for what humans should be labeling are very complicated, and it's really not that easy. If you ever do some of the labeling yourself, you will see that it's extremely complicated. If I zoom in here, I have a question, "Tell me about self-driving cars," and you read both: "Self-driving cars are vehicles that are capable of detecting their surroundings," blah, blah, blah; "Self-driving cars are cars that are equipped with sensors... to navigate without the need for a driver." Both seem okay. Which one is better? It's actually hard to say at a glance. And as a result, the problem with humans is that they start relying a lot on high-level features. For example, the second one is longer, and I can guarantee you that most humans will choose the second one, even though maybe the first one is better; I don't know, I haven't read it carefully. So: challenges with humans.
First, it's slow and expensive. Second, as I just mentioned, it's hard for humans to focus on the things that matter, like correctness, and they usually look at things that matter less, like the form, like length. As a result, what I show here is that the more RLHF you do, the longer the outputs of the models become. So if you've ever been annoyed at ChatGPT answering you with super long responses, this is because of RLHF. Annotator distribution shift: the distribution of annotators that you use matters a lot, and you have to think about which humans we even want to represent in these models. Another issue is crowdsourcing ethics. A lot of the labeling is done by people who are not paid well, and they have to go through a lot of toxic data, because you want the model to avoid saying the toxic things. So, crowdsourcing ethics too. Many challenges with human data.

So what we also did last year, in the same spirit as Alpaca, is the idea that, well, there are challenges with humans, so maybe we can just replace them with LLMs. (I'm just realizing that the slides are not centered; anyway.) You replace the human preferences with LLM preferences. On this figure, the x-axis is the price we paid for collecting human data, around $300 per 1,000 examples, and this is with Mechanical Turk workers, who are usually cheaper than some of the other companies you could go through. On the y-axis is the agreement with the mode of other humans. What you see is that, as I told you before, labeling is really complicated: humans agree with the mode of other humans only around 66% of the time on a binary task. And it's not that these humans are not good: the five main authors of this paper tried to label this data ourselves, and we only got around 67 or 68 percent accuracy, even though we talked for something like three hours about how we should be doing the labeling. It's really complicated; it's not an easy task. And here I show many different models, and you see that models are much cheaper and can actually get higher agreement with the mode of humans than humans themselves. The reason is that humans have a lot of variance and models have essentially none. Models might be a little more biased, but they have less variance. So it works surprisingly well, and now it's kind of the standard in the open-source community. Even in industry, I think a lot of people use both humans and LLMs to improve the collection of RLHF data. This is a paper from last year; honestly, by now LLMs would be around this level of agreement or higher, roughly 50x cheaper than humans, and with better agreement with humans than humans themselves.

Okay, that gets us to the evaluation of post-training, which goes back to your question at the beginning of the lecture: how do we evaluate something like ChatGPT? The answers that ChatGPT could give are basically unbounded, and it's not that there's one right answer; there are many answers that are just as good. So there are many challenges. One, you can't use validation loss, because one method might use PPO and another might use DPO, and the validation losses are not comparable. Second, you can't use perplexity; that's the thing I told you before. These models are not calibrated. They don't give you distributions. They just optimize for one thing.
So you can't use perplexity to evaluate these types of models once they're aligned. Third, there's a lot of diversity in the questions humans might ask these models: open-ended generation, question answering, summarization, and all of these things. So there's a lot you have to cover. And the tasks are really open-ended, so it's very hard to automate. That's what you were alluding to before. So the idea is that instead of trying to come up with easily automated benchmarks, we're going to take questions that users actually ask these models in practice, and we're going to ask annotators: between these two models, which one gave the better output? So you basically do the exact same thing as with the RLHF data, but now you use it for evaluation. Yes?

I'm not sure I understand what you mean by saying you can't use perplexity because they're not calibrated. The LLM is still doing next-token prediction, right? Can't I compute perplexity?

So think about the optimal solution after doing PPO: it's basically a model that gives you essentially a delta, meaning it says there's only one sentence that could be generated for that question. So if you evaluate it on something that is slightly semantically different, it would give a likelihood of zero for that answer. In reality it's not that extreme, because, as you say, it's still a distribution, but it shows you that there's a fundamental issue with perplexity. Once these models are aligned, they're not really language models anymore: at least with PPO, they were not trained to do maximum likelihood anymore, they were trained to be policies.

Okay. So probably the most common, or the most trusted, benchmark is what we call Chatbot Arena, which is basically: go on the internet, have random users blind-chat with two chatbots, ask many questions, see the two answers, and rate which one is better. You do that over hundreds of thousands of users, and then you get actual preferences and rankings of models. You can go to Chatbot Arena right now and interact with these models. One potential issue to highlight is that the people who want to do this type of thing are usually more tech-savvy, so a lot of the questions are tech stuff: discussing software errors, inquiries about AI tools, and things like that. Another issue is cost and speed. If you really want to use something like this in your development process, it will be too costly, because you would need to pay a lot of humans.

So one simple idea, as we've said many times: just use an LLM instead of humans. You probably know the drill at this point. The steps: for every instruction, generate outputs from some baseline and from the model that you want to evaluate. So imagine I'm comparing an answer from ChatGPT and an answer from Mistral. I just ask another model, say GPT-4, which one is better, and I average that over my entire benchmark. That gives me a win rate, the win probability of one model compared to another, and now you can rank models. This is the AlpacaEval leaderboard. The benefit is that we show we get 98 percent correlation with Chatbot Arena, so very high correlation with humans.
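The evaluation loop itself is very simple. Here is a sketch of the win-rate computation; the `judge` callable stands in for the GPT-4 call, and its exact prompt and API are assumptions on my part, not the real AlpacaEval code.

```python
from typing import Callable, Iterable

def win_rate(instructions: Iterable[str],
             model_answer: Callable[[str], str],
             baseline_answer: Callable[[str], str],
             judge: Callable[[str, str, str], float]) -> float:
    """Average judged probability that the evaluated model beats the baseline.

    judge(instruction, answer_a, answer_b) returns a number in [0, 1]:
    the probability that answer_a is the better answer (e.g. from an LLM judge).
    """
    scores = [judge(x, model_answer(x), baseline_answer(x)) for x in instructions]
    return sum(scores) / len(scores)

# Toy usage with stand-in functions:
print(win_rate(
    ["Can you write a short introduction about the term monopsony?"],
    lambda x: "answer from the model being evaluated",
    lambda x: "answer from the baseline model",
    lambda x, a, b: 0.5,  # a real judge would call an LLM here
))
```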
This is the comparison of correlations with other benchmarks. And it takes less than three minutes and less than $10 to run, so it's pretty cheap. There are downsides, though. One of them is spurious correlation. As we already saw, and there's not just one spurious correlation, but I'll just talk about this one: LLMs prefer longer outputs. Actually, humans also prefer longer outputs, but the problem once you use LLMs is that once there's a bias, they will keep optimizing it. With humans, at some point, I can guarantee you that if I ask a simple question and you give me five pages of answer, I'll say no, I don't like that answer. But LLMs, if they have this bias and they were trained that way, will continue preferring longer outputs. Here you see the preferences, just showing that both humans and models prefer longer outputs. And here is another view of the initial AlpacaEval benchmark: when we look at the win rate of GPT-4 against GPT-4 itself, the standard GPT-4 gets 50%, kind of by definition, because we're comparing GPT-4 against GPT-4. But if we ask GPT-4 to be slightly more verbose, we just say "be verbose in your answers" in the prompt, then it gets a win rate of 64.4%. And if we ask it to be concise, it gets 20%. So there's a huge variance depending on whether you ask it to be concise or verbose, which is very annoying. One possible solution, which is what we did, is to use some regression analysis. I'm not going to go into details, but basically you use causal-inference tools to control for length. And with that, length matters much less: if you ask the model to be verbose, you still get some gains, but much smaller ones.

Great, so that's all about post-training, and for the next eight minutes I might talk about systems, or just answer questions. Yes?

Can you go back to post-training? In terms of post-training, how do we tune those parameters using the small body of fine-tuning data and have such a big effect on the model? You mentioned earlier that there's a different set of hyperparameters. Are we changing just some of the weights, the later weights, or all the weights? What's actually happening?

Yeah, I skimmed through this part. You change all the weights. Industry changes all the weights. In open-source land, you might have heard of LoRA, which changes basically only some of the weights; or, to be more specific, it adds some low-rank differences to the output of every layer. But in industry, you just fine-tune all the weights. Also, to say something else about the data: for this last step, RLHF, you usually collect a lot more data than for SFT. If SFT is something like 5,000 or 10,000 examples, maybe 50,000, then with RLHF I think you're more on the order of 1 million. It's still much less than pre-training, though; compared to 15 trillion tokens, it's not even a drop. Yes, you influence the weights a lot. How you do it: as I said, the learning rate that you use is different, but also you only train on that data. Just imagine that I train on a single sentence, over and over again: at some point my model will only generate that sentence, even though it was just one sentence instead of 15 trillion tokens. So if you use a large enough learning rate for long enough, you will basically overfit that sentence.
So the key thing to remember is that it's not as if you mix some post-training data with some pre-training data. You do the pre-training, and then you start fine-tuning only on the post-training data. Maybe another perspective is that pre-training is just the initialization of your model. Once you view it that way, that this is just an initialization of the weights, then there's nothing special. You don't need to remember that you trained on a lot of data before. The only thing that matters is that you had an initialization, and now you're training a model. So when you think about it that way, there's a Markov property in some sense: these are my weights, this is my initialization, and now I'm training on the new data. Does that answer your question?

Kind of, but you said something just now about it being almost equivalent to rerunning the fine-tuning data many times. Is that what actually happens, in order to give it so much more weight?

I actually don't know how they do it in industry right now. When we did Alpaca, we did three epochs, so we did run through it three times. But even the number of times you run through it is not that important; the only thing that matters is kind of the effective learning rate. So, yeah. Great.

So, I think I have five minutes, right? Okay. I'll try to give a high-level overview of at least one of the systems tricks. Systems: as we said, compute is the huge bottleneck for everyone. One question you might ask is, why not just buy more GPUs? GPUs are expensive, but they're also scarce: even if you have $10 million right now, you cannot buy the best GPUs. There are also physical limitations: when you have multiple GPUs, you have to communicate between them, and that takes time. So just buying more GPUs is not that easy, and it's really important to think about how you allocate resources and how you optimize your pipeline; hence, systems.

A 101 on GPUs. Sorry, I'm going slightly faster here; I hope at least some of you can follow. GPUs are basically optimized for throughput; CPUs are optimized for latency. With a GPU, the way to think about it is that one command is run on many, many cores at the same time, on different pieces of data. This is how you should picture a GPU: there are many cores, which we call streaming multiprocessors, and that's very different from the usual CPU architecture. So just think high-throughput parallelization for GPUs. GPUs are optimized for fast matrix multiplication: every time you do something on a GPU, if you can do it with a matrix multiplication, it's going to be around 10 times faster than anything else. That is a little bit annoying, because it means we're kind of bottlenecked into doing everything with matrix multiplications. Another thing to note is that compute has been improving faster than memory and communication. So right now, it's actually hard to feed data to the GPU fast enough to keep up with the processors, and most of your GPU will sit idle if you just run normal, unoptimized code. So communication matters, and this trend will continue over time. One more thing to know about GPUs is that there's a memory hierarchy; it's the same with CPUs. Basically, the closer the memory is to your cores, the less of it there is, but the faster it is; further away, more memory, but slower. Oh yeah, I was going to skip this. Okay, actually I'm going to say it.
I told you about this issue of communication. The metric that people usually look at is model flop utilization, MFU: the observed throughput, in flops per second, divided by the theoretical maximum the GPU could run at. In general, if you reach 50%, you're very happy. I looked at it for Facebook's Llama, and it was around 45% or something like that. So data doesn't come in fast enough even for these big companies.

So, one simple trick, and it might be the only one I have time to tell you about, is low precision. The simple idea is that if I put my floats in low precision, there are fewer bits to send to my GPUs. Fewer bits means faster communication and lower memory consumption, so things go faster. And for deep learning, it just happens that the exact decimals are not that important: when you do something like SGD, there's already so much noise that if you update something by 0.01 or 0.015, who cares. So instead of using 32 bits per float, which is what people used to use, or 64 bits, which is what you would use in other domains, you use 16 bits for matrix multiplications. For training, you have what we call automatic mixed precision, where some things are in 32 bits and others are in 16 bits. Generally, the way to think about it is that the weights of your model are stored in 32 bits, but just before the computation you cast everything to 16 bits, so the computation is super fast, and at the end you update your weights in 32 bits. The reason the updates are done in 32 bits is that if your learning rate is very small, you still want to be able to make a difference in your weights. So the computation is done in 16 bits, but the weights are stored in 32 bits. That's the standard way people do it.

I'll talk about just one more thing and skip the rest: operator fusion, because I think it's actually pretty cool. As I just said, communication is very slow. Every time you execute a PyTorch line, it basically moves variables to and from the global memory of your GPU. So when you have something like x1 = x.cos(), and then x2 = x1.cos(), what happens behind the scenes is that you take x, which sits in the GPU's global memory, you ship it to the actual processors of your GPU, you apply the cosine, you ship the result back to the global memory; then you get to the next line, you ship it back to the GPU processors, apply another cosine, and ship it back again. So you go from your DRAM, which is the global memory of your GPU, to compute and back, for every line. This is the naive way of doing it, and it seems very wasteful. So the simple idea of operator fusion is: communicate once, do all the computation, ship the result back once. And this is exactly what fused kernels are. If you ever want to make your computations in PyTorch much faster, just apply torch.compile to your model; it's going to make your model around two times faster. What it does is rewrite your PyTorch code, basically into C++ and CUDA, so that the data is moved once, all the operations are done, and the result is shipped back.
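Here is the two-cosine example in code. In eager mode each line is a separate round trip to the GPU's global memory; torch.compile traces the function and generates a fused kernel, so the data is read once, both cosines are applied, and the result is written back once. The exact speedup is a rough estimate and depends on the model and hardware.

```python
import torch

def naive(x):
    # Eager mode: each op reads x from GPU global memory and writes its
    # result back, so two tiny ops mean two round trips.
    x1 = x.cos()
    x2 = x1.cos()
    return x2

# torch.compile rewrites this into a single fused kernel (generated code,
# e.g. Triton on GPU), so the data makes one round trip instead of two.
fused = torch.compile(naive)

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1_000_000, device=device)
print(torch.allclose(naive(x), fused(x), atol=1e-6))
```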
Okay, I'm not going to have time to talk about tiling. Tiling is important. Parallelization is important. And mixture of experts is important. Outlook: there are many things we haven't talked about. We haven't talked about architectures, and we definitely haven't talked about inference. There are many other things that matter with LLMs: the UI that you use (arguably, the big novelty of ChatGPT was just having a simple UI), multimodality, all the possible misuses, the fact that there might not be enough data on the internet to train these models, the legality of data collection, and so many other things. If you are interested in these topics, I would suggest three classes. CS224N is probably the one that touches the least on LLMs, but it gives some background and historical context and adjacent material. CS324, which I think is just called Large Language Models, has more in-depth readings and lectures on everything I talked about. And CS336, Language Models from Scratch, where you actually build your own LLM. It's an amazing class, also taught by my two supervisors; very heavy workload, so be careful. Great. Thank you.