Majestic has made significant investments in our training and undergraduate programme. Indeed, Majestic was incredibly honoured to receive recognition from the Princess Royal Training Awards, first in 2018 and again in 2021.

We have had the opportunity to invite talented students from local universities to work for us for a year during their degree course, with many returning after their studies to become fully-fledged graduate Majestic employees.

We are often impressed by what our team do, both in and out of the workplace, and are delighted when we can help support and share individual passions with a wider audience.

One of these people is our graduate developer, Vanessa, who has a particular interest in Machine Learning and ChatGPT. We were very proud to support her as she delivered a wonderful and funny talk on ChatGPT at Fusion Meetup.

We understand the talk has gone down well with the local developer community. While this is a little different from the usual SEO-oriented material we publish, we are delighted to share Vanessa’s presentation with a wider audience.

Fusion is a networking event that promotes tech knowledge sharing, and always has engaging, thought-provoking talks from showcased local talent alongside industry specialists from further afield.

Many thanks to Fusion for allowing us to clip and use this video. You can watch the full stream from the event on the Fusion Meetups website.


Transcript

Well, without any further ado, I will begin by hijacking this talk and saying a little bit about myself. I am Vanessa. I am a recent graduate from Aston University, currently working at Majestic, and, for my final year project, I had the stupid idea to do some machine learning.

I did some NLP, it barely worked, and I thought, well, this knowledge is gonna get shoved in the back of my mind. Then, 10 years from now, when the children are doing all the cool stuff, I’ll be like, “Oh, I was there when it was written. I was there when transformers became a thing!”

But, fortunately or unfortunately for me, in 2022, the year that I graduated, we saw the release of ChatGPT.

Now this has been pretty well received; it’s pretty impressive. I thought it was quite impressive. So I thought, “Wait, why shouldn’t I? Why shouldn’t I capitalise on this, and make a lovely talk on the subject?”

So, for anyone fortunate enough not to have had to question their job security: ChatGPT is an AI machine learning chatbot made by OpenAI, and I’ve actually got a description of it right here.

“ChatGPT uses advanced machine learning techniques such as deep learning and neural networks to generate human like responses. It is capable of providing information, giving advice and engaging in casual conversation. ChatGPT is a cutting edge technology that is changing the way people interact with machines, and it has many potential applications in fields such as customer service, education and health care.”

Now if I’d written that, I’d probably get kicked off the copywriting team. But this was actually written by ChatGPT itself. It wrote all that text, which actually is quite impressive, given that a few years ago we struggled to get AI to write even a few coherent sentences.

I’m not the only one who thinks this. Because if you try to use ChatGPT these days, you’ll be greeted with this lovely screen. And you’ll have to stay there and keep refreshing until they finally let you in, so you can ask ChatGPT how you make your béchamel sauce, or whatever the heck you’re gonna do.

So I think we can all agree, ChatGPT is pretty big, and it’s pretty smart. But what does that really mean?

I think we as programmers tend to imagine these problems as hard-coded logic; hard-coded business logic that you’ve written within your Java server. So, like: if the sentence matches this pattern, then we run this function; or maybe, if this query is of this type, we run this other routine.
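
To make that concrete, here’s a minimal Python sketch of the kind of hard-coded routing logic Vanessa is describing (the intents, patterns, and responses are invented for illustration):

```python
import re

# Hard-coded "chatbot" logic: match the query against patterns, then run a
# hand-written routine. Every behaviour has to be anticipated by a human.
def handle_greeting(query):
    return "Hello! How can I help?"

def handle_recipe(query):
    return "Fetching a recipe for you..."

ROUTES = [
    (re.compile(r"\b(hello|hi|hey)\b", re.I), handle_greeting),
    (re.compile(r"\b(recipe|bake|cook)\b", re.I), handle_recipe),
]

def respond(query):
    for pattern, handler in ROUTES:
        if pattern.search(query):
            return handler(query)
    return "Sorry, I don't understand."

print(respond("How do I bake a cake?"))  # runs the recipe routine
```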

But what if I told you that I could summarise what ChatGPT does using nothing more than this photo? More specifically, this part of this photo? Yes, Fusion goers, I am saying that ChatGPT is nothing more, and nothing less, than a next-word predictor.

Now for us to understand what I mean by this, we have to look a bit deeper. We have to actually see what machine learning does and how it works. We have to pop the hood of ChatGPT and see how some of this technology came about and was made. When we look underneath, what we see is one of these things: a very lovely and confusing directed graph.

I won’t give you a lecture on how machine learning works today. But just as a quick recap for those who know, and an introduction for those who don’t: what you do is bang in some numbers here at the beginning. These will represent, in ChatGPT’s case, your query, what you’ve just put in; bang it in there. Then off it plods through this network, through these connections, and these numbers will get multiplied in various ways, they will get summed, more numbers will be added to them. And as we plod through, we come to the end, where the result is what ChatGPT is outputting.
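
As a rough illustration of that plodding-through, here’s a toy two-layer forward pass with made-up weights; nothing like ChatGPT’s real architecture, just the multiply, sum, and squash idea:

```python
import math

# A toy forward pass: numbers go in, get multiplied by weights, summed,
# nudged by a bias, squashed, and handed to the next layer. All the
# weights below are arbitrary invented numbers.
def layer(inputs, weights, biases):
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.2, 3.0]                          # "bang in some numbers"
hidden = layer(x, [[0.1, 0.4, -0.2], [0.7, -0.3, 0.05]], [0.0, 0.1])
output = layer(hidden, [[0.6, -0.9]], [0.2])  # the result at the end
print(output)
```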

So machine learning, to summarise all this, finds the statistical link between what you’ve inputted and what should be coming out; so, your query and what ChatGPT is answering. All it’s doing is taking your question, doing some fancy maths to it, and then out pops this, like, long sentence.

All of this magic happens in here. If we abstract all this out, we see this input going into our machine learning model, and out pops our predicted output. I’ve got a few more examples if I lost anyone along the way.

So, we could train a machine learning model to predict the price of houses by giving it some information about them, like house type, maybe size, location, and the year that we’re currently in. If it’s had enough examples from the past, it should be able to predict, with some degree of accuracy, what the price of a house should be.
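
A sketch of that idea: a “trained” house-price model is, in the end, just a function from features to a number. The coefficients below are invented for illustration; a real model would learn them from many past sales.

```python
# A "trained" model, reduced to its essence: features in, number out.
# These coefficients are made up; training would find them from data.
def predict_price(size_m2, bedrooms, year):
    return 2_000 * size_m2 + 15_000 * bedrooms + 500 * (year - 2000)

print(predict_price(size_m2=85, bedrooms=3, year=2023))  # a rough estimate
```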

Or, we could show it some photos: some pictures of cats and pictures of guacamole. And we may ask our machine learning model to classify whether this is a cat, or guacamole, or, I think, that’s Dame Judi Dench.

All of this is powered by something made by this man. Does anyone know who this man is? Aaron, don’t say anything. Anyone know who this man is? Sir Isaac Newton! Aaron, nothing from you. Sir Isaac Newton or, for you maths nerds out there, also this man. There’s a bit of contention, a bit of maths drama; we’re not sure who really came up with this first. And what they came up with is: calculus.

This is magical juice, maths juice, that helps us train our machine learning models. And, to visualise this, because I have sworn not to put any algorithms or matrices in this talk, I’ll simply show you this graph. On one axis we’ve got our statistical model, how our numbers are being multiplied together, and on the other we have how wrong our AI is. So for this, we do need to know what the correct answer should be when we’re feeding stuff into our AI.

And we can use calculus to go down this gradient, to get a little bit less wrong. Over time, what we hope is that we’ll drive down this error by changing our machine learning model’s statistical model. We’ll plod on down until we reach some acceptable level of error; I mean, it’s most likely not going to hit zero, but something within the realms of deployability.
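
A minimal sketch of that descent: fitting a single parameter by following the derivative of a squared error downhill. The examples and learning rate here are made up.

```python
# Gradient descent on a one-parameter model y = w * x, with invented
# (x, y) examples whose "correct" w is 3. Calculus gives the slope of the
# error, and each step down that slope gets a little less wrong.
examples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0            # a poorly initialised model: very wrong to start with
lr = 0.01          # how big a step we take down the gradient
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
    w -= lr * grad
print(round(w, 3))  # ~3.0: an acceptable level of error, not exactly zero
```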

And the way we perform this is by giving examples, like from earlier. So we know that these are cats, and this is guacamole, so we can start feeding this into our AI, saying, “when you see this photo, this is a cat,” and, “when you see this photo, it’s guacamole.” And once it’s got enough examples, what we will then do is show it a previously unseen photo of a cat (this is my cat). And what we hope is that our AI will be able to predict, with some degree of accuracy, that this is, indeed, a cat, not guacamole, and not a terrible movie from 2019.

If I lost anyone there, just think: we take this randomly initialised AI model, and we turn it into this very highly tuned, very high-quality AI that makes some really good predictions. And the way we perform this is with some training.

Now, the best part of this talk! We have been granted exclusive access to the dog gym, where we will be observing three dogs performing their training routines. And yes, I did draw these all myself. Thank you very much.

First, we will be observing the supervised learning dog, and we see it here with a trainer. And what they’re doing is having a conversation. Now, what is our trainer trying to train our dog to do? (The dog is the machine learning model in this case.) What our trainer is trying to get our dog to learn is a translation problem, from English to French.

So the first thing we’re going to do, as the trainer, is say to the dog a phrase in English: “Cheese omelette”. And then our randomly initialised dog will have a random guess, and it will be absolute gibberish. And we’ll say, “okay, that’s fine!” The real answer is “Omelette du fromage”, and off our dog pops. And it’s going to do a bit of thinking, and it’s going to do a bit of that gradient descent that we spoke about, and it’s going to get a little bit less wrong. Then we’re going to repeat this exercise, telling our dog “Cheese omelette”, to which our dog responds with something that’s gibberish, but slightly less gibberish. And we’re going to correct it again, the dog’s gonna plod on… it’s gonna do some of that machine learning… it’s gonna do some of that calculus… gonna get a bit less wrong. And three hours later, what we hope is that when we say to our dog “Cheese omelette”, our dog can correctly predict “Omelette du fromage”. We’ve reached that acceptable level of error.
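
In code, that back-and-forth is just a loop: guess, compare with the labelled answer, get corrected. Here’s a sketch with a deliberately silly stand-in “dog” that memorises corrections; a real model would nudge its weights gradually instead, but the shape of the loop is the same.

```python
# Supervised learning, dog-gym edition. Memorisation stands in for the
# gradual weight updates a real network would make on each correction.
class Dog:
    def __init__(self):
        self.memory = {}

    def guess(self, prompt):
        return self.memory.get(prompt, "blargh florp")  # gibberish at first

    def learn_from(self, prompt, target):
        self.memory[prompt] = target                    # the correction

dog = Dog()
print(dog.guess("Cheese omelette"))                     # gibberish
dog.learn_from("Cheese omelette", "Omelette du fromage")
print(dog.guess("Cheese omelette"))                     # "Omelette du fromage"
```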

And we can do this with all kinds of problems. So we can have, like, a question, “How are you?”, and when our dog reads this, it knows it should say, “Good, thank you”. Or we can have some Bulgarian text here, and we can get our dog to translate it into various languages.

So, to capture this as a general case: our model is trained on a set of labelled data (it’s labelled because we have input and output), and it’s taught to predict the correct answer.

Now, in the unsupervised learning corner of the gym, we see a meme. Our brains are cast back to 2016, a simpler time, when this tweet was posted. This person claims to have forced a bot to watch 1,000 hours of Olive Garden adverts, and then made that bot generate its own. And, while I don’t believe that this was possible with the technology available at that time, this is an example of unsupervised learning.

It looks a bit similar, doesn’t it? We’re having another conversation. What our trainer’s doing here is quoting a book called The Art of War by Sun Tzu. And they’ve hidden the last word, very cruelly, from the dog, and they’re making it guess. So, they say to our dog: “All warfare is based on”, and our dog has some gibberish to tell us. We correct it: “No, dog, silly; ‘deception’”, and the dog’s gonna pop off. And it’s gonna do some thinking, it’s gonna do a bit of that gradient descent again, get a bit less wrong, and we’re gonna repeat this exercise. The dog says something slightly less gibberish, we correct it, off it pops, and many unbearable hours later, what we see is that when we say this quote to our dog, our dog will predict correctly: “deception”.

Now, what we’ve done here is we’ve just randomly plucked a sequence from a piece of text, we’ve hidden the last word in the sequence, and we’ve made our dog guess it. And we can actually do this with a large corpus of texts. So we can do all of Wikipedia, which you can download; you can feed that into an AI and make it learn how to finish the article, essentially. We can give it a bunch of books, a large collection of many books from many places. If you see it, you see it. Or movie scripts: we could feed it all the Shrek scripts, we could feed it song lyrics, we could feed it guitar tabs; anything that you can shove into this AI, you can teach it how to predict.
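
That trick of plucking a sequence from raw text and hiding the last word is easy to sketch; the labels come free from the text itself, which is why no human labelling is needed:

```python
# Turning unlabelled text into (context, next-word) training pairs by
# hiding the last word of each prefix. Any corpus works the same way:
# Wikipedia, books, movie scripts, song lyrics.
text = "all warfare is based on deception"
words = text.split()

pairs = [(" ".join(words[:i]), words[i]) for i in range(1, len(words))]
for context, target in pairs:
    print(f"{context!r} -> {target!r}")
# 'all' -> 'warfare', ..., 'all warfare is based on' -> 'deception'
```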

As a general case, we can sum this up by saying that our model is trained on unlabelled data, and it’s taught to predict the next word, the next bit of the sequence. This sounds quite similar to supervised learning, actually. So, in this context, they’re basically the same except for some, like, semantic differences. In other contexts, they can be wildly different. But we’re training language models at the moment: answer the question, finish the sentence; basically the same thing, and the subtle differences will come into play later.

And finally, we observe our reinforcement learning dog. This is, ladies and gentlemen, the first dog I drew for this talk; you can probably tell. And this dog has already been taught to speak. It’s a clever dog; not its first rodeo. And we’ve asked our dog a question: “Who are you?” And our dog, being actually a robot in disguise, has said: “beep boop I’m a robot boop”. We don’t want it to say that. No, we want our dog to sound human. So we say, “Bad dog! No. Wrong! You should not be sounding like that”, and our dog’s gonna get really sad, because it’s got no treats, and it’s going to change its answer.

So we might ask it again, a bit later: “Who are you?”, and our dog might respond with something far better: “I’m a robot”. And we say, “Brilliant! Good dog, have all the treats that I have, because that’s the answer that I want you to give”. And we’ll repeat this process again many, many times, really zoning in on the kinds of outputs that we want our dog to be giving us.
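
A sketch of that treat-giving loop, using a simple bandit-style update as a stand-in for the policy-gradient methods that real reinforcement learning from human feedback uses:

```python
import random

# The dog tries answers, the trainer scores them (treats), and rewarded
# answers become more likely. A toy stand-in for real RLHF, which uses a
# learned reward model and gradient-based policy updates.
answers = ["beep boop I'm a robot boop", "I'm a robot"]
weights = [1.0, 1.0]                  # how inclined the dog is to each answer

def treats(answer):                   # the trainer's judgement
    return 10.0 if answer == "I'm a robot" else 0.0

for _ in range(100):
    i = random.choices(range(len(answers)), weights=weights)[0]
    weights[i] += treats(answers[i])  # reinforce answers that earn treats

print(max(zip(weights, answers)))     # the preferred answer dominates
```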

Now, a bit of audience participation; I know everyone loves this. What was ChatGPT trained with? I’ve given you three different training methods. What do you think? I think I’ll get hands up. Hands up for supervised learning. Hands up for unsupervised learning. Okay, a lot of you. What about reinforcement learning? A few. Yeah, okay. Well, actually, you’re all right. And you’re all wrong. Because that was a trick question. It was trained with all of them. In the wise words of lyrical genius Hannah Montana, “we get the best of all worlds”.

First, we begin our journey with our randomly initialised, new, fresh-brain AI by doing some unsupervised learning, and training this really proficient document completer. So, we’re going to say that quote to it, and it’s going to guess. And actually, if we take this further, into all of the different types of texts that we might feed it, we might see this quote being repeated various times, in different ways. So it might be phrased like this: “deception is the basis of”, and then we would teach our dog to guess. Or we might feed it this long, long piece of text, to get the prediction from our dog really precise; it’s got all of this information to work with, so it can really get in there and make a really good guess.

And what we’ve kind of done here is actually distil some knowledge into our model. So our dog now kind of knows that deception and warfare, there’s some kind of link here. And this was written by, like, some fella that lived God knows how many years ago, Sun Tzu, what’s up with that? And our model knows how to complete the text we’ve given it. So if we start a Wikipedia article, the dog finishes it… if we start part of a book, it’s on it… if we wanted to write a recipe, maybe it’s read a few of those… boom, it’s got us covered.

But that does not a very good chatbot make, because all it’s going to do is try and complete, like, a Wikipedia article out of your query. So what we can then do is use that supervised learning I spoke about earlier to really fine-tune this question-and-answer format. So we, as the trainer, can ask the dog a question, “how do I bake a cake?”, and the dog is going to try and kind of elaborate on that, because, like, it read some forum from five years ago where someone asked this question and then proceeded to elaborate by saying, “Well, I want this carrot cheesecake, blah, blah, blah.” And we correct our dog; we say, “No, dog, I don’t want you to finish the sequence of the forum you read before. I want you to answer the question.”
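
The difference from the earlier pretraining is mostly the shape of the examples. Here’s a sketch of what such question-and-answer fine-tuning pairs might look like (the template markers are invented for illustration, not OpenAI’s actual format):

```python
# Instruction fine-tuning data: the same next-word training, but on text
# laid out as a question-and-answer script, so the model learns to follow
# "Question: ... Answer:" with an answer rather than rambling on like a
# forum thread. The markers below are hypothetical.
examples = [
    {"prompt": "Question: How do I bake a cake?\nAnswer:",
     "completion": " Mix flour, sugar, eggs and butter, then bake at 180C."},
    {"prompt": "Question: How are you?\nAnswer:",
     "completion": " Good, thank you!"},
]

for ex in examples:
    print(repr(ex["prompt"] + ex["completion"]))
```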

So we’re really changing the way our machine learning is thinking about this problem, from, like, “finish the Wikipedia article” to “have a conversation with me; like, this is a script you now need to complete”. Now our model will answer questions if we do that for long enough. But, as I’m sure we’ve all seen on Stack Overflow, some people can be very mean and condescending and not fun to talk to. Or, like, books can sound terrible, which probably doesn’t appeal to most people.

We want this machine learning model to be very appealing, right? So we can make lots of money. So what we can finally do is use some reinforcement learning to train our dog to talk a little bit more human, a little bit less robot.

So we can start asking it questions, and giving it treats based on its answers. We can even teach it stuff like human culture, like memes. I mean, if we say, “what is love?”, which one do you want your dog to respond with? If we do this enough times, our dog will end up a little bit like this, saying, “how do you do, fellow humans?” And when we combine all these (this is my favourite animation I’ve made), we get ChatGPT.

So is ChatGPT just a document completer? Is it, like, completing just the script we fed into it? What do you think? Are we in agreement? Or do you all think I’m still lying? Can I get hands up for “yes, it is just this document completer”? A few of you believe me; well, the talk goes on, don’t worry.

Yes. Yes was the right answer. ChatGPT doesn’t know that it’s talking to you. All it’s seeing is this script that needs completing, and it’s like: question from you, answer from it… question from you, answer from it. Maybe you elaborate a little bit on this, and it will learn how to elaborate on this. Because all it sees is this document that needs completion.

You don’t exist. I don’t exist. The script, the script is all! It is all that exists in ChatGPT’s brain, and I can prove this. Because I got a bit mean with ChatGPT; I started, like, pushing the boundaries a little bit. I gave it half a quote:

“The prince was written in 15”.

And surely, if ChatGPT was reading our question and really processing it and trying to think, like, how do I answer this? How do I pattern match this question? It might say, “Well, actually, The Prince wasn’t written in ‘15’. What are you, stupid? It was written in 1513.” But without skipping a beat, ChatGPT finishes the date, and then, like, adds an author, as though the speaker hadn’t changed.

Okay, maybe we’ve, like, trained that into it. Maybe we, as people, like it when ChatGPT finishes our sentences. Because, with that reinforcement learning, we’re teaching it memes; maybe we’ve taught it this as well.

Okay, well, I then went ahead and started introducing myself as ChatGPT, and you think, “Okay, well, now you’re pushing it a little bit. Okay, Vanessa, calm down; it’s gonna call you out now. It’s gonna say, oh no, you’re not ChatGPT. I am.”

But what we find instead is that it just kind of continues. It’s like, “yeah, I guess I started this sentence; I’m gonna finish it, I’m going to finish the sequence.” And maybe this was also trained in; maybe we’ve taught ChatGPT how to complete this sequence as well. Maybe this is just completing our sentence, because that’s what we’ve told it to do when it reads this type of question.

Well, I go on: I ask ChatGPT half a question, and, like, now this is getting silly, right? Surely ChatGPT will see that that’s not a real question. It’s gonna be like, “You’ve not finished this”, right? There’s more to write here. But what we find instead is that ChatGPT comes up with its own question, and then answers it, because it’s finishing the script. I’ve started the first part of the script, and it’s just carried on. It’s been like, “Okay, so the first part is done. So how would this continue?” It comes up with what it thinks is the statistically likeliest question, and then answers that question.

Okay. Okay. Now, this is my favourite one coming up. I fully introduced myself as ChatGPT, and I asked it what it needed assistance with today. And you’re thinking right now, “Okay, this is getting stupid. You’ve, like, doctored this, haven’t you?” But no, this was a brand new conversation, and I simply introduced myself as ChatGPT. And you’d think, “Okay, now it’s gonna be like, ‘No, you’re not ChatGPT. I know how this script goes. This isn’t a question. You’re trying to trick me!’” But instead…

It took the role of a person. We’ve flipped the script, and now it’s completing the role of a person. It’s asking me questions, and it’s acting all interested.

So yeah, I’m going to ask the question again: is ChatGPT just a word predictor? Can I get a show of hands? Do you guys think it’s just following the script? Right? Okay. Some of you still aren’t convinced. Well, unfortunately, I don’t have time to go on, but I could. ChatGPT is just a word predictor. And what I hope I’ve demonstrated here is that it’s a really well-trained word predictor that’s gone through this super-duper long process of talking to people, having things explained to it, and reading texts. And all of that training lets it do all of that amazing stuff that I read out at the beginning.