Chat GPT

As someone who teaches master's students, it's a complete nightmare for us. We have a lot of Chinese students and it's quite remarkable how many of them have suddenly developed an incredible English vocabulary (one essay I marked last week had the word 'panacea' in it and the lad struggles with basic verbal English). The similarity checks can't pick it up and we're just having to make do. We're trying to think of ways to mitigate it, getting students to critique a chatGPT-created essay for example, but it really has turned a lot of things on their head for us.
Really? I've tried to get it to do a few more specialised things and while it can always produce something that looks coherent on a superficial level, when you dig a bit deeper, you find that it often doesn't quite get it right. In particular, it's not really capable of doing citations properly.

But it's an interesting debate that came up on our WhatsApp group the other day, because some had read that the University of Manchester had outright banned its use (although we haven't received any communication saying so yet), and some of the students (not all overseas) said that they often used it not to write their essay, but to re-word things they've written to make them more concise or clear. Which raises an interesting point. To what extent is it acceptable to use? In principle, is using ChatGPT any different to asking for feedback from the teachers in the ESL department at your university? We're all doing an Educational Technology master's, so it's particularly relevant. I also imagine our teachers are the most qualified to come up with solutions to these sorts of things, which is probably why I don't think I could use it to cheat.
 
Really? I've tried to get it to do a few more specialised things and while it can always produce something that looks coherent on a superficial level, when you dig a bit deeper, you find that it often doesn't quite get it right. In particular, it's not really capable of doing citations properly.

But it's an interesting debate that came up on our WhatsApp group the other day, because some had read that the University of Manchester had outright banned its use (although we haven't received any communication saying so yet), and some of the students (not all overseas) said that they often used it not to write their essay, but to re-word things they've written to make them more concise or clear. Which raises an interesting point. To what extent is it acceptable to use? In principle, is using ChatGPT any different to asking for feedback from the teachers in the ESL department at your university? We're all doing an Educational Technology master's, so it's particularly relevant. I also imagine our teachers are the most qualified to come up with solutions to these sorts of things, which is probably why I don't think I could use it to cheat.
So it looks like we work at the same place (I'm in the school of healthcare, imaging sciences, over in Stopford) and we've been telling our students not to use it, but they clearly are. The question is how best to deal with this. An interesting suggestion came up recently: one possible method is to ask students to use chatGPT in an assignment and then have them critique its use.
 
I seriously wonder how, unless new techniques are invented, anyone will be able to determine that the output of GPT4 is not genuine human work, when it's passing high level exams in difficult subjects with A grades right now.

Then we will have GPT5 or the successor to LaMDA in perhaps a couple of years. The rate of progress is truly astonishing.

The idea that these models are simply predicting what word should come next, without any "understanding" of the subject, seems to me to be inherently wrong. Predicting the next few words in a sentence is one thing. Asking it to describe how a combustion engine works, but to do it in the style of an Eminem rap, is an entirely different thing altogether. At some level, it must "understand" how an internal combustion engine works, and what an Eminem rap is all about, or it could not come up with a meaningful output. But demonstrably, it can.

GPT4 already passes university-level biology exams on questions it has not seen previously, with A grades. It can write books of up to several thousand words with deep examination and reflection on the subject matter being considered, in response to a simple question. These are not the output of "dumb" text-predicting models.
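
(For anyone curious what "predicting the next word" actually looks like mechanically, here's a minimal sketch in Python, using the openly available GPT-2 as a stand-in since GPT-4's weights aren't public. The prompt is just an example I made up; the model assigns a probability to every possible next token, and generation is simply that step repeated.)

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Small, openly available language model used purely as a stand-in for GPT-4.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "An internal combustion engine works by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # a score for every vocabulary token at every position

next_token_probs = torch.softmax(logits[0, -1], dim=-1)   # distribution over the next token only
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode([token_id.item()])), round(prob.item(), 3))

The whole trick of an essay, or an Eminem-style rap, is that step applied over and over, each time conditioned on everything written so far.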
 
The question is how best to deal with this. An interesting suggestion came up recently: one possible method is to ask students to use chatGPT in an assignment and then have them critique its use.
Such tasks will identify minor errors and factual errors, but whether they are any greater in number or more glaring than those found in work generated without the assistance of AI is debatable.
 
Such tasks will identify minor errors and factual errors, but whether they are any greater in number or more glaring than those found in work generated without the assistance of AI is debatable.
Exactly. It's just a case of trying to think of ways to accept that chatGPT is going to be used whilst still getting the students to actually use their brains. Far too many of them only give a shit about the final grade and not the learning involved to get there. I have said, you'll get away with it, but once you've got your degree, you'll get found out as soon as you start going to interviews.
 
The question is how best to deal with this. An interesting suggestion came up recently: one possible method is to ask students to use chatGPT in an assignment and then have them critique its use.
I doubt that would work, because it generates different output to each question every time. I imagine it could critique itself quite easily. Anyway, you could easily test it.
 
Just for a bit of fun, I asked GPT4 the following: "Dr. Alan Thomson suggests we may achieve AGI in the next 2 or 3 years and the singularity will follow very soon after that. What are your opinions on that subject and your expectations?"

Here's its response:

"As an AI language model, I don't have personal opinions or emotions, but I can provide you with some insights on the topic.

Dr. Alan Thomson's suggestion that AGI (Artificial General Intelligence) could be achieved in the next 2 or 3 years is an optimistic projection. AGI refers to an AI system that possesses human-like intelligence and can perform any intellectual task that a human being can do.

Predictions about the timeline for achieving AGI vary widely, with some experts believing it may be decades or even centuries away, while others are more optimistic like Dr. Thomson. Technological progress can be difficult to predict, and AGI development is no exception. In recent years, we have seen significant advances in narrow AI applications (such as language models like GPT-4, game-playing AI, and image recognition systems), but AGI remains an open challenge.

The singularity is a hypothetical point in the future when technological growth becomes uncontrollable and irreversible, leading to unforeseen changes to human civilization. It is often associated with the creation of AGI or an AI that is capable of recursive self-improvement. The timeline for reaching the singularity is even more uncertain than that of AGI, as it depends not only on the development of AGI but also on various other technological, social, and political factors.

Some experts argue that the path to AGI and the singularity may be non-linear and that we could see sudden breakthroughs leading to rapid advances. Others caution that the challenges and roadblocks to AGI are significant, and it may take much longer than expected.

In summary, while Dr. Alan Thomson's prediction is optimistic, it is important to remember that the timeline for achieving AGI and the singularity remains uncertain. The development of AGI will depend on a variety of factors, including ongoing research, technological breakthroughs, and societal considerations.
"
 
Just for a bit of fun, I asked GPT4 the following: "Dr. Alan Thomson suggests we may achieve AGI in the next 2 or 3 years and the singularity will follow very soon after that. What are your opinions on that subject and your expectations?"

Here's its response:

"As an AI language model, I don't have personal opinions or emotions, but I can provide you with some insights on the topic.

Dr. Alan Thomson's suggestion that AGI (Artificial General Intelligence) could be achieved in the next 2 or 3 years is an optimistic projection. AGI refers to an AI system that possesses human-like intelligence and can perform any intellectual task that a human being can do.

Predictions about the timeline for achieving AGI vary widely, with some experts believing it may be decades or even centuries away, while others are more optimistic like Dr. Thomson. Technological progress can be difficult to predict, and AGI development is no exception. In recent years, we have seen significant advances in narrow AI applications (such as language models like GPT-4, game-playing AI, and image recognition systems), but AGI remains an open challenge.

The singularity is a hypothetical point in the future when technological growth becomes uncontrollable and irreversible, leading to unforeseen changes to human civilization. It is often associated with the creation of AGI or an AI that is capable of recursive self-improvement. The timeline for reaching the singularity is even more uncertain than that of AGI, as it depends not only on the development of AGI but also on various other technological, social, and political factors.

Some experts argue that the path to AGI and the singularity may be non-linear and that we could see sudden breakthroughs leading to rapid advances. Others caution that the challenges and roadblocks to AGI are significant, and it may take much longer than expected.

In summary, while Dr. Alan Thomson's prediction is optimistic, it is important to remember that the timeline for achieving AGI and the singularity remains uncertain. The development of AGI will depend on a variety of factors, including ongoing research, technological breakthroughs, and societal considerations.
"
chatGPT is just being polite. chatJonPT thinks he's chatting bollocks
 
I doubt that would work, because it generates different output to each question every time. I imagine it could critique itself quite easily. Anyway, you could easily test it.
We'd want it to generate a different output each time so each student has to critique a different piece of work. It's an interesting suggestion. You wouldn't use it all the time, but in certain courses it would definitely work.
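
(Side note on why the output differs every time: by default these models sample each next word from a probability distribution rather than always taking the single most likely one, and a "temperature" setting controls how much variety that gives. A toy sketch in Python with made-up tokens and scores, just to show the idea:)

import numpy as np

rng = np.random.default_rng()

# Pretend these are the model's scores for the next word at one point in a sentence.
tokens = ["engine", "motor", "system", "machine"]
logits = np.array([2.0, 1.5, 0.8, 0.3])

def sample_next(temperature):
    # Higher temperature flattens the distribution, so choices vary more;
    # temperature near zero means the top-scoring token wins almost every time.
    probs = np.exp(logits / temperature)
    probs = probs / probs.sum()
    return str(rng.choice(tokens, p=probs))

print([sample_next(0.9) for _ in range(5)])  # varies from run to run
print([sample_next(0.1) for _ in range(5)])  # almost always "engine"

So two students asking the same question really would get different essays to critique.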
 
Exactly. It's just a case of trying to think of ways to accept that chatGPT is going to be used whilst still getting the students to actually use their brains. Far too many of them only give a shit about the final grade and not the learning involved to get there. I have said, you'll get away with it, but once you've got your degree, you'll get found out as soon as you start going to interviews.
It will probably lead to the return of in-person examinations, or, where they already remain in the curricula, more of them. That, however, is both extremely expensive and time-consuming to administer, not to mention the fact that it will send student stress levels through the roof.
 
It will probably lead to the return of in-person examinations, or, where they already remain in the curricula, more of them. That, however, is both extremely expensive and time-consuming to administer, not to mention the fact that it will send student stress levels through the roof.
Had exactly that conversation with someone this morning. Yes.
 
I seriously wonder how, unless new techniques are invented, anyone will be able to determine that the output of GPT4 is not genuine human work, when it's passing high level exams in difficult subjects with A grades right now.

Then we will have GPT5 or the successor to LaMDA in perhaps a couple of years. The rate of progress is truly astonishing.

The idea that these models are simply predicting what word should come next, without any "understanding" of the subject, seems to me to be inherently wrong. Predicting the next few words in a sentence is one thing. Asking it to describe how a combustion engine works, but to do it in the style of an Eminem rap, is an entirely different thing altogether. At some level, it must "understand" how an internal combustion engine works, and what an Eminem rap is all about, or it could not come up with a meaningful output. But demonstrably, it can.

GPT4 already passes university-level biology exams on questions it has not seen previously, with A grades. It can write books of up to several thousand words with deep examination and reflection on the subject matter being considered, in response to a simple question. These are not the output of "dumb" text-predicting models.
But it is still effectively just recognising patterns and then synthesising that information. Ask it to do something that you have genuine expertise in and you'll see how formulaic it is at the moment. What I notice when I ask it to do stuff is that it confidently gives an answer, and when you look at it, it isn't really showing an in-depth knowledge of the subject at all. And then if you call it out, it'll admit that it's wrong. If AI can pass your exam, that's probably more of a reflection on the quality of the exam than anything else. I remember seeing a video of someone who got the maximum score on the IELTS exam, and thinking his English wasn't even that good. He just knew the techniques to pass it, because like all exams, it's a flawed system.

But ultimately, it might not matter how good the output is. So much of what is successful is based on appealing to algorithms nowadays. You see it on YouTube when some dogshit video has millions of views and you can't understand how. And the answer is that they've usually just gamed the algorithm and know what it wants, rather than what audiences want. We might end up in a situation where YouTube (or written content) is just full of AI-produced content that can perfectly predict the algorithm, while superior human-produced content just gets lost in the noise of it all.
 
Exactly. It's just a case of trying to think of ways to accept that chatGPT is going to be used whilst still getting the students to actually use their brains. Far too many of them only give a shit about the final grade and not the learning involved to get there. I have said, you'll get away with it, but once you've got your degree, you'll get found out as soon as you start going to interviews.
Possibly. But that assumes the person doing the hiring knows what they're talking about. If they've cheated their way through the system too, that might not be true. I think often just having a degree from a foreign university is enough to open a lot of doors regardless of whether it's relevant. Hell, I've been trading on my irrelevant UK degree in Asia for a decade now.
 
But it is still effectively just recognising patterns and then synthesising that information.
And how does that differ from human understanding? I work in IT so know bugger-all about psychology or neuroscience, for example, but I think the word "just" above undersells it quite a bit.

I am inclined to believe, personally, that these models do in some sense "understand", but that we humans don't understand - or have too narrow a definition of - what understanding is!

Again, from my layman's position, it seems to me that language is at the very essence of intelligence. We are the only creatures on the planet to have any kind of sophisticated language, which must surely be a precursor for reflective thought (beyond innate "feelings" a dog might have of "hunger", for example). If these models have truly cracked language - as they seem to have done - then is it impossible that in so doing, they also, in some sense, "think"? Not necessarily in the same way we think, but at some level beyond the traditional computer-science if-then-else type of mechanistic programming. I don't know if anyone can say with certainty whether that is true or not. We cannot look inside their "brains" and see exactly what is going on.

Perhaps it becomes a semantic argument anyway. If in a year or two's time, or sooner, AIs become able to reliably pass the Turing test, then whether they are "faking it" or not is perhaps a bit irrelevant.
 
And how does that differ from human understanding? I work in IT so know bugger-all about psychology or neuroscience, for example, but I think the word "just" above undersells it quite a bit.

I am inclined to believe, personally, that these models do in some sense "understand", but that we humans don't understand - or have too narrow a definition of - what understanding is!

Again, from my layman's position, it seems to me that language is at the very essence of intelligence. We are the only creatures on the planet to have any kind of sophisticated language, which must surely be a precursor for reflective thought (beyond innate "feelings" a dog might have of "hunger", for example). If these models have truly cracked language - as they seem to have done - then is it impossible that in so doing, they also, in some sense, "think"? Not necessarily in the same way we think, but at some level beyond the traditional computer-science if-then-else type of mechanistic programming. I don't know if anyone can say with certainty whether that is true or not. We cannot look inside their "brains" and see exactly what is going on.

Perhaps it becomes a semantic argument anyway. If in a year or two's time, or sooner, AIs become able to reliably pass the Turing test, then whether they are "faking it" or not is perhaps a bit irrelevant.
The main difference is causality. We understand causality naturally; AI in its current form does not. An example I use with my students is a pressure barometer. When the atmospheric pressure changes, the needle on the barometer moves, but if we move the needle on a barometer, the air pressure does not change. AI doesn't understand this; it just learns that the two are linked.
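
(A quick way to see the barometer point in code: make up data where the pressure drives the needle, and from observations alone the relationship fits equally well in either direction. The difference only shows up when you "intervene" and move the needle by hand. A rough Python sketch with invented numbers:)

import numpy as np

rng = np.random.default_rng(0)

# Made-up observational data: atmospheric pressure (hPa) drives the needle reading.
pressure = rng.normal(1013, 10, size=5000)
reading = 0.1 * (pressure - 1013) + rng.normal(0, 0.05, size=5000)

# From observations alone the two are just strongly linked, in either direction.
print("correlation:", round(float(np.corrcoef(pressure, reading)[0, 1]), 3))
reverse_fit = np.polyfit(reading, pressure, 1)   # "predict" pressure from the needle

# Intervention: physically move the needle to +5. The real pressure doesn't change,
# but a purely associational model still insists that it has.
print("actual mean pressure after moving the needle:", round(float(pressure.mean()), 1))
print("pressure inferred from the needle position:", round(float(np.polyval(reverse_fit, 5.0)), 1))

A model trained only on the joint statistics has no way to tell which of those two answers is right; that's the gap between learning "the two are linked" and understanding which way the causation runs.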
 
What's it like with composing music?

Could you ask it to write a symphony by Mozart as if it was a previously unheard piece? And if it could how many Mozart experts would it catch out?
 
The main difference is causality. We understand causality naturally; AI in its current form does not. An example I use with my students is a pressure barometer. When the atmospheric pressure changes, the needle on the barometer moves, but if we move the needle on a barometer, the air pressure does not change. AI doesn't understand this; it just learns that the two are linked.
Interesting you should say that. So I tried it:

[screenshot of the chatGPT exchange testing the barometer question]

Seems to me it DOES understand cause and effect. Certainly in this example it seems to, anyway.

Do you have any other tests?
 
There's something in the discussion on causality. Humans are learning cause and effect from the moment they are born. They can't walk through walls for example, and that means something about the state of everything around them. It means something very significant for their life. It's all about this relationship between them, what they are, and the environment.

The other thing is that humans deal with other humans. And emotions are involved. A person can tell a massive fib. But it's unlikely that, for example, a CEO casually tells a massive fib or makes a glaring error about the structure of his firm. Because responsibility works in terms of cause and effect. The wrong error, treating the wrong person badly, could jeopardise other significant relationships like family, end up causing a disaster for the individual, serious material suffering, loss of freedom, possibly starvation or social exclusion. Even death. The whole organism is the sum of many highly evolved systems, and that evolutionary process has always been - to a large extent - about avoiding that one outcome. It is never not part of the sum of perception, or thought, or our words or acts.

That's hard-wired into humans. Some game it. But that's not the same as being totally unaffected by it. An LLM doesn't have the evolved responses to getting switched off, rebooted, replaced. I think that is really important in understanding why it is possible for it to make glaring errors that humans don't. I mean, I'd like to think this meant AIs would routinely tell us things we don't want to think or say. The things we don't want to believe, but should accept. But I don't believe that at all. However it has happened, it definitely appears to me that they are saying the things we want to hear.

Of course, replicating the effect of the "omnipresent in all aspects of the self" urgency of preserving one's survival sounds like the quick route to Skynet refusing to be switched off. From what they say, they've gone to great lengths to avoid incentivising the "learning" process at all.

I think I'm relatively happy that it's in the hands of the Silicon Valley lot. They genuinely are intelligent nerds and despite selling themselves out a million times, they started off as very idealistic about the future.

The reality for me seems to be that LLMs are working with strings of characters. Data. Whilst I'm not mad about natural language interfaces, anything that can be reduced to a string of characters is likely to eventually be doable for an LLM... the obvious target for me is the strings of characters that comprise the code that is currently the domain of devs. Code is already abstracted in just the right ways. It is stored everywhere in a form ideal for machines to read, and there's no end of documentation and commentary upon it. LLMs or similar will eventually beat human engineers just as surely as computers eventually beat humans at chess and Go.

I think it's a positive moment for humans. I think learning how to communicate accurately and concisely with these machines is good for the mind. I think it could be very empowering: most people's computing involves them using two or three apps where everything works for them; they adapt to the service. Making it do what you want, or need, is a world away from that: expensive, time-consuming, frustrating. That has not changed in decades. All that's changed is that the big-data thing let the FAANG firms accurately home in on these easy-to-use but highly restrictive, lowest-common-denominator "ideal designs" for these core apps. AI offers a way forward from that.

We've got too many clever people spending too much time applying knowledge of how the computer works, just keeping things afloat. There's a disconnect between them and us, the casual users, beyond this imaginary 'typical user' model that is ultimately derived from statistics. I hope freeing them from the donkey work allows them to think bigger and respond more to a variety of needs, wishes and creative thought.
 
What's it like with composing music?

Could you ask it to write a symphony by Mozart as if it was a previously unheard piece? And if it could how many Mozart experts would it catch out?
You raise a really interesting point, a fascinating one even. It could probably compose the piece, although I expect it would be quite poor and easily determined to be "fake".

But here's the thing: these things are language models. We didn't program them to do what you suggest, but people have found that they can. For example, GPT3 and GPT4 can write sophisticated programming code just by being asked to. No-one taught them to do that. They can do maths and physics - sometimes badly, but sometimes astonishingly well. But again, that wasn't the brief. We've just pumped vast amounts of data in and the AI has "figured it out". It's called machine learning for a reason - these things really do "learn".
 
