ChatGPT and Erasmus Montanus
Opinion
Shouldn’t ChatGPT be an occasion to ask whether we should organize our studies differently? Aren’t there things other than what machines can do that we should emphasize?
Since OpenAI launched its chatbot in November last year, there has been great fear in educational systems around the world: What do we do when a machine can solve exam tasks as well as students – and sometimes better? When a machine can write better than most students? Solve math problems and carry out text analyses? The reaction was not long in coming: ChatGPT has already been banned in some places, and elsewhere exams are once again being carried out with pen and paper. Keep the machine away!
Is this a viable strategy? What can we do with ChatGPT that we couldn’t do before? Instead of spending a lot of time searching the web and retrieving information from here and there, ChatGPT can quickly put together information from different sources. It can even find connections other than those we manage to find ourselves in the information it has been fed. I have asked it about areas I don’t know much about, such as paleontology. Here I got it to come up with several theories – with references to publications – about the age of the earth, more than most textbooks offer. I have had it write a theory chapter on educational management – perfectly fine, and with good references. And I have had it make not just one but several proposals for the structure of the method chapter in a dissertation – and as a methods teacher, I found the proposals interesting.
ChatGPT is based on artificial intelligence, and artificial intelligence is intelligent in the original sense of the word: inter-lego, the ability to gather or put together (lego) what lies between (inter). It can only put together the material it has, or the material it can build from what it has. And here, in many areas, it is far more intelligent than we are: it can relate to much more material, much faster, than I can in any case, and it can find connections that I would have to spend a long time finding – if I found them at all. It does this by finding probable connections across many links and levels, both in its writing and in the information it retrieves.
ChatGPT’s algorithm is mathematical modeling. The most important theoretical contribution to these models can be found in Claude Shannon’s «A Mathematical Theory of Communication», published as early as 1948. Shannon showed how probabilistic models could be used to find information in noise. That is exactly what ChatGPT does: it trawls the enormous amounts of information it has been fed from the internet and finds the most probable composition of that information, most often in the form of probable sentences.
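The core idea can be illustrated with a toy sketch – my own illustration, far simpler than anything OpenAI actually uses: count which word tends to follow which in a training text, then always choose the most probable successor.

```python
# Toy bigram model (illustrative only): predict the most probable next
# word from counts of word pairs in a small made-up corpus.
from collections import Counter, defaultdict

corpus = ("the stone cannot fly . the mother cannot fly . "
          "the stone is heavy . the mother is kind .").split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("cannot"))  # prints "fly"
```

A real language model works over vastly longer contexts and learned weights rather than raw counts, but the principle is the same: probable continuations, not verified facts.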
But this is where ChatGPT can go completely wrong – and often does. I asked it about something I know a little about, namely Friedrich Nietzsche. It then stated with the greatest confidence that Nietzsche had been a teacher in Plauen and conducted the boys’ choir there. Perhaps not improbable, but completely wrong. Surprised by the claim, I scoured the internet and checked several biographies to see if it could be true. No one has claimed anything of the sort, and Nietzsche never stayed in Plauen beyond a few weeks. ChatGPT constructed the claim itself by linking information about Nietzsche, Plauen and, probably, people named Nietzsche in Plauen who were Nietzsche’s relatives. It even insisted that what it had written was correct when I pointed out that the claim was wrong. Claims from ChatGPT must in fact be checked – to see whether what it claims is actually true.
Ludvig Holberg railed against the education system of the 18th century in the comedy Erasmus Montanus. There we meet Rasmus Berg, who through his studies at the University of Copenhagen has acquired the name Erasmus Montanus, and who shows the results of his learning in practice. Berg was trained in syllogisms, which modern computers also use in their if-then algorithms. Berg demonstrated his skill to his mother:
– A stone cannot fly.
– No, that is true enough, unless you throw it.
– You cannot fly.
– That is true.
– Ergo, Morlille is a stone.
Holberg showed how badly things can go if you go out into the world with logic and mathematical models beyond the walls of the university. Admittedly, Berg’s use of the syllogism was hardly in line with the logical principles he should have learned at university, and ChatGPT can easily show what mistake Berg made. But Holberg’s point (I have not checked it with ChatGPT, it is so overloaded) was not that you can get lost in the syllogism’s algorithms. The problem was that Berg did not understand what he could use them for. The lieutenant’s speech towards the end of the comedy gave Berg some advice: “The best advice I can give you is, . . . if you finally want to continue your studies, that you arrange them in a different way.”
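Berg’s mistake – concluding from “no stone can fly” and “you cannot fly” that “you are a stone” – can indeed be exposed mechanically. A minimal sketch (my own illustration, with invented example sets) shows a model in which both premises hold and the conclusion still fails, which is all it takes to prove the syllogism invalid:

```python
# Counterexample model (illustrative): sets chosen so both premises
# are true while Berg's conclusion is false.
stones = {"pebble"}
things_that_fly = {"bird"}
morlille = "morlille"  # a person: cannot fly, yet not a stone

premise_1 = stones.isdisjoint(things_that_fly)   # no stone can fly
premise_2 = morlille not in things_that_fly      # Morlille cannot fly
conclusion = morlille in stones                  # "Morlille is a stone"

print(premise_1 and premise_2)  # prints True: both premises hold
print(conclusion)               # prints False: the conclusion fails anyway
```

The fallacy is the classic undistributed middle: “cannot fly” links stones and Morlille without ever licensing the leap from one to the other.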
So shouldn’t ChatGPT be an occasion to ask whether we should organize our studies differently? Aren’t there things other than what machines can do that we should emphasize? Are our assessment criteria merely images of the kind of information processing that satisfies the qualifications framework?
Nietzsche also railed against the education system of his time and described what was expected of it:
– What is the purpose of higher education?
– To turn man into a machine.
I asked ChatGPT to find out where Nietzsche wrote this. It couldn’t. Instead, it produced a lot of Wikipedia babble – that is, ChatGPP.
– Does ChatGPT challenge our education system?
– Yes.
– Is it a problem with ChatGPT or the education system?
– The education system.
ChatGPT says there is nothing wrong with that syllogism. But it is not a syllogism.