ChatGPT: It’s Just Adding One Word at a Time

Stephen Wolfram explains:

That ChatGPT can automatically generate something that reads even superficially like human-written text is remarkable, and unexpected. But how does it do it? And why does it work? My purpose here is to give a rough outline of what’s going on inside ChatGPT—and then to explore why it is that it can do so well in producing what we might consider to be meaningful text. I should say at the outset that I’m going to focus on the big picture of what’s going on—and while I’ll mention some engineering details, I won’t get deeply into them. (And the essence of what I’ll say applies just as well to other current “large language models” [LLMs] as to ChatGPT.)

The best explanation I have seen so far.

Please read on >
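
The mechanism in Wolfram's title, next-word prediction repeated in a loop, is easy to sketch. Here is a minimal Python illustration, with a hand-written toy bigram table standing in for the neural network; the table, the tiny vocabulary, and the `temperature` knob are illustrative assumptions of mine, not anything taken from ChatGPT itself:

```python
import random

# Toy "language model": P(next word | previous word), written by hand.
# A real LLM replaces this table with a neural network that conditions
# on the entire text so far, not just the last word.
BIGRAMS = {
    "the":  {"cat": 0.5, "dog": 0.3, "best": 0.2},
    "cat":  {"sat": 0.6, "ran": 0.4},
    "dog":  {"sat": 0.3, "ran": 0.7},
    "best": {"cat": 0.5, "dog": 0.5},
    "sat":  {"quietly": 1.0},
    "ran":  {"quickly": 1.0},
}

def sample_next(prev: str, temperature: float = 1.0) -> str | None:
    """Sample one next word. temperature > 1 flattens the distribution
    (more varied output); temperature < 1 sharpens it (more predictable).
    This is the knob Wolfram's essay also discusses for LLM output."""
    dist = BIGRAMS.get(prev)
    if not dist:
        return None  # no known continuation: stop generating
    words = list(dist)
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt: str, max_words: int = 10, temperature: float = 1.0) -> str:
    words = prompt.split()
    for _ in range(max_words):
        nxt = sample_next(words[-1], temperature)
        if nxt is None:
            break
        words.append(nxt)  # "just adding one word at a time"
    return " ".join(words)

print(generate("the", temperature=0.8))
```

Run it a few times and the same prompt yields different sentences from the same probabilities; explaining how the real next-word distribution is computed by a trained neural network is, in outline, what Wolfram's essay spends its length on.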

3 thoughts on “ChatGPT: It’s Just Adding One Word at a Time”

  1. I also find the "tests" by Prof. Weitz very interesting (and partly funny):
    https://youtu.be/medmEMktMlQ – ChatGPT und die Mathematik (ChatGPT and mathematics)
    https://youtu.be/5cYYeuwYF_0 – ChatGPT und die Logik (ChatGPT and logic)
    Both nicely show the consequences of the LLM approach, and how it fails at supposedly simple arithmetic and logic problems. And how one can (sometimes) help ChatGPT arrive at correct results after all.

  2. That was an excellent read. Thanks for sharing it. I’ve added it on The Evil Social Network (No, Not Elon’s Evil Social Network, The Other One) with credit to you.

  3. It seems that Stephen Wolfram has a rather lax definition of the word “article” 🙂
    This looks like more than eight hours of reading.
