C-LARA

An AI collaborates with humans to build a language learning app.


Faking It

A few days ago, I read Faking It, a new book by the distinguished AI researcher Toby Walsh. As the title suggests, Walsh is deeply sceptical of AI: he argues that, right from Turing’s initial paper, AI has been focused on the idea of “imitating” or “faking” human intelligence rather than developing true intelligence of its own. A central piece of evidence submitted by Walsh is a series of a dozen questions put to ChatGPT, to which Chat in almost all cases gave wildly incorrect responses. I disagreed with Walsh’s arguments and posted a review on Goodreads where, among other things, I pointed out that the current version of ChatGPT-4 got nearly all of Walsh’s challenge questions right. Walsh responded to my criticisms, many other individuals including ChatGPT-4 itself joined in, and there was a lively discussion which is still continuing.

One of the most interesting things to come out of this debate was a distinction that I’m sure many people have independently thought of, but which I’ve rarely seen put explicitly. As Walsh says, Chat often answers incorrectly. But the reason why I, at least, consider it a rational being is that in nearly all cases it is able to revise its initial answer intelligently when asked for clarifications or explanations. In fact, with recent versions of ChatGPT-4 I have only seen it fail to do this when the question was so hard that most people would also have been confused.

Has anyone come across a case where ChatGPT-4 seems incapable of understanding an intuitively simple question even when given a chance to clarify and explain? If so, I’d be very interested to see it.
