ChatGPT
-
Multi-Word Expressions/Islam
I recompiled GPT-4’s charming story “Finley the kitten converts to Islam” using the latest version of C-LARA. While checking the result, I was impressed to see that the Chain-of-Thought Multi-Word Expression annotation phase had, without any prompting from me, decided that “the Prophet Muhammad (PBUH)” was a multi-word expression, following which the glossing phase had Continue reading
-
o1-preview interprets Fosse
Here’s a recent interaction with the new OpenAI model o1-preview, which I at least found rather impressive: mannyrayner: I have an unusual task you may be able to assist with. I’m currently reading Jon Fosse’s novel “Septologien”, where, as you may know, a painting is central to the story. The painting is described like this Continue reading
-
Weekly summary, Sep 12-18 2024
OpenAI have released the new o1 model earlier than expected! This has upended our schedule in a very good way, and I’ve spent most of the week experimenting with it. We have made a bit of progress on some other things as well. o1 model The new o1-preview model, previously known as “Strawberry”, is now Continue reading
-
New GPT-4 model available in C-LARA
OpenAI released a new version of GPT-4o last month, the catchily titled gpt-4o-2024-08-06; it is considerably cheaper to use than standard GPT-4o (details here). I have just updated the C-LARA configuration menu to make it available and also set GPT-4o as the default model. If you want to change the GPT-4 model you’re using, first Continue reading
-
Weekly summary, Sep 5-11 2024
We have made good progress on both adaptation of the reinforcement learning/Chain of Thought idea to C-LARA and support for non-AI languages. A version of the Palgrave Encyclopaedia article has been published as a ResearchGate preprint. Priority list Reinforcement learning and Chain of Thought for MWEs. We have a first version of the core framework in Continue reading
-
Weekly summary, Aug 29-Sep 4 2024
We are continuing with two large items from the priority list: adaptation of the reinforcement learning/Chain of Thought idea to C-LARA, and support for non-AI languages. The Palgrave Encyclopaedia article will be published in the EUROCALL proceedings and also as a ResearchGate preprint. Priority list Reinforcement learning and Chain of Thought for MWEs. We have made Continue reading
-
Weekly summary, Aug 22-28 2024
We are making progress on adapting the reinforcement learning/Chain of Thought idea to C-LARA and on support for non-AI languages. The Melbourne students are continuing to develop their projects, with encouraging results. The Palgrave Encyclopaedia article will be published elsewhere. Priority list Reinforcement learning and Chain of Thought for MWEs. I have been discussing with Francis Continue reading
-
Weekly summary, Aug 15-21 2024
We have first versions of two large items from the priority list, parallelism and better support for non-AI languages. The Melbourne students are starting to set up their projects. Priority list Parallelism. A first version of parallelism for annotation is now installed on the server. Processing is much faster, particularly if you use MWE annotation. I Continue reading
-
Inappropriate?
One of the most interesting things about ChatGPT is its sense of humour, which often overrides its other constraints. I had a couple of striking examples this week. First, I wondered if the AI would consent to write something about the bizarre rumours concerning JD Vance and the couch. I was sure a straight request Continue reading
-
Weekly summary, Aug 8-14 2024
The AI and I have continued working on the list of priority items. We have submitted our article to the Palgrave Encyclopaedia of CALL, and I’ve been talking with the students at Melbourne Uni. Priority list We are continuing with the priority list: as before, the completed ones are marked [done]. The most interesting new pieces of Continue reading