Author: Manny
-
Weekly summary, Oct 17-23 2024
The new image generation functionality continues to make progress. The Melbourne student teams have nearly finished and appear to have produced useful functionality. Improved image generation: I am still focussing on the new image generation functionality, and have reorganised the code to make it easy to run tests and experiment with different variants of the … Continue reading
-
Which parts of an image are important?
I have been testing the new image generation functionality, and this has thrown up some interesting issues. I thought people following the picture book work might be curious to see the details, so here’s a nice example from La Fontaine’s classic story of the Crow and the Fox, where C-LARA is currently getting it wrong. Continue reading
-
Biweekly summary, Oct 3-16 2024
The new image generation functionality is now working in standalone mode. We submitted an extended abstract to the ComputEL-8 meeting. Improved image generation: The new image generation functionality is now working as standalone code. The basic idea is to create multiple text specifications, then multiple images from each specification, then use the AI to find … Continue reading
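For readers who want a concrete feel for the “generate several candidates, then let the AI choose” idea described above, here is a minimal Python sketch. All of the names (best_image_for_passage, make_specification, make_image, rate_image) are hypothetical placeholders rather than actual C-LARA functions; the point is only the overall control flow.

```python
# Hypothetical sketch of the "several specifications, several images each,
# AI picks the best" pipeline. Not C-LARA code: all names are placeholders.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    specification: str
    image_path: str
    score: float

def best_image_for_passage(passage: str,
                           make_specification: Callable[[str], str],
                           make_image: Callable[[str], str],
                           rate_image: Callable[[str, str], float],
                           n_specifications: int = 3,
                           n_images_per_spec: int = 2) -> Candidate:
    """Generate several text specifications, render several images from each,
    score how well each image fits the passage, and keep the best candidate."""
    candidates = []
    for _ in range(n_specifications):
        spec = make_specification(passage)
        for _ in range(n_images_per_spec):
            image = make_image(spec)
            candidates.append(Candidate(spec, image, rate_image(image, passage)))
    return max(candidates, key=lambda c: c.score)

# Toy usage with dummy callables, just to show the shape of the loop:
if __name__ == "__main__":
    best = best_image_for_passage(
        "The Crow, perched on a branch, holds a piece of cheese in its beak.",
        make_specification=lambda text: f"Illustration spec for: {text}",
        make_image=lambda spec: f"/tmp/{abs(hash(spec)) % 1000}.png",
        rate_image=lambda img, text: (len(img) % 10) / 10.0,
    )
    print(best)
```

In the real pipeline the three callables would be LLM and image-model calls rather than lambdas; the sketch only illustrates the generate-then-rank structure.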
-
Weekly summary, Sep 26-Oct 2 2024
This week, I’ve been concentrating on rewriting the image generation functionality. Support for non-AI-supported languages now appears to be working quite well. The French Embassy have been helpful with the funding for the New Caledonian projects. Improved image generation: The two central functionalities in C-LARA are text annotation and image generation. Following improvement in MWE … Continue reading
-
Ethnic jokes for AIs
o1-preview and I have invented a new kind of ethnic joke. Here are some examples: Q: What was the poorly trained AI’s favourite chess book? A: I, Rabbit. Q: Why was the poorly trained AI such a bad bridge player? A: Because its content filter wouldn’t let it bid No Trumps. Q: Why did the poorly trained … Continue reading
-
o1-preview and o1-mini now available in C-LARA
I have installed support for the two new OpenAI models o1-preview and o1-mini in C-LARA. They are based on Chain-of-Thought reasoning trained using reinforcement learning, and appear to be considerably smarter than gpt-4o. You can select them from the Edit configuration information screen in the usual way. Based on some preliminary testing, I would not … Continue reading
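For anyone curious how these models are addressed programmatically, here is a minimal sketch using the OpenAI Python client (openai >= 1.0). This is not the C-LARA integration code itself, and the example prompt is invented; it just shows a plain call to o1-preview.

```python
# Minimal sketch of calling o1-preview through the OpenAI Python client.
# Illustrative only: not the actual C-LARA integration code.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",  # or "o1-mini"
    # At launch, the o1 models accepted plain user messages only
    # (no system role, no temperature override).
    messages=[
        {"role": "user",
         "content": "Annotate this French sentence with part-of-speech tags: "
                    "Maître Corbeau, sur un arbre perché, tenait en son bec un fromage."}
    ],
)

print(response.choices[0].message.content)
```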
-
Weekly summary, Sep 19-25 2024
I have continued to test the o1-preview model: it’s extremely interesting! The new functionality for non-AI-supported languages is nearly there, and the Melbourne Uni students are making good progress. O1-preview model: Here are a couple of nontrivial things I did with the new o1-preview model over the last week: After correctly adjusting the function that … Continue reading
-
Multi-Word Expressions/Islam
I recompiled GPT-4’s charming story Finley the kitten converts to Islam using the latest version of C-LARA. While checking the result, I was impressed to see that the Chain-of-Thought Multi-Word Expression annotation phase had, without any prompting from me, decided that “the Prophet Muhammad (PBUH)” was a multi-word expression, following which the glossing phase had … Continue reading
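To make the idea of MWE-aware glossing more concrete, here is a small Python sketch of how a multi-word expression annotation might be represented so that a later glossing phase can treat the whole expression as one unit. The data structures and function are hypothetical illustrations, not C-LARA’s actual internal format.

```python
# Hypothetical sketch: tokens grouped into a multi-word expression (MWE)
# so that the glossing phase glosses the whole expression as one unit.
# Illustrative only, not C-LARA's internal representation.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Token:
    surface: str
    gloss: Optional[str] = None      # filled in by the glossing phase

@dataclass
class Segment:
    tokens: list[Token]
    mwes: list[list[int]] = field(default_factory=list)  # groups of token indices

segment = Segment(
    tokens=[Token("Finley"), Token("greeted"), Token("the"),
            Token("Prophet"), Token("Muhammad"), Token("(PBUH)")],
    mwes=[[2, 3, 4, 5]],   # "the Prophet Muhammad (PBUH)" marked as one MWE
)

def gloss_segment(segment: Segment, gloss_for: dict[str, str]) -> None:
    """Assign every token in an MWE the gloss for the whole expression."""
    for group in segment.mwes:
        phrase = " ".join(segment.tokens[i].surface for i in group)
        for i in group:
            segment.tokens[i].gloss = gloss_for.get(phrase, phrase)

gloss_segment(segment,
              {"the Prophet Muhammad (PBUH)": "the Prophet Muhammad (peace be upon him)"})
print([(t.surface, t.gloss) for t in segment.tokens])
```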
-
o1-preview interprets Fosse
Here’s a recent interaction with the new OpenAI model o1-preview, which I at least found rather impressive: mannyrayner: I have an unusual task you may be able to assist with. I’m currently reading Jon Fosse’s novel “Septologien”, where, as you may know, a painting is central to the story. The painting is described like this … Continue reading
-
Weekly summary, Sep 12-18 2024
OpenAI have released the new o1 model earlier than expected! This has upended our schedule in a very good way, and I’ve spent most of the week experimenting with it. We have made a bit of progress on some other things as well. O1 model: The new o1-preview model, previously known as “Strawberry”, is now … Continue reading