It occurred to me today that we could make C-LARA a good deal faster. When we do annotation operations, we break the text up into chunks and perform a sequence of OpenAI calls, typically one per chunk. But it doesn't have to be a sequence, since those calls are in general independent of each other.
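
Since the per-chunk calls don't depend on each other, they can be issued concurrently rather than one after another. As a minimal sketch (not C-LARA's actual code; `annotate_chunk` is a hypothetical stand-in for the real per-chunk OpenAI call), a thread pool turns the sequential loop into parallel requests while preserving chunk order:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the per-chunk annotation call;
# in practice this would wrap the actual OpenAI API request.
def annotate_chunk(chunk):
    return chunk.upper()  # placeholder "annotation"

def annotate_text(chunks, max_workers=8):
    # Issue the independent per-chunk calls concurrently instead of
    # sequentially; pool.map returns results in the original order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(annotate_chunk, chunks))

chunks = ["first chunk", "second chunk", "third chunk"]
print(annotate_text(chunks))
```

Because the API calls are I/O-bound, threads are enough here; the wall-clock time for a text of N chunks drops from roughly N sequential round trips to about N / max_workers.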