This question has so many underlying nuances that it needs careful consideration.
OpenAI’s API for GPT-4 consistently takes more than 30 seconds to respond to larger token prompts.
This makes automation in Airtable rather useless unless you go to the trouble of building cloud functions.
I have multiple working automation steps to categorize data, create short email replies, etc. But if you want to do anything real with GPT-4, it’s limited by the response time.
Question - how can I get around this? Any thoughts?
One answer seems insensitive to the business requirement, but it’s not. It’s just not deeply explained, and we know why: @Kuovonne probably had much more to say about this topic, but the Khoros platform is not conducive to nuanced explanations.
You are going to have to use a different service that isn’t limited by the 30-second timeout of Airtable scripting automations. @Kuovonne
The inquirer (morganngraom1) asserts that Airtable is the problem. It’s not.
That isn’t a solution; it’s a workaround that degrades the use of Airtable in the age of OpenAI and other LLMs becoming more dominant. … morganngraom1
Airtable and OpenAI share many dysfunctions. But GPT API calls are notoriously slow. And while this is largely related to scalability issues and unprecedented growth, AI inference performance depends greatly on what [exactly] you happen to be asking of the LLM.
I solve these types of issues in a variety of ways, including but not limited to chaining small, bite-sized prompts to achieve a desirable output. I also do not limit myself to OpenAI’s APIs; PaLM 2 and many open-source LLMs are capable of very fast performance with equally satisfying results.
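To make the "chained, bite-sized prompts" idea concrete, here is a minimal sketch. It is not anyone's production code; `call_llm` is a placeholder you would swap for a real completion call (OpenAI, PaLM 2, a local model), and the specific prompts are invented for illustration. The point is that several small, fast calls can replace one large call that blows past the 30-second automation limit.

```python
# Sketch of prompt chaining: break one big GPT-4 task into small steps
# whose outputs feed the next prompt. Each individual call is short and
# fast, so no single step risks the 30-second automation timeout.

def call_llm(prompt: str) -> str:
    """Placeholder for a real completion API call (stubbed for illustration)."""
    return f"[response to: {prompt}]"

def chain_prompts(record_text: str) -> str:
    # Step 1: a narrow, fast classification prompt.
    category = call_llm(f"In one word, categorize this text: {record_text}")
    # Step 2: a short summary prompt, scoped by the category from step 1.
    summary = call_llm(f"Summarize this {category} text briefly: {record_text}")
    # Step 3: a brief reply prompt built from the small intermediate output.
    return call_llm(f"Write a short email reply based on this summary: {summary}")
```

Each intermediate result can also be written to its own Airtable field, so a failed step can be retried without redoing the whole chain.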
I suspect morganngraom1 could gain a deeper sense of possible alternative approaches by sharing the underlying objective and architecture that presently fails to create reliable outcomes. I have yet to find an AI requirement for Airtable that cannot be addressed in a reasonable manner.
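For completeness, the "different service" answer usually looks something like the following sketch: the Airtable automation only fires a webhook (fast, well under the 30-second limit), and a cloud function makes the slow GPT-4 call on its own clock, then writes the result back through the Airtable REST API. The function and field names here (`slow_llm_call`, `update_record`, "GPT Output") are illustrative stubs, not any vendor's actual API.

```python
# Sketch of the async workaround: the automation hands off the slow work,
# so the 30-second scripting limit no longer applies to the LLM call.

def slow_llm_call(prompt: str) -> str:
    """Stand-in for the long-running GPT-4 request."""
    return f"completion for: {prompt}"

def update_record(record_id: str, fields: dict, store: dict) -> None:
    """Stand-in for a PATCH to the Airtable REST API for one record."""
    store.setdefault(record_id, {}).update(fields)

def handle_webhook(payload: dict, store: dict) -> None:
    # The automation sends only the record id and prompt, then returns
    # immediately; this function runs outside Airtable's timeout.
    result = slow_llm_call(payload["prompt"])
    update_record(payload["record_id"], {"GPT Output": result}, store)
```

The write-back then triggers a second, fast automation on the updated record if further steps are needed.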