Airtable and OpenAI API

This question has so many underlying nuances that it needs careful consideration.

OpenAI’s GPT-4 API consistently takes more than 30 seconds to respond to larger prompts.

This makes automation in Airtable rather useless unless you go to the trouble of building cloud functions.

I have multiple working automation steps to categorize data, create short email replies, etc. But if you want to do anything real with GPT-4, it’s limited by the response time.
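The cloud-function workaround mentioned above usually looks something like this: the Airtable automation just fires a webhook with the record ID and prompt, and a small serverless function makes the slow GPT-4 call outside Airtable’s 30-second window, then writes the answer back through Airtable’s REST API. This is only a sketch; the base ID, table name, field name, and token below are all placeholders, and `llm` stands in for whatever model client you use.

```python
# Sketch of the fire-and-forget pattern: Airtable's automation only sends
# a webhook; this cloud function does the slow work and writes back.
# BASE_ID, TABLE, the "GPT Reply" field, and the token are hypothetical.
import json
import urllib.request

BASE_ID = "appXXXXXXXXXXXXXX"   # placeholder base id
TABLE = "Emails"                # placeholder table name
AIRTABLE_TOKEN = "patXXXX"      # placeholder personal access token

def airtable_patch_request(record_id: str, field: str, value: str) -> urllib.request.Request:
    """Build the PATCH request that writes the LLM output back to a record."""
    url = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}/{record_id}"
    body = json.dumps({"fields": {field: value}}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="PATCH",
        headers={
            "Authorization": f"Bearer {AIRTABLE_TOKEN}",
            "Content-Type": "application/json",
        },
    )

def handle_webhook(payload: dict, llm) -> urllib.request.Request:
    """Cloud-function entry point: run the slow LLM call, prepare the write-back."""
    reply = llm(payload["prompt"])  # may take well over 30 s -- no problem here
    return airtable_patch_request(payload["record_id"], "GPT Reply", reply)
```

The automation itself finishes in a second or two, because it never waits on the model; the record simply updates whenever the function completes.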

Question - how can I get around this? Any thoughts?

One answer seems insensitive to the business requirement, but it’s not. It’s just not deeply explained, and we know why - @Kuovonne probably had much more to say about this topic, but the Khoros platform is not conducive to nuanced explanations. :wink:

You are going to have to use a different service that isn’t limited by the 30-second cap on Airtable scripting automations. @Kuovonne

The inquirer (morganngraom1) asserts that Airtable is the problem. It’s not.

That isn’t a solution, it’s a workaround that degrades the use of Airtable in the age of OpenAI and other LLMs becoming more dominant. … morganngraom1

Airtable and OpenAI share many dysfunctions. But GPT API calls are notoriously slow. And while this is largely related to scalability issues and unprecedented growth, AI inferencing performance depends greatly on what [exactly] you happen to be asking of the LLM.

I solve these types of issues in a variety of ways, including but not limited to chaining small, bite-sized prompts to achieve a desirable output. I also do not limit myself to OpenAI’s APIs. PaLM 2 and many open-source LLMs are capable of very fast performance with equally satisfying results.

I suspect morganngraom1 could gain a deeper sense of possible alternative approaches by sharing the underlying objective and architecture that presently fails to create reliable outcomes. I have yet to find an AI requirement for Airtable that cannot be addressed in a reasonable manner.

Lol. What is reasonable to you is not necessarily achievable by other mere mortals.

I don’t think I have much more to say. I don’t do much with AI. I have found the learning curve for working with AI to be more than I want to tackle right now. Instead I’m having fun learning Coda.

I do look forward to hearing whatever else you have to say, though. It’s how I test the waters to decide if it is time for me to look into AI again.

Well, we all have strengths and weaknesses, but this particular issue involves integration requirements where timing is standing in the way of success. It doesn’t have much to do with AI. Rather, it’s more about working with known service limitations.

I take issue with his singling out Airtable as the problem. OpenAI is providing a web service incapable of meeting usual and customary API timeout thresholds. To overcome this, he probably needs a proxy that can stream responses, a concept that Airtable doesn’t support, right?
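For anyone unfamiliar with the streaming idea: OpenAI’s API can return the completion as server-sent events, where each `data:` line carries a small text delta and the stream ends with `data: [DONE]`. A proxy reading that stream never sits idle long enough to trip a timeout; it just accumulates the deltas. A minimal parser for that event format, assuming the standard chat-completions delta shape, might look like:

```python
# Sketch of consuming an OpenAI-style SSE stream. Each event line looks
# like:  data: {"choices":[{"delta":{"content":"Hi"}}]}
# and the stream terminates with:  data: [DONE]
import json

def parse_sse_stream(lines):
    """Accumulate streamed text deltas into the final completion string."""
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments and keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(payload)["choices"][0]["delta"]
        parts.append(delta.get("content", ""))  # role-only deltas have no content
    return "".join(parts)
```

A proxy built this way can forward partial text as it arrives, or simply hold the connection open and hand Airtable the assembled result once the stream finishes.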
