GPT Doesn't Get Airtable - Here's Why

This is not likely to happen for a few years, if ever.

Airtable is a closed formulaic system like Coda and many other no-code platforms. There is no “code” [per se] for it to learn from because you are writing pseudocode to begin with. Formulas in Airtable are an abstraction from actual code, so it has no corpus to gain intelligence from.

So either Airtable must train a fine-tuned LLM and bake it into the formula editor, or it must open-source the underlying proprietary abstractions. It’s a lot of work, and it requires internal access or independent development of a comprehensive formula parser.

If you spend a lot of time trying to build your own approach to AI-assisted formula creation, I predict you will be deeply disappointed. Airtable will do this internally [eventually]. The market pressure to make formula development possible, fixable, and understandable from natural language prompts will be intense. They’re likely working on this already because they have the pseudocode-to-code translator, and that’s all that’s needed to:

  • Create a few-shot prompt/training process that transforms a natural language query into code and then into their formulaic representation.
  • Create a few-shot prompt/training process that reads a formula and explains in natural language what it does.
  • Create a few-shot prompt/training process that explains how to fix a formula that is not working.
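As a sketch of the first item, here is what assembling such a few-shot prompt might look like in Python. The example request/formula pairs are purely illustrative (not from any real Airtable training set), and the resulting string would be handed to whatever LLM completion API one chose:

```python
# Illustrative few-shot examples pairing a natural language request
# with an Airtable-style formula. These pairs are hypothetical.
FEW_SHOT_EXAMPLES = [
    ("Show 'Overdue' when the due date is in the past",
     "IF(IS_BEFORE({Due Date}, TODAY()), 'Overdue', 'On Time')"),
    ("Join first and last name with a space",
     "CONCATENATE({First Name}, ' ', {Last Name})"),
]

def build_create_prompt(request: str) -> str:
    """Assemble a create-style few-shot prompt for an LLM completion call."""
    parts = ["Translate each request into an Airtable formula.\n"]
    for req, formula in FEW_SHOT_EXAMPLES:
        parts.append(f"Request: {req}\nFormula: {formula}\n")
    # The trailing "Formula:" cues the model to emit only the formula.
    parts.append(f"Request: {request}\nFormula:")
    return "\n".join(parts)
```

The explain and fix variants would differ only in the shape of the examples: formula in, plain-English description or corrected formula out.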

This is the trifecta of AI and formulas: create, fix, explain. No one outside of Airtable will ever be able to do this well, and if someone manages to find a way, it probably won’t be financially practical. Best to keep the pressure on Airtable to do this right away.

If you want to use AI to make something useful in Airtable, focus on users and their data. That’s where the value will be for external AI solutions.

It’s not a horse, it’s a donkey; but better a donkey than carrying my own weight.

It still makes cute mistakes, too. It spontaneously dropped two brackets; it simply didn’t count the left and right brackets.
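Mistakes like that are trivial to catch mechanically. A minimal balance check over a formula string (a crude sketch, not Airtable’s actual parser) needs only a stack:

```python
def brackets_balanced(formula: str) -> bool:
    """Return True when () and {} pairs close in order.
    Brackets inside quoted string literals are ignored."""
    pairs = {')': '(', '}': '{'}
    stack = []
    in_string = None  # holds the active quote character, if any
    for ch in formula:
        if in_string:
            if ch == in_string:
                in_string = None
        elif ch in ("'", '"'):
            in_string = ch
        elif ch in '({':
            stack.append(ch)
        elif ch in ')}':
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack and in_string is None
```

Running the model’s output through a check like this before showing it to the user would catch the missing-bracket class of error outright.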

And the right way was through a SWITCH() formula.
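For context, SWITCH() collapses a chain of nested IFs into a single flat expression, which is exactly the kind of rewrite an assistant should prefer. A hypothetical helper that renders such a formula from a mapping:

```python
def make_switch(field: str, cases: dict, default: str) -> str:
    """Render an Airtable SWITCH() formula string from a field name,
    a pattern -> result mapping, and a default result.
    Assumes all patterns and results are plain text values."""
    args = [f"{{{field}}}"]  # field references are wrapped in braces
    for pattern, result in cases.items():
        args.append(f"'{pattern}', '{result}'")
    args.append(f"'{default}'")  # trailing value is the fallback
    return f"SWITCH({', '.join(args)})"
```

Three cases produce one readable expression where nested IFs would need three levels of parentheses, the very thing the model was miscounting.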

Precisely my point…

When will y’all realize that you’re using a prototype chat experience built on AGI — NOT domain data — expecting it to act like it has domain expertise?

Airtable’s behaviours are an abstraction from code. The Python and JavaScript code used to train GPT models is not an abstraction. As such, you will get pretty good performance from the former but no reliable benefit from the latter.

The one thing that won’t work is to ask an AGI model to understand how no-code platforms work. No-code, for all its excellent benefits, is a closed architecture. Understanding it through neural frameworks is not likely to occur anytime soon.

@growwithjen said…

An interesting opportunity is to train a model to understand the lexicon of a closed platform.

Spot on! But a big task indeed. Only Airtable possesses the data to build such a model, and since they only just added OAuth support (ubiquitous in 2010), when will they get to this task?

Even solutions heavily peppered with Airtable scripts will never see the light of a GPT training process. OpenAI can use only the little fragments and snippets published on the Interwebs. There is no equivalent of vast projects based on Airtable’s SDK either. Unlike the open-source resources of Python and JavaScript, there’s nothing for it to sink its teeth into.

My advice: stop wasting your time debating how nutty and ineffective AGI is concerning Airtable. Instead, become AI practitioners with client data, like this example.