Any idea if ChatGPT can check Airtable formulas for errors

Any idea if ChatGPT can check Airtable formulas for errors?

That would be so nice. I haven’t tried it yet; still waiting for an invite.


I haven’t tried it, but I wouldn’t trust it.

People have reported ChatGPT creating incorrect Airtable formulas, so there is no reason to expect valid error checking.

Keep in mind that ChatGPT does not actually understand the logic behind an Airtable formula. It recognizes patterns and strings together pieces that commonly appear together.


ChatGPT has been open to the public since November 2022.

As @Kuovonne said above, I’ve heard that it does not do a great job with Airtable formulas.


Well, even finding a missing bracket, a missing operator, or missing quotes would already be a benefit, if it can find them :wink:

It might actually be able to do simple things like that very well!

Airtable’s formula editor has gotten a little bit better in this regard, too.

I mixed it up with Bing’s GPT and forgot I had already used OpenAI’s. My goodness, I need AI to start remembering things for me.

So I tried OpenAI’s GPT, but it is quite incapable of dealing with a large formula. I tried it with an Airtable formula of 15,000 characters and it choked. It told me I needed to shorten the formula. One of the responses it gave me:
Formula length: Airtable formulas have a maximum length of 8,192 characters. If your formula is too long, you may need to break it up into multiple formulas or simplify it.

LOL, that’s so 2018, smarty pants.

I tested with a shorter formula and, surprise, it does work; however, it doesn’t find all the errors. It works well with brackets, but it had trouble with quotes. Once I explained where it had missed the other errors, it learned and found the missing quotes in another formula fine, but it wasn’t consistent in its responses. In one case it silently corrected the formula by removing a quote that was extra; in another case it showed the corrected formula and told me where the error was, but cut off the result at the last line, right where the error was, so it didn’t give me the whole corrected formula.

So it does work to some extent. That’s useful.
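For what it’s worth, the simple cases discussed above (unbalanced brackets, curly quotes, unterminated strings) don’t strictly need AI at all; a short script can catch them deterministically and consistently. A minimal sketch in Python, assuming you just paste the formula in as a string — this is not a real Airtable parser, just a sanity check:

```python
# Deterministic checks for common Airtable formula slips:
# curly ("smart") quotes, unbalanced parentheses, unterminated strings.
# Parentheses inside double-quoted strings are ignored.

def check_formula(formula: str) -> list[str]:
    problems = []
    # Curly quotes are never valid in Airtable formulas.
    for ch in '\u201c\u201d\u2018\u2019':
        if ch in formula:
            problems.append(f"curly quote {ch!r} found; use a straight quote")
    depth = 0
    in_string = False
    for i, ch in enumerate(formula):
        if ch == '"':
            in_string = not in_string  # toggle string context
        elif not in_string:
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
                if depth < 0:
                    problems.append(f"unmatched ')' at position {i}")
                    depth = 0
    if in_string:
        problems.append("unterminated string (odd number of double quotes)")
    if depth > 0:
        problems.append(f"{depth} unclosed '('")
    return problems

print(check_formula('IF({Status} = "Done", "yes", "no"'))
# → ["1 unclosed '('"]
```

It won’t catch logic errors, of course, but unlike ChatGPT it gives the same answer every time.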

I actually just found out this week that ChatGPT has an API, and even better, that it’s supported!

I’m super excited to build artificial intelligence capabilities into my client’s Airtable systems!
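For anyone curious what calling that API looks like, here is a minimal sketch using OpenAI’s Python package (v1 client). The model name and the prompt wording are my own assumptions, and the actual call needs an `OPENAI_API_KEY` in the environment:

```python
# Sketch: asking the ChatGPT API to review an Airtable formula.
# Assumes the `openai` package (v1 client); model name is an assumption.

def build_review_messages(formula: str) -> list[dict]:
    """Build the chat payload asking the model to check a formula."""
    return [
        {"role": "system",
         "content": "You review Airtable formulas for syntax errors: "
                    "unbalanced parentheses, curly quotes, missing commas."},
        {"role": "user", "content": f"Check this formula:\n{formula}"},
    ]

def review_formula(formula: str) -> str:
    from openai import OpenAI  # deferred so the builder works without the package
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; use whatever your account offers
        messages=build_review_messages(formula),
    )
    return resp.choices[0].message.content
```

From an Airtable automation you would call something like this via a webhook or a hosted script rather than from the scripting block directly.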

Great, that has really big potential for Airtable. I wish Airtable would build an integration for Automations and their own extension too. There are third-party extensions on GitHub. Any idea if these work? I am leery about installing third-party extensions. Since I am not a programmer, I have no idea about code, and I go around GitHub repositories with a 2-meter pole, except for the known contributors who I know have a good reputation here in the forum. There are 2 GPT extensions. I get a headache when I hear the words “terminal commands”; I usually break things when I type something into that black box :wink:

Network Dependents · Airtable/blocks · GitHub

I am not a coder, so I would not have any insights there.


I personally find that using a style guide for writing formulas makes identifying missing or extra brackets, quotes, and commas fairly easy.


What is a style guide? Is it an extension?

Here is an example of how GPT works like a charm with those pesky little quotation marks " and “ . I need a magnifying glass for these; the Airtable formula box is torturing my eyes. GPT was quick:


A style guide is a set of conventions for how to format things. Think of how papers in school needed to be typed in 12 point font with one inch margins.

For Airtable formulas, I developed a style guide where each parameter of a function goes on a new line at the same level of indent, and closing parentheses are vertically aligned with their opening functions. This makes it easy to scan whether parentheses line up. It also makes it easy to scan for commas at the ends of lines.
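For illustration (a made-up formula, not one from this thread), a nested IF written with that convention might look like:

```
IF(
    {Status} = "Done",
    "Complete",
    IF(
        {Due Date} < TODAY(),
        "Overdue",
        "In progress"
    )
)
```

A stray or missing closing parenthesis jumps out immediately because the vertical alignment breaks.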

I expect that ChatGPT picks up on straight versus curly quotes because that was a very common mistake in formulas posted on the internet a few years ago, and if there were curly quotes in a formula, straightening the quotes was invariably the reply.

However, with a bit of experience with the Airtable formula editor, you can pick up on straight versus curly quotes easily yourself by looking at the color coding in the editor. Functions, text strings, and field names should all be color coded differently.


Maybe ChatGPT will mature enough to help with code, but Co-Pilot is filling that bill for me right now.

I’ve been using GitHub Co-Pilot and VS Code for a couple of months now for both long formulas and scripts. (For formulas, I just tell VS Code to treat them as JavaScript.)

Your mileage may vary . . . Co-Pilot isn’t a ‘must have’, but its code completion suggestions are often exactly what I want. I wouldn’t pay more than the current monthly rate, but it’s worth the $10 a month for me.

Working a little is also like saying not working a lot. :wink:

This is not likely to happen for a few years, if ever. Airtable is a closed formulaic system, like Coda. There is no “code” [per se] for it to learn from, because you are writing pseudo-code to begin with. So either Airtable must train a fine-tuned LLM and open-source it, or a group of developers could take on this challenge. It’s a lot of work, and it requires internal access or independent development of a formula parser.

If you spend a lot of time working out this approach, I predict you will be deeply disappointed. Airtable will do this internally. The market pressure to make formula development possible, fixable, and understandable from natural-language prompts will be intense. They’re likely working on this already, because they have the pseudo-to-code translator, and that’s all that’s needed to:

  • Create a few-shot prompt/training process that transforms a natural language query into code and then into their formulaic representation.
  • Create a few-shot prompt/training process that transforms a natural language query to read a formula and explain what it does.
  • Create a few-shot prompt/training process that transforms a natural language query to explain how to fix a formula that is not working.

This is the trifecta of AI and formulas: create, fix, explain. No one outside of Airtable will ever be able to do this well or make it financially practical.

If you want to use AI to make something useful in Airtable, focus on users and their data. That’s where the value will be for external AI solutions.


Yes, it works well with formulas. Scripts it’s about 50/50

There is a lot more to crafting a well written formula than simply getting the correct output for the few test cases that most people think of.

Until StackOverflow removes its ban on AI generated answers, I don’t think that asking AI for help on formula/code will be much help beyond very basic things.

AI has its place and it is getting better. But I do not think it is ready for producing or debugging custom code/formulas.

The AI forums are saturated with statements like yours. In this tiny community, there aren’t enough experienced AI/CoPilot users to tell you how misinformed you may be.

Indeed, AGI and CoPilot are far from perfect. There are plenty of naysayers who can effortlessly construct prompts that fail. Examples of AGI failure get a lot of air time because it’s good for clicks and attention. The users who thrive on developing faster and writing more and better code are not seeking attention. Historically, success in new and disruptive tech is a silent movement.

If you dig a little, you will learn that enterprises are …

  • Signing up their teams for CoPilot
  • Developers are using it (because we’re lazy)
  • Engineers want to make faster progress
  • Programmers aren’t afraid to learn faster from the collective of generally more experienced engineers who have already written what we need to write
  • Engineering leaders are seeing test metrics that show improvements directly attributable to CoPilot’s use

If I can get the gist of a C++ class written by an engineer who left the firm a year ago and understand it in 30 seconds, when it would otherwise take me 30 minutes, I’d call that a win.

Integrating that basic understanding into the code base in one second is another big win.

If I can get two quick hypotheses about how that same class might fail in 10 more seconds, that’s a win too, because programmers are terrible at hypothesizing all the ways code could fail. A giant leap.

There’s no shortage of ideas that completely disrupt a segment and have done so with less-than-perfect performance. But that’s the definition of market disruption - it doesn’t have to do the entire job better than the human (in this case). It only needs to begin to do parts of the entire job better for the disruption to occur.

Example: Cable Television → Netflix. How crappy was Netflix when the disruption began? How fast did Cable Television vanish? 12 years.

Example: Horse → Vehicle. How crappy was the steam car? How fast did horse-drawn carriages disappear in America? 12 years.

Example: ICE → BEV. How crappy were the first Tesla EVs? How soon will ICE vehicles all but vanish from our roads? Quite possibly 12 years.

Almost 100 million people have decided flawed AGI is far better than what they’ve been doing.

Naysayers in general are often the ones who have yet to overcome the fear of change, and that is completely fine, as it is in human nature to preserve what we know and avoid the risk of the unknown, since age progressively reduces the human capacity to react and understand quickly and flexibly. There is some unease in programmer circles about programmers’ jobs becoming surplus because of AI. But I think there is nothing to worry about. The fact that a non-programmer will now be able to use AI tools to create a program means there will be more jobs needed to create those tools.
What I worry more about is many other areas, like sales, administration, and customer support. That’s where we will see lots of people on the street in the next 10 years, because adapting to change is so hard once we reach the comfort zone of predictability, which is what every human being strives for, despite the constraints it leads to.

Blacksmith circles had the same trepidation.

Name a single technological advance that resulted in mass unemployment. I don’t think that hypothesis holds.

Free markets are resilient. As demand slows, displaced workers adapt. Silently, balance is sustained.