I am not a coder, so I would not have any insights there.
I personally find that using a style guide for writing formulas makes identifying missing or extra brackets, quotes, and commas fairly easy.
What is a style guide? Is it an extension?
Here is an example of how GPT works like a charm with those pesky little quotation marks, " and “. I need a magnifying glass for this; the Airtable formula box is torture for my eyes. GPT was quick:
A style guide is a set of conventions for how to format things. Think of how papers in school needed to be typed in a 12-point font with one-inch margins.
For Airtable formulas, I developed a style guide where each parameter of a function goes on a new line at the same level of indent, and closing parentheses are vertically aligned with their opening functions. This makes it easy to scan whether the parentheses line up. It also makes it easy to scan for commas at the ends of lines.
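For example, a formula written in that style might look like this (the {Status} and {Name} fields here are hypothetical):

```
IF(
    {Status} = "Done",
    "Complete",
    CONCATENATE(
        {Name},
        " is still in progress"
    )
)
```

Each closing parenthesis sits directly below the function it closes, so a missing or extra one stands out immediately.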
I expect that ChatGPT picks up on straight versus curly quotes because it was a very common mistake with formulas posted on the internet a few years ago, and if there were curly quotes in a formula, straightening the quotes was invariably the reply.
However, with a bit of experience with the Airtable formula editor, you can pick up on straight versus curly quotes easily yourself by looking at the color coding in the editor. Functions, text strings, and field names should all be color coded differently.
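As a hypothetical illustration (the {Status} field is made up), compare these two formulas, which differ only in their quote characters:

```
Valid, with straight quotes:
IF({Status} = "Done", "Yes", "No")

Broken, with curly quotes pasted in from a word processor:
IF({Status} = “Done”, “Yes”, “No”)
```

In the first, the editor color codes "Done" as a text string; in the second, the curly quotes are not recognized as string delimiters, so the color coding looks wrong and the formula errors out.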
Maybe ChatGPT will mature enough to help with code, but Co-Pilot is filling that bill for me right now.
I’ve been using GitHub Copilot and VS Code for a couple of months now for both long formulas and scripts. (For formulas, I just tell VS Code to treat them as JavaScript.)
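One way to do that, as a sketch (the .formula extension is made up; use whatever extension you save your formulas with), is a files.associations entry in VS Code’s settings.json:

```
{
    // Tell VS Code to apply JavaScript syntax highlighting and
    // Copilot suggestions to saved Airtable formula files.
    "files.associations": {
        "*.formula": "javascript"
    }
}
```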
Your mileage may vary… Copilot isn’t a ‘must have’, but its code completion suggestions are often exactly what I want. I wouldn’t pay more than the current monthly rate, but it’s worth the $10 for me.
Working a little is also like saying not working a lot.
This is not likely to happen for a few years, if ever. Airtable is a closed formulaic system like Coda. There is no “code” [per se] for it to learn from, because you are writing pseudo-code to begin with. So either Airtable must train a fine-tuned LLM and open-source it, or a group of developers could take on this challenge. It’s a lot of work, and it requires internal access or independent development of a formula parser.
If you spend a lot of time working out this approach, I predict you will be deeply disappointed. Airtable will do this internally. The market pressure to make formulas creatable, fixable, and explainable from natural language prompts will be intense. They’re likely working on this already, because they have the pseudo-to-code translator, and that’s all that’s needed to:
- Create a few-shot prompt/training process that transforms a natural language query into code and then into their formulaic representation.
- Create a few-shot prompt/training process that reads a formula and explains in natural language what it does.
- Create a few-shot prompt/training process that explains how to fix a formula that is not working.
This is the trifecta of AI and formulas: create, fix, explain. No one outside of Airtable will ever be able to do this well or make it financially practical.
If you want to use AI to make something useful in Airtable, focus on users and their data. That’s where the value will be for external AI solutions.
Yes, it works well with formulas. With scripts, it’s about 50/50.
There is a lot more to crafting a well written formula than simply getting the correct output for the few test cases that most people think of.
Until Stack Overflow removes its ban on AI-generated answers, I don’t think that asking AI for help with formulas/code will be much use beyond very basic things.
AI has its place and it is getting better. But I do not think it is ready for producing or debugging custom code/formulas.
The AI forums are saturated with statements like yours. In this tiny community, there aren’t enough experienced AI/CoPilot users to tell you how misinformed you may be.
Indeed, AGI and CoPilot are far from perfect. There are plenty of naysayers who can effortlessly construct prompts that fail. Examples of AGI failure get a lot of air time because it’s good for clicks and attention. The users who thrive on developing faster and writing more and better code are not seeking attention. Historically, success in new and disruptive tech is a silent movement.
If you dig a little, you will learn that …
- Enterprises are signing up their teams for CoPilot
- Developers are using it (because we’re lazy)
- Engineers want to make faster progress
- Programmers aren’t afraid to learn faster from the collective of generally more experienced engineers who have already written what we need to write
- Engineering leaders are seeing test metrics that show improvements directly attributable to CoPilot’s use
If I can get the gist of a C++ class written by an engineer who left the firm a year ago, and understand it in 30 seconds when it would otherwise take me 30 minutes, I’d call that a win.
Integrating that basic understanding into the code base in one second is another big win.
If I can then get two quick hypotheses for how that same class might fail in 10 more seconds, that’s a giant leap, because programmers are terrible at hypothesizing all the ways code could fail.
There’s no shortage of ideas that completely disrupt a segment and have done so with less-than-perfect performance. But that’s the definition of market disruption: it doesn’t have to do the entire job better than the human (in this case). It only needs to begin to do parts of the entire job better for the disruption to occur.
Example: Cable Television → Netflix. How crappy was Netflix when the disruption began? How fast did Cable Television vanish? 12 years.
Example: Horse → Vehicle. How crappy was the steam car? How fast did horse-drawn carriages disappear in America? 12 years.
Example: ICE → BEV. How crappy were the first Tesla EVs? How soon will ICE vehicles all but vanish from our roads? Quite possibly 12 years.
Almost 100 million people have decided flawed AGI is far better than what they’ve been doing.
Naysayers in general are often the ones who may be overcome by the fear of change, and that is completely fine, as it is in human nature to preserve what we know and avoid the risk of the unknown, since age progressively reduces the human capacity to react and understand quickly and flexibly. There is some unease in programmer circles about programmers’ jobs being made redundant by AI. But I think there is nothing to worry about. The fact that a non-programmer will now be able to use AI tools to create a program means there will be more jobs needed to create these tools.
What I worry more about are many other areas, like sales, administration, and customer support. That’s where we will see lots of people on the street in the next 10 years, because adapting to change is so hard once we achieve the comfort zone of predictability, which is what every human being strives for despite the constraints it leads to.
Blacksmith circles had the same trepidation.
Name a single technological advance that resulted in mass unemployment. I don’t think that hypothesis holds.
Free markets are resilient. As demand slows, displaced workers adapt. Silently, balance is sustained.
As I was saying…
Glad to know that I am not alone in my thoughts. Thank you also for letting me know that you believe I am wrong.
I think that AI is going to be a huge benefit to reading and writing code. I think it can be a useful tool for knowledgeable developers.
And perhaps I should have stated that I don’t think that AI is ready now to write complex, custom formula/code to be used in production environments without the results being reviewed/tested by someone with technical knowledge of the language and/or use cases.
I think that non-technical people want AI-generated code/formulas that they can use without the input of someone with technical knowledge. The current reason why I dislike this is that there is a significant risk that the AI-produced code will not actually do what the person wants, and the person will either not realize that the output is flawed, or will have no clue how to fix the output and will have unreasonable expectations about fixing the code.
Well, let’s revisit this in a year. Markets work quietly, while the tyranny of change drives to extinction those who can’t or don’t want to adapt.
Exactly one year ago, I asked a question about how to summarize text, and @bfrench responded suggesting GPT-3. Well, I couldn’t get anything started back then, but hey, how times change; it’s moving pretty quickly. It’s unstoppable.
Ha ha ha! You just described what Airtable users currently experience all the time. Do you think these largely clueless users will be worse off, or better off with an AI-paired formula attendant? To guide someone who needs only a teensy bit of expert help is still a very low bar given the questions I see you address all the time on the forums.
I have a hunch if Airtable introduced a paired-formula assistant, support and questions about formulas would all but vanish except in extremely complex cases. The law of large numbers suggests the 80/20 rule would apply.
The data is in; it turns out even less knowledgeable developers (junior developers, and even newcomers to the world of actual coding) are benefiting significantly from an AI-paired programmer. The ship has sailed.
That’s a fair concern. Do you think the risk falls without a paired-formula or scripting assistant? Would they be better off without AI? The data is already coming in, so answer this one carefully.
Are happier developers better developers?
Ha! I remember you asking too! That was half the lifetime of GPT ago. I was testing GPT in 2021 and CoPilot as well. No one believed me when I showed them little snippets of AI-generated code. They thought I was nuts. Some still do.
I don’t think it’s that simple. Change is not abrupt for most workers. The supply of developers will transition slowly because wholesale replacement of engineering skills is going to take time. @Kuovonne is right about one thing: AI is not ready to completely replace skilled workers. And what will these displaced engineers do? Many will move into the AI economy doing stuff like this. Some will become prompt engineers. Half will retire.
We all worry about AI, but there seems to be little concern for ICE mechanics or the OEMs that currently employ 2 million people building internal combustion engines, whose skills will be entirely unnecessary by 2040 (16 years from now). By and large, these people will transition slowly, because EVs will, like AI, arrive slowly. Two-thirds will retire, and one-third will do something else.
I think in general that is the biggest risk in AI code at the moment: code that looks correct but has some flaws in it. I use CoPilot in VS Code, and it accelerates my process by taking away quite a bit of typing.
One- or two-line suggestions are fine, but every now and then I get tempted to take a suggestion for a bigger block of code. Ten minutes later, something doesn’t work, and quite often the culprit is the AI-generated code block. Usually it is just some minor point that does not fully reflect the intention of the function. On the other hand, sometimes these are also mistakes that I would have made myself.
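To illustrate the kind of subtle miss I mean, here is a hypothetical sketch in the JavaScript used for Airtable scripts (the {Amount} field and both functions are made up; record.getCellValue is the real scripting call):

```javascript
// Intent: total the {Amount} values, skipping records where the field is empty.

// The kind of plausible-looking block an AI pair might suggest:
function totalAmounts(records) {
    let total = 0;
    for (const record of records) {
        const amount = record.getCellValue("Amount");
        if (amount) {            // Subtle flaw: also skips legitimate 0 values
            total += amount;
        }
    }
    return total;
}

// What the intent actually requires: skip only genuinely empty cells,
// which Airtable scripting returns as null.
function totalAmountsFixed(records) {
    let total = 0;
    for (const record of records) {
        const amount = record.getCellValue("Amount");
        if (amount !== null) {
            total += amount;
        }
    }
    return total;
}
```

Both versions pass a quick glance and most casual tests; the difference only shows up when a record legitimately contains a 0.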
There are some pitfalls, but Copilot is a productivity improvement for me. I would really like to see Copilot integrated into the scripting block in Airtable. I looked into it, and while it is possible to get to the data in the script editor, the scope of building it is a bit too much for me.
Let’s be clear: a bigger risk is code created in isolation by a programmer who has not seen, and does not know, all the successful design choices. That is virtually all of us.
Feel free to reject AI-paired programmers over their risks, but I have a hunch (and the early studies indicate) that without paired assistance you are actually increasing, not reducing, the risk of poorly designed code or code that fails more often.