I have just posted this on the Airtable forum, but wanted to share more of the weeds with you here; maybe you'll find it useful for your projects.
I have been playing recently with Chrome extensions (I regret it) and OpenAI (definite potential). I have just released a Chrome extension that adds formula suggestions based on the ChatGPT API directly to the formula editor in Airtable.
You can see the green “Hint GPT” button next to “Save”:
Obviously you are all well versed in formulas. I have been testing it myself, and a few times it was easier for me to type what I wanted than to write the formula; not that often, but I was actually surprised. The original idea came to me when I wanted to convert a super long nested IF statement to SWITCH, and it certainly works well for tedious tasks like that.
On the other hand, sometimes the formula suggestions are just hilariously wrong or wishful thinking, like the suggestion to use an IS_EMAIL() function to validate emails.
The technical weeds
The extension code is available under the MIT license on my GitHub here (or you can download the compiled code for free here).
If you are looking to build a Chrome extension that works with Airtable, you can take a look at the above repo, as there are a few interesting things:
I implemented encapsulated elements on top of Airtable using Tailwind (via shadow DOM and twind.style). This way you can apply your own Tailwind styles without clashing with Airtable's stylesheets.
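As a rough illustration, the isolation can be set up like this (a minimal sketch, browser-only; the `hint-gpt-root` id is my placeholder, and the `twind`/`cssom`/`observe` names follow the @twind/core v1 documentation, so check the repo for the extension's actual wiring):

```javascript
// Minimal sketch (assumptions flagged above): mount the extension UI inside a
// shadow root and run Twind against that root, so Tailwind classes resolve
// without colliding with Airtable's stylesheets. Nothing runs at load time.
async function mountIsolatedUi(twindConfig) {
  const { twind, cssom, observe } = await import("@twind/core");

  const host = document.createElement("div");
  host.id = "hint-gpt-root"; // placeholder id for the extension's container
  document.body.appendChild(host);

  // Styles attached to the shadow root cannot leak in or out.
  const shadow = host.attachShadow({ mode: "open" });
  const sheet = cssom(new CSSStyleSheet());
  shadow.adoptedStyleSheets = [sheet.target];

  // Watch the shadow tree so class="p-2 text-green-600" etc. get compiled
  // into the isolated stylesheet on the fly.
  observe(twind(twindConfig, sheet), shadow);

  return shadow; // render the extension UI into this root
}
```

The shadow root is the key part: Airtable's global CSS cannot reach inside it, and the generated Tailwind rules never leak out.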
Airtable captures DOM events at a crazy level. For example, the Backspace key does not work outside of editing fields; per a code comment, it is blocked to prevent the "Back" function in older browsers. I needed to deactivate those listeners to allow editing of my Settings fields.
Similarly, click events are captured in such a way that a simple e.preventDefault() won't stop Airtable from running its default behaviors in the background. I needed to use jQuery to detach those handlers so that I could show the Settings modal on top of Airtable.
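The general shape of the workaround looks something like this (a minimal sketch with placeholder names, not the extension's actual code). Note the ordering caveat in the comments: capture-phase listeners that the page registered earlier still run first, which is why detaching Airtable's own handlers was ultimately needed.

```javascript
// Minimal sketch: stop Airtable's global handlers from reacting to events
// that originate inside the extension's own UI. The id is a placeholder.
function isInsideExtension(node, rootId = "hint-gpt-root") {
  // Walk up the node tree looking for the extension's container.
  for (let n = node; n; n = n.parentNode) {
    if (n.id === rootId) return true;
  }
  return false;
}

// Browser-only wiring; guarded so the pure check above stays testable.
if (typeof window !== "undefined") {
  for (const type of ["keydown", "click", "mousedown"]) {
    window.addEventListener(
      type,
      (e) => {
        // Keep Backspace/click handling local to the extension's fields.
        if (isInsideExtension(e.target)) e.stopImmediatePropagation();
      },
      { capture: true } // run before bubble-phase handlers
    );
  }
}
```

`stopImmediatePropagation` only silences listeners that run after this one, so handlers Airtable attached earlier still need to be detached explicitly (the extension does this through jQuery, since that is how Airtable binds them).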
There is a lot of jQuery on the Airtable page.
The OpenAI implementation is trivial. I am using both ChatGPT (chat) completions and GPT (text) completions; the results are fairly similar, with the main difference being that OpenAI charges 10x less for ChatGPT API usage. The chat completions sometimes end up as long didactic rants, despite pre-conditioning to the contrary.
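For reference, the two request shapes look roughly like this (payload builders only; the system prompt and max_tokens value are my placeholders, while the endpoints, models, and fields follow OpenAI's public REST API as of early 2023):

```javascript
// Minimal sketch of the two request styles; sending is left as a comment.
function buildChatRequest(description) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    body: {
      model: "gpt-3.5-turbo",
      messages: [
        // Placeholder pre-conditioning; the extension's real prompt differs.
        { role: "system", content: "Reply with one valid Airtable formula and nothing else." },
        { role: "user", content: description },
      ],
    },
  };
}

function buildCompletionRequest(description) {
  return {
    url: "https://api.openai.com/v1/completions",
    body: {
      model: "text-davinci-003",
      prompt: `Airtable formula for: ${description}\nFormula:`,
      max_tokens: 256, // placeholder limit
    },
  };
}

// Sending either one (OPENAI_API_KEY is a placeholder):
// const req = buildChatRequest("join first and last name with a space");
// fetch(req.url, {
//   method: "POST",
//   headers: { "Content-Type": "application/json", Authorization: `Bearer ${OPENAI_API_KEY}` },
//   body: JSON.stringify(req.body),
// });
```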
The formula editor uses the Monaco editor, a simplified version of what VS Code uses. It is possible to get data in and out of it, just as this extension does. What I would really like is Copilot-like code completion inside the scripting block… but maybe another day…
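Reading and writing the formula boils down to Monaco's text-model API. A minimal sketch, with the monaco instance passed in (in the extension it is the one found in Airtable's page context; the helper names are mine, but getModels, getValue, and setValue are Monaco's documented API):

```javascript
// Minimal sketch: read/write the formula through Monaco's text model.
// `monaco` is injected so the helpers stay testable; in the extension it
// would be the instance located in Airtable's page context.
function readFormula(monaco) {
  const [model] = monaco.editor.getModels(); // the formula editor's model
  return model.getValue();
}

function writeFormula(monaco, formula) {
  const [model] = monaco.editor.getModels();
  model.setValue(formula);
}
```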
The main div holding the cells is called “hyperbaseContainer”. I wonder if “Hyperbase” was an alternative in the naming contest years ago.
Here you can also take a look at my video of the extension in action (or not quite in action; I think job-wise we are all safe).
Let me know what you think! Forks or contributions to the repo are more than welcome!
This is good stuff. Certainly a fun direction to take LLMs. Leaning into easier pathways for everyday users to create more complex formulas is a good use of AI. Ultimately, though, there are far bigger fish to fry; I think LLMs will change everything for users, not just solution builders.
But two dimensions of this approach concern me.
Prompt engineering must be used, and that requires a fair bit of maintenance, because prompts that work for GPT-3.5 are vastly different from those for GPT-3.0. Where will this go with GPT-4 and GPT-5?
The nature of Airtable formulas is unique; the patterns are inconsistent, so how could AI provide consistent (and smart) recommendations?
This response is also relevant. It explains why trying to do anything AI-related with formulas may be extremely difficult.
That is a very interesting point! Certainly Airtable is in the best position to pretrain the models. What I see now as the biggest challenge with the gpt-3.5-turbo (GPT-3.5 chat completion) and text-davinci-003 (GPT-3.5 text completion) models is that they cannot be further fine-tuned, at least not via the public API.
It would be a big improvement to constrain the model to only valid formulas, so it does not come up with creative solutions like IS_EMAIL(). Yes, it looks like an Airtable formula, and it would certainly be a useful one. Cute, but wrong!
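Short of constraining the model itself, a cheap post-check can at least flag hallucinated function names. A minimal sketch (the allow-list here is a small illustrative subset of Airtable's real function set):

```javascript
// Minimal sketch: extract FUNCTION( names from a suggested formula and
// report any that are not on the allow-list. Subset list for illustration.
const KNOWN_FUNCTIONS = new Set([
  "IF", "SWITCH", "AND", "OR", "NOT",
  "FIND", "LEN", "TRIM", "CONCATENATE", "REGEX_MATCH",
]);

function unknownFunctions(formula) {
  // Uppercase identifiers immediately followed by an opening parenthesis.
  const calls = formula.match(/\b[A-Z_][A-Z0-9_]*(?=\()/g) || [];
  return calls.filter((name) => !KNOWN_FUNCTIONS.has(name));
}

// unknownFunctions('IF(IS_EMAIL({Email}), "ok", "bad")') flags IS_EMAIL.
```

A rejected suggestion could then be fed back to the model with a "that function does not exist" correction before showing it to the user.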
The OpenAI documentation provides an API to fine-tune the earlier base davinci model, from which the two latest models originate. It seems, though, that a lot of work went into that fine-tuning, as the responses from base davinci are mostly useless:
This will change, but it's not needed. GPT-3 is an ideal base model for creating a fine-tuned variant that is extremely productive for formulaic intelligence. Copilot proved this using GPT-2 in 2021 and GPT-3 in 2022.
Not possible without a fine-tuned model.
I think you’re missing something in your assertion. Useless responses are an indicator of a lack of data to support good responses, not an indicator that the model cannot perform well once fed the right data.
Hey @itoldusoandso, you can actually download the compiled version here and load it into Chrome; the process is fairly straightforward. I just geeked out a bit about the technical aspects above ;). Let me know if you find the extension useful.
@itoldusoandso yes, you are right. Airtable changed the Cancel button (from a <div> to a <button>…), which messed up where the extension inserts itself. I hope they are done with layout changes like this for a while.
I am also adding support for GPT-4; some people already have preview access, and it seems to give better responses. I will ship an update later today.
You’re dreaming. If anything, changes like this will accelerate.
Especially concerning math computations.
While GPT-3 scored only 1 out of 5 on the AP Calculus BC exam, GPT-4 scored a 4. In a simulated bar exam, GPT-4 passed with a score around the top 10% of test takers, while GPT-3.5 (the most advanced version of the GPT-3 series) was in the bottom 10%.
But this comes at a much higher cost per inference.
If processing 100k requests with an average of 1,500 prompt tokens and 500 completion tokens costs $4,000 with text-davinci-003 and $400 with gpt-3.5-turbo, then with GPT-4 it would cost $7,500 with the 8K context window and $15,000 with the 32K context window.
Not only is it more expensive, it is also more complicated to calculate. That's because prompt (input) tokens are priced differently from completion (output) tokens. The GPT-3 pricing experiment clearly demonstrated that estimating token usage is difficult, as there is very low correlation between input and output length. With the higher cost of completion tokens, the cost of using GPT-4 models will be even less predictable.
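These per-direction prices make the arithmetic worth writing down once. A small sketch using the March 2023 list prices per 1K tokens ($0.02 flat for text-davinci-003, $0.002 flat for gpt-3.5-turbo, $0.03/$0.06 prompt/completion for GPT-4 8K, and $0.06/$0.12 for GPT-4 32K):

```javascript
// Cost of a batch of requests when prompt and completion tokens are priced
// separately (rates are per 1K tokens). A flat-priced model simply passes
// the same rate for both directions.
function batchCostUSD(requests, promptTokens, completionTokens, promptPer1k, completionPer1k) {
  const promptCost = (requests * promptTokens / 1000) * promptPer1k;
  const completionCost = (requests * completionTokens / 1000) * completionPer1k;
  return promptCost + completionCost;
}

// The scenario above: 100k requests, 1,500 prompt + 500 completion tokens each.
// text-davinci-003: batchCostUSD(100000, 1500, 500, 0.02, 0.02)   -> $4,000
// gpt-3.5-turbo:    batchCostUSD(100000, 1500, 500, 0.002, 0.002) -> $400
// GPT-4 8K:         batchCostUSD(100000, 1500, 500, 0.03, 0.06)   -> $7,500
// GPT-4 32K:        batchCostUSD(100000, 1500, 500, 0.06, 0.12)   -> $15,000
```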
Are you saying that changes to the tags and IDs that Airtable uses in its HTML will continue to accelerate? Is this to discourage people from building add-on tools or scraping data? Is this for security purposes? Is this an industry-wide trend, or specific to Airtable?
Perhaps. We don’t see their metrics, but it’s a possibility. I was recently asked to scrape FI data and learned that approximately every 21 seconds the rendering style changes, for precisely this reason. The answer? AI: train a model to look for labels like entities and names, then parse with regex.
Possibly. Many companies have used this technique to throw hackers off cross-site injection probes.
Parachuting a button onto someone else’s UI can yield mixed results. I have updated it now, but it might be a bit of jQuery whack-a-mole in the future.
I only have access to the GPT-4 8K-token version, but I have also noticed longer processing times. I cannot quantify it, but the wait for responses is noticeable.
I was unaware that GPT-4 would have two different cost components. When they dropped the price of gpt-3.5-turbo to 10% of text-davinci-003, my guess was that they had initially started with “value pricing” and had plenty of margin to drop the price as volume grew. Maybe that is not the case; maybe they set the gpt-3.5-turbo price too low?
Reverse-engineering and then hacking the client side for personal, non-malicious reasons, such as improving the UI/UX, is fine in my opinion. It was a bit of a favorite sport of mine when I was a student and then a beginner in professional life, and it is largely how I learned frontend development back then.
For the reasons mentioned in Kuovonne’s questions and Bill’s answers, I later felt it had become a sport with unpredictable results, which is quite annoying when you want to improve the UX through the UI, and especially very short-lived whenever I did get something working. So I left those approaches behind a while ago.
Bill has taught us a lot in the area of APIs; that is obviously what pays dividends when you’re developing on your own, but it is limited to what the API offers. The case that you have heroically handled and then presented by opening this thread is an example: I don’t think you could have managed it by building a frontend on top of the current API that Airtable gives us access to. Moreover, that frontend would not have been integrated into the Airtable UI, so there would be little point.
I experience the same problem in the most closed content-creation apps on the market, like avid.com: everything is done to prevent you from accessing or manipulating Avid objects. Is your Bin edit corrupted? Don’t ever hope to blow the responsible Dissolve out of your Timeline to get 99.99% of your edit back in the UI. I used to do this in Lightworks, but that was in the early years of the web in Belgium…
I think we have to give up on AF (Augmented Frontend), except maybe for the pleasure of competition during a sleepover with non-malicious goals.