Script duration

Hello all:

I have recently completed my first Airtable script. It compares two tables (one with ~3000 records and the second with ~2000 records). If a record in both tables matches, we update a field in the first table.

Everything is working fine, and it does what is required, so the following question is to help me improve any future scripts.

Is an execution time of ~6 seconds normal? The ‘found array’ has ~500 records.
About 80% of the execution time is consumed by writing the update back to Airtable in blocks of 50.
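For readers following along, the "blocks of 50" pattern comes from Airtable's limit of 50 records per `updateRecordsAsync` call. A minimal sketch of the slicing half of that pattern (the `chunk` helper and variable names are mine, not from the poster's script):

```javascript
// Airtable's updateRecordsAsync accepts at most 50 records per call,
// so a larger update list has to be split into batches first.
const BATCH_SIZE = 50;

// Split an array into slices of at most `size` elements.
function chunk(records, size) {
  const batches = [];
  for (let i = 0; i < records.length; i += size) {
    batches.push(records.slice(i, i + size));
  }
  return batches;
}

// In an actual script you would then do something like:
//   for (const batch of chunk(updatesToMake, BATCH_SIZE)) {
//     await table.updateRecordsAsync(batch);
//   }
// where each awaited call is what accounts for most of the runtime.
```

With ~500 matched records this yields 10 batches, which lines up with the ~1 second per batch figure discussed below.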

Is there anything one can do to improve this execution time? I changed the update field from single-line text to a checkbox and the improvement was negligible so I’m assuming the datatype isn’t the issue.

Yes, it is normal for each batch of 50 records to take about a second to process. The actual time can vary depending on how busy Airtable servers are, and the complexity of any calculated fields in the table.

As for improving execution time, it is hard to make suggestions without knowing more about the code. Are you updating only the records that actually need a data change, or all matching records? Are you reading only the fields you really need? Are you using nested loops when looking for matches?

Hi @Kuovonne - thank you for the quick and thorough reply. It is appreciated.

The one second per 50 records is good to know.

No, there aren’t any nested loops and I have filtered out any records that don’t need to be updated.

On the most recent run, the execution time to establish the matching records was 1336ms, and the overall process, including updates, was 6301ms … assuming I am logging these times at the appropriate part of the code. I’m happy with this as a first effort but I will continue to tinker.


As @Kuovonne mentioned, the nature of the code will often dictate the performance. One must ask - what is the definition of a match?

Hi Bill @bfrench,

In this case, there is a short string (e.g. xyx/20c23a) in each of the two fields, and the strings need to be equal to qualify as a match.

I managed to reduce the initial filter down to ~200 ms (I was unnecessarily grabbing every field in both tables), but the update still takes 4 or 5 seconds based on ~500 matching records. The update is simply toggling a checkbox to true.

I read your excellent post on hash indices on that other forum and will endeavour to learn that method when I have larger tables to contend with.

I’m not unhappy with the 5 or 6 seconds it’s taking assuming that this is the time it takes for 10 or 12 slices to update.

Yep - 5x faster; this is a good change.

Right. Agree. I’ve often felt that Airtable should support background updates with a very fast ability to queue up thousands in milliseconds. Imagine encountering a case where 25,000 records need to be updated: at 100 records per second, that’s 250 seconds, which is bad. You can engineer a lazy update process by creating a queue in a table that contains embedded JSON lists of the updates in a long text field, plus an automation that churns through the list. This just seems nutty, though.
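A rough sketch of the packing step for such a queue: batch the pending updates into JSON strings small enough to fit a long text field. The field-size limit and the update shape here are assumptions for illustration, not something from a real script:

```javascript
// Pack pending updates into JSON payload strings that each stay under
// a size cap, so each payload can be stored in one long-text queue cell.
// The cap is an assumed safety margin, not an official Airtable limit.
const MAX_PAYLOAD_CHARS = 90000;

function packQueuePayloads(updates) {
  const payloads = [];
  let current = [];
  for (const u of updates) {
    // Would adding this update push the serialized batch over the cap?
    const candidate = JSON.stringify([...current, u]);
    if (current.length > 0 && candidate.length > MAX_PAYLOAD_CHARS) {
      payloads.push(JSON.stringify(current)); // flush the full batch
      current = [u];                          // start a new one
    } else {
      current.push(u);
    }
  }
  if (current.length > 0) payloads.push(JSON.stringify(current));
  return payloads;
}
```

An automation could then read one payload at a time, `JSON.parse` it, and apply the updates in the background.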

You can send up to 15 batches per second. With 50 records per batch, that’s 750 records per second. Just don’t await the result. However, the actual updates still take about the same amount of time, and it can bog down the rest of the base when it gets slammed with thousands of updates in a few seconds.
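One way to fire batches concurrently while still keeping some visibility into failures (addressing the error-handling concern raised below) is `Promise.allSettled`. A sketch, where `mockUpdate` is a stand-in for `table.updateRecordsAsync`:

```javascript
// Stand-in for table.updateRecordsAsync; in a real script this would
// be the actual Airtable call.
async function mockUpdate(batch) {
  return batch.length; // pretend the server accepted the batch
}

// Fire all batches at once instead of awaiting each in turn.
// Promise.allSettled resolves once every batch has either fulfilled
// or rejected, so failed batches can be collected for retry.
async function updateAllConcurrently(batches, update) {
  const results = await Promise.allSettled(batches.map(b => update(b)));
  return results
    .map((r, i) => ({ result: r, batch: batches[i] }))
    .filter(x => x.result.status === "rejected");
}
```

This is faster than sequential awaits, but as noted, it still hammers the base with all the updates at once.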

How are you identifying matches with neither a hash nor nested loops? Those are the most common methods I know of. I’m interested to learn more techniques.

But, then, you won’t have any mechanism to know if there were errors and gather retry analytics, right?

Yeah. Not saying that leaving out the await should be done. But it can be done if you are really squeezed for time.

As it turns out, I did use a hash. After querying both tables, I built a Set from table2 and filtered table1’s array of records, dropping any that didn’t match or were already set to true in the update field.
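In outline, that Set-based filter might look like the following. The `key` and `matched` property names are made up for illustration; in the real script these would come from the relevant Airtable fields:

```javascript
// Build a Set of key strings from table2, then keep only the table1
// records whose key appears in the Set and whose checkbox isn't
// already true. Set.has() is O(1), so this avoids a nested loop.
function findRecordsToUpdate(table1Records, table2Keys) {
  const keySet = new Set(table2Keys);
  return table1Records.filter(r => keySet.has(r.key) && !r.matched);
}
```

Because membership checks against a Set are constant time, this scales roughly linearly with table size rather than with the product of the two tables.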

In my reply to Bill, I meant properly learn and put into regular practice. Sorry for the confusion.