Calling NOW() multiple times in the same formula

Based on your comments on the other thread, @bfrench, it seems you are using a pause of 400ms to guess when a user is done typing, and then using AI to verify that the user is done typing.

If any payload list for any given event collection is idle for more than 400ms (Doherty’s threshold), we assume the entry is complete.

If a user pauses in typing, even for half a second, your system thinks data entry is done, pending verification by AI.

Or am I missing something?

No, that’s the general approach. There are some other things that introduce variability, such as the API back-off algorithm. The event API, after all, has the same rate limits as the data API.

And there is an indicator that the server has a hunch there are more events queued for the stream. If that is true on the latest payload list, the 400ms window is extended.
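For readers following along, the idle-window idea can be sketched in a few lines. The 400ms figure and the "server hints more is coming" extension come from this thread; the function and field names below are illustrative, not the actual implementation:

```javascript
// Sketch of the idle-window heuristic (names are illustrative).
// Each payload batch carries a receipt time and the server's hint that
// more events may be queued; entry is considered complete only after a
// quiet period with no such hint.
const IDLE_MS = 400; // Doherty's threshold, per the discussion above

function isEntryComplete(batches, nowMs) {
  if (batches.length === 0) return false;
  const latest = batches[batches.length - 1];
  // If the server hints that more events are queued, extend the window.
  if (latest.mightHaveMore) return false;
  return nowMs - latest.receivedAtMs >= IDLE_MS;
}
```

The key design point is that the quiet period is measured against the latest batch only, and the server hint always overrides the timer.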

But as I mentioned, even if it encounters an event stream that is interrupted, it’s still able to recover and push through the completed values.

The delay logic is more feasible in the event API than in the Airtable client because you always have a stream of events available to decide when to accept a change.

Airtable could improve this event system significantly by adding events such as field focus gained and lost, but this is not easy to do at the server.

Cool, so as we previously figured out, Airtable has no native way of knowing when a user wants to truly commit his or her data (i.e. when they have finally stopped typing).

I suppose that people who really need this functionality to be natively built into their database software will need to turn to a more advanced — but also more complicated — database tool like FileMaker.


LOL. Yes. Or build something like this, a solution that almost everyone, especially in the no-code realm, wants to avoid. Enterprises are more inclined to do this, but even so, it’s a very narrow audience who will go to this trouble to incrementally improve what the Airtable client automation is completely unable to achieve.

Users want optimistic saves and no-code solutions in the worst way, and they got them. Now, they want features that cannot be provided, given those constraints. The irony of all this code for a no-code platform continues to expand. :wink:

To tidy this up, I’ve had a difficult time getting Airtable community users to engage here in Table Forums, so I’ll finish this topic by addressing the inquirer’s follow-up comments here.

The expectation is not unreasonable. You can specify which table, or even which field update, should send a notification. It is not unimaginable that you would be able to specify which specific interaction (input in progress, completed input) should send a notification as well. But as far as I understand, that is not currently possible.

Whether it’s reasonable or not is a great debate, but not without recognizing the technical challenges. The big one is optimistic saves. Is it reasonable to force a button press to save every record change? If not, then expecting the platform to know when entry is complete is itself an unreasonable expectation.

Information that would let us identify what interaction happened with the field is also not present in the payloads.

As a collection, the payload list does provide the data necessary to determine with high probability what occurred in the client app. I think the inquirer is not looking at the event notifications as a signal to fetch the latest stream of user consciousness. Supporting my hunch, this passage seems to suggest this is the case.

A workaround would be to check the payloads every x time …

This is not a workaround; it is the documented approach, except for one thing: the polling part. You don’t need to poll every so many seconds, or at any interval for that matter. That entirely defeats the benefit of an event-driven API. You use event broadcast signals to request the latest and most complete stream of events, which are provided as a collection for a given event type/user/field. It is your job to interpret this stream of activity however your use case demands.
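The signal-then-fetch loop can be sketched generically. This assumes an Airtable-style payload page shape (`payloads`, `cursor`, `mightHaveMore`); the listing function is injected rather than hard-coded to any endpoint, and the names are mine, not the API’s:

```javascript
// Sketch: on a webhook "ping", drain the full payload stream rather than
// polling on a timer. listPayloads(cursor) is any function returning a page
// shaped like { payloads, cursor, mightHaveMore } (an assumed shape).
async function drainPayloads(listPayloads, cursor = 1) {
  const all = [];
  let mightHaveMore = true;
  while (mightHaveMore) {
    const page = await listPayloads(cursor);
    all.push(...page.payloads);   // accumulate this page's events
    cursor = page.cursor;         // resume point for the next request
    mightHaveMore = page.mightHaveMore;
  }
  // Interpreting this stream is the consumer's job, per the discussion.
  return { payloads: all, cursor };
}
```

Because the loop runs only when a broadcast arrives, there is no idle polling; the cursor is what lets an interrupted consumer recover and push through completed values later.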

… but again that defeats the purpose of using webhooks.

This user has assumed that all webhook systems are designed the same. Many, not all, webhook architectures send a hook notification that includes the field data before and after the change, and such hooks are broadcast when - and only when - the platform has deemed a change has completed.

This more “helpful” design choice has some drawbacks, such as removing the ability to decide for yourself when hooks should be acted upon and under what circumstances. A “helpful” design is useful to many low-code developers because it is less work, less complexity, etc. Airtable has not provided this “helper” attitude to its Enterprise Events API, and I can understand why - they chose an agile webhook architecture that disfavors the low-code developer but probably favors the enterprise developer.

This is a key concept worthy of a little deeper dive.

With the current client-side automations, time delays are complicated to apply, as everyone has experienced. And really - what is the correct delay factor? Is there one?

… when they have finally stopped typing

It’s conceivable the answer is “never” in certain use cases. Specific data is often in a state of flux all the time. But, one could argue that Airtable could take steps to add a server-side delay that’s configured at the client. This would help no-coders overcome this challenge, eh?

WRONG

Imagine a delay of 60 seconds from the moment a field is edited, and imagine the heuristic also couples that with a loss of focus on the field and editing activities on all fields in the record. And right about the time your automation kicks in to send an email based on this presumably fully saved data, a different user decides to change the data because the customer name was flawed. Too late, the errant email was dispatched with a partially modified field. Oops!

In case Julian (or anyone else) has been following this thread, and given the additional information Bill has provided about how Enterprise webhooks work, I still believe the following:

Possible reasonable choices for most Airtable developers:

  • Have users manually indicate when they are done typing with something that is not subject to confusion about when data entry is done, such as a checkbox, a single select, or pushing an interface button. (This requires user training and trust that users will follow that training.)

  • Restrict data entry to forms, possibly by using an interface.

  • Stick with the “when field has a value” trigger, but introduce a pause in the automation via scripting, followed by a fresh read of the value.

  • Stick with the “when field has a value” trigger, but do not include the actual value in your notification. Instead, have the notification say that there is an update, but direct the user back to Airtable for the actual message. (I forgot to mention this one above.)

  • Some combination of the above.
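The scripted-pause option in the list above can be sketched generically. Everything here is hypothetical scaffolding: `readValue` stands in for however your script reads the field, and `sleep` for however your environment pauses:

```javascript
// Sketch of "pause in the automation via scripting, then a fresh read".
// readValue() and sleep() are injected placeholders, not Airtable APIs.
async function readAfterPause(readValue, pauseMs, sleep) {
  const before = await readValue();
  await sleep(pauseMs);           // give the typist time to finish
  const after = await readValue();
  // Only treat the value as settled if it did not change during the pause.
  return { settled: before === after, value: after };
}
```

If `settled` is false, the script can pause again or bail out, rather than sending a notification built from a half-typed value.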

Things to NOT do:

  • Rely on NOW() for timing when a user is done typing
  • Trigger on “when updated” for fields that expect typed input

Can you please explain what you mean by “client-side automations”? My understanding is that Airtable automations occur server-side.

This is why I prefer systems that do not rely on a delay to determine when a user is done typing.

Another issue is that Bill’s system is designed to figure out when a user is done typing within half a second of the actual end point. However, in other use cases, a much longer delay is acceptable. In Julian’s case, a delay of five minutes was acceptable.

There are obviously multiple possible ways of tackling the issue. And the method in this thread that gets the fastest, most accurate result is also the method that is the most difficult to build and maintain. And sometimes having something that is easier to build and maintain is more important.


That’s the baseline. Did you see I circled other heuristics present in the stream of consciousness?

Yeah, this is semantically complex. In Airtable’s vernacular (Enterprise API) they refer to the “client” as the service interpreting the events - it is a “client” requesting event stream payloads. So, it’s really a server-side process but it is acting like a client to yet another server-side intelligence.

In almost every case, it is most important. :wink: But that’s not what Airtable gave us.

Not unreasonable in some cases. Imagine, though, that ten different users are banging on this data, and at least one user manages to touch one field dependency every 4.5 minutes. Now you have an automation that may be triggered many hours later.

Yup. The mightHaveMore key is really useful information that is not available in regular Automations. It lets you ignore triggering values when there might be more later values. But I don’t see how that changes the fact that you wait 400ms after there are no more data changes.

Ah. Thanks for the clarification. I was picturing the person doing the typing as the “client” in terms of native Airtable automations. I don’t think of the Enterprise Webhook Notification system as “automations”.

Seems like you specialize in use cases where it is not possible to build something that is simple and easy to maintain.

So you have to understand the use case better.

I believe that Julian was originally trying to trigger the automation 5 minutes after the last modified time of the field. In this case, yes, the actual notification could be long delayed.

One of my proposed alternatives was to wait 5 minutes after data entry starts. In this case, there would be only one notification after the first change, and no notifications for subsequent changes (unless someone cleared the field).
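That "wait N minutes after data entry starts" alternative is a leading-edge timer rather than a debounce. A minimal sketch, with the scheduler injected so the names and wiring are illustrative:

```javascript
// Sketch of "notify once, N ms after the *first* change":
// the first change arms a one-shot timer; later changes are ignored
// until the timer fires, so there is exactly one notification per burst.
function makeLeadingEdgeNotifier(delayMs, notify, schedule = setTimeout) {
  let armed = false;
  return function onChange(record) {
    if (armed) return;            // subsequent edits do not reschedule
    armed = true;
    schedule(() => {
      armed = false;
      notify(record);             // fires once, delayMs after the first edit
    }, delayMs);
  };
}
```

Compare this with the 400ms debounce earlier in the thread, which resets its clock on every edit; here the clock starts once and never resets, which is why the notification cannot be pushed hours into the future by a trickle of edits.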

Of course, it is possible to build more complicated systems that take into account things like when the editor changes. But that gets really complicated.

There are lots of edge cases. At what point do you draw the line and say which edge cases to include in your system and which just aren’t worth the effort?

It’s a predicate. We reset the 400ms clock if Airtable believes there might be more coming.

To me, this logic is obvious.

  • Lacking any other data to suggest the user may still be typing, the elapsed time between detected keystroke events is all we have to make an inference.
  • Given more data (such as mightHaveMore provides), we can briefly suspend our zest to assume completion based solely on a quiet period in the event stream.

There are many aspects of my approach, and not all of them are explained. Another heuristic is the terminal point of the payload stream. I’ve observed that the stream typically includes events as long as the user has the field in focus. If you make two consecutive requests for the stream payloads and neither contains data changes to the field, but the final event has mightHaveMore set to false, you have much more to work with than a quiet time interval alone.
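That stronger termination heuristic can be sketched as a predicate over the last two fetches. The batch/event shape here (`events`, `changedFieldIds`, a batch-level `mightHaveMore`) is an assumption for illustration, not the actual payload schema:

```javascript
// Sketch of the "two consecutive quiet fetches" heuristic: entry looks
// complete only when neither of the last two payload batches touched the
// field AND the final batch carries no "more events queued" hint.
// Batch shape is assumed: { events: [{ changedFieldIds: [...] }], mightHaveMore }.
function entryLooksComplete(prevBatch, currBatch, fieldId) {
  const touches = (batch) =>
    batch.events.some((e) => e.changedFieldIds.includes(fieldId));
  return (
    !touches(prevBatch) &&
    !touches(currBatch) &&
    currBatch.mightHaveMore === false
  );
}
```

This combines three independent signals (two quiet fetches plus the server hint), which is considerably stronger evidence of completion than any one quiet interval.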