As you may know, Airtable webhook endpoints are like defenceless little koala bears alone in the wild. Anything can POST to them; no authentication is required. Their only security is obscurity: no one knows the endpoint address. Any nefarious actor who gains access to your Airtable base could discover these endpoints easily. By and large, the risk is minimal, but it is not zero.
CSOs and other compliance folks will quickly say that security through obscurity is a foolish idea. With that comes the question: how do you secure Airtable webhook listeners? There are only two ways.
Option 1: Route all requests through a proxy that instruments the payload with an encrypted value while also ensuring the POST request comes from an expected source. Then, in your webhook listener script, decrypt that value and compare the two; reject any posts that do not match.
Option 2: If you control the shape of the webhook payload, instrument it with a signature of some kind and ensure that your webhook listener processes only payloads carrying that unique signature.
These two approaches are about the best security you can achieve until Airtable realizes that the security governing access to bases should also defend webhooks.
A third approach is to test the shape of the data, and even its ranges, so strictly that no nefarious payload would match the expectations you set. This works reasonably well, but it requires additional scripting and logic, which may complicate the process or add some latency.
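A sketch of what that shape-and-range testing might look like; the field names, the regex, and the bounds are all illustrative assumptions about one hypothetical payload:

```javascript
// Reject anything that does not match the exact shape, types, and ranges
// we expect from a legitimate sender.
function looksLegitimate(payload) {
  if (payload === null || typeof payload !== "object") return false;

  // Shape: exactly the fields we expect, no extras a probe might add
  const allowed = new Set(["orderId", "quantity", "status"]);
  if (!Object.keys(payload).every((key) => allowed.has(key))) return false;

  // Types and ranges (illustrative bounds)
  if (typeof payload.orderId !== "string") return false;
  if (!/^ord_[0-9a-f]{8}$/.test(payload.orderId)) return false;
  if (!Number.isInteger(payload.quantity)) return false;
  if (payload.quantity < 1 || payload.quantity > 500) return false;
  if (!["pending", "shipped", "cancelled"].includes(payload.status)) return false;

  return true;
}
```

This is the "additional script and logic" cost: every expectation is another line to write and maintain, and every incoming post pays the validation overhead.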
The same issue applies to webhooks in integration services such as Zapier and Make.
Some of my Airtable scripts call Make webhooks to perform tasks that only a few people are authorized to do. However, anyone with access to the Airtable base (even read-only access) can click the run-script button. So I configured the script to ask the user for a passcode that is stored in the company's password manager, and the Make scenario checks whether the passcode is correct. If it is not, the scenario reports back to the script that the passcode was wrong and the task was not performed. I think this is a variation of option 2.
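A sketch of that flow, assuming the Airtable scripting environment, where `input.textAsync()` collects the passcode and `fetch()` posts it to the Make webhook. The webhook URL, field names, and response shape are illustrative assumptions; the two helpers isolate the logic that does not depend on Airtable or Make being present.

```javascript
// Build the POST request the script sends to the Make webhook,
// carrying the user-supplied passcode for the scenario to verify.
function buildWebhookRequest(passcode, recordId) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ passcode, recordId }),
  };
}

// Interpret Make's reply: the scenario echoes back whether the passcode
// matched and the task was performed (assumed response shape: { ok: boolean })
function describeResult(reply) {
  return reply.ok ? "Task performed." : "Wrong passcode - task not performed.";
}

// In the Airtable script itself (not runnable outside Airtable):
// const passcode = await input.textAsync("Enter the passcode");
// const resp = await fetch(MAKE_WEBHOOK_URL, buildWebhookRequest(passcode, record.id));
// output.text(describeResult(await resp.json()));
```

Note that the passcode itself never lives in the script or the base; only the Make scenario knows the correct value, which is what keeps read-only users from running the task.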
Could this be made even more secure? Probably. Do we need to? I don’t think so, but @bfrench might correct me.
This is a fine addition for better security. In this regard, nothing is ever enough, because attackers are typically not humans; bots can run probes at rates that eventually cause a breach. Your approach is humanistic - it keeps honest people honest. That is especially helpful when [mostly] honest, trusted members need to be stopped from doing stupid things. However, the use of glue factories tends to create more attack surfaces, so the CSOs will become increasingly nervous as the number of penetration surfaces rises.