GPT Plugins: Existential No-Code Threat?

Bold headline, right? An existential threat to what exactly? Well, um, everything compute-related. But let’s narrow this missive down to Airtable itself (for example).

Unless you live in a cave, you’re aware of ChatGPT. It’s in the process of roto-tilling pretty much everything compute-related. While not generally seen as a threat to much yet, we all see the writing on the wall: the computing world will change profoundly with LLMs, given the rate of advancement toward AGI demonstrated in recent weeks and months.

Yesterday, OpenAI opened the kimono to reveal yet another massive game changer: GPT Plugins.

Let that sink in.

While all aspects of this new GPT capability are significant, one flies in the stratosphere above all the others - the ability to run computations. While everyone is drawn toward “access up-to-date information” in a mesmerizing mental trance, like a Star Wars tractor beam, the most stunning part of the announcement will be vastly overlooked.

We crave the day when GPT can see the Interwebs in real time. With yesterday’s announcement, GPT plugins make this happen in an instant. But as cool as that is, it will ultimately be seen as a distraction from the tectonic plates that are about to collide.

Run Computations

What will the future be like when you can ask a system to build a process (as code), provide it with some data, and then run that process against it?

As demonstrated here, OpenAI has revealed:

  • ChatGPT is a full compute stack.
  • ChatGPT has CPUs and GPUs available for everyone.
  • ChatGPT has a file system.
  • ChatGPT can transform your words into processes and execute them.

Let that sink in a little deeper.

Imagine Zapier comes along and says to OpenAI -

Hey, with that new plugin architecture, what if we made it possible to use our five thousand connectors in natural conversations to get data, documents, and other information that the LLM can use to answer questions, write reports, and otherwise serve as the dynamic gateway to everything the LLM wasn’t trained on?

This hypothetical is not fiction; it’s a done deal.
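To make that concrete, here’s a minimal sketch - not Zapier’s actual API, just an illustration - of the kind of REST surface a connector-style plugin exposes. In the architecture OpenAI announced, an endpoint like this would be described by an OpenAPI spec plus an ai-plugin.json manifest, and ChatGPT decides on its own when to call it. The framework choice (FastAPI), route name, and sample data below are all assumptions for illustration.

```python
# Hypothetical sketch of a connector-style plugin backend.
# A real plugin also ships an ai-plugin.json manifest and an OpenAPI spec
# that tell the model when and how to call these endpoints.
from fastapi import FastAPI, Query

app = FastAPI(title="Hypothetical Connector Plugin")

# Stand-in for the thousands of SaaS sources a Zapier-like service connects to.
FAKE_SOURCES = {
    "crm": [{"id": 1, "note": "Renewal call with Acme scheduled"}],
    "helpdesk": [{"id": 7, "note": "Ticket resolved: login issue"}],
}

@app.get("/search")
def search(source: str = Query(...), q: str = Query("")):
    """Return records from a connected source; the model decides when to call this."""
    records = FAKE_SOURCES.get(source, [])
    return {
        "source": source,
        "results": [r for r in records if q.lower() in r["note"].lower()],
    }
```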

Airtable Threat

And then Zapier says -

BTW, what if we could also provide the database backend for all manner of natural-language requests to store or transform data?

This part is hypothetical but a likely reality in the near future. I warned here not long ago that Zapier is coming for more than your adhesive dollars; it wants your data.

Suppose you can interface with any datastore, retrieve data, perform computations on that data, and build the computational aspects of your apps with natural prompts in any language. Who (or what) stands to be disrupted?

For starters - everything related to integration. GPT can understand how to POST or GET data to and from any endpoint based purely on API documentation. It has this capacity today. GPT plugins take this innate integration ability to another level - they can execute data interchanges, build log files, track performance, and report exceptions.
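As a rough illustration of what “execute data interchanges, build log files, track performance, and report exceptions” might look like under the hood, here is a hedged Python sketch. The endpoint URL and payload are hypothetical; the point is only that this kind of plumbing is mechanical enough for a plugin-driven flow to generate and run from API documentation.

```python
# Hypothetical sketch: a data interchange with logging, timing, and exception reporting.
# The endpoint and payload are made up for illustration.
import logging
import time

import requests

logging.basicConfig(filename="interchange.log", level=logging.INFO)
log = logging.getLogger("interchange")


def exchange(url: str, payload: dict) -> dict:
    """POST a payload to an endpoint, log how long it took, and report failures."""
    start = time.perf_counter()
    try:
        resp = requests.post(url, json=payload, timeout=30)
        resp.raise_for_status()
        log.info("POST %s ok in %.2fs", url, time.perf_counter() - start)
        return resp.json()
    except requests.RequestException as exc:
        log.error("POST %s failed: %s", url, exc)
        raise


# exchange("https://api.example.com/orders", {"sku": "A-100", "qty": 3})
```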

Let that sink in.

On the Upside

GPT plugins make it possible to ask ChatGPT for information stored in Airtable. The mechanism for building features that interface directly with Airtable is relatively simple. On this vector alone, we can anticipate massive integration opportunities. It will soon be possible to make queries like this:

Collect all records from my Sales table for March 2023 and give me a synopsis of the most significant sales in Montana; analyze those sales and produce a summary of trends by product type.
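Behind a prompt like that, a plugin would reduce to a couple of ordinary Airtable REST calls plus a summarization pass by the model. Here is a rough sketch of the retrieval half; the base ID, table name, field names, and token handling are placeholders, not a finished integration.

```python
# Hypothetical sketch: pull March 2023 rows from a "Sales" table via the
# Airtable REST API, then hand them to the model for summarization.
import os

import requests

BASE_ID = "appXXXXXXXXXXXXXX"   # placeholder base ID
TABLE = "Sales"                 # placeholder table name
API_URL = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}"
HEADERS = {"Authorization": f"Bearer {os.environ['AIRTABLE_TOKEN']}"}

params = {
    # Assumes a {Date} field; keeps only March 2023 rows.
    "filterByFormula": "AND(IS_AFTER({Date}, '2023-02-28'), IS_BEFORE({Date}, '2023-04-01'))",
    "pageSize": 100,
}

rows = requests.get(API_URL, headers=HEADERS, params=params, timeout=30).json().get("records", [])
montana = [r["fields"] for r in rows if r["fields"].get("State") == "Montana"]
# These rows would then go back to the model to summarize trends by product type.
```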

When Airtable users see what GPT plugins can do, they will demand it. They will expect it.

Airtable could easily provide GPT plugins. However, it’s the Airtable aftermarket companies, consultants, and users themselves who will likely meet this vastly underserved market that is about to explode.


How are people building the capability to upload files of any sort to GPT? Is that the point of the Twitter thread? The plug-ins? It’s just a language model.

People aren’t building the ability to upload files - GPT is about to offer it pervasively to all users. But let me add a finer point. In the Twitter example, the author (a) has access to this new feature in beta, (b) used it to upload a video and hand it to a Python script that GPT wrote for him, and (c) used that script and the uploaded file to perform video editing. All without writing code, and all within the context of ChatGPT itself.

He demonstrated how an “app” he built (in this case, a video-editing process) could exist inside ChatGPT, be generated on the fly, and be executed in the context of the conversation. It shows that ChatGPT is not just a language model but something much bigger.
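For a sense of scale, the generated “app” in that thread is probably not much more than a short script of this shape. This is a hypothetical reconstruction, not the author’s actual code; the file names, cut points, and the choice of the moviepy library are assumptions.

```python
# Hypothetical sketch of the kind of script GPT might generate for that workflow:
# trim an uploaded clip and write the result, with the user never writing code.
from moviepy.editor import VideoFileClip

clip = VideoFileClip("uploaded_video.mp4")   # the file the user uploaded
edited = clip.subclip(10, 40)                # keep seconds 10 through 40
edited.write_videofile("edited_video.mp4", audio_codec="aac")
clip.close()
```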

The point of the Twitter thread is to show us all that what we believe ChatGPT to be is not the complete story.

Plugins make it simple for “app” creators to publish functionality into the LLM, much the way developers currently use the App Store to publish apps into the iOS ecosystem. If you thought the $200B App Store economy was large, the LLM app-store economy will tower above it - likely $1T in size - and that tweet was the first time many of us realized this was possible.

OpenAI has been working on this for a few years. They knew that if the model could write pretty good code, it could also execute that code. But even lacking LLM-based code generation, they knew developers would want to offer AI experiences where users could ask Airtable (for example) to find all records related to a subject, or search and replace all instances of a text pattern, or summarize all the comments collected in a table.
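The “search and replace a text pattern” case, for instance, reduces to a read-modify-write loop against the Airtable API. A rough sketch follows; the base ID, table, field, and pattern are placeholders.

```python
# Hypothetical sketch: search and replace a text pattern across Airtable records.
# Airtable accepts at most 10 records per update request, hence the batching.
import os

import requests

BASE_ID, TABLE, FIELD = "appXXXXXXXXXXXXXX", "Comments", "Notes"   # placeholders
URL = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}"
HEADERS = {"Authorization": f"Bearer {os.environ['AIRTABLE_TOKEN']}"}

records = requests.get(URL, headers=HEADERS, timeout=30).json().get("records", [])
updates = [
    {"id": r["id"], "fields": {FIELD: r["fields"][FIELD].replace("ACME", "Acme Corp")}}
    for r in records
    if "ACME" in r["fields"].get(FIELD, "")
]
for i in range(0, len(updates), 10):
    requests.patch(URL, headers=HEADERS, json={"records": updates[i : i + 10]}, timeout=30)
```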

UPDATE: WebGPU + WebLLM Browser Plugins - what could possibly happen when a sharp developer realizes a database could be built and executed without any backend service?

As it turns out, GPT Plugins are not the only threat to databases as we currently know them.

WebGPU: Sleeper, But a Big Deal

WebGPU enables high-performance 3D graphics and data-parallel computation on the web.

GPUs have been around for a long time. Browsers haven’t cared. Now they do. They can access GPUs for faster processing, especially in mathematics, data science, and AI if — and only if — your hardware has GPUs.

Google Chrome has shipped WebGPU in the Version 113 Beta. WebGPU is a programming interface that sits on top of the low-level native GPU APIs (such as Vulkan, Metal, and Direct3D 12) and allows developers to write GPU code that can run on most smartphones and computers with a web browser. One significant improvement is the inclusion of ‘compute shaders’, which permit writing programs that process data and transform it into something new. WebGPU is intended to execute math functions quickly, making it capable of a wide range of tasks, including running inference on machine learning models or performing matrix multiplications on data frames. It will eventually be available in other web browsers.

Suppose your business is about data, data science, performing computations across big spreadsheets, or dealing with many numbers or AI processes locally. If that’s the case, it may be time to re-evaluate the hardware your team uses.

As software apps advance, so do the underlying hardware requirements. Imagine giving the best F1 driver a Mini Cooper. He will win the race against all the other Mini Coopers but will fail miserably against actual F1 cars.

Ben Schmidt walks through the benefits nicely.

How soon will it be before GPT4All runs in a browser? It turns out - it’s already here.

Web LLM

This project brings language-model chat directly into web browsers. Everything runs inside the browser with no server support, accelerated with WebGPU. It will soon be possible to build AI assistants for everyone, with privacy preserved, while enjoying GPU acceleration.

Check out the demo webpage to try it out.

The architecture is surprisingly straightforward, with Python at its core.

As I envisioned a while back, AI copilots would emerge everywhere and for every imaginable process and domain of work. When browsers can independently offer AI, suddenly browser plugins will get an extended life.
