The AI Shoehorn

The primary tool for AI implementers [thus far] has been the shoehorn.

Let me explain.

The vast majority of projects involving LLMs attempt to use inference in ways that often produce unpredictable outcomes. Much energy is spent adding guardrails to control and bend LLMs to the will of new application visions. The more we are forced to pressure LLM inference into conforming to our hopes, the less likely these apps are to succeed. They run contrary to the inherent qualities of AGI.

Many developers scramble to package soup-to-nuts processes that replace (x) jobs or compress (y) time. It's a noble quest, however misdirected some of them may be, but it's like pushing a rope.

Deliver Intentions

This is probably the smartest paragraph written about AGI so far:

The startups that come out on top of the AI hype wave will be those that understand generative AI’s place in the world: not just catnip for venture capitalists and early adopters, not a cheap full-service replacement for human writers and artists, and certainly not a shortcut to mission-critical code, but something even more interesting: an adaptive interface between chaotic real-world problems and secure, well-architected technical solutions. AI may not truly understand us, but it can deliver our intentions to an API with reasonable accuracy and describe the results in a way we understand. — Isaac Lyman

The final sentence of this quote applies to databases and Airtable in a profound sense.

… deliver our intentions to an API with reasonable accuracy and describe the results in a way we understand.

If we examine the nature of the CyberLandr FAQ prototype, the underlying technical solution comprises content designed to address questions and educate customers. As simple as it may seem, the FAQ content base is “well-architected”; it’s been carefully vetted and designed to create a knowledge base of controlled understanding. In this use case, words are both “code” and the API.

In contrast, CyberLandr customers live in a chaotic realm; they have many questions and want to ask them straight away. They don't want to wade through 75 items searching for ones that approximately match their own. They'd much prefer to ask in their own words.

They want to go from question to answer in a straight line.

It’s faster, more natural, and it has other benefits such as establishing their own context for deeper conversational inquiries.

AGI makes this possible, and it is perfectly suited for this task.
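
To make the pattern concrete, here is a minimal sketch (not the actual CyberLandr implementation) of how a vetted FAQ can answer a free-form question: embed the curated entries once, find the entries closest to the customer's own words, and let the model phrase the answer from only that content. The sample FAQ entries, model names, and prompt wording are illustrative assumptions.

```python
# Minimal sketch: answer a free-form customer question from a vetted FAQ.
# FAQ entries, model names, and prompts are illustrative assumptions,
# not the actual CyberLandr implementation.
from openai import OpenAI

client = OpenAI()

faq = [
    {"q": "What vehicles does CyberLandr fit?",
     "a": "CyberLandr is designed to stow entirely within the Cybertruck bed."},
    {"q": "Does CyberLandr have running water?",
     "a": "Yes, it includes a sink fed by an onboard water system."},
    # ... the remaining vetted question/answer pairs (roughly 75 in the prototype)
]

def embed(texts):
    """Embed a list of strings with an embedding model (model name assumed)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

faq_vectors = embed([item["q"] + " " + item["a"] for item in faq])

def answer(question, k=3):
    """Find the k closest FAQ entries, then answer strictly from that content."""
    qv = embed([question])[0]
    ranked = sorted(zip(faq, faq_vectors), key=lambda p: cosine(qv, p[1]), reverse=True)[:k]
    context = "\n\n".join(f"Q: {item['q']}\nA: {item['a']}" for item, _ in ranked)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the FAQ content provided. If it isn't covered, say so."},
            {"role": "user", "content": f"FAQ:\n{context}\n\nCustomer question: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("Can I wash dishes inside it?"))
```

The vetted content stays authoritative; the model's only job is to carry the customer's intent to it and return the result in the customer's own terms.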

Data Is the New API

We can lean on the FAQ example to launch our thinking to a new plateau of conceptual AGI implementation: the database. It contains values, patterns, words, and pictures; ultimately, lots of answers.

Traditionally, we regard APIs as middleware designed to transform application intent into information extraction. This is a hard line separating the data from the request. It's similar to the line we try to draw when we separate data from rendering. For decades, APIs have acted as rigid pipelines.

This ends now.

In the world of AGI, the line separating intent from data is not so definitive. In fact, the data itself becomes the API. Replacing those rigid pipelines is precisely what Isaac Lyman means:

… an adaptive interface between the users’ chaotic real world and your well-architected database solution …

This is not to suggest that all database interfaces will be natural language. However, the pattern has begun to emerge, and the most successful apps and consultants will recognize the opportunity to connect the chaotic real world to the answers users seek by transforming the data itself into the API.
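
Here is a minimal sketch of what "data as the API" can look like in practice: the model translates a plain-language request into a structured filter, the filter runs against the data, and the model describes the rows that come back. The sample table, field names, JSON filter shape, and model names are assumptions for illustration; a real implementation would point at an actual base such as Airtable rather than an in-memory list.

```python
# Minimal sketch of "data as the API": intent -> structured filter -> rows -> plain answer.
# The table, fields, filter shape, and models are assumptions, not a specific product's schema.
import json
from openai import OpenAI

client = OpenAI()

# Stand-in for a well-architected table (e.g., an Airtable base).
orders = [
    {"customer": "Avery", "status": "shipped", "total": 412.00},
    {"customer": "Jordan", "status": "pending", "total": 89.50},
    {"customer": "Riley", "status": "shipped", "total": 1520.75},
]

def ask(question):
    """Carry the user's intent to the data, then describe the result in plain language."""
    schema_hint = "Fields: customer (text), status ('shipped' or 'pending'), total (number)."
    plan = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content":
                'Translate the request into JSON: {"filters": [{"field": ..., "op": "eq|gt|lt", "value": ...}]}. '
                + schema_hint},
            {"role": "user", "content": question},
        ],
    )
    ops = {"eq": lambda a, b: a == b, "gt": lambda a, b: a > b, "lt": lambda a, b: a < b}
    filters = json.loads(plan.choices[0].message.content)["filters"]
    rows = [r for r in orders if all(ops[f["op"]](r[f["field"]], f["value"]) for f in filters)]

    summary = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content":
            f"Answer the question '{question}' from these rows:\n{json.dumps(rows)}"}],
    )
    return summary.choices[0].message.content

print(ask("Which orders over $400 have already shipped?"))
```

Nothing here is a hand-built endpoint for that one question; the schema plus the model is the interface, which is the sense in which the data itself becomes the API.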