Snipity-Do: A GPT CoPilot for Script Examination

I’ve written about CoPilots before, and here’s an example of one I built some time ago and find quite useful.

What if you could highlight a block of code in the Airtable editor and, with a single keyboard shortcut, instantly examine it to reveal:

  1. What its purpose is and how it works.
  2. The most likely ways it might fail.
  3. The tech debt that it carries and the estimated cost of that debt.

The [Paste] button inserts this content as a comment above the examined code.
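To make the workflow concrete, here's a minimal sketch (in Node, assuming the OpenAI chat completions API) of what the examine-and-paste step could look like. The prompt wording, model choice, and helper names are my assumptions, not the actual implementation.

```js
// Minimal sketch, not the actual Snipity-Do code.
// Assumes Node 18+ (global fetch) and OPENAI_API_KEY in the environment.

const EXAM_PROMPT = `Explain what this code does and how it works,
list the most likely ways it might fail, and describe the tech debt
it carries with a rough cost estimate. Ignore presentation-only
statements such as output.markdown().`;

async function examineSnippet(code) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [
        { role: "system", content: EXAM_PROMPT },
        { role: "user", content: code },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Format the analysis as a comment block that [Paste] can insert
// above the examined code.
function asComment(analysis) {
  return analysis
    .split("\n")
    .map((line) => `// ${line}`)
    .join("\n");
}
```

Because the analysis comes back as plain text, turning it into a pasteable comment is just string manipulation; the interesting part is the prompt.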

This is pretty cool, and I use it a lot, partly because I’m lazy; I don’t want to examine my own code, let alone code written by others.

I have used this when reading code examples in various communities; it is designed to work anywhere you are looking at code, and it knows to ignore presentation statements like output.markdown().

I used a script from @Oglesbeard above as an example to see how well it would compare with his excellent documentation of the function. Snipity-Do did alright.

In my practice, this utility also posts analytics into a Coda document: who wrote the code, the date/time it was assessed, any additional comments added by the person assessing it, and so on. I use it as much to track progress and change as to inform a developer of things to look out for.
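For anyone curious what that logging could look like, here's a sketch against Coda's REST rows endpoint; the doc ID, table ID, column names, and token variable are placeholders I made up, not the author's actual setup.

```js
// Sketch only: append one assessment row to a Coda table.
// YOUR_DOC_ID, YOUR_TABLE_ID, the column names, and CODA_API_TOKEN
// are hypothetical placeholders.

async function logAssessment({ author, comments }) {
  const url =
    "https://coda.io/apis/v1/docs/YOUR_DOC_ID/tables/YOUR_TABLE_ID/rows";
  await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.CODA_API_TOKEN}`,
    },
    body: JSON.stringify({
      rows: [
        {
          cells: [
            { column: "Author", value: author },
            { column: "Assessed At", value: new Date().toISOString() },
            { column: "Comments", value: comments },
          ],
        },
      ],
    }),
  });
}
```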

I can use this tool across Windows, macOS, and Linux. It works inside any editor or desktop app, and even in Slack.

I’ll be happy to field questions about the approach I used to create this utility, but first, I need some answers…

  1. Is this something that fills a gap?
  2. Does it have value?
  3. Who amongst you would love to take this basic functionality and go nuts with it in a commercial way?

Maybe not the answer you were looking for, but this could potentially be very valuable in educational contexts (where a lot of my background comes from).

In the burgeoning ecosystem of research and literature about computer science education, one of the stronger results/theories is that it’s impactful to treat reading code as a separate and co-equal skill to writing code.

There are tons of environments and tools that help teachers operationally with creating/assigning/assessing coding problems and projects, but very few tools I’ve seen that can help with the reading side.

A tool like this would help students check their understanding of code they are seeing for the first time and/or ensure that the code they’ve written is doing the thing they think it is doing.


No doubt. Good idea.

Yes, and Yes. You already know this. You wrote it because it filled a gap for you and provides you with value.

Well, that’s a really big question. I know the answer is not me. But I hope it is someone.

Here are some questions in return…

  • How much effort and knowledge are required to set this tool up and configure it?
  • What is the learning curve for how to use this tool?
  • How does this work on code that is spread across multiple files / libraries?
  • How well does the tool work for non-functioning code, either code that does not compile or code that does not produce the intended results?
  • How well does the tool work if the code includes misleading or otherwise poorly named variables?
  • What experience level do you expect of intended users (newbie, intermediate, advanced), and how do you expect their usage to differ?

Details another time. It’s all built on ScriptKit.

  1. Highlight some code
  2. Command-P (or whatever shortcut you bind)

All questions for the LLM-makers.
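To give a flavor of the wiring without spilling the details, here's a rough Script Kit sketch of that two-step flow. It reuses the hypothetical examineSnippet and asComment helpers from the sketch above, and the shortcut binding is just an example.

```js
// Name: Snipity-Do (sketch)
// Shortcut: cmd p

import "@johnlindquist/kit";

// Grab whatever code is highlighted in the frontmost app...
const code = await getSelectedText();

// ...examine it (hypothetical helper from the earlier sketch)...
const analysis = await examineSnippet(code);

// ...and paste the analysis back in as a comment above the selection.
await setSelectedText(`${asComment(analysis)}\n\n${code}`);
```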

Nice tool! Definitely fills a gap.

Is there a code-correction ability? Is it extensible? I know you mentioned you integrate it with Coda; including that feels essential.


Perhaps. I think there’s a pathway to build that capability with this approach.

Yes. It’s just NodeJS running in the OS.

Correct - but that’s just me being analytically nuts about data. :wink: It could be integrated with anything.

This is one of the projects I was alluding to.

As I mentioned here, there is a full compute stack behind GPT. Despite all the trumpeting about what this means, few have realized what I and other peripheral visionaries like @swyx have concluded. The software world is about to change in ways we cannot begin to imagine.

Several people have asked me to share the code. In the new pay-attention economy, I’m going to make some code available, but with a condition. I’ve published it here.

I work at a pretty loosely structured “start-up” company, but I’m still not sure they’d appreciate me sending even snippets of our source code to OpenAI servers :man_shrugging:. So I’m not sure I could implement this in my day-to-day.

I also find that many of the coding challenges I face daily have more to do with larger architecture questions and cross-service communication, or at least incorporate some element of those things. I’m not sure how useful this could be in dealing with those kinds of challenges, or dealing with scripting issues within the scope of the larger architecture or communication context. I feel like you’d have to give the AI model so much of the surrounding context that it’s no longer a snippet we are talking about.

Indeed, there’s a ceiling of viability to this thingie. It’s why CoPilot (from GitHub) is more helpful; it sees the code in a more complete light.

Anything that goes to OpenAI is no longer stored or used for any model training, effective March 1, 2023.

Good to know

It goes through the API.