I’ve written about CoPilots before, and here’s an example of one I built some time ago and find quite useful.
What if you could highlight a block of code in the Airtable editor, and with a single keyboard shortcut, it could instantly examine the code to reveal:
What its purpose is and how it works.
The most likely ways it might fail.
The tech debt that it carries and the estimated cost of that debt.
The [Paste] button inserts this content as a comment above the examined code.
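The post doesn't spell out the implementation, but the core step is easy to sketch. Below is a minimal, hypothetical version in TypeScript that sends the highlighted snippet to OpenAI's chat completions endpoint and asks for exactly those three things; the function name, model choice, and prompt wording are my assumptions, not the author's.

```typescript
// Hypothetical sketch of the core analysis step. All names here are
// illustrative; the actual utility's internals were not shared.
const OPENAI_URL = "https://api.openai.com/v1/chat/completions";

async function analyzeSnippet(code: string, apiKey: string): Promise<string> {
  // Ask for the three outputs described above, formatted as a block
  // comment so the result is ready for the [Paste] button.
  const prompt = [
    "Analyze the following code and report, as a block comment:",
    "1. Its purpose and how it works.",
    "2. The most likely ways it might fail.",
    "3. Any tech debt it carries and the estimated cost of that debt.",
    "",
    code,
  ].join("\n");

  const response = await fetch(OPENAI_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // assumed; any chat-capable model works
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```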
This is pretty cool, and I use it a lot, partly because I’m lazy; I don’t want to examine my own code, let alone code written by others.
I have used it when reading code examples in various communities; it is designed to work anywhere you are looking at code, and it knows to ignore presentation-only statements such as output.markdown().
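If you wanted to replicate that filtering, a small preprocessing pass would do it. This is a sketch under my own assumptions; the filter list (Airtable's output.markdown(), output.text(), and output.table() calls, plus console.log) is illustrative, not the author's actual list.

```typescript
// Hypothetical preprocessing step: drop presentation-only lines so the
// analysis focuses on the logic. Simple line-based filter; statements
// spanning multiple lines would need a real parser.
function stripPresentationCalls(code: string): string {
  const ignored = /^\s*(output\.(markdown|text|table)|console\.log)\s*\(/i;
  return code
    .split("\n")
    .filter((line) => !ignored.test(line))
    .join("\n");
}
```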
I used a script from @Oglesbeard above as an example to see how its analysis would compare with his excellent documentation of the function. Snipity-Do did alright.
In my practice, this utility also posts analytics into a Coda document: who wrote the code, the date/time it was assessed, any additional comments added by the person assessing it, and so on. I use it as much to track progress and change over time as to point a developer at things to look out for.
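Coda's REST API can append rows to a table, so logging each assessment might look something like the sketch below; the doc ID, table ID, and column names are placeholders, and the Assessment shape is my guess at the fields described above.

```typescript
// Hypothetical analytics logger: append a row to a Coda table via
// Coda's REST API (POST /docs/{docId}/tables/{tableId}/rows).
interface Assessment {
  author: string;     // who wrote the code
  assessedAt: string; // ISO date/time of the assessment
  comments: string;   // assessor's additional comments
}

async function logAssessment(a: Assessment, codaToken: string): Promise<void> {
  const docId = "YOUR_DOC_ID";     // placeholder
  const tableId = "YOUR_TABLE_ID"; // placeholder
  await fetch(`https://coda.io/apis/v1/docs/${docId}/tables/${tableId}/rows`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${codaToken}`,
    },
    body: JSON.stringify({
      rows: [{
        cells: [
          { column: "Author", value: a.author },
          { column: "Assessed At", value: a.assessedAt },
          { column: "Comments", value: a.comments },
        ],
      }],
    }),
  });
}
```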
This tool works across Windows, macOS, and Linux, and it can be used inside any editor or desktop app, including Slack.
I’ll be happy to field questions about the approach I used to create this utility, but first, I need some answers…
Is this something that fills a gap?
Does it have value?
Who amongst you would love to take this basic functionality and go nuts with it in a commercial way?
Maybe not the answer you were looking for, but this could potentially be very valuable in educational contexts (where a lot of my background comes from).
In the burgeoning ecosystem of research and literature on computer science education, one of the stronger results is that it pays to treat reading code as a separate skill, co-equal with writing code.
There are tons of environments and tools that help teachers operationally with creating, assigning, and assessing coding problems and projects, but I've seen very few tools that help with the reading side.
A tool like this would help students check their understanding of code they are seeing for the first time and/or ensure that the code they’ve written is doing the thing they think it is doing.
As I mentioned here, there is a full compute stack behind GPT. For all the trumpeting about what this means, few have realized what I and other peripheral visionaries like @swyx have concluded: the software world is about to change in ways we cannot begin to imagine.
Several people have asked me to share the code. In the new pay-attention economy, I'm making some of the code available, with a condition. I've published it here.
I work at a pretty loosely structured “start-up” company, but I'm still not sure they'd appreciate me sending even snippets of our source code to OpenAI's servers, so I'm not sure I could use this in my day-to-day work.
I also find that many of the coding challenges I face daily have more to do with larger architecture questions and cross-service communication, or at least incorporate some element of those things. I'm not sure how useful this could be for those kinds of challenges, or for scripting issues scoped within a larger architectural or communication context. I feel like you'd have to give the model so much surrounding context that it's no longer a snippet we're talking about.