What's pi? What's a Coding Harness, and Why Should I Care?


Marco Gorelli

TL;DR: pi is an open source tool you can use to guide AI models so that they help you accomplish your goals faster, to automate repetitive tasks and prompts, and to share your workflows in a reproducible and auditable manner.

My Relationship with AI

I don’t use AI much as a programmer. I occasionally ask an LLM a question, but I don’t do vibe-coding. I haven’t found it to be very useful, but an obvious question has been lurking in the back of my mind: “what if it’s a skill issue?”
When my colleague Nick Byrne announced that he would give an internal presentation on pi, a new coding harness, I figured it was the perfect time to dig deeper. I would follow along with his presentation, and then try it out. It turned out to be quite an interesting presentation, so I’d like to share the main points with you today.

Wait, What's a Coding Harness?

When we talk about AI, most conversations seem to revolve around models. And that makes sense, because the model is the engine that propels us forwards. However, if we really want to be intentional with how we move forwards, we also need something to steer us in the right direction: that’s what a coding harness does.
In short, a coding harness provides you with a way to interact with models without being tied to any particular one. It handles context windows for you, and can switch between models. It also neatly keeps a history of your interactions which you can access later, either for auditing purposes or for sharing with colleagues. The analogy that Nick gave is that if a model is a horse, then a coding harness is the bridle + reins.

OK, so What's pi?

Pi is a coding harness developed by Mario Zechner and open-sourced under the permissive MIT license. It’s free to use and install (at time of writing, it’s as simple as “npm install -g @mariozechner/pi-coding-agent”). What might not be free are the models you choose to use with it, and that’s where the decoupling between harness and model comes into play: pi is just the harness, and you can use it with any model you like.
Once installed, you can start it by typing “pi” at the command line, at which point it’ll show you some useful info:
  • Context: this is a set of instructions which get bundled with your prompts. For example, you may want to instruct a coding agent to use the walrus operator as much as possible when writing Python.
  • Skills: these are just markdown files in which you teach the model how to do certain tasks. For example, you may have a skill to teach it how to use “ripgrep” to search for text.
  • Prompts: these are kind of like macros for repetitive prompting you may need to do.
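To make the context example above concrete, here is the kind of style such an instruction would nudge a model towards. This is a plain Python sketch of the walrus operator itself, not anything pi-specific:

```python
data = [1, 2, 3, 4, 5]

# Without the walrus operator: compute, then test, in two statements.
total = sum(data)
if total > 10:
    print(f"total is large: {total}")

# With the walrus operator (:=), the assignment happens inside the
# expression itself, which is the style the context instruction asks for.
if (total := sum(data)) > 10:
    print(f"total is large: {total}")
```

Both versions print the same thing; the context instruction simply biases the model towards the second form.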

I'm Still not Convinced. Can't I Just Chat with a Model?

There was one point in Nick’s presentation where it started to hit home for me why it’s really useful to have a harness. To explain, let me illustrate how interactions with LLMs typically go for me:
  • I start a new chat with a model (for me, usually DeepSeek) and ask it to solve some problem for me.
  • Based on the model’s answers, I keep refining my instructions and providing more detail.
  • If at some point the model’s answers seem nonsensical, or I realise I provided a bad prompt, I just open a new chat and start over.
The workflow with pi, on the other hand, looks much neater and more efficient:
  • Start – ask pi to perform a task.
  • Query – the harness asks the model (possibly multiple times) based on intermediate answers.
  • Inspect / Share – type “/tree” to browse the full conversation; use “/fork” to export a JSON slice for a colleague; use time‑travel commands to keep only the parts you need.

So, Do I Use pi?

I tried using pi to address an open issue in Narwhals, using Copilot as my model. As an open source maintainer I receive a free Copilot quota that I’d never exceeded before. After roughly one hour of experimentation with pi, however, the quota was exhausted and I received a notice that it would reset only after a month.
To avoid repeating my mistake, keep an eye on how much your vibe‑coding is costing you. pi makes this easy: type “/session” to see cost, token usage, and other useful metrics. I wish I’d remembered to run that command while I was experimenting!

I plan to try using pi again in the future but with some extra care to make sure I don’t exceed my quota. I share Travis Oliphant’s sentiment that there is good reason to be cautiously optimistic about AI usage in coding, so it’s good to see high-quality open source solutions in this space. If you would like to know more about open source and AI and how to incorporate them into your processes please do get in contact with OpenTeams.

Unimportant but amusing side note

About 10 years ago, some friends convinced me to perform a 5-minute routine at a comedy stand-up open mic. “It’ll be a small audience, and you’re moving away from Reading [the town I used to live in] soon anyway, nobody will remember it,” they said. “You’ve got nothing to lose, just do it for the experience,” they said. Fair enough, I conceded – I signed up, did my 5-minute bit, got a few laughs (yay!), and thought that was the end of it.

Until, that is, I was two years into my employment at Quansight, and had the pleasure of meeting my new colleague Nick Byrne. I’ll never forget what he first said to me. “Hey, I think I’ve seen you before. Did you do stand-up in Reading once?”

Image of Marco's standup
