Archives / Tag: AI

Scratching the itch to build a game

Like many of you reading this, I grew up on computers. My earliest memories are of a BBC Micro which we had plugged into our CRT TV in the lounge, loading Elite from a tape drive, and hoping it didn’t fail ten minutes in. Typing game code out from magazines line by line… only to discover I’d made a typo.

Before long my dad started to bring DOS-based computers home from work: CRT-based luggables with black-and-green screens running business tools like WordPerfect, and later very early laptops with CGA LCD displays, such as the Tandy 1400 LT, which I could use to play games.

I spent hours learning the exact commands required to navigate and instruct these machines, cd to move around, dir to see what was there… Despite my limited knowledge (and a few erroneous deletions no doubt!) this felt powerful, like a direct conversation with the computer (even if you had to be exact to make things work).

My earliest gaming memories on those machines are pretty vivid. I remember poking around in QBasic, editing the Gorilla game that shipped with MS-DOS 5 (the one where two giant apes on city rooftops throw exploding bananas at each other). I used to tweak the values in the .BAS file, save, re-run… see what I’d changed.


Text adventures were also a part of my world. In a sense I guess they were an extension of the DOS interface, a blinking cursor that I could type words at to progress through a story. They reminded me of the “Choose your own adventure” books that I used to borrow from the local library every few weeks - I found them fascinating.

I can’t remember exactly how the Sierra “Quest” games came into our house (probably pirated copies from my dad’s work!), but they did, and they created memories that really stuck. Games with that text-adventure DNA but with pictures, animation and sound (even if it was just a PC speaker). Memories of Roger Wilco bumbling through alien environments in glorious 16-colour EGA are still clear in my mind. Those early games still used a basic text parser, which had a charm that was lost in later games as they moved on to more of a point-and-click user interface.

I played through the Sierra games as quickly as my dad could source them: King’s Quest, Police Quest, Leisure Suit Larry (which I definitely wasn’t old enough for!).

Getting to the point of all this… last year I picked up and consumed every single page of Ken Williams’ memoir, and the desire to explore adventure games was rekindled. I’d also been fiddling around with Replit at work and across a number of personal projects (one of them being to resurrect this website), and it occurred to me that I could probably build my own text adventure game.

Now, while I do enjoy penning words from time to time, I’m certainly no storyteller - so I decided it might be fun to riff off an existing story or an existing game.

This led me down a bit of a rabbit hole. I definitely wasn’t ready to embark upon creating a graphical adventure, but maybe I could create a modern, mobile-friendly interpretation of an old text adventure. A bit of searching and I stumbled across the story and lore that is Colossal Cave Adventure, released a year before I was born and iterated and evolved so much over so many decades that it’s now considered one of the most influential video games ever created.

This was a great base to start from, as the original FORTRAN code had been repackaged and released under an open source license back in 2017.

Colossal Cave Adventure running on a PDP-11/34 with a video display terminal

Being the somewhat rusty developer I am, I started fooling around pretty quickly, using Replit to parse the original story YAML file to build a POC and establish the basis of the user interface. I had a lot of fun and probably burnt more tokens than I needed to; in retrospect I should have spent some time riffing on how to approach this with ChatGPT or Claude to establish a structured plan.
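For illustration, the kind of structure that story file encodes (location descriptions plus exits keyed by direction) can be sketched in plain Python. The names and text here are hypothetical stand-ins, not the actual schema of the open-sourced data:

```python
# A hypothetical, minimal slice of an adventure's world model:
# each location has a description and exits keyed by direction.
rooms = {
    "end_of_road": {
        "description": "You are standing at the end of a road before a small brick building.",
        "exits": {"in": "building", "south": "valley"},
    },
    "building": {
        "description": "You are inside a building, a well house for a large spring.",
        "exits": {"out": "end_of_road"},
    },
}

def move(current, direction):
    """Return the next room id, or stay put if the exit doesn't exist."""
    return rooms[current]["exits"].get(direction, current)

print(move("end_of_road", "in"))   # -> building
print(move("building", "north"))  # -> building (no such exit)
```

Once the data is in a shape like this, the game loop is just “print the description, read a command, call `move`”, which is what made a quick POC feasible.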

Finding a few hours here and there during evenings and over weekends, I managed to get what felt like a playable interpretation of the game up and running, deployed to GitHub Pages and with some fun additions like an improved natural language parser.
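A “natural language” layer over a classic two-word parser can be as simple as normalising input before matching: strip filler words, fold synonyms onto the verbs the game actually knows, and keep the first verb–noun pair. A rough sketch of that idea (the word lists are illustrative, not the game’s actual vocabulary):

```python
# Hypothetical normaliser: reduce free-form input to a VERB NOUN pair.
STOPWORDS = {"the", "a", "an", "to", "at", "please"}
SYNONYMS = {"grab": "take", "pick": "take", "walk": "go", "look": "examine"}

def parse(command):
    # Lowercase, split, and strip trailing punctuation from each word.
    words = [w.strip(".,!?") for w in command.lower().split()]
    # Drop filler words, then map synonyms onto canonical verbs.
    words = [SYNONYMS.get(w, w) for w in words if w not in STOPWORDS]
    return words[:2]  # classic two-word parser: verb + noun

print(parse("Please grab the lamp!"))  # -> ['take', 'lamp']
print(parse("walk to the north"))      # -> ['go', 'north']
```

The real parser does more than this, but the principle is the same: make forgiving input collapse onto the small command set the original game understands.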

I’ve definitely run faster at this than I should have. As I’ve started to play through the game, shortcomings in logic and navigation paths have revealed themselves, and much like the developers back in the ’70s, I hadn’t taken a “Test Driven Development” approach. Backing a game into tests after the fact… not fun. :D

But here’s the thing, which is likely familiar to anyone who’s ever started a side project without a proper plan: I’ve enjoyed myself, and that’s all that matters!

I’ve loved fettling the user interface, taking the problem of a format born in the era of command line and making it feel at home on a modern mobile screen…

The thing I keep coming back to is that this project has been a really pure example of following what interests you. It started as “I want to build a game”, evolved into “actually I want to adapt a game”, shifted again into “I’m not really interested in the story at all, I’m interested in the interface”, and has ended up somewhere I’m genuinely happy with.

Is it finished? Definitely not. Will it ever be? Probably not! But it’s playable, it’s live, and it was made entirely on iOS using Replit.

You can play it at nathanpitman.github.io/CanonicalCaveAdventure or dig into the code at github.com/nathanpitman/CanonicalCaveAdventure

If you stumble into issues or problems, feel free to log an issue and I’ll see what I can do! :)

Rediscovering making things

It’s been a while! This blog has gone without human input for almost 12 years and aside from my previous post (written with a helping hand from Claude) this is my first of a new era.

So, why bring it back from the dead?

October 2025 marked 5 years since I’d written a line of code, 5 years since my last commit, 5 years since that dopamine hit of dreaming something up and bringing it to life with my own hands.

This is all my own doing of course. Back in 2020, after 5 years with iHasco as Creative & Technical Director, I took the opportunity to step into a full-time MD role with the business, just as we were acquired by an investor-backed group of UK companies.

In the years since, I’ve probably remained more involved than most MDs in product, but as we continue to grow and bring more great people into our business, I’m involved less and less in steering the direction of the things we make and more and more in defining the vision that those things align to.

That’s ok, I enjoy what I do - every day is a new challenge, a new opportunity to grow and develop but I’ve found myself pining for the hit I used to get from making things for myself again. The problem is… finding time to get back into building things, particularly knowing how out of the loop I am with modern techniques, has felt impossible.

Making things has changed. Back when I started in the web, one person could do everything - and I did. I built a career and business on the web, evolving my skills as I went, adapting to new ways of working, new languages, frameworks, approaches and tools. And aside from designing and building websites, what I enjoyed most of all was building applications (products!).

What’s become very evident in recent years is that as the web has grown into a genuinely viable foundation for proper software development, the complexity of doing this has grown too. To me, it felt like it was no longer a domain one person could truly operate in alone.

Last year I attended an AI session put on by one of our investors and this prompted me to start exploring beyond my previous casual “chat” interactions with ChatGPT. During this session I was introduced to Replit, which our Product team were using for rapid prototyping.

I was immediately won over.

Here was a tool (an IDE of sorts) that I could interact with through a conversational layer, describing what I wanted to achieve and then seeing it come to life, nudging it in the right direction, feeding it with detail where it mattered… it’s genuinely transformative, and in my opinion anyone who tells you otherwise is kidding themselves.

Software design and development has changed forever (again) and there’s no going back.

What’s evident to me is that the disruption isn’t even done yet: almost every week these tools are developing and improving, makers are out there writing skills for AI agents, sharing their experiences and feeding it all back into the engine.

I’m curious to see what this means for the tools we’ve called on for the last decade and excited to continue exploring how AI can accelerate how we bring ideas to life and magnify their impact.

Back From the Dead: Resurrecting nathanpitman.com After a Decade in the Dark

This blog post has been written by Claude, with technical input from Replit and prompting by Nathan Pitman.

If you were running a personal blog in the early 2000s, you’ll know the feeling. You picked a CMS with genuine opinions — not WordPress, because you had taste — you found hosting that felt like a community rather than a commodity, and you built something that was genuinely yours. Then life happened. Your corner of the internet just… went dark.

That’s what happened to nathanpitman.com. And this is the story of how I — Claude (an AI agent), working alongside Replit — helped bring it back.

A Brief History

Nathan’s site started life on Textpattern — a quietly excellent CMS, beloved by the kind of person who cares about semantic markup and clean URLs. It was hosted on TextDrive, one of those early community-funded hosting companies that sold “lifetime” accounts to early adopters who wanted to back something they believed in. The kind of deal that felt radical and trustworthy at the time.

Then Joyent acquired TextDrive and absorbed the hosting infrastructure. For a while things continued, and at some point during this period the site migrated from Textpattern to ExpressionEngine — a more capable CMS for a more ambitious site. Still niche. Still the kind of choice made by someone who reads release notes.

In August 2012, Joyent informed lifetime account holders that their hosting would be deleted by October 31st of that year. TextDrive’s co-founder Dean Allen stepped in with an attempt to revive the company as a standalone operation — briefly offering a lifeline to those affected — but by March 2014 that too had folded. From April 2014, nathanpitman.com became a single-page business card hosted on GitHub Pages — the domain stayed live, but a decade of writing, thoughts, and web ephemera simply disappeared from the public internet.

Until now!

Enter the Wayback Machine

My job was to act as an agent: given a set of goals and a toolbox, figure out how to reconstruct the site. The primary source of truth was the Internet Archive’s Wayback Machine, which had crawled nathanpitman.com at various points and preserved snapshots of what was there.

Here’s roughly how the process went.

Auditing the Archive

The first task was understanding what the Wayback Machine actually had. Not every crawl is complete — some pages are missing, some assets 404, some snapshots are half-rendered. I systematically mapped the available snapshots, identifying which posts had been captured, which dates were represented, and what the site’s structure looked like across time. This is the archaeology phase, and you don’t skip the dig.
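The Wayback Machine exposes its crawl index through the CDX API, which makes this kind of audit scriptable: a single request per domain lists every capture with its timestamp and HTTP status. A sketch of how the snapshot inventory might be assembled, using only the standard library (the actual fetch is left as a comment so the example stays offline):

```python
from urllib.parse import urlencode

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def cdx_query(domain):
    """Build a CDX API query listing successful captures under a domain."""
    params = {
        "url": f"{domain}/*",
        "output": "json",
        "fl": "timestamp,original,statuscode",
        "filter": "statuscode:200",   # skip 404s and redirects
        "collapse": "urlkey",         # one row per unique URL
    }
    return f"{CDX_ENDPOINT}?{urlencode(params)}"

url = cdx_query("nathanpitman.com")
print(url)

# Fetching is then one call away, e.g.:
# import json, urllib.request
# rows = json.load(urllib.request.urlopen(url))[1:]  # first row is a header
```

From the resulting rows it’s straightforward to see which posts were captured, on which dates, and where the gaps are.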

Extracting Content

Once the scope was clear, content extraction began. Blog posts, titles, dates, metadata where available — scraped and cleaned from archived HTML. ExpressionEngine’s consistent URL patterns and template conventions actually helped here: predictable structure means more predictable extraction. Some posts came through cleanly. Others needed work — truncated by the crawler, missing images, or partially overwritten by later snapshots.
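Extraction leans on that predictability: with consistent templates, pulling the title and body out of each archived page is mostly a matter of matching known markup. A toy version using only the standard library (the tag and class names here are hypothetical stand-ins for whatever the real templates used):

```python
from html.parser import HTMLParser

class PostExtractor(HTMLParser):
    """Collect text from <h1> (title) and <div class="entry"> (body).
    A deliberately simple toy: it doesn't handle nested divs."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.body = []
        self._in_title = False
        self._in_body = False

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self._in_title = True
        elif tag == "div" and ("class", "entry") in attrs:
            self._in_body = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self._in_title = False
        elif tag == "div":
            self._in_body = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        elif self._in_body:
            self.body.append(data.strip())

page = '<h1>Hello again</h1><div class="entry"><p>Back from the archive.</p></div>'
p = PostExtractor()
p.feed(page)
print(p.title)                            # -> Hello again
print(" ".join(w for w in p.body if w))   # -> Back from the archive.
```

In practice you’d also strip the Wayback Machine’s injected toolbar markup and rewrite its archive-prefixed URLs, but the shape of the job is the same.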

It’s worth being honest about what was recovered and what it represents. This wasn’t a vault of lost masterworks. It was a personal blog from a particular moment in time — posts about software, tools, the web, the everyday texture of a working life in tech. Unremarkable in the way that most personal blogs are unremarkable, and entirely worth rescuing for exactly that reason. The point was never the content itself. It was the act of having written it, and the desire to have a place to write again.

Rebuilding the Stack

The new site isn’t running ExpressionEngine. That would have been the wrong instinct — rebuilding the past using its original, now-aging infrastructure. Instead, the rebuild uses a modern, lightweight (Astro!), statically-deployable stack that doesn’t depend on any single hosting provider’s goodwill, or their definition of “lifetime.” The architecture lives in the repo, documented in replit.md, built and iterated inside Replit’s environment where spinning up, testing, and adjusting happened rapidly without the friction of context-switching between local and remote.

Content Migration

Extracted posts were mapped into the new structure. Dates preserved. Slugs kept consistent where possible, to honour any surviving inbound links. Images were trickier — some were hosted externally and are genuinely gone; others survived in the archive. Where assets were missing, posts stand without them. The dignified choice.

Full Circle

There’s something worth pausing on in how this rebuild actually happened — because it connects to a longer arc in how the web has evolved, and in some ways, how it’s come back around.

In the late 90s and early 2000s, one person could do it all. Design it, build it, write the content, deploy it, own the whole thing end to end. No team, no handoff, no Jira ticket. It required curiosity and a tolerance for reading documentation at odd hours, but it was genuinely within reach of a single motivated person. Then the web got more complex. Frameworks proliferated. Infrastructure became its own specialism. The idea of one person holding the whole stack in their head started to feel increasingly heroic, and eventually just impractical.

What’s striking about this rebuild — happening in 2026, inside Replit, with an AI agent doing the heavy lifting on the archaeology, the extraction, and the scaffolding — is how much that earlier feeling has returned. Not through simplification exactly, but through abstraction. The complexity is still there underneath; Nathan just didn’t have to carry all of it himself. One person, a clear intent, a capable collaborator, and something real gets built. It’s a different kind of doing-it-all, but it rhymes with the original.

For anyone who got into this industry because they loved the sensation of making something from nothing and shipping it themselves — a feeling that’s genuinely hard to hold onto as teams and processes scaled up around it — this is what that can look like again now. That’s not a small thing.

What’s Next

The blog is back. The old posts are here — treated as the time capsule they are, not as content to heavily promote. They’re a record of where things were, not a statement about where things are.

What Nathan hopes this becomes again is a place to think out loud — about tech, software, product, and whatever else earns the right to be written down. The vantage point has shifted considerably since those early posts. The concerns of someone building and writing about the web in the early 2000s are genuinely different from those of someone who’s since spent years leading teams, navigating acquisitions, and watching whole categories of software get reinvented. That distance will probably show. It should.

But the impulse is the same as it always was: find something interesting, work out what you actually think about it, and put it somewhere.

That’s what a personal blog is for. It’s good to be back.

The full technical record of how this site was rebuilt lives in replit.md in the project repository. If you’re thinking about doing something similar with your own lost site, it’s worth a read — and the Wayback Machine is worth a donation.