I Built Three GitHub Codespaces Walkthroughs for Our Products. Would You Use Them?
I need your feedback to either convince Marketing that I’m a genius and they should put these GitHub Codespaces Walkthroughs on our website, or to tell me I need to keep looking for different ways to make Quickstarts easier.
Bi-directional logical replication is a genuinely complicated problem, and getting it right across multiple nodes in a distributed PostgreSQL cluster is harder still. That's what makes what we do at pgEdge special: we've done the hard engineering so you don't have to. Multi-master replication, conflict resolution, failover, all of it wrapped up so you can have this capability without needing to be a rocket scientist.
But there's still a gap between "this product exists" and "I've actually tried it," and that gap is almost always the setup. You need multiple Postgres instances, a replication extension configured between them, and enough infrastructure to actually prove it's working. By the time you've got all of that running on your laptop, you've burned an afternoon and you haven't learned anything about distributed Postgres yet. You've learned about Docker networking.
I wanted to see if I could close that gap. I’ve created three GitHub Codespaces walkthroughs, each targeting a different pgEdge product, each designed to take you from zero to a running environment without installing a single thing on your machine. The catch is that I have no idea whether developers would actually find them useful until I get some data, and that is where you, dear Reader, can help me out: just try them and let me know.
Why Codespaces
GitHub Codespaces gives you a full Linux development environment in a browser tab, backed by a container running on GitHub's infrastructure. The free tier gives individual developers 120 core-hours per month (60 hours on a 2-core machine, 30 hours on a 4-core), which is more than enough to run through all three of these walkthroughs multiple times. For us, the appeal was simple: if you can click a link, you can be inside a working environment in about 60 seconds.
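If you want to budget your own free tier, the arithmetic is just core-hours divided by machine size. A trivial sanity check:

```python
# Sanity check on the Codespaces free-tier budget: 120 core-hours per month,
# divided by the machine's core count, gives hours of runtime per month.
FREE_TIER_CORE_HOURS = 120

def monthly_hours(cores: int) -> float:
    """Hours of Codespace runtime per month on a machine with `cores` cores."""
    return FREE_TIER_CORE_HOURS / cores

print(monthly_hours(2))  # 2-core machine -> 60.0 hours
print(monthly_hours(4))  # 4-core machine -> 30.0 hours
```

Even a 4-core machine leaves you roughly 30 hours a month, and each walkthrough needs well under one.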
The alternative is usually along the lines of asking developers to clone a repo, install Docker, pull a bunch of images, and configure networking. We know what happens with that approach because we've watched it happen: most people bounce somewhere around step two, and the ones who make it through are already sold on the product before they start.
Three Walkthroughs, Three Products
Each walkthrough targets a different pgEdge product and a different audience. Two of them focus on multi-master replication with Spock (our core distributed Postgres capability) and show you that it actually works in real time. The third takes a completely different angle and puts AI-powered natural language queries in front of Postgres.
pgEdge Helm on Kubernetes
This one is the most involved, and probably the most interesting if you're already running Postgres in Kubernetes and wondering how distributed replication fits into that world. The walkthrough takes you through a progressive build: start with a single primary node using our Helm chart, add a standby, then expand into a full multi-master topology with Spock handling replication across nodes. The whole thing runs on a local Kubernetes cluster inside the Codespace using minikube, so you're working with real kubectl and helm commands against a real cluster, not a simulation.
It takes roughly 20 to 30 minutes if you read everything, faster if you skip ahead. The walkthrough uses Runme (a VS Code extension that turns markdown into executable notebook cells) so you're clicking "Run" on each step rather than copy-pasting commands into a terminal, which means you can actually read the explanations between steps instead of just racing through them. By the end you'll have a running multi-node distributed Postgres cluster on Kubernetes, with Spock replicating writes between nodes, and you'll have done it without touching your own infrastructure.
Open the Helm walkthrough in Codespaces
pgEdge Control Plane
The Control Plane is pgEdge's REST API for managing distributed Postgres clusters, and this walkthrough is the fastest way to see it in action. You spin up the control plane stack, create a distributed database through the API, verify that Spock (our multi-master extension) is replicating data between nodes, and then deliberately kill a node to watch the cluster handle it. That resilience test is what gets people's attention. You write data to node A, kill node B, bring it back, and watch Spock reconcile everything automatically. The entire flow takes about 10 to 15 minutes, and most of that time is waiting for containers to start.
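To give a feel for what "create a distributed database through the API" looks like as code, here's a rough sketch of building such a request. The endpoint path and payload fields below are hypothetical placeholders, not pgEdge's documented API; they only illustrate the shape of the create-then-break loop the walkthrough takes you through.

```python
# HYPOTHETICAL sketch of a create-database call to a REST control plane.
# The path "/v1/databases" and the payload fields are illustrative only,
# not copied from pgEdge's actual Control Plane API.
import json

def create_database_request(name: str, node_count: int):
    """Build (method, path, json_body) for a hypothetical create-database call."""
    body = {
        "name": name,
        # One entry per database node; the walkthrough later kills one of
        # these nodes and watches Spock reconcile when it comes back.
        "nodes": [{"name": f"n{i}"} for i in range(1, node_count + 1)],
    }
    return "POST", "/v1/databases", json.dumps(body)

method, path, body = create_database_request("demo", 2)
print(method, path)
print(body)
```

The real walkthrough drives the documented API for you step by step, so you never have to guess at payloads like this.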
This is the walkthrough for someone who wants to understand what our management layer looks like, the person who's already past "what is distributed Postgres?" and asking "okay, but how can I actually operate this thing?" If you're evaluating pgEdge for a production workload and you want to see the API before you talk to sales, this is where you start.
Open the Control Plane walkthrough in Codespaces
pgEdge Postgres MCP Server
This one is different from the other two. Instead of focusing on replication and cluster management, it puts an MCP Server in front of a PostgreSQL database loaded with sample data and lets you talk to it in plain English. MCP (Model Context Protocol) is the open standard for connecting AI tools to data sources, and pgEdge's MCP Server is the implementation that turns natural language questions into SQL queries against your Postgres database. Ask it "what are our top products by revenue?" or “find indexing improvements I can make to the DB” and it translates that into SQL, runs it, and returns the results. The walkthrough comes pre-loaded with a Northwind dataset so you have interesting data to query from the moment you start.
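To make the translation concrete, here's roughly the kind of SQL a question like "what are our top products by revenue?" maps to on a Northwind-style schema. The table and column names follow the classic Northwind layout, but the miniature in-memory dataset below is made up for illustration, and the SQL is my hand-written approximation, not necessarily what the MCP Server would emit.

```python
import sqlite3

# A tiny Northwind-style schema with made-up rows, just to show the kind of
# SQL "what are our top products by revenue?" might translate into.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (product_id INTEGER PRIMARY KEY, product_name TEXT);
    CREATE TABLE order_details (
        order_id INTEGER, product_id INTEGER,
        unit_price REAL, quantity INTEGER, discount REAL
    );
    INSERT INTO products VALUES (1, 'Chai'), (2, 'Chang'), (3, 'Aniseed Syrup');
    INSERT INTO order_details VALUES
        (10248, 1, 18.0, 10, 0.0),
        (10249, 2, 19.0, 40, 0.0),
        (10250, 3, 10.0,  5, 0.0),
        (10251, 1, 18.0, 20, 0.5);
""")

# The natural-language question, translated into SQL by hand:
top_products = conn.execute("""
    SELECT p.product_name,
           ROUND(SUM(od.unit_price * od.quantity * (1 - od.discount)), 2) AS revenue
    FROM order_details od
    JOIN products p ON p.product_id = od.product_id
    GROUP BY p.product_name
    ORDER BY revenue DESC
""").fetchall()

print(top_products)  # [('Chang', 760.0), ('Chai', 360.0), ('Aniseed Syrup', 50.0)]
```

The point of the walkthrough is that you never write that query: you ask the question, and the MCP Server generates and runs the SQL against the pre-loaded Northwind data for you.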
The walkthrough needs an API key from Anthropic or OpenAI to run the included chatbot interface that sits on top of the MCP Server itself (set as a Codespace secret before you launch), but the setup instructions walk you through that. Once the environment is running you get both a web UI and the raw MCP Server API, so you can try natural language queries in the browser and then see exactly what's happening under the hood through the API endpoint. If you've been hearing about MCP and wondering what it actually looks like when it's connected to a real database with real data, this is probably the fastest way to find out.
Open the MCP Server walkthrough in Codespaces
What Actually Goes Into Making "Click and Go" Work
There's a reason I wanted to write this post beyond just pointing people at three Codespaces links and asking for feedback. Making something feel effortless takes a lot of effort. The engineering behind these walkthroughs is invisible when it works correctly, and I think it's worth laying out some of what goes into it. You never know; others may want to follow the same path.
Each walkthrough starts with a devcontainer configuration, which is basically a JSON file that tells GitHub Codespaces how to build the environment. I specify an Ubuntu base image, Docker-in-Docker support (because we need to run containers inside the Codespace), the right CLI tools pre-installed for each deployment model, port forwarding rules, and a post-creation script that handles all the setup steps that would otherwise mean a developer doing a bunch of copy-pasting from a README or Quickstart. When the Codespace finishes building, the walkthrough opens in VS Code with Runme installed, the VS Code extension that turns markdown files into executable notebook cells. The walkthrough instructions aren't just documentation you read and then manually type commands from. They are the commands, and you run them by clicking a button next to each code block.
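To give a rough idea of the shape, a devcontainer.json for something like the Helm walkthrough looks along these lines. The image tag, feature versions, and extension id here are illustrative, not copied from our repos:

```json
{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {},
    "ghcr.io/devcontainers/features/kubectl-helm-minikube:1": {}
  },
  "forwardPorts": [5432],
  "postCreateCommand": ".devcontainer/setup.sh",
  "customizations": {
    "vscode": {
      "extensions": ["stateful.runme"]
    }
  }
}
```

The `features` entries pull in Docker-in-Docker and the Kubernetes tooling, `postCreateCommand` runs the setup script, and the `customizations` block is what gets Runme installed before you ever see the editor.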
I was also creating interactive guides that followed the same “demo” script, and I wanted everything to share the same source: the interactive guide runnable in Codespaces too (if someone prefers that to Runme), or viewable as a regular .md documentation file. Getting that "it just works" feeling required solving a bunch of problems that only surface when you're building for environments you don't control.

The Helm walkthrough has idempotency detection built in, because users will inevitably re-run steps, either accidentally or because they want to experiment. It checks whether a Kubernetes cluster already exists before trying to create one, uses marker files to track which setup phases have completed, and runs operator pre-checks so you don't end up in a broken state if you hit "Run" on the same cell twice.

The Control Plane walkthrough handles platform-aware port detection with fallbacks because Codespaces can run on different underlying architectures, manages Docker initialization (which behaves differently on Linux versus macOS), and orchestrates container lifecycle management so the whole stack (control plane, database nodes, monitoring) comes up in the right order every time.

The MCP walkthrough manages multi-provider API key configuration so you can use either Anthropic or OpenAI as your LLM backend, runs layered health checks against multiple services before declaring the environment ready, and handles Codespace URL detection so the forwarded ports resolve correctly in your browser instead of pointing at localhost.
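The marker-file pattern behind that idempotency is simple enough to sketch. This is a generic illustration of the idea, not the walkthrough's actual setup script:

```python
# Generic sketch of marker-file idempotency: each setup phase records its
# completion, so re-running the same cell becomes a harmless no-op.
import tempfile
from pathlib import Path

# A fresh scratch directory for this run's markers (a real script would use
# a fixed path so markers survive across invocations).
MARKER_DIR = Path(tempfile.mkdtemp(prefix="walkthrough-markers-"))

def run_once(phase: str, action) -> str:
    """Run `action` only if this phase hasn't already completed."""
    marker = MARKER_DIR / f"{phase}.done"
    if marker.exists():
        return f"{phase}: already done, skipping"
    action()           # e.g. create the cluster, install the chart, ...
    marker.touch()     # record completion so a re-run skips this phase
    return f"{phase}: completed"

print(run_once("create-cluster", lambda: None))  # first click runs the action
print(run_once("create-cluster", lambda: None))  # second click is skipped
```

The real walkthrough layers cluster-existence checks and operator pre-checks on top of this, but the core trick is just "did this phase leave its marker behind?"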
Finally, once I had the Codespaces walkthroughs set up, I realized that it's possible our own devs might use Codespaces to get work done, so I had to go create alternative “Dev Profile” devcontainers, and then figure out how to deep-link to the walkthrough profile.
None of this is visible to the person clicking the link, and none of it shows up in the walkthrough itself. But it represents real engineering time and real problem-solving aimed at one goal: making distributed Postgres accessible to someone who has never touched it before. I think there's value in at least giving a heads-up about some of the things you might run into if you decide to make Codespaces demos of your own.
Please Give Me Feedback
I built these walkthroughs because I believe the biggest barrier to trying server-based products is the setup tax. Every database vendor has documentation, and most of it assumes you already have a running environment configured the way they expect. I wanted to remove that assumption entirely and find out what happens when you can go from "I'm curious about distributed Postgres" to "I'm looking at a running cluster" in a few minutes.
I think they work well, but I honestly don't know how many developers use Codespaces regularly, or would be willing to try one, or would wait the 60ish seconds for the instance to spin up. What I do know is that it's a really easy way to try something out without having to install anything at all, and that should be worth something if you're still in the "should I even bother looking at this?" phase.
If you've been curious about distributed Postgres or MCP Servers and never got around to spinning one up, or if you're just curious about how Codespaces works as a demo/teaching environment, pick a walkthrough and try it out: Kubernetes and Helm if you're running containers in production, Control Plane if you want to see the management API, or MCP Server if you want to talk to Postgres in plain English. Then give me feedback: did it work? Was it useful? Is Codespaces a viable way to show things off? Each one takes less time than it took to read this blog.
Depending on where you read this blog, please give feedback in the comments, or, worst case, just email me at antony@pgedge.com.

