Running Pull Requests in GitHub Codespaces
Earlier this week, I was having a lot of trouble figuring out how to get Dev Containers working. Dev Containers are the required way to enable GitHub Codespaces, which was my actual goal. I finally got to something that works, so I want to document what I learned.
Dev Containers is a standard developed by Microsoft for using Docker containers to run dev environments for a project. I use Docker for most of my projects these days. I think it’s a great tool both for simplifying local development and for packaging and deploying my projects. In the current project, I already have a Dockerfile and a docker-compose.yml file. I deploy to fly.io, which will also run my docker build to deploy the project. This works fine for me, and I had never used Dev Containers. It seemed overly complicated (and I was right about that).
I want to give more of the context for why I’ve been pursuing GitHub Codespaces, because I think it’s relevant. But if you wanna skip ahead to the tips for getting Dev Containers and Codespaces working, feel free. (I’m also going to talk positively about using LLMs to code, so if that makes you want to skip too, there’s no hard feelings.)
Background
Here’s the idea. I have a solo project that I’m working on, and instead of hacking away on the main branch, I’m using a full feature-branch-and-pull-request workflow to keep my changes organized. This might seem like overkill, but it’s helpful for me because I tend to bounce around and leave things half done. I can run things locally and test different things by switching between branches. But I’m also doing a lot more AI-assisted development. Locally I use Claude Code. But I’ve been experimenting with allowing GitHub Copilot to create PRs. I create a GitHub issue, assign it to Copilot, and it spins up in the cloud and does work. Afterwards it opens a pull request for me to review. So even though this is a “solo” project, I end up with pull requests that I didn’t write myself and need to validate.
Giving tasks to Copilot in the cloud is cool for a couple of reasons:
It’s better than setting an LLM loose on my laptop unsupervised. I’m still working out how to use them safely. When I’m using them locally, I babysit diligently and approve every command the agent wants to run. But this gets tedious, and that can only lead to cutting corners. With GitHub Codespaces, I can make it GitHub’s problem.
I get to work on the project even if I can’t sit in front of my computer. When I get excited about a project, I tend to think about it constantly. It’s tough to compartmentalize when I’m having ideas but I’ve got other things to do. The workflow of creating issues and assigning them to copilot helps me move forward even in-between work sessions. I plan to give more of my thoughts on AI-assisted development in a future post.
The result is I’ve got PRs from Copilot that I need to review. I can look at the diffs, but we know that even if it “looks good to me”, LLMs can and do make subtle mistakes and require extra scrutiny. What I want to do is actually spin up the app with the changes and kick the tires. GitHub Codespaces seemed perfect for this. But it took a lot of trial and error to get to something that works for me.
Leaning on docker and docker compose
We need to take a slight detour before diving into Dev Containers. We will be using docker and docker compose to help us bypass some of the complexity. So this post assumes that you have a working setup using docker-compose.yml. I’m not going to go into detail about that here. If you’re interested, maybe I can do a future blog post.
Your docker compose setup should also include a database if you need one. Most apps like mine require a database to connect to. There are lots of ways you can go about this locally. In fact, if you’ve got a local setup using docker compose, you’ve probably already figured this out. But there will also need to be a database available once we upload to a github codespaces environment. So for that purpose, this post also assumes that you have a database container alongside your app container. It usually looks something like this for me (stripped down for brevity).
services:
  app:
    image: my-app
    container_name: my-app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    env_file:
      - .env
  postgres:
    image: postgres:17-alpine
    container_name: my-app-db
    environment:
      POSTGRES_DB: my_app
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
    volumes:
      - ./backend/data/db:/var/lib/postgresql/data
Note: There are a lot of other details to configuring a non-trivial docker compose environment. That’s out of the scope of this post. Sorry.
This creates an app that uses my project files to build a docker image. The database is pulled from an existing docker image that runs postgres. The resulting app can reference postgres as the hostname of the database. Docker automatically creates an internal network where the two containers can see each other. Because we’re going to have Dev Containers read our docker-compose.yml from the start, this should work as expected.
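For example, the app can reach the database with a connection string like this (a sketch, assuming the service name and credentials from the compose file above; DATABASE_URL is just an illustrative variable name):

```shell
# Hypothetical .env entry. The hostname is the compose service name "postgres",
# not localhost, because both containers share the compose-managed network.
DATABASE_URL=postgresql://postgres:password@postgres:5432/my_app
```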
Getting a Dev Container working
This walkthrough assumes you have a few prerequisites already installed:
- VS Code
- Dev Containers extension for VS Code
- Docker
Note: The Dev Containers extension uses your local Docker install to build and run containers. If you run into early issues, check that Docker is running and that VS Code has permission to access it.
Reading the docs for Dev Containers seemed straightforward enough. I used the VS Code workflow to create a devcontainer.json file. But nothing worked out of the box. It came with its own docker-compose.yml file. It wanted me to use a standard Microsoft docker image for the environment rather than the one specified in my own Dockerfile. I’m pretty sure you’re mostly expected to run their pre-built templates for dev containers. You pick the one that seems to match your project most closely, and it’ll probably work by just injecting your code into it. But if you want to be more opinionated about your build, you’re gonna have a bad time. Eventually, I abandoned trying to figure this out and instead configured the dev container to just use the existing docker setup I already had.
Open your command palette and select “Dev Containers: Add Dev Container Configuration Files…”. You might run through a couple of options, like whether to add the dev container files to the workspace or the user config. I chose workspace. Then you’ll be prompted to choose how you want to create the dev container configuration. You’ll see the option to create it from a predefined template. But what you want is the From 'Dockerfile' option or the From 'docker-compose.yml' option. The trick is that these options only appear if those files are already present in your project. When I tried to recreate what I did for this blog post, I got tripped up by that. If you wanna get started with this before you tackle the docker stuff, you can just touch Dockerfile docker-compose.yml in your project root and that’s enough to enable the menu items. However, the setup may error at some point if there’s nothing in the files.

Creating a Dev Container config through vs code
The second confusing thing is that it asks you to “choose a service”. There is very little context here about which service to choose and why. You see this prompt because you have to pick one of the services to be the “primary” one for the dev container. Say your docker compose file specifies multiple containers, like a web app, a database, and maybe a background worker. You probably want to choose whichever one is the main app. So for example, I selected app, which is the name in my docker-compose.yml that runs the Node server.
When you run through these setup prompts, you end up with a devcontainer.json that is pretty different from the one you see from the standard tutorials. Instead of having a bunch of setup for your project in the config file, it just references your docker-compose.yml file.
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-existing-docker-compose
{
  "name": "Project name",
  // Update the 'dockerComposeFile' list if you have more compose files or use different names.
  // The .devcontainer/docker-compose.yml file contains any overrides you need/want to make.
  "dockerComposeFile": [
    "../docker-compose.yml",
    "docker-compose.yml"
  ],
  // The 'service' property is the name of the service for the container that VS Code should
  // use. Update this value and .devcontainer/docker-compose.yml to the real service name.
  "service": "app",
  ...
Notice that there are two docker-compose.yml files. The first one is your existing file. Assuming it’s in the root of your project, it gets referenced from inside the .devcontainer folder. The second one is a custom override created by the dev container setup. It lives inside the .devcontainer folder. Leave that as-is. From here, you can add “features” to the dev container and anything else you might want from the setup wizard. For example, I added the PostgreSQL client so that I can use it to connect to my database from inside the dev container. Finally, you’ll choose to expose any ports that your app exposes. These may be added automatically if they’re already declared in your Dockerfile.
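For reference, the generated override file usually looks something like this (a sketch of what the wizard created for me; yours may differ slightly):

```yaml
# .devcontainer/docker-compose.yml (generated override)
services:
  app:
    # Mount the repo into the container so edits on the host are visible inside
    volumes:
      - ..:/workspaces:cached
    # Keep the container alive so VS Code can attach, instead of running the app
    command: sleep infinity
```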
Here is the full version of the devcontainer.json that currently works for my project:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-existing-docker-compose
{
  "name": "Project name",
  // Update the 'dockerComposeFile' list if you have more compose files or use different names.
  // The .devcontainer/docker-compose.yml file contains any overrides you need/want to make.
  "dockerComposeFile": [
    "../docker-compose.yml",
    "docker-compose.yml"
  ],
  // The 'service' property is the name of the service for the container that VS Code should
  // use. Update this value and .devcontainer/docker-compose.yml to the real service name.
  "service": "app",
  // The optional 'workspaceFolder' property is the path VS Code should open by default when
  // connected. This is typically a file mount in .devcontainer/docker-compose.yml
  "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
  // Features to add to the dev container. More info: https://containers.dev/features.
  "features": {
    "ghcr.io/devcontainers/features/aws-cli:1": {},
    "ghcr.io/devcontainers/features/github-cli:1": {},
    "ghcr.io/devcontainers/features/node:1": {},
    "ghcr.io/devcontainers-extra/features/ripgrep:1": {},
    "ghcr.io/robbert229/devcontainer-features/postgresql-client:1": {}
  },
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [
    3000
  ]
  // Uncomment the next line if you want start specific services in your Docker Compose config.
  // "runServices": [],
  // Uncomment the next line if you want to keep your containers running after VS Code shuts down.
  // "shutdownAction": "none",
  // Uncomment the next line to run commands after the container is created.
  // "postCreateCommand": "cat /etc/os-release",
  // Configure tool-specific properties.
  // "customizations": {},
  // Uncomment to connect as an existing user other than the container default. More info: https://aka.ms/dev-containers-non-root.
  // "remoteUser": "devcontainer"
}
Note: As of this writing, it looks like you can only get Node version 22 as a feature in the official Microsoft images. If you need other versions of node, you can get them from third-party community builds.
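If a feature supports it, you can pin a version by passing options to the feature entry. A sketch (whether a given version tag is actually available depends on the feature):

```json
"features": {
  "ghcr.io/devcontainers/features/node:1": {
    "version": "22"
  }
}
```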
From here you should be able to tell VS Code to Open Workspace in Container. You’ll see it spin up, and you’ll see a bunch of logs. If it’s working properly, you’ll see it build an image from your Dockerfile and then spin up the containers. Eventually the VS Code terminal will open, and you’ll be dropped into a bash prompt inside your dev container. Success!
But obviously it’s not gonna work the first time. I had to debug several issues first.
Required files should be optional
Because I was only using docker compose for local development up to this point, I had my local .env file referenced in the docker-compose.yml under the env_file property.
services:
  app:
    image: my-app
    container_name: my-app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    env_file:
      - backend/.env
The problem is that by default, this file is required. If it’s not present, docker compose will error. When you go to build the dev container, this file may not be present, probably because you put it in .gitignore so you didn’t accidentally check it into version control. That’s the right thing to do. So the fix here is to use the extended format to specify that the env_file is optional.
services:
  app:
    image: my-app
    container_name: my-app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    env_file:
      - path: backend/.env
        required: false
Adding your env variables back to the build
Because Dev Containers won’t have your .env file, you’ll need to add the environment variables directly to the docker-compose.yml. Yes, this is redundant, but I don’t know a cleaner alternative. Make sure you’re only adding standard env variables and not secrets. Secrets will go elsewhere, but they’re fine in the .env for now. According to the docker compose docs, you can have both env_file and environment specified in your config, but entries in environment will always take precedence. So you can have your .env file for local development and the environment entries for when it gets pushed to GitHub.
services:
  app:
    image: my-app
    container_name: my-app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      PORT: "3000"
      AWS_REGION: us-west-2
      S3_BUCKET_NAME: bucket-dev
      SENTRY_DSN: "..."
    env_file:
      - path: backend/.env
        required: false
Note: You can also add environment variables directly to the devcontainer.json. But I didn’t try this and YMMV.
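If you do want to try that route, the devcontainer.json spec has a containerEnv property for it. A minimal sketch (untested by me, as noted above; the variable names are just examples):

```json
{
  "containerEnv": {
    "PORT": "3000",
    "AWS_REGION": "us-west-2"
  }
}
```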
Build args are different from environment variables
My project uses Hono for the backend. The frontend uses the Vue framework and gets built for production using Vite. If you’ve dealt with Vite, you know it has a specific workflow for getting environment variables into your frontend. When you prefix your environment variables with VITE_, they get built into your frontend bundle at build time. For example, if you use S3_BUCKET_NAME on the backend, you have to use VITE_S3_BUCKET_NAME in frontend code. You may be more familiar with NEXT_PUBLIC_S3_BUCKET_NAME in Next.js. Same deal. When you use Vite locally, it knows how to find the build variables in your .env file, and it just works. But when you’re using docker, these variables need to be present when the Dockerfile is being built. This can be a pain to figure out even without the added complexity of Dev Containers. I’ll just give you the answer here. The short version is that you reference the Vite variables as docker “build args”, then you inject them into the environment during the docker build.
# Receive the value as a build arg from docker compose (or --build-arg)
ARG VITE_S3_BUCKET_NAME
# Promote it to an environment variable so Vite can see it at build time
ENV VITE_S3_BUCKET_NAME=$VITE_S3_BUCKET_NAME
RUN npm run build ...
Then you need to add these build args to your docker-compose.yml in the build section so they get passed into the Dockerfile execution. Yes this is redundant, but I don’t know a cleaner alternative. And once again, make sure you’re only adding standard build variables and not secrets.
services:
  app:
    image: my-app
    container_name: my-app
    build:
      context: .
      dockerfile: Dockerfile
      args:
        VITE_S3_BUCKET_NAME: "bucket-dev"
        VITE_SENTRY_DSN: "..."
    ports:
      - "3000:3000"
    environment:
      PORT: "3000"
      AWS_REGION: us-west-2
      S3_BUCKET_NAME: bucket-dev
      SENTRY_DSN: "..."
    env_file:
      - path: backend/.env
        required: false
Note: You can also pass build args manually using the CLI (don’t forget the build context at the end).
docker build --build-arg "VITE_S3_BUCKET_NAME=..." .
There may be other changes you need to make to your docker compose setup. But these are the major issues I ran into that related to Dev Containers specifically. If you’re lucky, you should be able to spin up a dev container for your project locally and try it out. Next we have to get this to work with github codespaces. Which of course requires even more steps.
Note: If your dev container spins up but your app doesn’t work, make sure the app can connect to the database, and make sure you’ve actually populated the database. It’s probably still empty!
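Since I added the PostgreSQL client feature earlier, a quick way to check both of those from the dev container terminal is something like this (a sketch using the service name and credentials from the compose file above; adjust to match yours):

```shell
# Connect to the db container by its compose service name and list the tables.
# An empty result means the schema/seed step still needs to run.
psql -h postgres -U postgres -d my_app -c '\dt'
```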
Getting your Dev Container working in GitHub Codespaces
Once you’ve got a dev container working locally, you can commit all of the config files and push them to GitHub. This includes the .devcontainer folder and any changes you had to make to Dockerfile and docker-compose.yml. I did this in a branch at first, mostly because I wasn’t confident enough yet to drop it into main, but also because I was explicitly looking for this to work on pull requests.
Creating a codespace
GitHub will recognize the .devcontainer folder in your project and allow you to create a codespace. But if you’re working in a private repo, you may need to enable Codespaces first. Then you can go to your repo page and look for Codespaces under the green “Code” button on the top right. Click to create a codespace, and make sure you select the branch that you want to use. The codespace will clone that git branch into the environment before running the build. The Codespaces docs do a decent job of introducing this stuff, so I won’t go into detail here. We’ve done most of the heavy lifting with the local Dev Container setup.
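If you prefer the terminal, the GitHub CLI can do the same thing; something like this (OWNER/REPO and the branch name are placeholders):

```shell
# Create a codespace for a specific branch (e.g. a PR's branch) of the repo
gh codespace create --repo OWNER/REPO --branch my-feature-branch
```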

Where to create new codespaces
Adding secrets
You can try to spin up a codespace in GitHub, but it probably won’t quite work yet. Codespaces works almost like using Dev Containers locally. It can find your environment variables in your docker-compose.yml files. But it can’t find any secrets. Those have to be injected by GitHub. Codespaces has its own separate space for variables and secrets. See this screenshot of the settings menu.

Where to add secrets in github settings
Once you’ve got the secrets in, try rebuilding your codespace to pick them up.
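You can also manage these from the GitHub CLI; for example (with SENTRY_DSN standing in for whatever secret you need, and the value read from a prompt):

```shell
# Set a Codespaces-scoped secret on the current repo
gh secret set SENTRY_DSN --app codespaces
```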
Running the app
When using docker containers, you usually specify a default command to run when the container starts up. This is usually where you start your web server. By default, Dev Containers overrides your command. Instead, it adds a simple command that just sleeps and keeps the codespace up and running so you can connect to it. What this means is that your app is not running by default. You’ll have to go into the codespace terminal and run it. For me it’s just an npm script.
npm run start
This is important because you want GitHub to create a unique url for you to access your app from the web browser. That doesn’t happen until the codespace detects that a port is in use. You can leave your terminal running with your app, or you can put it in the background and manage it with something like nohup or a terminal multiplexer. Depends on how sophisticated you wanna get.
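The lightweight version of backgrounding it looks like this (a sketch; app.log is just an arbitrary file name):

```shell
# Start the server detached from the terminal and capture its output
nohup npm run start > app.log 2>&1 &
# Follow the logs whenever you want to check on it
tail -f app.log
```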
Wrapping up
Hooray! Hopefully you’ve successfully got a GitHub codespace running your app. GitHub will give you a unique, generated url to access it. And you should be able to spin one up against any branch or pull request. I hope this saves someone some headaches when getting GitHub Codespaces working. Once you get it going, it’s very cool and useful. Unfortunately, I don’t have any advice for getting this working if you don’t already have your own docker setup. But the docker approach isn’t well documented in the searches I did, so this is my contribution to fixing that.