It started, as most of my side projects do, with a moment of irritation. I was sitting at my desk on a Sunday evening, running through our deploy process for the third time that week, and I thought: I could automate this. Famous last words.

The deploy workflow at the time involved SSH-ing into a VPS, pulling from the correct branch, running a build script, restarting a couple of services, and then manually verifying that everything came up clean. It took about eight minutes if everything went right, and about forty-five if something didn't. Which was often.

So I opened a new directory, ran npm init -y, and started writing what I thought would be a simple Node.js CLI tool. Something that would automate the SSH, the pull, the build, the restart. Maybe a hundred lines of code. A weekend project, tops.

Three months later, I was still working on it.

[Image: a cluttered desk with a laptop showing terminal output, coffee cup, and scattered notes]
My workspace during the "just one more feature" phase. The sticky notes tell the story.

The first version: ugly but honest

The first working version was about 60 lines. It used child_process.exec to run commands over SSH and had zero error handling. If something failed, it failed silently. But it worked — in the way that duct tape works. You wouldn't show it to anyone, but it held things together.

deploy.js — version 1

const { exec } = require('child_process');

const SSH_HOST = 'deploy@192.168.1.50';
const REMOTE_DIR = '/var/www/app';

const commands = [
  `cd ${REMOTE_DIR} && git pull origin main`,
  `cd ${REMOTE_DIR} && npm ci --production`,
  `cd ${REMOTE_DIR} && npm run build`,
  `sudo systemctl restart app.service`,
];

async function deploy() {
  for (const cmd of commands) {
    console.log(`Running: ${cmd}`);
    await new Promise((resolve) => {
      exec(`ssh ${SSH_HOST} "${cmd}"`, (err, stdout) => {
        // err is ignored entirely: the "zero error handling" in action.
        // The next command runs no matter what happened to this one.
        console.log(stdout);
        resolve();
      });
    });
  }
  console.log('Deploy complete.');
}

deploy();

I used it for about a week before the problems started. A failed git pull would leave the remote in a dirty state. A botched build would still trigger the restart. There was no rollback, no logging, no way to know what went wrong unless I SSH-ed in and looked around.

This is the part where a smarter person would have paused and thought about what they actually needed. I am not always that person.

Scope creep disguised as good intentions

Instead of fixing the error handling, I started adding features. A config file parser, so I could support multiple deploy targets. A progress spinner, because staring at a blank terminal felt wrong. Color-coded output. A --dry-run flag. A rollback mechanism that kept the last three builds.

Each feature felt justified in isolation. Together, they turned a 60-line script into a 400-line application with its own config format, a growing list of edge cases, and more TODO comments than actual logic.

"The best tool is the one you actually finish building."

— Something I told myself and then immediately ignored

My daughter noticed. She's seven, and she has a sixth sense for when I'm frustrated. One evening she walked into my office, looked at the screen full of terminal output, and said: "Is the computer being mean to you again?"

I laughed, but she wasn't entirely wrong. I'd been fighting with a YAML parser for two hours because I wanted the config file to support nested environment variables with fallback defaults. For a deploy script. That only I would use.
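For the curious, the feature I was fighting for looked roughly like this. An illustrative sketch, not the actual parser: a resolver for `${VAR:-fallback}` placeholders in config values.

```javascript
// Expand "${VAR:-fallback}" placeholders in a string, using the
// fallback when the variable is unset. E.g.:
//   expand('${HOST:-localhost}:${PORT:-3000}', {})  -> 'localhost:3000'
function expand(value, env = process.env) {
  return value.replace(/\$\{(\w+)(?::-([^}]*))?\}/g, (_, name, fallback = '') =>
    env[name] ?? fallback
  );
}
```

A regex, a nullish coalesce, done. The two hours went into making the YAML side of it work, which should have been the hint.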

The rewrite that almost wasn't

Somewhere around week six, I had a moment of clarity. I was debugging an issue where the rollback mechanism would occasionally delete the current build instead of the oldest one — a genuinely terrifying bug — and I realized I'd been solving the wrong problem the entire time.
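The bug class is worth naming. This is a hedged reconstruction, not the original code: builds lived in timestamp-suffixed directories, and the pruning logic hinged entirely on sort direction. Sort the wrong way and slice the same end, and you delete the newest builds, including the one serving traffic.

```javascript
// Hypothetical build directories, e.g. "build-1700000000" (epoch-stamped).
// Keep the newest `keep` builds; return the rest for deletion.
function buildsToDelete(dirs, keep = 3) {
  const newestFirst = [...dirs].sort(
    (a, b) => Number(b.split('-')[1]) - Number(a.split('-')[1])
  );
  // Slicing *after* the newest `keep` entries is the safe direction.
  // The bug described above amounts to getting this direction wrong,
  // so the oldest builds survived and the live one was deleted.
  return newestFirst.slice(keep);
}
```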

I didn't need a general-purpose deployment framework. I needed my deploy script to not break things. That's it. The difference between those two goals is about three months of wasted evenings.

[Image: a notebook with hand-drawn diagrams showing a simplified architecture]
The notebook page where the rewrite started. Sometimes the best debugging tool is a pen.

I sat down with a physical notebook (yes, I still keep one) and wrote out exactly what the tool needed to do. Not what it could do. Not what would be cool. What it needed:

  1. Connect to the server over SSH with a key.
  2. Run the same four commands, in order: pull, install, build, restart.
  3. Stop at the first failure, and never restart after a failed build.
  4. Log every command and its output, with timestamps, to a file.
  5. Exit nonzero on failure so I'd know to go look.
  6. Nothing else.

That was the whole list. Six requirements. I rewrote the tool in an afternoon.

The final version

Here's the core of what I ended up with. It's not glamorous. It doesn't have a spinner or color-coded output. But it's been running without issues for four months.

deploy.mjs — final version (abbreviated)

import { NodeSSH } from 'node-ssh';
import { appendFileSync, mkdirSync } from 'node:fs';
import { join } from 'node:path';

const CONFIG = {
  host:           '192.168.1.50',
  username:       'deploy',
  privateKeyPath: join(process.env.HOME, '.ssh', 'deploy_rsa'),
  remoteDir:      '/var/www/app',
};

mkdirSync('./logs', { recursive: true }); // appendFileSync won't create the directory
const LOG_FILE = `./logs/deploy-${Date.now()}.log`;

function log(msg) {
  const line = `[${new Date().toISOString()}] ${msg}`;
  console.log(line);
  appendFileSync(LOG_FILE, line + '\n');
}

async function run(ssh, cmd) {
  log(`exec: ${cmd}`);
  const result = await ssh.execCommand(cmd, { cwd: CONFIG.remoteDir });
  if (result.stdout) log(result.stdout);
  if (result.code !== 0) {
    log(`FAILED (exit ${result.code}): ${result.stderr}`);
    throw new Error(`Command failed: ${cmd}`);
  }
  return result;
}

async function deploy() {
  const ssh = new NodeSSH();

  try {
    log('Connecting...');
    await ssh.connect(CONFIG);

    await run(ssh, 'git pull origin main');
    await run(ssh, 'npm ci --production');
    await run(ssh, 'npm run build');
    await run(ssh, 'sudo systemctl restart app.service');

    log('Deploy complete.');
  } catch (err) {
    log(`DEPLOY FAILED: ${err.message}`);
    process.exit(1);
  } finally {
    ssh.dispose();
  }
}

deploy();

Fifty-odd lines. No config parser, no rollback system, no YAML. Just the thing that needed to exist.

What the yak taught me

There's a phrase in programming — yak shaving — for the phenomenon of solving a chain of tangentially related problems instead of the actual problem. I'd been shaving a very large yak.

But the interesting thing is, I don't entirely regret it. The 400-line version was a waste of time in the practical sense. I threw it away. But writing it taught me things the 40-line version never could have. Most of those lessons ended up in the takeaways at the end of this post.

[Image: a close-up of a terminal window with green text, a coffee cup visible in the background]
The final version, doing its thing. Boring is beautiful.

The pancake connection

I mentioned my daughter earlier. The same week I finally shipped the rewrite, she and I were making Saturday morning pancakes — our usual routine — and she got frustrated because she couldn't flip one cleanly. It folded over on itself, a sad half-moon of batter.

"It's ruined," she said.

"It's not ruined," I told her. "It still tastes the same. And the next one will be better because you know what not to do."

She looked at me skeptically. "That's what you always say about your computer stuff."

She's not wrong. That is what I always say about my computer stuff. And I mean it every time, even when the "next one" takes three months to arrive.


Some practical takeaways

If you're building a tool for yourself and nobody else, here are the things I wish I'd internalized before starting:

  1. Write the requirements list first. Not the features you want. The problems you have. If you can't articulate the problem in one sentence, you're not ready to code.
  2. Solve for today's problem. Not tomorrow's, not the hypothetical future where you have twelve servers and a team of five. Today's.
  3. Set a scope deadline. If you haven't shipped in two weeks, stop and ask yourself what you're actually building. The answer might surprise you.
  4. Throw code away without guilt. The 400-line version wasn't wasted. It was a draft. Writers don't feel bad about drafts. Neither should we.

The CLI tool has been running in a cron job since November. It's deployed our app about 80 times without a single failure. It's boring, reliable, and exactly what I needed from the start.

Sometimes the best engineering is knowing when to stop.