Learning PowerShell with GitHub Copilot: Building a Hub–Spoke Networking Test

I wanted a project that would keep my PowerShell skills sharp and push me to learn new things. I chose to build a hub–spoke networking test with an Azure Firewall in the middle. It was complex enough to be interesting and a topic that I’m very familiar with. My PowerShell skills are somewhere in the middle—I’ve used it for years, but I’m not a pro—so this felt like the right level of challenge.

Code can be found here.

TL;DR

I initially over-explained everything to the AI and got back sophisticated code I couldn’t run or debug. I restarted with a single file and built it step by step. Targeted prompts—refactors, verbose logs, try/catch, debugging, and documentation—made the difference. AI accelerated me when I owned the decomposition and validated each step.

Why this project

The goal was to validate hub–spoke connectivity gated by an Azure Firewall. The script either creates or uses two spokes, peers them with a hub, routes inter-spoke traffic through the firewall, runs pings that are blocked in phase one, adds an ICMP allow rule, and confirms pings succeed in phase two, all while following the default policy set from the Azure Landing Zone. For the entire project I used GitHub Copilot with GPT-5 in VS Code.
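
To make the two-phase flow concrete, here is a minimal sketch of the assertion pattern, assuming PowerShell 7’s Test-Connection. The IP is a placeholder, and the real script runs its pings between the spoke VMs rather than from the machine running the script; this only illustrates the blocked-then-allowed check.

```powershell
# Minimal sketch of the two-phase check (placeholder IP; the actual script
# executes the pings from inside a spoke VM, not from the local machine).
$targetIp = '10.2.1.4'   # hypothetical private IP of the VM in the second spoke

# Phase 1: with no ICMP allow rule on the firewall, the ping is expected to fail.
$phase1 = Test-Connection -TargetName $targetIp -Count 4 -Quiet -ErrorAction SilentlyContinue
if ($phase1) { throw 'Phase 1 failed: ping succeeded while ICMP should be blocked.' }

# ...add the ICMP allow rule on the firewall and wait for it to take effect...

# Phase 2: the same ping is now expected to succeed.
$phase2 = Test-Connection -TargetName $targetIp -Count 4 -Quiet -ErrorAction SilentlyContinue
if (-not $phase2) { throw 'Phase 2 failed: ping is still blocked after adding the ICMP allow rule.' }
```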

The first attempt: what didn’t work

I started by telling the AI everything in one mega-prompt: the whole problem, the audience, the output format, and even a persona. The output looked impressive—multiple helpers and files, advanced abstractions—but the code did not run at all. I didn’t understand the flow, and debugging was painful. It was a classic “garbage in, garbage out” moment: oversized prompts invited oversized scaffolding I couldn’t own. It was a total waste of time, and I got really frustrated.

The restart: what worked

I opened a new single file and rebuilt from zero. I decomposed the work into the smallest possible steps and treated each as its own mini-task. I wrote a function, ran it, and verified it before moving on. The AI became useful when I narrowed the scope of each ask and tested immediately. If I couldn’t explain a piece of code, I refactored it until I could.

Prompts (shortened) that actually helped

  • Rewrite this function to be more optimized.
  • Add -Verbose output to these steps with meaningful messages.
  • Wrap this with try/catch and return actionable errors.
  • Help me debug: what could cause X to fail? Give likely causes and quick checks.
  • Analyze this function and suggest concrete improvements without changing behavior.
  • Document this function with inputs, outputs, and edge cases.
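
To give a flavor of what the verbose and try/catch prompts produced, here is a hypothetical helper in that spirit. The function name and parameters are invented for illustration; the point is the pattern of meaningful -Verbose messages at each step plus an error that tells you what to check next.

```powershell
# Hypothetical helper (name and parameters invented for illustration) showing the
# pattern those prompts encouraged: per-step -Verbose output and a try/catch that
# turns a failure into an actionable error message.
function Get-SpokeVnet {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)] [string] $ResourceGroupName,
        [Parameter(Mandatory)] [string] $Name
    )

    Write-Verbose "Looking up virtual network '$Name' in resource group '$ResourceGroupName'..."
    try {
        $vnet = Get-AzVirtualNetwork -ResourceGroupName $ResourceGroupName -Name $Name -ErrorAction Stop
        Write-Verbose "Found '$($vnet.Name)' with address space $($vnet.AddressSpace.AddressPrefixes -join ', ')."
        return $vnet
    }
    catch {
        throw "Could not get VNet '$Name' in '$ResourceGroupName': $($_.Exception.Message). Check the names and the active subscription context."
    }
}

# Example call:
# Get-SpokeVnet -ResourceGroupName 'rg-spokes' -Name 'vnet-spoke1' -Verbose
```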

The working loop

My loop became simple and repeatable. I started with the problem, not the solution, and divided it into small pieces. I asked the AI for focused, verifiable help and ran the results immediately. After something worked, I went back to improve it—simplify, harden, and document—so I always understood the code I was keeping.

The networking test in a nutshell

The setup is a hub VNet with an Azure Firewall and two spokes that either exist already or are created temporarily. Each spoke has an NSG and a route table that sends inter-spoke traffic to the firewall, and peerings connect each spoke to the hub in both directions. The test runs in two phases. In the first phase, ICMP is blocked and pings fail as expected. Then the script adds an ICMP allow rule—using policy-managed rules when available or classic rules otherwise—and the second phase confirms that pings succeed. The script writes a timestamped JSON artifact with assertions and evidence and then cleans up the resources it created.
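
For orientation, here is a minimal sketch of the routing and peering pieces, written against the Az.Network cmdlets. Everything in it is a placeholder rather than the actual script: the resource names, the address prefix, and the assumption that the hub and spoke VNet objects are already loaded with Get-AzVirtualNetwork.

```powershell
# Minimal sketch of routing and peering (all names and prefixes are placeholders;
# assumes Az.Network and that $hubVnet / $spokeVnet were loaded with Get-AzVirtualNetwork).
$firewall   = Get-AzFirewall -ResourceGroupName 'rg-hub' -Name 'azfw-hub'
$firewallIp = $firewall.IpConfigurations[0].PrivateIPAddress

# Route table for spoke 1: send traffic destined for spoke 2 to the firewall.
$route = New-AzRouteConfig -Name 'to-spoke2-via-fw' -AddressPrefix '10.2.0.0/16' `
    -NextHopType VirtualAppliance -NextHopIpAddress $firewallIp
$rt = New-AzRouteTable -ResourceGroupName 'rg-spoke1' -Name 'rt-spoke1' `
    -Location $spokeVnet.Location -Route $route
# (Associating $rt with the spoke subnet via Set-AzVirtualNetworkSubnetConfig /
#  Set-AzVirtualNetwork is omitted here.)

# Peer the spoke and hub in both directions.
Add-AzVirtualNetworkPeering -Name 'spoke1-to-hub' -VirtualNetwork $spokeVnet `
    -RemoteVirtualNetworkId $hubVnet.Id -AllowForwardedTraffic
Add-AzVirtualNetworkPeering -Name 'hub-to-spoke1' -VirtualNetwork $hubVnet `
    -RemoteVirtualNetworkId $spokeVnet.Id -AllowForwardedTraffic
```

The user-defined route pointing at the firewall’s private IP is what steers inter-spoke traffic through the firewall, since the spokes are only peered with the hub and have no direct path to each other.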

Did AI help?

Yes, it accelerated refactors, boilerplate, diagnostics, and documentation. It was also frustrating at times: hidden complexity, answers that did not help at all, and over-confident suggestions. The way to get the benefits without the pain was to keep the scope small, validate everything, and iterate until the code was correct and understandable.

Practical advice

Don’t accept code you can’t explain. Make logging and error messages first-class from the beginning. Prefer incremental refactors and diffs over “write everything” prompts. Treat the AI like a pair programmer that is great at drafts and rewrites, while you stay responsible for direction, constraints, and verification.

Is the code any good?

Honestly, I have no idea. It runs, and it works. Would it look different if I had coded it without AI? Absolutely. Can I read and understand it? Absolutely. So, based on the following criteria, I’d say it’s suitable for a medium-level PowerShell user like me.
The entire code can be found here.

  • I can read and understand the code
  • I can debug the code
  • It works
  • Is it easy to extend and reuse?
    • Maybe not

Closing

AI made the process faster, more educational, and more fun once I focused on decomposition and verification. Start small, finish often, and improve the code you already understand. If you’ve been using AI for infrastructure scripts, I’m curious what patterns are helping you and where it’s still falling short.
