
Share a Pebble, Build the Mountain: 3 AI Patterns I Use Every Week as a Lead Engineer

Written by Adrian Paredes | Oct 17, 2025

 

You know those days when being a developer feels less like building and more like playing detective? You’re spelunking through stack traces, negotiating with unfamiliar APIs, or retyping boilerplate that steals your afternoon. I’ve been there—often. What’s changed for me lately isn’t just “autocomplete got better.” It’s a mindset shift: I treat my editor as a thought partner that helps me reason, not just type.

 

Below are three practical AI patterns that have saved me hours, reduced the drag work, and, most importantly, kept me in flow. To make it concrete, I'll walk you through a real Friday afternoon of mine, narrated in the third person as "Adrian." It'll feel familiar.

 

Pattern 1: The Instant Debugger

The scene: Adrian pulls a new branch to review, hits run, and gets smacked with a wall of stack trace. Classic ClassNotFoundException. Not our class. Deep in Spring. Perfect Friday.

 

What he does: No essay prompts. No “please, mighty model, explain.” He highlights the entire error, pastes it into his coding assistant’s chat, and hits send.

 

What comes back: A clean diagnosis in plain English:

  • Confirms the exception
  • Identifies the library (Spring Cloud Stream)
  • Pinpoints the root cause (version mismatch)
  • Suggests the exact compatible version to use

Why it works: The model is excellent at structured pattern recognition—stack traces are basically a treasure map. What used to take 60–90 minutes of Maven/Gradle archaeology turns into a minute-long fix. You still validate the solution, but you don’t lose your afternoon to dependency spelunking.

 

How to try it:

  • Paste the full error with context (stack trace + top-level dependency list if you have it).
  • Ask for root cause + minimal fix.
  • Apply the fix, rebuild, and keep moving.

 

Pattern 2: The Logic Partner

Fixing errors is great; shipping new complexity is better.

The scene: Adrian needs a small-but-twisty bit of business logic. Conceptually simple: look up user data under a couple of conditions. The catch? The project uses Reactor. Reactive pipelines can be... expressive.

 

What he does:

  1. Writes the intent in comments: plain English, step-by-step—no syntax yet.
  2. Generates the first pass with the assistant (a sketch of what steps 1 and 2 can produce follows this list).
  3. If a few lines don’t compile, he selects just the failing part and says, “fix.”
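
To make steps 1 and 2 concrete, here is a minimal sketch of the comments-first approach. Everything in it is a hypothetical stand-in (UserLookupService, UserClient, PreferenceClient, the "active user" and "default preferences" rules), not Adrian's actual code; the point is the shape: intent as numbered comments, then a small Reactor pipeline that mirrors them.

  import reactor.core.publisher.Mono;

  // Intent, written as comments before any code was generated:
  // 1. Look up the user by id.
  // 2. If the user is inactive, complete empty (the caller treats that as "not found").
  // 3. Otherwise fetch the user's preferences.
  // 4. If no preferences exist, fall back to defaults instead of erroring.
  public class UserLookupService {

      private final UserClient userClient;
      private final PreferenceClient preferenceClient;

      public UserLookupService(UserClient userClient, PreferenceClient preferenceClient) {
          this.userClient = userClient;
          this.preferenceClient = preferenceClient;
      }

      public Mono<UserProfile> lookupProfile(String userId) {
          return userClient.findById(userId)                               // step 1
                  .filter(User::active)                                    // step 2
                  .flatMap(user -> preferenceClient.findForUser(user.id()) // step 3
                          .defaultIfEmpty(Preferences.defaults())          // step 4
                          .map(prefs -> new UserProfile(user, prefs)));
      }
  }

  // Minimal supporting types so the sketch is self-contained; real ones live elsewhere.
  interface UserClient { Mono<User> findById(String id); }
  interface PreferenceClient { Mono<Preferences> findForUser(String userId); }
  record User(String id, boolean active) {}
  record Preferences() { static Preferences defaults() { return new Preferences(); } }
  record UserProfile(User user, Preferences prefs) {}

Keeping the numbered comments next to the operators they became also makes follow-up "fix" requests easy to scope: you can select one step and its comment instead of the whole method.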

Then tests:

  • He doesn’t stop at “it runs.” He asks the assistant to generate a test suite with 100% coverage using Mockito annotations (the sketch below shows the flavor).
  • It creates a new test file with six targeted cases. They all pass on the first run.
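
Here is roughly what a couple of those tests might look like against the hypothetical UserLookupService above. The names are invented, and the use of reactor-test's StepVerifier is an assumption on my part about how a reactive method like this would be asserted; the article itself only specifies Mockito annotations.

  import static org.mockito.Mockito.when;

  import org.junit.jupiter.api.Test;
  import org.junit.jupiter.api.extension.ExtendWith;
  import org.mockito.InjectMocks;
  import org.mockito.Mock;
  import org.mockito.junit.jupiter.MockitoExtension;
  import reactor.core.publisher.Mono;
  import reactor.test.StepVerifier;

  @ExtendWith(MockitoExtension.class)
  class UserLookupServiceTest {

      @Mock UserClient userClient;
      @Mock PreferenceClient preferenceClient;
      @InjectMocks UserLookupService service;

      @Test
      void fallsBackToDefaultPreferencesWhenNoneExist() {
          when(userClient.findById("42")).thenReturn(Mono.just(new User("42", true)));
          when(preferenceClient.findForUser("42")).thenReturn(Mono.empty());

          StepVerifier.create(service.lookupProfile("42"))
                  .expectNextMatches(profile -> profile.prefs() != null)  // defaults applied
                  .verifyComplete();
      }

      @Test
      void completesEmptyForInactiveUsers() {
          when(userClient.findById("7")).thenReturn(Mono.just(new User("7", false)));

          StepVerifier.create(service.lookupProfile("7"))
                  .verifyComplete();  // no profile emitted for an inactive user
      }
  }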

Why it works: The model is strong at translating intent into idiomatic patterns—especially when you scaffold your thinking as comments. You provide the “what,” it drafts the “how,” and you keep ownership over correctness and clarity.

 

How to try it:

  • Start with a comment block describing inputs, outputs, edge cases, and constraints.
  • Generate the function; keep the comments.
  • Immediately ask for targeted unit tests (by framework and mocking style).
  • Review the tests as critically as the code.

Pattern 3: The Scripting Assistant

We all have those one-off tasks: a quick data pull, a tiny integration, a single-use utility. They matter—but they shouldn’t take your evening.

 

The scene: Adrian needs a Python script to pull stats from Elasticsearch across four indexes and print a tidy terminal summary.

 

What he does (three-step recipe):

  1. Context: “Use the style of these existing utilities.” (He points to a couple of small scripts in the repo.)
  2. Goal: “Query four ES indexes with these filters.”
  3. Output: “Print this exact terminal format.”

What he gets: A functional script that looks like it belongs in the codebase, not a random snippet from the internet. Ten-minute task, done.

 

Why it works: Narrow scope + explicit output format = high-quality first draft. By referencing local conventions, you keep your repo consistent without writing a style guide into the prompt.

 

How to try it:

  • Provide a short example file from your repo for naming, structure, and logging patterns.
  • Describe the exact CLI arguments and terminal output.
  • Ask for a small README usage note at the bottom of the script.

 

Working With Limits (and Still Winning)

AI assistants will hallucinate. That’s not “the tool is broken”; it’s how probabilistic text models behave without enough constraints. The solution isn’t wishing it were perfect—it’s managing it like any other collaborator.


Four practical guardrails:

  1. Constrain the surface area: Provide local files, interface signatures, version ranges, and relevant docs links.
  2. Be explicit about contracts: Types, error handling, timeouts, edge cases. Say them out loud (in comments) before generation; a small example follows this list.
  3. Demand tests: Specify coverage goals, frameworks, and mocking strategies. Ask for boundary tests and failure-path tests.
  4. Own the code: Read it. Refactor it. Add comments where the model was “creative.” Your name is on the commit.
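
To show what guardrails 1 and 2 can look like in practice, here is a hypothetical contract I might hand the assistant before asking for an implementation. OrderStatusClient, the timeout, and the error rule are all invented for illustration; the habit, not the specifics, is the point.

  import java.time.Duration;
  import reactor.core.publisher.Mono;

  /**
   * Contract stated up front, before any generation:
   * - Input: a non-blank orderId; blank or null must fail fast with IllegalArgumentException.
   * - Output: the order status, or an empty Mono if the order does not exist.
   * - Timeout: the downstream call is capped at TIMEOUT; on expiry, error with OrderLookupException.
   * - Errors: downstream failures surface as OrderLookupException, never raw client exceptions.
   * - Versions: name your exact Spring Boot and Reactor versions in the prompt as well.
   */
  public interface OrderStatusClient {

      Duration TIMEOUT = Duration.ofSeconds(2);

      Mono<OrderStatus> fetchStatus(String orderId);
  }

  // Supporting types so the contract is self-contained.
  record OrderStatus(String orderId, String state) {}
  class OrderLookupException extends RuntimeException {
      OrderLookupException(String message, Throwable cause) { super(message, cause); }
  }

Generated code that violates any line of that comment is easy to spot in review, which is exactly what guardrail 4 asks of you.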

The Payoff

A thought partner doesn’t just write code faster. It:

  • Unblocks you in minutes instead of hours
  • Explores alternatives without adding to your cognitive load
  • Keeps you in flow, focusing your brain on the architecture and the “why,” while it helps with the “how”

If you leave with one idea, make it this:

Share a pebble, build the mountain.
Every pattern above is a small pebble. When we share these pebbles—what worked, what didn’t—we build a mountain of practical, repeatable engineering habits that make all of us better.

 

Your move this week: Pick one annoying task in your workflow. Try one of these patterns. Keep what works. Share your pebble.