
AI adoption inside engineering teams usually starts the same way.
One engineer finds a prompt that works well. Another engineer builds a workflow around commits. Someone else figures out how to fix CI faster with an AI tool. Everyone is experimenting, but everyone is doing it slightly differently.
For a while that’s fine.
But as teams grow, something strange happens. AI usage becomes fragmented. Best practices stay inside people’s heads. New engineers don’t know how the team is actually using AI.
As I described during one of our engineering sessions:
“Since this whole AI boom came to us, it’s kind of the Wild West. Every engineer interacts with AI in their own way - their own prompts, their own conventions, their own habits.”
The real challenge wasn’t whether AI worked.
The challenge was how to turn individual usage into something the entire team could benefit from.
At the beginning, everyone on the team had their own approach.
Different engineers were:
- writing their own prompts
- building their own commit workflows
- using their own tricks to fix CI faster

None of that was visible to the rest of the team.
Some engineers developed very effective workflows. Others were still figuring things out. New hires had no idea where to start.
The gap kept growing.
“You end up with senior engineers having their own recipes for how they work with AI. Then a new hire comes in and asks - how does this team actually work?”
That’s when we realized something important.
If AI is going to be part of the software development lifecycle, it can’t stay an individual productivity trick.
It has to become a team system.
The breakthrough came when we changed how we thought about prompts and workflows.
Instead of treating them as personal setups, we started treating them like shared engineering assets.
That meant AI configurations should be:
- versioned, like any other code
- reviewed, through pull requests
- shared, so one improvement reaches everyone
In other words, AI behavior should live in code, not just in people’s heads.
As I explained during the session:
“What if we treated AI config like code? Version it. Review it. Share it. Merge once and propagate everywhere.”
That idea changed everything.
The setup we landed on was simple but powerful.
We introduced two layers.
The first layer is a shared repository that holds the team’s reusable AI workflows.
Think of it as the AI operating layer for the team.
Inside that repo we structured things like this:

```
commands/
agents/
scripts/
init.sh
```
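The `init.sh` script at the root might look something like this - a minimal sketch, assuming the repo is cloned to `~/team-ai` and that the AI tool reads user-level commands from `~/.claude` (both paths are assumptions, not the team’s actual layout):

```shell
#!/usr/bin/env sh
# Hypothetical init.sh sketch: link the shared repo's workflows into the
# user-level Claude config so every engineer loads the same commands.
# REPO_DIR and CLAUDE_DIR defaults are assumptions, not the actual layout.
REPO_DIR="${REPO_DIR:-$HOME/team-ai}"
CLAUDE_DIR="${CLAUDE_DIR:-$HOME/.claude}"

mkdir -p "$CLAUDE_DIR"

# Symlinks mean "merge once and propagate everywhere": updating the repo
# updates every engineer's environment with no copying.
ln -sfn "$REPO_DIR/commands" "$CLAUDE_DIR/commands"
ln -sfn "$REPO_DIR/agents" "$CLAUDE_DIR/agents"

echo "Shared AI workflows linked from $REPO_DIR"
```

Symlinking rather than copying is what makes the shared repo the single source of truth.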
Commands included things like branch creation, commit helpers, and CI fixes.
Every engineer on the team could use them.
The second layer is project-level: each project still keeps its own AI configuration.
Inside the repository we maintain things like:

```
.claude/
best_practices/
commands/
project_docs/
```
These files include best-practice guides, project documentation, and project-specific commands.
This allows AI to understand the context of the codebase, not just generic programming patterns.
One of the biggest lessons we learned was that distribution matters more than documentation.
You can build great commands and workflows, but if engineers have to manually install them or remember where they live, adoption drops quickly.
The solution was automatic propagation.
Every engineer’s environment loads the shared repo automatically through a small initialization script.
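A minimal sketch of that hook, assuming the shared repo lives at `~/team-ai` and each engineer’s shell profile sources this function (the path, function name, and pull strategy are all assumptions):

```shell
# Hypothetical auto-update hook: refresh the shared AI repo on each new shell.
# A no-op when the repo is not cloned, so it never breaks an engineer's setup.
update_team_ai() {
  repo="${1:-$HOME/team-ai}"
  if [ -d "$repo/.git" ]; then
    # --ff-only avoids surprise merges in a repo nobody edits locally
    (cd "$repo" && git pull --quiet --ff-only)
  fi
}

update_team_ai  # called from .bashrc/.zshrc; engineers wake up to new commands
```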
When someone adds or improves a command, it goes through a pull request, gets merged once, and propagates to every engineer’s environment.
No copying. No manual setup.
“Without auto update, adoption dies. With auto update, everyone gets the new command automatically the next morning.”
That small detail turned the system from optional tooling into part of the workflow.
This didn’t start with a big architecture plan.
It started with something small.
Creating Git branches.
Engineers were constantly copying branch names from JIRA tickets and formatting them manually. It was repetitive and annoying.
So we created a command.
/team:branch
The command reads the ticket and generates the branch name correctly.
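The ticket-reading part depends on the tracker, but the naming logic behind such a command can be sketched in plain shell. The helper name and the `TICKET/slug` naming scheme here are assumptions for illustration, not the team’s actual convention:

```shell
# Hypothetical sketch of the formatting behind /team:branch: turn a ticket
# ID and title into a consistent branch name.
make_branch_name() {
  ticket="$1"   # e.g. "PROJ-123"
  title="$2"    # e.g. "Fix login redirect loop"

  # Lowercase the title, replace runs of non-alphanumerics with "-",
  # and trim stray leading/trailing dashes.
  slug=$(printf '%s' "$title" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed 's/^-//; s/-$//')

  printf '%s/%s\n' "$ticket" "$slug"
}
```

For example, `make_branch_name "PROJ-123" "Fix login redirect loop"` prints `PROJ-123/fix-login-redirect-loop`.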
It sounds small, but once engineers saw it working, the next question appeared naturally:
“What else can we automate?”
Then came commands for commits, CI fixes, and pull requests.
Each step built on the previous one.
“It started with one simple task, then the next one, then the next one. Eventually you end up with a whole ecosystem.”
As the system evolved, some commands began orchestrating others.
For example, we built a command that can implement small tickets.
It works roughly like this: the command reads the ticket, creates a branch, drafts the implementation, and opens a pull request.
All of that can happen automatically.
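Wired together, the orchestration could look roughly like this. `claude -p` is Claude Code’s real headless mode, but `/team:implement` and the `gh` step are hypothetical stand-ins for how the pieces might chain, not the team’s exact setup:

```shell
# Hypothetical orchestration sketch for small tickets. /team:branch and
# /team:implement are illustrative command names.
implement_ticket() {
  ticket="$1"
  claude -p "/team:branch $ticket"     # create a conventional branch
  claude -p "/team:implement $ticket"  # draft the change and run the tests
  gh pr create --fill --draft          # open a PR for a human to review
}
```

The last step is the important one: the pipeline ends at a pull request, not a merge.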
But the important part is that humans still review the result.
“You can go grab a coffee and come back - but a human still reviews the PR.”
Automation speeds things up, but it doesn’t remove engineering judgment.
Another lesson came the hard way. AI tools can execute commands. Sometimes they execute the wrong ones.
At one point I stepped away from my computer while an AI agent was running.
When I came back, my local database was gone.
“Claude decided to run rails db:drop and recreate the database. I came back and asked - What the hell are you doing?”
The AI apologized politely, but the damage was done. That moment forced us to take permissions seriously.
We introduced explicit guardrails to prevent dangerous commands from running.
Examples include blocking things like:

- `rails db:drop`
- `git push --force`
- delete operations
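With Claude Code, one place such guardrails can live is a checked-in deny list in `settings.json`. A sketch - the exact pattern syntax and matching rules may vary by version, so treat these patterns as illustrative:

```json
{
  "permissions": {
    "deny": [
      "Bash(rails db:drop:*)",
      "Bash(git push --force:*)",
      "Bash(rm:*)"
    ]
  }
}
```

Because the file lives in the repo, the guardrails themselves go through code review like everything else.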
This made it clear that AI systems must follow the same safety standards as any other automation.
Another improvement was moving best-practice documentation into the repository.
Instead of leaving guidance buried in Notion or Confluence, we added files like:

- `backend_best_practices.md`
- `testing_best_practices.md`
- `migration_guidelines.md`
Now every AI session loads those documents automatically.
That means the AI understands how our team expects code to be written.
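One way to wire up that automatic loading, assuming Claude Code’s `CLAUDE.md` import syntax (`@path`), is to reference the guides from the project’s memory file - a sketch:

```markdown
<!-- CLAUDE.md: loaded at the start of every session -->
# Project context

@best_practices/backend_best_practices.md
@best_practices/testing_best_practices.md
@best_practices/migration_guidelines.md
```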
“Now it’s a living document. Every commit, every review, every session has access to that knowledge.”
One of the interesting parts of this process was how many team conversations it triggered.
For example: should pull requests default to draft or ready for review? Some engineers preferred draft. Others preferred ready. Instead of forcing one opinion, we made it configurable.
The key idea was simple.
“Codify conventions, not opinions.”
AI systems should reflect how the team actually works, not just one person’s preference.
At the beginning, AI usage was individual experimentation.
Today it’s collaborative. Engineers add commands. Others improve them. The team reviews changes through pull requests. The knowledge compounds. Instead of isolated prompts, we now have shared workflows.
“The goal is to move away from individual usage and make it a team effort.”
If your team is experimenting with AI, you don’t need a big transformation plan.
Start small. Find a repetitive task. Turn it into a command. Share it. Review it. Improve it.
Then repeat the process.
The cycle looks like this:

1. Find a repetitive task.
2. Turn it into a command.
3. Share it and review it.
4. Improve it.
5. Repeat.
Over time, the system grows naturally.
AI tools come with their own defaults. But engineering teams already have their own conventions.
Frameworks like Rails are built around conventions. Engineering teams operate the same way.
Now we have a chance to encode those conventions into AI workflows.
“Your AI tools should reflect your team - not just the model’s defaults.”
That’s when AI stops being a personal productivity trick and becomes a team capability.