How We Turned Scattered Prompts into Shared AI Systems for Our Engineering Team

AI adoption inside engineering teams usually starts the same way.

One engineer finds a prompt that works well. Another engineer builds a workflow around commits. Someone else figures out how to fix CI faster with an AI tool. Everyone is experimenting, but everyone is doing it slightly differently.

For a while that’s fine.

But as teams grow, something strange happens. AI usage becomes fragmented. Best practices stay inside people’s heads. New engineers don’t know how the team is actually using AI.

As I described during one of our engineering sessions:

“Since this whole AI boom came to us, it’s kind of the Wild West. Every engineer interacts with AI in their own way - their own prompts, their own conventions, their own habits.”

The real challenge wasn’t whether AI worked.

The challenge was how to turn individual usage into something the entire team could benefit from.

The real problem: AI knowledge stays isolated

At the beginning, everyone on the team had their own approach.

Different engineers were:

  • creating branches in different ways
  • writing commits differently
  • structuring PR descriptions differently
  • using AI tools differently
  • experimenting with different prompts

None of that was visible to the rest of the team.

Some engineers developed very effective workflows. Others were still figuring things out. New hires had no idea where to start.

The gap kept growing.

“You end up with senior engineers having their own recipes for how they work with AI. Then a new hire comes in and asks - how does this team actually work?”

That’s when we realized something important.

If AI is going to be part of the software development lifecycle, it can’t stay an individual productivity trick.

It has to become a team system.

The shift: treat AI configuration like code

The breakthrough came when we changed how we thought about prompts and workflows.

Instead of treating them as personal setups, we started treating them like shared engineering assets.

That meant AI configurations should be:

  • versioned
  • reviewed
  • shared across repositories
  • automatically updated
  • improved by the team

In other words, AI behavior should live in code, not just in people’s heads.

As I explained during the session:

“What if we treated AI config like code? Version it. Review it. Share it. Merge once and propagate everywhere.”

That idea changed everything.

The architecture that made it work

The setup we landed on was simple but powerful.

We introduced two layers.

1. A shared AI utilities repository

This repo holds the team’s reusable AI workflows:

  • commands
  • agents
  • scripts
  • shared configurations
  • best-practice prompts

Think of it as the AI operating layer for the team.

Inside that repo we structured things like this:

commands/
agents/
scripts/
init.sh

Commands included things like:

  • branch creation
  • commit generation
  • pull request creation
  • code review automation
  • CI troubleshooting

Every engineer on the team could use them.

2. Project-specific context

Each project still keeps its own AI configuration.

Inside the repository we maintain things like:

.claude/
  best_practices/
  commands/
  project_docs/

These files include:

  • backend best practices
  • testing rules
  • migration guidelines
  • framework conventions
  • project-specific knowledge

This allows AI to understand the context of the codebase, not just generic programming patterns.

Why auto-update changed everything

One of the biggest lessons we learned was that distribution matters more than documentation.

You can build great commands and workflows, but if engineers have to manually install them or remember where they live, adoption drops quickly.

The solution was automatic propagation.

Every engineer’s environment loads the shared repo automatically through a small initialization script.

When someone adds or improves a command:

  1. The change is merged once.
  2. The next time engineers open their terminal, the update is available.

No copying. No manual setup.
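A minimal sketch of what such an initialization script might look like, assuming the shared repo is cloned to a well-known location and sourced from every engineer's shell profile (the path, repo URL, and variable names here are placeholders, not our actual setup):

```shell
#!/usr/bin/env sh
# Hypothetical init.sh - keeps the shared AI utilities repo in sync.
# Paths and the repo URL are illustrative placeholders.

AI_UTILS_DIR="${AI_UTILS_DIR:-$HOME/.ai-utils}"
AI_UTILS_REMOTE="${AI_UTILS_REMOTE:-https://example.com/team/ai-utils.git}"

if [ ! -d "$AI_UTILS_DIR/.git" ]; then
  # First run: fetch the shared repo (ignore failure when offline).
  GIT_TERMINAL_PROMPT=0 git clone --quiet "$AI_UTILS_REMOTE" "$AI_UTILS_DIR" 2>/dev/null || true
else
  # Later runs: fast-forward to the latest merged commands in the
  # background, so shell startup stays fast.
  (cd "$AI_UTILS_DIR" && git pull --quiet --ff-only 2>/dev/null || true) &
fi

# Put the shared commands on PATH for this session.
export PATH="$AI_UTILS_DIR/scripts:$PATH"
```

The important design choice is that the sync happens as a side effect of opening a shell, so nobody has to remember to pull.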

“Without auto update, adoption dies. With auto update, everyone gets the new command automatically the next morning.”

That small detail turned the system from optional tooling into part of the workflow.

Start small: one command is enough

This didn’t start with a big architecture plan.

It started with something small.

Creating Git branches.

Engineers were constantly copying branch names from JIRA tickets and formatting them manually. It was repetitive and annoying.

So we created a command.

/team:branch

The command reads the ticket and generates the branch name correctly.
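The command itself is a prompt, but the naming step behind it is easy to sketch. Assuming the ticket ID and title have already been fetched from JIRA, and using a `feature/<ticket>-<slug>` scheme purely as an illustrative convention:

```shell
# Hypothetical sketch of the naming step behind /team:branch.
# The "feature/<ticket>-<slug>" scheme is an example convention.
make_branch_name() {
  ticket_id="$1"   # e.g. "PROJ-123"
  title="$2"       # e.g. "Fix login redirect loop"
  # Lowercase the title and replace anything non-alphanumeric with "-".
  slug=$(printf '%s' "$title" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed 's/^-//; s/-$//')
  printf 'feature/%s-%s\n' "$ticket_id" "$slug"
}

make_branch_name "PROJ-123" "Fix login redirect loop"
# → feature/PROJ-123-fix-login-redirect-loop
```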

It sounds small, but once engineers saw it working, the next question appeared naturally:

“What else can we automate?”

Then came:

  • commit generation
  • PR creation
  • CI troubleshooting
  • automated code review

Each step built on the previous one.

“It started with one simple task, then the next one, then the next one. Eventually you end up with a whole ecosystem.”

Commands can call other commands

As the system evolved, some commands began orchestrating others.

For example, we built a command that can implement small tickets.

It works roughly like this:

  1. Read the JIRA ticket
  2. Create the branch
  3. Implement the change
  4. Generate commits
  5. Create the pull request

All of that can happen automatically.

But the important part is that humans still review the result.

“You can go grab a coffee and come back - but a human still reviews the PR.”

Automation speeds things up, but it doesn’t remove engineering judgment.
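The five steps above can be sketched as a thin orchestrator. In this sketch `run_step` only echoes; in the real command each step would invoke another shared command, and all names here are illustrative:

```shell
# Hypothetical orchestration sketch of the "implement a small ticket" command.
run_step() {
  echo "step: $*"
}

implement_ticket() {
  ticket="$1"
  run_step fetch-ticket "$ticket"    # 1. read the JIRA ticket
  run_step create-branch "$ticket"   # 2. create the branch
  run_step implement "$ticket"       # 3. implement the change
  run_step commit                    # 4. generate commits
  run_step open-pr "$ticket"         # 5. create the PR - a human still reviews it
}

implement_ticket "PROJ-123"
```

The value of composing commands this way is that each step stays independently usable: an engineer can still run just the branch or commit step on its own.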

AI also needs guardrails

Another lesson came the hard way. AI tools can execute commands. Sometimes they execute the wrong ones.

At one point I stepped away from my computer while an AI agent was running.

When I came back, my local database was gone.

“Claude decided to run rails db:drop and recreate the database. I came back and asked - What the hell are you doing?”

The AI apologized politely, but the damage was done. That moment forced us to take permissions seriously.

We introduced explicit guardrails to prevent dangerous commands from running.

Examples include blocking things like:

  • rails db:drop
  • git push --force
  • delete operations

This made it clear that AI systems must follow the same safety standards as any other automation.
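A minimal sketch of such a guardrail, assuming the agent tooling supports a pre-execution hook that passes the proposed command line as an argument and treats a non-zero exit as “blocked” (the pattern list is illustrative, not our full deny list):

```shell
# Hypothetical guard script: a deny-list for commands an AI agent proposes.
is_blocked() {
  cmd="$1"
  for pattern in 'rails db:drop' 'git push --force' 'rm -rf'; do
    case "$cmd" in
      *"$pattern"*) return 0 ;;  # dangerous pattern found: block it
    esac
  done
  return 1
}

if is_blocked "${1:-}"; then
  echo "blocked dangerous command: ${1:-}" >&2
  exit 1
fi
```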

Best practices should be living documents

Another improvement was moving best-practice documentation into the repository.

Instead of leaving guidance buried in Notion or Confluence, we added files like:

  • backend_best_practices.md
  • testing_best_practices.md
  • migration_guidelines.md

Now every AI session loads those documents automatically.

That means the AI understands how our team expects code to be written.

“Now it’s a living document. Every commit, every review, every session has access to that knowledge.”

Conventions matter more than opinions

One of the interesting parts of this process was how many team conversations it triggered.

For example: Should pull requests default to draft or ready for review? Some engineers preferred draft; others preferred ready. Instead of forcing one opinion, we made it configurable.

The key idea was simple.

“Modify conventions, not opinions.”

AI systems should reflect how the team actually works, not just one person’s preference.
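One way to sketch that configurability in shell: store the choice as a per-repo git config key and let the PR command read it. The key name `team.prMode` is illustrative; the `--draft` and `--fill` flags are real gh CLI flags:

```shell
# Hypothetical sketch: encode draft-vs-ready as a per-repo convention.
# Set it once per repo with: git config team.prMode ready
pr_flags() {
  mode=$(git config --get team.prMode 2>/dev/null || echo draft)
  if [ "$mode" = "draft" ]; then
    echo "--draft --fill"
  else
    echo "--fill"
  fi
}

# Usage: gh pr create $(pr_flags)
```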

The cultural shift matters as much as the tooling

At the beginning, AI usage was individual experimentation.

Today it’s collaborative. Engineers add commands. Others improve them. The team reviews changes through pull requests. The knowledge compounds. Instead of isolated prompts, we now have shared workflows.

“The goal is to move away from individual usage and make it a team effort.”

How to start tomorrow

If your team is experimenting with AI, you don’t need a big transformation plan.

Start small. Find a repetitive task. Turn it into a command. Share it. Review it. Improve it.

Then repeat the process.

The cycle looks like this:

  1. Identify a pain point
  2. Prototype a command
  3. Share it with the team
  4. Implement auto-update 
  5. Iterate together

Over time, the system grows naturally.

The main takeaway

AI tools come with their own defaults. But engineering teams already have their own conventions.

Frameworks like Rails are built around conventions. Engineering teams operate the same way.

Now we have a chance to encode those conventions into AI workflows.

“Your AI tools should reflect your team - not just the model’s defaults.”

That’s when AI stops being a personal productivity trick and becomes a team capability.
