I treat AI like a toolbox. On good days it hands me a torque wrench. On bad days it hands me a rubber chicken. The trick is knowing which one you are holding before you bring it to a production incident.
During a recent internal AMA, we went wide on what actually works, where it still falls apart, and why humor helps. I had just come back from vacation, still peeling open my inbox, when our moderator surprised me with a full hour. No slides. No scripts. Plenty of stories. Below is the cleaned-up version with the same spirit.
The neat thing about AI right now: most people touch it daily, just not always where you expect. And I keep seeing the same patterns in the wild.
The usual complaint shows up quickly: the model gives an answer that lands almost right. That is fine if your plan expects it. AI drafts. Humans decide. The value comes from acceleration and from catching more issues in the time you already spend, not from handing your judgment to a probabilistic parrot.
My personal rule: start tiny, measure something a reasonable person cares about, expand only when the result holds. Crawl, then walk, then run. It sounds boring. It works.
We work in Google Workspace, so Gemini sits where our documents, sheets, and chats already live. After vacation, I ask for a digest of the rooms I missed with decisions, risks, owners, and dates I should not forget. In Docs and Sheets I let it draft and redline. For engineering we added Gemini to GitHub Actions so every pull request gets a first read. It flags obvious defects and recurring smells. A human reviewer still owns the call. I like the split: the AI catches the silly misses, and people spend their attention on judgment.
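For the curious, the CI step is roughly this shape. A minimal sketch, not our exact pipeline: it assumes the google-generativeai package, a GEMINI_API_KEY secret, and a PR number handed in by the workflow, and every name in it is illustrative.

```python
import os
import requests
import google.generativeai as genai

# Configure the model; the API key comes from a repository secret.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

# An earlier workflow step is assumed to have exported the diff to pr.diff.
with open("pr.diff") as f:
    diff = f.read()

prompt = (
    "You are a first-pass code reviewer. List obvious defects and "
    "recurring smells in this diff. Be terse. Do not approve or reject.\n\n"
    + diff
)
review = model.generate_content(prompt).text

# Post the notes as a plain PR comment; a human reviewer still owns the call.
repo = os.environ["GITHUB_REPOSITORY"]  # "org/repo", set by Actions
pr = os.environ["PR_NUMBER"]            # passed in from the workflow
resp = requests.post(
    f"https://api.github.com/repos/{repo}/issues/{pr}/comments",
    headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
    json={"body": review},
    timeout=30,
)
resp.raise_for_status()
```

Nothing in it gates a merge. The bot comments, people decide.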
Meetings stopped leaking value the day we got serious about transcripts. I export the text and ask for what I wish I had typed live: action items with owners, open questions that need a champion, a short status by speaker for the next standup, a list of requirements and constraints after a refinement. Premium tools make this easy. A disciplined prompt plus a transcript gets you most of the way even without the extras.
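Here is roughly what that disciplined prompt looks like, written as a template. The four sections are the ones above; the wording is a sketch to adapt, not a canonical prompt.

```python
# Transcript digest template; the sections mirror what I wish I had typed live.
TRANSCRIPT_PROMPT = """\
Below is a meeting transcript. Produce four sections:
1. Action items, each with an owner and a date if one was stated.
2. Open questions that still need a champion.
3. A short status per speaker for the next standup.
4. Requirements and constraints, if this was a refinement.
Quote the transcript wherever an owner or date is ambiguous.

Transcript:
{transcript}
"""

def build_digest_prompt(transcript: str) -> str:
    # Plain string formatting; paste the result into whatever model you use.
    return TRANSCRIPT_PROMPT.format(transcript=transcript)
```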
Quality engineering feels like the ripest surface for leverage. We are running a proof of concept that gives structure first: environment notes, fixtures, acceptance criteria. Then the model proposes test cases and variations. People curate and extend. Early results show faster test ideation and fewer forgotten edge paths. Not a Hollywood montage, but real time saved and fewer “how did we miss that” moments.
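In code, "structure first" just means the model sees the context before the ask. A minimal sketch of that pattern; the type and field names are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class TestContext:
    environment: str        # environment notes
    fixtures: list[str]     # available fixtures
    acceptance: list[str]   # acceptance criteria

def ideation_prompt(ctx: TestContext, feature: str) -> str:
    # Context first, question last: the model proposes, people curate.
    return (
        f"Feature under test: {feature}\n"
        f"Environment: {ctx.environment}\n"
        f"Fixtures available: {', '.join(ctx.fixtures)}\n"
        "Acceptance criteria:\n"
        + "".join(f"- {c}\n" for c in ctx.acceptance)
        + "Propose test cases and variations, including edge paths. "
        "Flag any case that needs a fixture we do not have."
    )
```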
Project management has headroom too. The current crop can summarize status and clean up notes. Risk prediction still needs better data plumbing and tougher evaluation. I would love a tool that reads our signals and tells me what will slip next month with evidence. Today it tells me what slipped yesterday with confidence.
Because it is October, we turned a serious habit into a playful one. Inside Forte we are running a little event called Prompt-or-Treat. The rules are simple: bring one prompt that saves you time, show it live in two to five minutes, and submit the text so others can reuse it. We will pick a few favorites. Our judging panel: a quality chief who speaks in edge cases, a delivery chief who sees around corners, and me, who likes prompts that survive Monday morning. The prizes are less important than the ritual. A good prompt is basically reusable process. Sharing them raises the floor for everyone.
If your company wants a zero-theater way to spread adoption, try a version of this. No slide decks. Just a calendar invite and people showing one thing that works.
Presentation builders that promise to turn a paragraph into a board-ready deck still miss two things that matter: brand system discipline and a story that earns attention. I let the model rough out a skeleton when I am tired, then I rewrite the narrative and fix the layout because I care about both.
Another gap: assistants that claim to manage a project for you. They can summarize, they can nudge, and they can fill a status template. They do not yet reason across shifting constraints, stakeholders, and trade-offs the way an experienced PM does on a Tuesday with three blockers and a surprise vendor email. We will get closer as we wire in better context and better data. Today it is a good helper, not a manager.
At work I push Gemini hard because it lives in our ecosystem and respects our boundaries. At home I drift to ChatGPT for casual things because it remembers more about me and because my kid enjoys the “explain dinosaurs like they are running a startup” joke. The model then draws T. rex with a tie. Everybody wins.
Two practical bits my teams asked me to mention. First: Google AI Studio and NotebookLM are worth the time if you live in Google land. There is more power in there than the marketing pages suggest. Second: we have a small workflow where Gemini runs as a PR reviewer in CI. It does not gate merges. It gives a consistent pass and never gets grumpy after back-to-back reviews. People do the final read with more energy.
Story one. I ran an AI summary over a monster internal thread to save my afternoon. The output included a confident action item: bring a cake. It turned out to be an inside joke. I laughed, then I added a rule for myself: anything that looks like party planning in a code review thread gets a manual check.
Story two. A colleague did not join a call, but their note-taker bot did. We tried to prank the bot with absurd action items. Pick up my kids at five. Buy two liters of cola. The summary ignored all of it. Bot one, pranksters zero. Even the robots have standards.
Pick one workflow and give it two weeks. Choose a metric a practical person would care about: PR review latency, cycle time, defect escape rate, or coverage growth. Write a small prompt template with structure and one example. Run the pilot. If the number moves in the right direction, automate the trigger. Save the prompt where people can find it. Treat AI output like a junior draft. People remain accountable. Guardrails stay on: client permissions, data boundaries, compliance.
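The measurement half can stay just as boring. A sketch for one of those metrics, PR review latency, assuming you can export (opened, first-review) timestamp pairs from your tooling:

```python
from datetime import datetime
from statistics import median

def median_review_latency_hours(pairs: list[tuple[str, str]]) -> float:
    """Each pair holds ISO timestamps for one PR: (opened_at, first_review_at)."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    return median(
        (datetime.strptime(done, fmt) - datetime.strptime(opened, fmt)).total_seconds() / 3600
        for opened, done in pairs
    )
```

Run it over the two weeks before the pilot and the two weeks after. If the number does not move, the prompt goes back on the shelf.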
Then do something cultural. Hold a thirty-minute prompt exchange every other week. One person, one prompt, one real win. No theater. Adoption grows when the next step is obvious.
Pair curiosity with criticism. Break problems into context, constraints, and examples so the model has a fair chance. Verify outputs. Run them, test them, compare them. Grow toward an architect’s mindset. Learn the system and the purpose, not just the snippet. You will move faster than your peers who wait for permission.
Outside work, the wins are small and frequent. Inbox triage that lists the ten messages I truly need to answer today and the reasons why. Hands-free lists so I can add groceries without stopping work. Household troubleshooting where a photo of a washing machine error code turns into a diagnostic checklist. Sometimes the suggestion points to the right spare part. Sometimes it does not. I check before I buy and keep my ego out of it.
AI will not run your team for you. Teams that learn faster with AI will pass teams that poke at it once a quarter. Start small. Measure honestly. Keep people in charge. When you find a prompt or a tiny workflow that helps, share it. That is how crawl turns into walk, and walk turns into run.
Pablo
Lightly copy-edited for typos with an assistant. The stories, the cake, and the rubber chicken come straight from my notes and my teams.