Most agencies adopt AI backward. They see a demo, get excited, roll it out, then spend months cleaning up the mess.
Here’s what they miss: tools don’t fix broken systems. They amplify them.
If your briefs are unclear, AI produces unclear work faster. If your design system is weak, AI generates off-brand assets at scale. If your team doesn’t know what good looks like, AI optimizes for mediocre.
We’ve built a different approach. Fix the system first. Then add tools that actually fit.
The problem with adding tools fast
Every new tool creates dependencies. The team learns it. Workflows form around it. Clients expect outputs that only the tool can produce. If it fails to deliver, adds more cleanup than speed, or chips away at quality, you are locked into it anyway. Removing it is expensive. Keeping it is worse.
Most agencies don’t think this through. They see “AI-powered” and assume it’s an upgrade. It’s not. Speed without quality is waste. Automation without judgment is a liability.
The Plus972 AI principles
Before we deploy any AI tool, we test it against three principles:
1. Improves speed without replacing judgment
AI is useful when it accelerates something we already know how to do well. It’s a problem when it skips the thinking.
We use AI to turn messy call notes into structured briefs. It saves 30 minutes and forces clarity. But we still own the decisions—audience, promise, proof, tone. AI organizes inputs. Humans make the calls.
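To make that split concrete, here is a rough sketch of what a constrained call like that can look like, assuming the OpenAI Python SDK. The model choice, prompt wording, and the UNRESOLVED flag are illustrative, not our production setup; the four fields are the ones we own.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The decisions humans own. The model only organizes inputs into these slots.
BRIEF_FIELDS = ["audience", "promise", "proof", "tone"]

def draft_brief(call_notes: str) -> dict:
    """Turn messy call notes into a structured draft brief for human review."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        response_format={"type": "json_object"},  # JSON mode keeps output machine-checkable
        messages=[
            {
                "role": "system",
                "content": (
                    "You organize messy call notes into a draft brief. "
                    "Return JSON with exactly these keys: "
                    + ", ".join(BRIEF_FIELDS)
                    + ". Quote the notes where possible. Mark anything the notes "
                    "don't support as 'UNRESOLVED' rather than inventing an answer."
                ),
            },
            {"role": "user", "content": call_notes},
        ],
    )
    brief = json.loads(response.choices[0].message.content)
    # The human pass: anything the model couldn't ground gets decided by a person.
    missing = [f for f in BRIEF_FIELDS if brief.get(f, "UNRESOLVED") == "UNRESOLVED"]
    if missing:
        print(f"Needs a human call on: {', '.join(missing)}")
    return brief
```

The point of the structure is the last step: the model fills slots, and a person resolves whatever it couldn't ground in the notes.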
2. Fits into workflows without creating dependencies
AI should make decisions easier, not make them for you. If we can’t explain why it produced an output, override it when it’s wrong, or scale it as client needs change, it doesn’t belong in our stack.
Hard line: AI helps with inputs (research, organization) and outputs (drafts, variants, speed). Humans own strategy, positioning, creative direction, and final approval.
3. Protects brand integrity at scale
AI defaults to average. It writes like the internet and designs like a template. That’s fine for drafts. It’s a disaster for client work.
We only deploy AI where we can constrain it tightly: defined voice, locked design systems, clear formatting rules. If we can’t control the output, we don’t use it.
Example: We use AI image generation for moodboards. Generate 40 options, pick 5, extract visual rules, rebuild into our design system. AI finds the lane. The system keeps us in it.
How we deploy: three phases
When a tool passes the filter, we don’t roll it out to everyone immediately.
Phase 1: Internal experiment (1-2 people, 2 weeks)
Small team, internal projects only. Goal: stress-test and find where it breaks.
Questions we answer:
- Does it save time or just move time elsewhere?
- What new failures does it introduce?
- Can we build a repeatable process around it?
Phase 2: Controlled client pilot (one project, narrow scope)
Low-risk client project. Project lead owns the tool and is responsible for QA and cleanup.
Questions we answer:
- Does output hold up under client scrutiny?
- Does it reduce revisions or create them?
- Can we train others to use it at the same quality level?
Phase 3: Team-wide deployment (with guardrails)
Roll out with clear constraints:
- Documented workflow (when to use it, when not to)
- Quality checks before anything goes to a client
- Accountability (one person owns consistency)
We do not just add tools. We add operating procedures. If the tool can’t prove itself in Phase 1 or Phase 2, it doesn’t reach Phase 3.
What we use and why
We do not choose tools because they are trending. We choose them because they behave well inside real client work.
Example: Product launch for a commerce client
The client needed 40 product descriptions, the brand guidelines were solid, and the timeline was tight.
- Phase 1: Tested ChatGPT internally on 5 products. Found it nailed tone but missed technical specs. Built a template that forced spec inclusion.
- Phase 2: Ran 10 descriptions through client review. Two needed rewrites (AI missed brand-specific terminology we hadn’t documented). Updated the template.
- Phase 3: Deployed to team with the refined template and a checklist. Delivered all 40 descriptions in half the usual time, zero revisions.
The tool didn’t make it work. The template, the checklist, and the two-phase testing did.
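For illustration, a minimal sketch of how a template plus checklist can be enforced in code, assuming the OpenAI Python SDK. The spec fields and validation logic here are hypothetical stand-ins, not the client's actual template; the shape is what matters: a draft that drops a required spec never reaches review.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical required specs; in the real project these came from the
# client's brand guidelines and spec sheets.
REQUIRED_SPECS = ["material", "dimensions", "care"]

def draft_description(product: dict) -> str:
    """Generate a draft, then fail loudly if any required spec is missing."""
    specs_block = "\n".join(f"- {k}: {product[k]}" for k in REQUIRED_SPECS)
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Write product copy in the client's voice. Every required "
                    "spec below must appear verbatim in the description."
                ),
            },
            {
                "role": "user",
                "content": f"Product: {product['name']}\nRequired specs:\n{specs_block}",
            },
        ],
    )
    draft = response.choices[0].message.content
    # The checklist step: verify each spec value actually made it into the copy.
    missing = [k for k in REQUIRED_SPECS if str(product[k]).lower() not in draft.lower()]
    if missing:
        raise ValueError(f"Draft missing specs: {missing}. Regenerate or fix by hand.")
    return draft
```

The template constrains the input; the check constrains the output. Neither depends on the model behaving well.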
Most “AI tool stacks” fail for one reason: they optimize for novelty. You get a fast demo, a flashy output, and a team that slowly loses consistency. The tool becomes the creative director. The brand starts drifting. Revision cycles get longer, not shorter.
Our bias is different. We choose tools that do three things:
- They reduce friction in production. Less time spent formatting, reworking, hunting files, or rebuilding what already exists.
- They protect coherence. The work still looks like the brand on Monday, and still looks like the brand on Friday.
- They let humans stay in control. Clear inputs, controllable constraints, and easy overrides. No mysterious black box that everyone is afraid to touch.
Below are a few tools that have earned a place in our workflows. This is a snapshot, not an inventory. The tools can change. The bar does not.
Google Workspace AI (Gemini)
This is where work actually happens: threads, notes, drafts, decisions. We use it to turn scattered inputs into plans the team can execute—summarize meetings, extract decisions, and build next steps. It keeps projects moving.
OpenAI (ChatGPT)
We use it when thinking needs structure and speed. Briefs, positioning options, counterarguments, content architectures, and variants that stay inside a defined lane. The key is constraint. When teams use it like a content slot machine, everything starts to sound the same. When teams use it to test and sharpen decisions, it becomes leverage.
Figma
Figma is our consistency engine. Not file storage. Not “design at the end.” It is where we lock systems so production is faster and brand-safe: components, templates, spacing rules, and reusable modules. If a tool does not help us ship consistent work across many assets, it is not a priority.
Shopify Magic
In commerce builds, speed matters, but so does accuracy and scope control. We use Shopify’s native AI support to accelerate clearly bounded tasks inside the platform, like drafting and iterating on product content. It is not a strategy tool. It is an operations tool, used with guardrails.
The meta point
These tools are useful because they are not trying to be everything. They do a few jobs well, integrate into existing workflows, and let us maintain standards.
Our rule is simple: if a tool makes the work faster but lowers the floor on quality, it is out. We would rather be slightly slower than inconsistently good.
Subtract before you add
Most agencies have too many tools. The problem isn’t that you’re missing the latest platform. The problem is bloat.
Before adding anything new, run this audit:
Which tools are you paying for but not using? Cancel them. If you haven’t touched a tool in 30 days, you don’t need it.
Which tools create more cleanup than they save time? That “AI writing assistant” that requires 20 minutes of editing per output? Cut it. You’re paying to make your job harder.
Which tools make collaboration harder? If your team spends more time explaining the tool than using it, it’s overhead, not leverage.
Cut those first. Get back to a tight stack. Then—only then—consider what to add.
The agencies that win aren’t the ones with the most tools. They’re the ones with the most judgment—knowing what to adopt, what to ignore, and how to deploy what they keep.