
How a "Mockup First" Strategy with Google AI Studio Changed Our Development Flow

On a client AI tool project, leading with Google AI Studio mockups dramatically improved feedback quality and speed. A practical record of three-lane parallel development.

#AI #GoogleAIStudio #ClaudeCode #mockup #workflow #client-work

“Build something they can touch” — first

On an AI business tool project for a major food distribution company, we made an unusual call.

The normal path: requirements → design → develop → client review. We flipped it. We built screen mockups in Google AI Studio first and put them in the client’s hands.

Screen design, prompt validation, and feature development running in parallel. Three lanes, simultaneously.

Three-Lane Parallel Development

Here’s how the project was structured:

  • Lane 1: Screen Design (Google AI Studio) — Create mockups first, get client feedback on look and feel
  • Lane 2: Detailed Design (Vertex AI Studio) — Prompt validation and accuracy tuning
  • Lane 3: Feature Development (GCP Cloud Run) — Build the actual template engine

Why separate them? To prevent client confusion.

Show a client an in-progress build and “Is this the final version?” is guaranteed. Development UI is rough. Specs change. Expectations drift.

Mockups are different. They represent “what it will look like.” The client touches the screen and says “move this button here” or “I need this information higher.” Concrete feedback, not abstract concerns.

Google AI Studio Hit a Wall

Google AI Studio is fast. One prompt, one screen. But the limits showed up quickly.

Design quality was uncontrollable. Output varied wildly — color choices felt amateur, spacing was off, typography was generic. Prompting “make it refined” didn’t produce consistent results.

Multi-page consistency collapsed. Single screens were fine. But navigation, shared headers and footers, cross-page flow — it all broke down. Every new page drifted further from the design language.

The Fix: Stop Prompting, Start Feeding Code

We brought in Claude Code.

We stopped asking AI Studio to “make it look good.” Instead, we built the design system first in Claude Code — CSS variables, fonts, spacing rules — and fed that code directly into Google AI Studio.

“Follow this CSS and build the XX screen.”

Not instructions in words. Constraints in code. Design consistency held. AI Studio stayed fast but gained guardrails. Claude Code stayed precise but didn’t need to mass-produce pages.
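The constraint-feeding step can be sketched as a small prompt builder: a fixed design-system stylesheet is pasted into every generation request as code, not described in adjectives. This is a minimal illustrative sketch; the token names, values, and the `build_screen_prompt` helper are assumptions, not the project's actual code.

```python
# Hypothetical design system: the kind of CSS variables, fonts, and spacing
# rules built in Claude Code and then fed verbatim to Google AI Studio.
DESIGN_SYSTEM_CSS = """\
:root {
  --color-primary: #1a4f8b;          /* brand blue */
  --color-surface: #f7f8fa;
  --font-body: "Noto Sans JP", sans-serif;
  --space-unit: 8px;                 /* all spacing in multiples of this */
  --radius: 6px;
}
"""

def build_screen_prompt(screen_name: str, requirements: str) -> str:
    """Compose a mockup prompt that carries the design system as a hard constraint."""
    return (
        "Follow this CSS exactly; do not invent new colors, fonts, or spacing.\n"
        f"```css\n{DESIGN_SYSTEM_CSS}```\n"
        f"Build the {screen_name} screen as a single HTML page.\n"
        f"Requirements: {requirements}"
    )

prompt = build_screen_prompt("template-list", "show recommended templates, one-click select")
```

The design point is that every per-screen request carries the same stylesheet, so each generated page starts from identical guardrails instead of re-deriving the look from prose.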

Client Review Speed Changed

The result:

Every 1-2 weeks, the client could review real screens in a browser. Not Figma stills. Working HTML. Sales reps could actually click through and test the flow.

“What happens when I press this?” — answerable on the spot. Feedback precision went up.

And by the time the template engine development was wrapping up, mockup-driven feedback had already converged the screen and feature requirements. The same mockup could be used for the executive progress report. Minimal gap between what the development team built and what the client expected.

The Tool Choice Shifted Too

One more pivot worth noting.

We’d originally planned to use Dify for prompt validation. The team pushed back: “It’s cumbersome.” Extra hosting, separate auth management — unnecessary complexity for a validation phase.

The call: unify on Vertex AI Studio. Everything inside GCP Console. Same foundation as the production API, so validation-to-production migration was seamless. Client access via IAM with view-only permissions — simple.

“Complete it within GCP” as a design principle eliminated tool sprawl.

Template Selection Time Down 90%, Overall Effort Down 80-85%

The quantitative impact:

  • Template selection: AI auto-recommendation cut selection time by 90%
  • Information gathering: form-based input plus a database cut it by 70%
  • Image selection/editing: AI image generation cut it by 85%
  • Overall: 4-5 hours per item → 30 min - 1 hour (an 80-85% reduction)
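The headline figure follows from the per-item times above; a minimal arithmetic check (the hour ranges are taken directly from the list):

```python
# Per-item effort before and after, from the figures above.
before_hours = (4.0, 5.0)   # 4-5 hours per item
after_hours = (0.5, 1.0)    # 30 min - 1 hour per item

# Extremes of the reduction across the two ranges.
best = 1 - after_hours[0] / before_hours[1]   # 0.5h against 5h
worst = 1 - after_hours[1] / before_hours[0]  # 1h against 4h

print(f"reduction range: {worst:.0%} - {best:.0%}")  # prints "reduction range: 75% - 90%"
```

The typical-case band of that range is the quoted 80-85%.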

The numbers speak for themselves.

Conclusion: Knowing an AI Tool’s Limits Is the Best Way to Use It

Google AI Studio isn’t omnipotent. But for the strategy of “build a mockup first and let the client touch it,” it was the right tool. The quality wall was broken by combining it with Claude Code. Dify’s complexity was resolved by unifying on Vertex AI Studio.

The point isn’t finding an “all-purpose AI.” It’s knowing each tool’s limits and deploying them where they’re strong. Sometimes feeding code as constraints beats crafting the perfect prompt.

Mastering AI means deciding what you won’t ask it to do.