Most teams try AI tooling. Few actually adopt it.
Copilot installed, used for a few weeks, back to the old workflow. That's not an AI problem. That's an adoption problem.
I use Claude Code daily. Not occasionally, but as a core part of my workflow. Here's what I've learned.
What I build with it
I'm currently building Invullen.nl: a SaaS platform for coaches and trainers. Multi-tenant architecture, questionnaire builder, respondent flows, AI-driven questionnaires and AI response analysis.
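To make the multi-tenant point concrete, here is a minimal sketch of tenant-scoped data access. All names (`Questionnaire`, `QuestionnaireRepo`, `forTenant`) are hypothetical illustrations, not the actual Invullen.nl code; the idea is simply that no unscoped query path exists.

```typescript
// Hypothetical sketch of tenant scoping in a multi-tenant SaaS.
// Every record carries a tenantId, and the repository only exposes
// tenant-scoped reads -- there is deliberately no "findAll" variant.
interface Questionnaire {
  id: string;
  tenantId: string;
  title: string;
}

class QuestionnaireRepo {
  constructor(private rows: Questionnaire[]) {}

  // The only read path: callers must name the tenant they act for.
  forTenant(tenantId: string): Questionnaire[] {
    return this.rows.filter((q) => q.tenantId === tenantId);
  }
}
```

The design choice worth reviewing by hand in any codebase, AI-assisted or not, is exactly this kind of isolation boundary: one missed scope filter leaks data across tenants.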
As a solo developer with Claude Code, I deliver significantly faster than I would without it. Not because the tool "writes" the code, but because I iterate faster. I describe what I want, review the output, adjust, and focus my time on the architectural decisions that matter.
Why it doesn't stick with most teams
No agreements on when to use AI tooling and when not to. No review process for AI-generated code. The result: inconsistent quality and justified distrust.
What does work:
- AI-generated code goes through the exact same review process
- Clear agreements: yes to boilerplate and CRUD, extra scrutiny for core business logic
- The team learns to write prompts that fit their codebase
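One way to make such agreements stick is to write them down where the tooling can see them. Claude Code reads a `CLAUDE.md` file at the project root and treats it as standing instructions. The file below is an illustrative example, not a prescribed template; the path `src/errors/` is hypothetical.

```markdown
# CLAUDE.md — team conventions (illustrative example)

## When to use AI assistance
- Boilerplate, CRUD endpoints, tests, migrations: go ahead.
- Core business logic (billing, tenant isolation): human-written
  first, AI suggestions only during review.

## Codebase conventions
- All database access goes through the tenant-scoped repository layer.
- Follow the existing error-handling pattern in src/errors/ (hypothetical path).
- Every change, AI-generated or not, goes through the normal review process.
```

Writing the agreements into the repository has a second benefit: new team members learn the rules the same way the tooling does.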
And if you build AI into your product: the EU AI Act
Using AI tooling internally for development falls into the lowest risk category. But as soon as you build AI features into your product, transparency obligations apply at minimum. Depending on the domain, your product can shift into a higher risk category with stricter requirements. That distinction needs to be a conscious decision for your team.
The real problem is adoption, not technology
Developers need to build trust with the tooling. That doesn't take a 30-minute demo. It takes someone who works alongside the team for a few months, showing how it works in their own codebase, with their own conventions.