PMs with ChatGPT: The Good, the Bad, and the Ugly of AI-Driven Code
I’ve noticed more PMs and designers wanting to use AI tools like ChatGPT to write code. It’s an intriguing trend, but it’s not as simple as plugging in prompts and watching perfect code appear. Here’s how I feel about it:
1. Curiosity Is Good, But Expectations Matter
I’m happy they’re curious enough to try. It helps them understand the complexities of our work. But AI-generated code often “looks” right while hiding logical flaws. They need to realize that success isn’t guaranteed.
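To make that concrete, here’s a hypothetical sketch (the function and scenario are mine, not from any real codebase) of the kind of output I mean: clean, idiomatic, and subtly fragile.

```python
# Hypothetical AI-generated snippet: reads perfectly, hides a logical flaw.
def average_rating(ratings: list[float]) -> float:
    """Return the mean of a list of user ratings."""
    return sum(ratings) / len(ratings)

print(average_rating([4.0, 5.0, 3.0]))  # 4.0 — looks perfect
# print(average_rating([]))             # ZeroDivisionError: a new product
#                                       # with no ratings crashes the page
```

Nothing about that code looks wrong at a glance, which is exactly the problem: the failure only shows up on an input the prompt never mentioned.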
2. Let Them Open a Pull Request (PR)
If they really want to code, I say, “Go for it!” They can open a PR, and our standard review and testing process will catch issues. If their code isn’t up to par, it won’t merge. That’s life in software engineering.
3. Failure Teaches
Sometimes, the best teacher is experience. Let them try, and if they fail, they’ll see why coding isn’t just about typing. But I don’t want them to fail in a vacuum—there should be feedback and support so it’s a learning moment, not a dead end.
4. Mentorship Over Gatekeeping
I don’t think it’s fair to shut them down immediately. It’s better to offer tips and code standards. If they understand how to properly test, lint, or follow design patterns, they might get something worthwhile out of AI-generated code.
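One concrete tip worth sharing: even a tiny unit test that exercises edge cases (empty input, zero, negatives) surfaces most of the flaws AI output tends to skip. A minimal sketch, using a hypothetical helper of my own invention:

```python
# Minimal sketch: edge-case checks for a hypothetical AI-assisted helper.
def safe_average(ratings: list[float]) -> float:
    """Mean of ratings; defined as 0.0 for an empty list."""
    if not ratings:
        return 0.0
    return sum(ratings) / len(ratings)

# Plain asserts run anywhere; pytest would also collect these in a test file.
assert safe_average([4.0, 5.0, 3.0]) == 4.0   # happy path
assert safe_average([]) == 0.0                # the edge case prompts forget
print("all checks passed")
```

Teaching someone to write those two asserts does more for them than rejecting their PR ever would.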
5. AI Tools Are Not Magic
These tools are still limited. They produce code based on patterns, not genuine “reasoning.” That means they can be genuinely helpful for boilerplate tasks, but they still need human oversight.
My Conclusion
I say we let PMs and designers dabble in AI coding under the same process we’d apply to any new contributor: branching, reviews, and guidelines. If their contributions pass, awesome. If not, it’s a lesson in why writing robust code is harder than it looks. In the end, a balanced mix of openness and caution seems best.