Discussion about this post


"Knowing how to think with AI" is doing a lot of work in that sentence — the piece explains what supervised/unsupervised/reinforcement learning are, but vocabulary and thinking are different skills. The harder problem for PMs isn't knowing what LLMs are, it's knowing when a given product problem is actually an AI problem versus a data problem versus a process problem with an AI-shaped hammer nearby. The 95% enterprise pilot failure rate (MIT's NANDA initiative, 150 interviews) suggests most PMs who can describe ML approaches still can't distinguish between "this needs a model" and "this needs clean labels and a simple classifier." What would a practical decision framework for that distinction look like — is there a set of questions PMs should ask before reaching for any ML approach at all?
