Product #123

UX for Lean Startups · Laura Klein · 2013

Most startups fail by addressing nonexistent or trivial problems. As the book states, "One of the most common mistakes that people make when thinking of a product idea is solving a problem that simply doesn't exist or that isn't bad enough for people to bother fixing it." Early validation helps test critical assumptions before investing significant resources.

Key definitions:

  • Market: People with common problems and willingness to pay
  • Product: The solution to users' problems
  • Problem: The unmet need justifying the product's existence

Validation requires testing three elements:

  1. Problem validation: Confirming people have a genuine pain point
  2. Market validation: Identifying who will pay for your solution
  3. Product validation: Ensuring your approach actually solves the problem

Research methods include:

  • Ethnographic research: Observing users in their environment
  • Landing-page tests: Measuring interest without coding a full product
  • Prototype tests: Allowing interaction with rough product versions

This approach embodies Pain-Driven Design, focusing on genuine user frustrations rather than arbitrary features.

Skipping research to meet tight deadlines often leads to costly rework. Effective research methods include:

  • Competitor Testing: Learning from others' mistakes
  • Five-Second Tests: Verifying if your landing page communicates clearly
  • Clickable Prototypes: Testing workflows without engineering resources
  • Guerrilla User Tests: Getting quick feedback from strangers in public places

For quality feedback, use these techniques:

  • "Shut the hell up" and give users time to explore
  • Ask open-ended questions instead of yes/no queries
  • Let users fail to identify natural interaction patterns

Research can move efficiently through small, iterative studies. After each test, fix major problems before testing again. Remote testing often works well, saving time and resources when a user's environment isn't critical to the product experience.

Unmoderated testing tools provide quick insights for specific tasks but won't reveal if users like or need the product. They're best for confirming that simple workflows are intuitive.

Surveys should validate hypotheses formed from qualitative research, not serve as the primary discovery tool. Keep them short and avoid leading questions.

Common excuses for skipping research:

  1. "It's a Design Standard"
  2. "Company X Does It This Way"
  3. "We Don't Have Time or Money"
  4. "We're New; We'll Fix It Later"
  5. "It's My Vision; Users Will Just Screw It Up"
  6. "It's Just a Prototype to Get Funding"

Adopt a continuous feedback mindset throughout development, not just before launch.

Qualitative research (interviews, usability studies) uncovers motivations behind user actions, while quantitative research (A/B testing, funnel analysis) examines what users are doing at scale.

"Quantitative research tells you what your problem is. Qualitative research tells you why you have that problem."

For small, single-variable changes, quantitative methods usually suffice. For multi-variable or flow changes, qualitative testing becomes essential. Deciding what to build next often requires both approaches.

Qualitative research excels at revealing confusion or excitement about features but struggles to predict future purchasing behaviour. For assessing willingness to pay, quantitative experiments (like adding "Buy" buttons before full development) provide more reliable data than simply asking users if they would purchase.

"Design is about solving problems." Lean UX encourages doing minimal design work to validate hypotheses, focusing on elements critical to testing assumptions. This approach prevents wasting time on potentially incorrect solutions.

Tools for validation-oriented design:

  1. Understand the Problem: Clarify users' real challenges
  2. Design the Test First: Identify measurable outcomes before designing screens
  3. Write Stories: Create narratives describing what users need to accomplish
  4. Discuss Solutions with the Team: Brief brainstorming to identify approaches
  5. Make a Decision: Choose the most promising solution
  6. (In)Validate the Approach: Test user interest with minimal investment
  7. Sketch, Prototype, Test, Iterate: Explore options quickly and refine based on feedback

"Instead of just building what the user asks for, build something that solves the user's real problem. As an added bonus, you might end up building a smaller, easier feature than the one the user asked for."

Lean UX emphasises essential elements: "Strip out everything that isn't necessary to validate your hypothesis or to move your key metric." This means building only what's required to test your core assumptions.

"Regardless, you need to find, design, and build everything that is absolutely necessary first and no more. Because if the necessary is an abysmal failure, there's an excellent chance that slapping on the nice-to-have won't save it."

Validation techniques include:

  • Feature stubs: Adding buttons without full backend functionality to gauge interest
  • Wizard of Oz features: Manually handling processes to test user benefits
  • Problem prioritisation: Confirming issues matter to enough users before investing

Low-return efforts to avoid:

  • Excessive visual design that doesn't improve conversions
  • Premature retention features before acquiring users
  • Complex animations that don't enhance usability

Design faster by leveraging existing patterns rather than reinventing everything:

  • Study how others implement similar features
  • Conduct competitive research to avoid repeating competitors' mistakes
  • Maintain consistency across interface elements and terminology
  • Use UI frameworks for ready-made components
  • Consider "Wizard of Oz" tests or off-the-shelf solutions before custom development

Remember that "innovative" UI isn't always beneficial. Standard solutions often work best, with only essential modifications for your specific context.

UX design involves various artefacts with different levels of detail:

  • Diagrams: Clarify flows and navigation paths (internal use)
  • Sketches: Rough layouts showing element placement (brainstorming)
  • Wireframes: Detailed screens with copy and calls-to-action (testing, documentation)
  • Interactive prototypes: Clickable experiences for early feedback (complex features)
  • Visual designs: Polish with fonts, colours, and styling (after flows are settled)

Choose artefacts based on feature complexity and team needs. Keep details minimal until validating essential flows—over-polishing too soon creates reluctance to discard flawed designs.

Minimum Viable Products include only what's absolutely necessary to solve a meaningful part of users' problems.

"Unsurprisingly, trying to shave a product down to its core is really, really freaking hard. It is perhaps the hardest thing that you will have to do."

MVP approaches include:

  • Landing pages: Validating interest before development
  • First iterations: Delivering precisely what was promised
  • Ongoing refinement: Understanding user requests before adding features

An MVP should be limited but never "crappy." A limited product does a small set of things well, while a crappy product does everything poorly, making it impossible to determine if the concept or execution is flawed.

Interaction design (how it works) and visual design (how it looks) serve different purposes but both contribute to successful products.

Visual design can enhance information presentation, reinforce desired actions, and set the tone. Delaying extensive visual design until interaction details are finalized saves rework and avoids overshadowing usability feedback.

Focus on establishing design standards:

  • Reusable color palette
  • Font standards
  • Simple icon or UI element set
  • Flexible header/footer design
  • Consistent spacing and layout rules

A/B testing determines whether specific changes improve metrics that matter to your business. It provides statistical evidence about user behavior in production.

"The single best reason for measuring design is to understand if what you are doing is making a difference, positive or negative, to your company."

What A/B testing does well:

  • Provides statistical evidence on real user actions
  • Validates design decisions with data
  • Identifies features with measurable impact

What it doesn't do well:

  • Explain why users behave certain ways
  • Work with small sample sizes
  • Solve major design questions
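The "statistical evidence" an A/B test provides usually comes down to a significance test on two conversion rates. As a rough illustration (the conversion counts below are made-up numbers, and a real test would also fix the sample size in advance), a two-proportion z-test might look like:

```python
# A minimal sketch of evaluating an A/B test with a two-proportion z-test.
# The conversion counts below are illustrative assumptions, not real data.
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Return the two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Variant B converts at 5.8% vs 5.0% for A, with 10,000 users in each arm
p = z_test(500, 10_000, 580, 10_000)
print(f"p-value: {p:.4f}")
```

Note how the standard error shrinks with sample size: this is why A/B testing breaks down with small samples, as the list above warns.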

Important metrics that indicate user satisfaction include retention, revenue, Net Promoter Score, conversion rates, engagement, and customer service contacts.

Avoid common mistakes like fixating on specific metrics without connecting them to business goals, or combining winning variations from multiple tests without confirming the final outcome.

Traditional waterfall approaches (product managers specify, designers make it look good, engineers code) are slow and inflexible. Lean cross-functional teams unite product, design, and engineering from the start, allowing faster iteration.

Small teams can combine product and UX responsibilities, streamlining decisions and ensuring user insights directly inform product direction. Validate ideas with low-tech options before investing engineering resources.

Limited release strategies include:

  • Opt-in features: Users enable them voluntarily
  • Opt-out features: Everyone gets them but can revert
  • Percentage rollouts: Release to a small subset first
  • New user rollouts: Only new users receive the feature

This approach reduces risk while allowing continuous refinement based on actual user behavior.
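A percentage rollout is typically implemented by deterministically bucketing users, so each user sees a stable variant across sessions. A minimal sketch (the feature name and 10% threshold are illustrative assumptions):

```python
# A minimal sketch of a percentage rollout: each user is deterministically
# bucketed by hashing their ID, so the same user always sees the same variant.
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Return True if this user falls inside the rollout percentage."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # roughly uniform in [0, 1]
    return bucket < percent / 100

# Release a hypothetical new checkout flow to 10% of users
print(in_rollout("user-42", "new-checkout", 10))   # stable across calls
```

Hashing on the feature name as well as the user ID means different features get independent 10% populations, rather than the same unlucky users receiving every experiment.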

Three fundamental principles:

  1. User research: Listen to users continuously
  2. Validation: Test assumptions before building products
  3. Design: Iterate repeatedly

Full Book Summary · Amazon

Quick Links

How to Go From Customer Problems to Outcome OKRs · Article

Yale’s Introduction to Game Theory · Video

Three Stages of Talent Development · Article

What to Do Before Refactoring · Article

AI and Product Management: Becoming More Evidence-Guided · Article


Why Language Models Hallucinate

Kalai, Nachum, Vempala & Zhang · 2025

Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such "hallucinations" persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty, and we analyse the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious -- they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded -- language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This "epidemic" of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field toward more trustworthy AI systems.

LLM hallucinations stem from predictable training and evaluation mechanics, not mysterious glitches: Even with perfect training data, models will generate errors because:

  • The training objective often incentivises answering over abstaining
  • Rare facts (birthdays, one-time events) appear too infrequently to learn reliably
  • Model capacity limitations prevent capturing all necessary distinctions

Most benchmarks reinforce hallucinations by:

  • Penalising "I don't know" responses while rewarding lucky guesses
  • Creating incentives for confident-sounding outputs regardless of accuracy

We should modify mainstream benchmarks to:

  • Reward appropriate uncertainty expressions
  • Establish clear guidelines for when models should answer vs. abstain
  • Promote behavioural calibration where confidence matches accuracy
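The paper's scoring argument can be reduced to a toy expected-value calculation (the confidence levels and penalty of 1 point are illustrative assumptions, not the paper's exact numbers): under binary grading, answering always beats abstaining, while a penalty for wrong answers makes abstention rational below a confidence threshold.

```python
# Toy illustration: under binary grading (no penalty), guessing always beats
# abstaining; with a 1-point penalty for wrong answers, answering only pays
# off when the model's chance of being right exceeds 50%.

def expected_score(confidence: float, wrong_penalty: float) -> float:
    """Expected score for answering with the given chance of being correct."""
    return confidence * 1.0 - (1 - confidence) * wrong_penalty

ABSTAIN = 0.0   # "I don't know" scores zero under both schemes

for conf in (0.9, 0.5, 0.2):
    binary = expected_score(conf, wrong_penalty=0.0)      # standard benchmark
    penalised = expected_score(conf, wrong_penalty=1.0)   # penalises guessing
    print(f"confidence={conf}: answer under binary grading? {binary > ABSTAIN}, "
          f"under penalised grading? {penalised > ABSTAIN}")
```

Even at 20% confidence, binary grading gives a positive expected score for guessing, which is exactly the test-taking incentive the authors argue current leaderboards create.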

Book Highlights

You might propose something completely different or even suggest something that meets them halfway. Whatever the case, proposing an alternative is a necessity in almost every situation. Tom Greever · Articulating Design Decisions
...you push. Yourself. The team. You push people to discover how great they can be. You push until they start pushing back. In these moments, always err on the side of almost-too-much. Keep pushing until you find out if what you’re asking for is actually impossible or just a whole lot of work. Get to the point of pain so you start to see… Tony Fadell · Build
Getting rid of some tasks doesn’t mean the job disappears. In the same way, power tools didn’t eliminate carpenters but made them more efficient, and spreadsheets let accountants work faster but did not eliminate accountants. Ethan Mollick · Co-intelligence

Quotes & Tweets

Surround yourself with people who are thoughtful in ways you are not, because they see what you can't. Shane Parrish
You can make serious progress training almost anything in 30 high-intensity minutes per day. Justin Skycak