Summary of "What is UX Research? | Google UX Design Certificate"
Concise summary — main ideas, concepts, lessons
Overview and definition
- UX research is the practice of understanding users’ behaviors, needs, and motivations through observation and feedback.
- The primary goal is to center the user while ensuring business needs are met; UX research helps align what the business thinks users need with what users actually need before costly development.
- Roles vary by company size: large companies may have dedicated UX researchers; smaller companies often expect a UX designer to conduct both design and research.
How UX research fits into the product lifecycle
UX research is continuous — it happens before, during, and after design.
Foundational (strategic/generative) research
- When: before anything is designed.
- Questions answered: “What should we build?”
- Purpose: define the problem to solve and surface user pain points, unmet needs, and opportunities.
Design (tactical) research
- When: during design.
- Questions answered: “How should we build it?”
- Purpose: inform and iterate on prototypes or designs (can be as early as paper sketches or with high-fidelity prototypes).
Post-launch research
- When: after release.
- Questions answered: “Did we succeed?”
- Purpose: evaluate whether the feature meets user needs and how it performs vs. competition, often using KPIs (time on task, success rates, conversion metrics).
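The course itself contains no code, but the post-launch KPIs it names (time on task, success rates, conversion metrics) are simple to compute. A minimal Python sketch with made-up session records, purely illustrative:

```python
# Illustrative only: computing common post-launch KPIs from
# hypothetical usability-session records (numbers are invented).
from statistics import mean

# Each record: (seconds spent on the task, task completed?, converted/purchased?)
sessions = [
    (42.0, True, True),
    (75.5, True, False),
    (120.0, False, False),
    (58.2, True, True),
]

avg_time_on_task = mean(t for t, _, _ in sessions)
success_rate = sum(done for _, done, _ in sessions) / len(sessions)
conversion_rate = sum(conv for _, _, conv in sessions) / len(sessions)

print(f"avg time on task: {avg_time_on_task:.1f}s")  # 73.9s
print(f"task success rate: {success_rate:.0%}")      # 75%
print(f"conversion rate: {conversion_rate:.0%}")     # 50%
```

Tracking the same metrics before and after a release makes "Did we succeed?" a measurable question rather than a judgment call.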
Key qualities of effective UX researchers
- Empathy — understanding others’ feelings and perspectives.
- Pragmatism — practical, goal-focused problem solving.
- Collaboration — the ability to work effectively with people of diverse backgrounds and roles.
These traits exist in most people to some degree; deliberate practice in research work develops them further.
How research methods are categorized
By who conducts research
- Primary research: you/your team collect original data (interviews, surveys, usability studies).
- Secondary research: using existing sources (articles, studies, statistics); useful early for background facts.
By type of data
- Quantitative: numeric, measurable data (surveys with numerical answers, metrics); answers “how many/how much” or “what.”
- Qualitative: observational, descriptive data (interviews, open-ended feedback); answers “why” or “how.”
Common research methods — when to use them, strengths, and drawbacks
Secondary research
- When to use: early in a project to gather existing stats, trends, or evidence; to support primary findings.
- Strengths: fast, inexpensive, immediately accessible.
- Drawbacks: doesn’t show how users interact with your specific product; may not capture feelings or context.
Interviews
- What they are: in-depth, open-ended conversations to capture opinions, experiences, and motivations.
- When to use: when you need detailed qualitative understanding (why/how).
- Strengths: rich detail; opportunity for follow-ups and clarification.
- Drawbacks: time-consuming and costly; small sample sizes that may not generalize.
Surveys
- What they are: structured questionnaires given to many people to quantify opinions and behaviors; can mix quantitative and qualitative items.
- When to use: to validate and quantify findings from qualitative research; to reach larger samples.
- Strengths: fast, inexpensive, scalable.
- Drawbacks: limited depth; results depend heavily on question design.
Usability studies
- What they are: observing users using prototypes or products to identify pain points and measure task performance.
- When to use: to test usability and surface interaction problems before launch (or to evaluate a live product).
- Strengths: direct observation of behavior; reveals real usability issues.
- Drawbacks: often focuses mainly on usability (not broader attitudes); can be artificial (lab vs. real-world); requires participant recruitment and compensation.
- Post-launch: usability studies can include KPIs (time on task, clicks to purchase, success rates) to evaluate performance.
Practical guidance / methodology selection rule
Choose methods based on the question you need answered:
- Use secondary research to gather existing facts and save effort when available.
- Use qualitative methods (interviews, usability testing) to understand why problems happen and gather depth.
- Use quantitative methods (surveys, metrics) to determine how widespread an issue is and to measure change over time.
Additional tips:
- Be consistent with interview protocols to enable meaningful comparisons across participants.
- Combine methods when possible (qualitative discovery → surveys to measure prevalence → usability testing to iterate on solutions).
- Document sessions (notes, recordings) for better analysis and reduced bias.
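To make "how widespread is this issue?" concrete, a survey result can be summarized as a proportion with a confidence interval. This sketch is not from the course; it uses the standard normal-approximation formula and an invented example of 87 out of 400 respondents reporting a problem first found in interviews:

```python
# Illustrative only: estimating issue prevalence from survey responses
# with a 95% normal-approximation confidence interval.
import math

def prevalence_ci(affected: int, n: int, z: float = 1.96):
    """Return (estimate, low, high) for the share of users reporting an issue."""
    p = affected / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical: 87 of 400 respondents hit the checkout problem
p, low, high = prevalence_ci(87, 400)
print(f"{p:.1%} affected (95% CI {low:.1%} to {high:.1%})")
```

This is the qualitative-to-quantitative handoff described above: interviews surface the problem, and a survey measures its prevalence so the team can prioritize it.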
Biases that affect UX research (definitions and mitigations)
Bias = favoring or prejudging something based on limited information. Biases are often unconscious but can be identified and mitigated.
Confirmation bias
- What: seeking or emphasizing evidence that supports your preconceptions.
- Mitigation: ask open-ended questions, avoid leading prompts, actively listen, and include diverse samples.
False consensus bias
- What: assuming others think like you and overestimating how many will agree.
- Mitigation: explicitly surface assumptions, recruit representative samples, and validate with data.
Recency bias
- What: over-remembering the most recent responses or impressions.
- Mitigation: take detailed notes or record sessions; review earlier data regularly.
Primacy bias
- What: over-remembering the first participant or initial impressions.
- Mitigation: same as recency — use recordings/notes and maintain consistent procedures.
Implicit (unconscious) bias
- What: automatic attitudes or stereotypes that affect decisions and can shape recruitment or questioning.
- Mitigation: reflect on behavior, invite colleagues to call out biases, recruit diverse participants, and avoid leading or stereotyped questions.
Sunk cost fallacy
- What: reluctance to abandon work you’ve invested time in, even when evidence shows it’s not solving the user problem.
- Mitigation: break projects into smaller phases with decision checkpoints; re-evaluate and pivot based on new insights.
Additional lessons and tips
- Awareness of bias improves design outcomes and personal development as a researcher.
- Use KPIs and success metrics to evaluate post-launch performance.
- Consistent methods and documentation (notes/recordings) improve analysis and reduce bias.
- Secondary research can validate or back up primary research claims.
Speakers / sources featured
- Unnamed narrator/instructor from the Google UX Design Certificate (primary speaker).
- Google / Google UX Design Certificate (source/producer).