Summary of "Einführung in die qualitative Inhaltsanalyse mit Dr. Grit Laudel" (Introduction to Qualitative Content Analysis with Dr. Grit Laudel)
Presenter and scope
- Speaker: Dr. Grit Laudel (sociologist of science, TU Berlin).
- Topic: Introduction to extractive qualitative content analysis — a specific, relatively theory-driven form of qualitative content analysis.
- Aim: Position the method within qualitative research and present a practical, stepwise procedure for producing a structured information base from interview/text data.
Big-picture framing
- Research design must be aligned: research question → theoretical/conceptual framework → data collection → data analysis. In qualitative work, data collection and analysis typically alternate in an iterative (circular) process rather than a strict linear sequence.
- Methods contrasted briefly:
  - Hermeneutic/interpretive approaches (holistic).
  - Coding (widely used; can be emergent or theory-driven).
  - Extractive qualitative content analysis (the method presented here).
- Suitability: Extractive qualitative content analysis is relatively theory-driven and particularly useful when the goal is reconstruction of conditions/processes and causal or mechanistic interpretation.
- Types of research questions it supports well: descriptive, causal, and mechanistic (process-oriented).
The extractive qualitative content analysis workflow
Dr. Laudel presents four main steps. The steps below summarize the actionable procedure.
1) Development of the category system (design stage)
Purpose: translate the research question and theoretical concepts into a set of categories (variables) that structure the information to be extracted.
Key points
- Types of variables/categories:
  - Multidimensional variables (typical): values vary independently and cannot be reduced to a single ordered scale (often nominal). Example: an institutional rule described by at least three dimensions — object of the rule (what actions are affected), content of the rule (what it says), and scope (whose actions are regulated).
  - One-dimensional variables: actor characteristics (age, gender, role) — used, but less dominant in qualitative work.
  - Action/process categories: capture sequences of interrelated actions and include actor, object, content, and temporal aspects.
- A category should include these dimensions:
  - Factual dimensions (derived from variables) — e.g., subject, content, scope.
  - Time dimension — point or period (essential for process research).
  - Causal dimension — causes/effects as reported by interviewees (useful for reconstructing reported causal claims; not the final causal inference).
  - Source reference — interview ID and paragraph location for reproducibility/traceability.
- Practical rules:
  - Keep the number of categories pragmatic — typically around 10; avoid more than ~15 (the system becomes unwieldy).
  - Define categories for quick, consistent assignment (short operational definitions, not long theoretical texts).
  - The category system is open and iterative: adapt it after reviewing early interviews (add categories/dimensions, refine definitions, add likely manifestations/values).
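A category of this kind can be thought of as a small record with factual, time, causal, and source dimensions. The sketch below shows one way to model such an entry; the field names and example values are illustrative, not Laudel's own schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CategoryEntry:
    """One extracted entry for a category, covering the four
    dimensions described above (field names are illustrative)."""
    # Factual dimensions (derived from the variable's dimensions)
    actor: str
    subject: str
    content: str
    scope: Optional[str] = None
    # Time dimension: a point or period; "time of interview" as default
    time: str = "time of interview"
    # Causal dimension: causes/effects as reported by the interviewee
    reported_causes: list = field(default_factory=list)
    # Source reference for traceability back to the raw data
    interview_id: str = ""
    paragraph_id: int = 0

# Hypothetical entry based on the teaching-relief example later in the talk
entry = CategoryEntry(
    actor="university department",
    subject="teaching load",
    content="no teaching in second semester for new staff",
    interview_id="AP-03",
    paragraph_id=17,
)
```

Keeping the source reference inside every entry is what makes the later "cleaned information base" traceable back to the transcripts.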
2) Methodological preparation
- Decide corpus composition: which interviews/documents to include (typically keep all interviews unless there is a strong reason to exclude some).
- Define the unit of analysis explicitly: an interview paragraph is recommended as a practical unit of meaning for extraction.
- Prepare tools and conventions (e.g., paragraph identifiers, macros/templates, extraction forms).
3) Extraction (material processing)
Purpose: interpret text units, identify information relevant to categories, paraphrase that information into category dimensions, and record source IDs.
Practical extraction actions
- Read paragraph-by-paragraph (paragraph = unit of meaning). Mark paragraph boundaries and identifiers.
- For each paragraph containing relevant information, open the category mask/form and fill fields such as: time, source, actor, subject, content, reported effects/causal links, paragraph ID.
- Use and expand predetermined lists of likely manifestations/values (default values) to standardize entries.
- Paraphrase and store only information relevant to the category (do not copy unnecessary narrative).
- Record source references for traceability.
- Formulate and apply extraction rules (heuristics) to ensure consistent assignment (e.g., “if question X was asked, then assign the answer to category K”).
- Iterate: add new characteristics, refine categories, and update extraction rules as more data are processed.
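Extraction rules of the "if question X was asked, assign the answer to category K" kind can be made explicit as condition/category pairs. This is a minimal sketch under invented rule texts and category names, not Laudel's actual rule set.

```python
# Each extraction rule pairs a condition on the paragraph's context
# with a target category; rules and names here are invented examples.
RULES = [
    (lambda ctx: ctx["question"].startswith("How did your teaching"),
     "institutional_rules"),
    (lambda ctx: "grant" in ctx["text"].lower(),
     "funding_conditions"),
]

def assign_categories(ctx):
    """Return every category whose extraction rule matches this paragraph."""
    return [category for rule, category in RULES if rule(ctx)]

paragraph = {"question": "How did your teaching load change?",
             "text": "I got teaching relief after the large grant."}
matched = assign_categories(paragraph)  # both rules match this paragraph
```

Writing the rules down, rather than applying them implicitly, is what keeps assignment consistent as the rule set grows during iteration.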
Tools
- Laudel described an in-house tool (“Akit Mia”) and Word/Visual Basic macros to support extraction; many commercial packages are primarily designed for coding rather than extractive workflows.
Extraction example (illustration)
- From an Assistant Professor interview paragraph about teaching relief:
  - Two institutional rules were extracted as separate category entries (e.g., “no teaching in second semester for new staff”; “teaching relief if a large grant is obtained”).
  - For each rule entry, fields filled included: time (if known, or “time of interview”), source (university/department), content (teaching relief), affected action/decision (research conditions/time for research), and paragraph ID.
  - Irrelevant narrative (e.g., unrelated comments about experiments) was not extracted.
4) Summarizing, compiling and cleaning the information base
- Create an analysis table per category that contains all extracted entries across interviews.
- Clean and organize the dataset:
  - Sort entries by factual criteria or chronologically to analyze change over time.
  - Group entries with the same meaning.
  - Correct obvious errors; for contradictions, re-check transcripts and, if unresolved, mark the contradictions explicitly.
  - Retain all source references so every entry can be traced back to raw data.
- The outcome: a “cleaned information base” (structured dataset) ready for further analysis.
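The compilation step (pool entries per category, sort chronologically, group entries with the same meaning while keeping every source reference) can be sketched as follows; the entries and source labels are invented for illustration.

```python
from collections import defaultdict

# All extracted entries for one category, pooled across interviews:
# (year, paraphrased content, source reference). Data are invented.
entries = [
    (2003, "teaching relief for new staff", "AP-03 §17"),
    (2001, "teaching relief for new staff", "AP-01 §4"),
    (2002, "relief tied to large grants",   "AP-02 §9"),
]

# Chronological order supports the analysis of change over time.
table = sorted(entries)

# Group entries with the same meaning, retaining every source
# reference so each row stays traceable to the raw data.
grouped = defaultdict(list)
for year, content, source in table:
    grouped[content].append(source)

sources = grouped["teaching relief for new staff"]
```

The result mirrors the "analysis table per category" described above: one row per distinct piece of information, with all supporting sources attached.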
Further analyses (after extraction)
Using the cleaned information base you can:
- Identify recurring patterns, combinations of conditions and outcomes, and processes.
- Build typologies or pattern summaries.
- Integrate patterns to determine which factors are always present versus present only in some cases.
- Explain deviating cases — aim to account for all variation in the sample rather than ignoring exceptions.
- Use additional tools if needed (e.g., Qualitative Comparative Analysis / Charles Ragin) for systematic cross-case pattern analysis.
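The pattern-integration step, deciding which conditions are always present when the outcome occurs versus present only in some cases, can be sketched with simple set operations over invented case data (this is only a toy version of what QCA does systematically).

```python
# For each case, record which conditions were present and whether the
# outcome occurred. Cases, conditions, and outcomes are invented.
cases = {
    "case_A": {"conditions": {"teaching_relief", "large_grant"}, "outcome": True},
    "case_B": {"conditions": {"teaching_relief"},                "outcome": True},
    "case_C": {"conditions": {"large_grant"},                    "outcome": False},
}

positive = [c["conditions"] for c in cases.values() if c["outcome"]]

# Conditions present in every positive case vs. only in some of them.
always_present = set.intersection(*positive)
sometimes_present = set.union(*positive) - always_present
```

Deviating cases show up here as positive cases that break the intersection; the method's aim is to explain them rather than drop them.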
Comparison: extractive qualitative content analysis vs. coding
Similarities
- Both link text passages to analytic categories/codes and require interpretation of meaning.
- Both are interpretive and decision-heavy processes.
Key differences
- Coding:
  - Typically leaves annotated text as the primary record (like an index pointing back to the text).
  - Codes often flag presence/instances and can be emergent or theory-driven.
  - Commercial tools (MAXQDA, NVivo, etc.) primarily support coding workflows.
- Extractive qualitative content analysis:
  - Paraphrases and transfers relevant information into structured category tables; researchers subsequently work primarily from the structured information base rather than from full transcripts.
  - Category systems tend to be open/flexible; values are often not fully predetermined.
  - Explicitly aimed at reconstructing multidimensional variables, processes, and causal mechanisms rather than only descriptive cataloguing.
- Practical consequence: some teams build custom macros/tools (e.g., Word-based “Akit Mia”) because off-the-shelf software is often optimized for coding rather than extraction.
Practical tips and methodological cautions
- Start analysis early (as soon as the first interviews) to refine data collection and categories.
- Be selective: record only information relevant to your research questions.
- Keep category definitions short, operational, and usable.
- Formulate and use extraction rules to reduce ambiguity and overlap.
- Limit the number of categories to a manageable size (guideline: ~10; avoid >15).
- Always record source references and paragraph IDs for reproducibility.
- If timing is important, probe interviewees for dates; otherwise use “time of interview” as a default.
- If recordings are refused, ensure full transcripts or detailed notes to preserve traceability.
Referenced literature, tools, and historical notes
- Authors/methods cited:
  - Charles Ragin (Qualitative Comparative Analysis).
  - Gläser & Laudel (Laudel’s own work; cited 2010, 2013).
  - Miles & Huberman (and Miles et al. 2014), Johnny Saldaña.
  - Grounded theory founders (Glaser & Strauss; Corbin & Strauss).
  - German qualitative content analysis tradition (likely references to Philipp Mayring, Margrit Schreier, Udo Kuckartz).
- Tools mentioned:
  - Akit Mia (custom extraction support tool / Laudel’s Word/Visual Basic macros).
  - MAXQDA, NVivo (commercial qualitative coding software).
  - Standard Word macros / Visual Basic for custom extraction workflows.
Speakers / sources (as they appear in subtitles)
- Dr. Grit Laudel — main speaker (TU Berlin).
- Several authors and method names were referenced; auto-generated subtitles included probable transcription errors (see Notes below).
End result of the method
- A reproducible, structured information base: category tables with dimensioned entries and source identifiers that support subsequent analytic steps to answer theory-driven research questions about processes, conditions, and causal/mechanistic explanations.
Notes
- The subtitles used in the talk were auto-generated and include several probable name/spelling errors; questionable names were left as presented in the talk and flagged earlier as likely transcription errors.