Summary of "N. Katherine Hayles • Why We Are Still Posthuman"
High-level summary
N. Katherine Hayles argues that we remain “posthuman” but must move from merely critiquing liberal humanism to creating an affirmative, technologically informed posthumanism: one that preserves valuable humanist achievements (e.g., human rights) while addressing the flaws revealed by theory and technology.
Hayles emphasizes the cognitive nonconscious (neural processing prior to conscious awareness) as central to an updated account of cognition and the human. Because technical systems operate on much faster temporal scales than human consciousness, humans now live in “cognitive assemblages” (hybrids of humans + machines) that reshape agency, law, language, ethics, and social life. She contrasts different theoretical models of posthumanism and uses cultural examples (The Silent History, asemic writing, the Voynich Manuscript) to show how writing, reading, and human identity are being transformed by digital infrastructures.
Main ideas, concepts, and lessons
System ↔ individual dynamic
- Intellectual configurations oscillate between focusing on single objects/individuals (e.g., New Criticism, a return to aesthetics) and systemic analyses (Marxism, feminism, deconstruction).
- The posthuman debate should attend to both individual and systemic scales.
How “posthuman” was framed originally
- Hayles’s earlier work (How We Became Posthuman) diagnosed the breakdown of Enlightenment liberal humanist assumptions: autonomous rational subject, free will, primacy of consciousness.
- Challenges came from theory (deconstruction, postcolonialism, animal studies), neuroscience, robotics, AI, artificial life, embodied/extended cognition, and algorithmic language processing.
Toward an affirmative posthumanism
- Goal: retain liberal humanist achievements (e.g., human rights) while correcting anthropocentrism and the narrow focus on consciousness.
- Key tasks: overcome anthropocentrism; account for the historicity of the human; extend cognition beyond conscious mind; integrate technology critically (neither reject nor uncritically celebrate).
Three contrasting theoretical approaches
- Rosi Braidotti: nomadic/sustainable subject — trans-historical, emphasizing a transformed subject open to flux (informed by Deleuze & Guattari).
- Niklas Luhmann / Cary Wolfe reading: emphasize systems and differentiation; posthumanism as a mode of thinking with weak historicity.
- Hayles’s approach: historically situated and technology-centered; posthuman as both a way of being and a way of thinking, starting with the individual (cognitive nonconscious) then scaling to systems (cognitive assemblages).
The cognitive nonconscious (central analytic move)
- Distinct from the Freudian unconscious: refers to neural processing not accessible to consciousness.
- Timing: sensory input ≈ 0 ms; nonconscious processing begins ≈ 100–200 ms; consciousness typically comes online ≈ 500 ms (the “missing half-second”).
- Functions: pattern recognition, constructing coherent body representation, inference, learning, and pre-filtering information to avoid overwhelming consciousness.
- Empirical support: temporal masking experiments and David Eagleman’s “brain time” work showing nonconscious processes alter perceived simultaneity.
Cognitive homology between biological and technical systems
- With a broad definition of cognition (interpretation of information in context and connection to meaning), cognition occurs across scales—from single cells and plants to animals and humans.
- Technical systems (algorithms, pattern recognition, surveillance systems) perform nonconscious-like cognitive functions: pattern analysis, inference, interpreting ambiguity, and preventing overload for human operators.
- This enables a homology between biological nonconscious cognition and machine processing.
Cognitive assemblages and dependency
- Cognitive assemblage: interdependent networks of humans + machines through which interpretation, meaning, and agency circulate (e.g., airport operations, automated trading, robotic factories, publishing algorithms).
- Developed societies depend on these assemblages; they offer capabilities but also create vulnerabilities and ethical/political problems.
New illiteracy / techno-linguistic regime
- Digital inscriptions have multiple opaque code layers; many users are partially illiterate relative to full technical stacks (cannot read or modify code).
- Surveillance and data collection (e.g., Snowden revelations; ebook telemetry) mean reading/writing are instrumented and monitored.
- Algorithmic curation (recommendation systems) shapes what we read and influences literary taste.
Cultural examples and consequences
- The Silent History (iPhone app/novel): imagines children born without language who communicate via microfacial gestures; probes questions about humanity, language, implants, and surveillance.
- Asemic writing and the Voynich Manuscript: mark-making that resists semantic recapture by techno-linguistic systems and can be read as modes of post-literacy resisting algorithmic capture.
Ethical, legal, and political implications
- Distributed agency challenges institutions like law and ethics: code-driven constraints (e.g., DRM) function like law but lack features such as visibility, practical redress, and the capacity for violation.
- There is a need for affirmative regulation and rethinking of norms to govern cognitive assemblages, ensure accountability, and protect autonomy and human rights when agency is distributed.
Methodological guidance / working program
Questions to ask when studying posthumanism
- Is posthumanism a way of being and/or a way of thinking?
- Is it historically situated or trans-historical?
- Does it originate with individual-level changes or systemic/differentiation processes?
Steps toward an affirmative posthumanism
- Acknowledge and preserve liberal humanist achievements (human rights, intrinsic value).
- Analyze and correct blind spots (anthropocentrism, neglect of nonconscious cognition).
- Center technology as constitutive: examine cognitive assemblages and the mismatch between technical and human timescales.
- Study cognition broadly (single cells to distributed systems) with a low-threshold, scalable definition.
- Map temporal mismatches (machine microseconds vs human half-seconds) and their social consequences (e.g., affective capitalism, subliminal influence).
- Rework law/ethics for distributed agency: craft regulations and institutional forms suitable for shared human–machine agency.
- Explore cultural practices (literature, asemic writing, apps) to reveal and contest techno-linguistic regimes.
Concrete examples Hayles uses
- Neuroscience timing experiments (visual masking; David Eagleman’s “brain time”).
- Automated trading (orders executed in ≈5 ms) versus human conscious processing (≈500 ms).
- Airports and air-traffic control as dependent cognitive assemblages.
- The Silent History (app-novel) as a case study of language, surveillance, and networked implants.
- CAPTCHA as an instance of techno-linguistic validation and capture (using distorted text to separate humans from machines).
- Asemic writing traditions and the Voynich Manuscript as examples of undeciphered or nonsemantic mark-making that resist algorithmic decoding.
Key implications / takeaways
- Posthumanism should be affirmative and practical: attend to technology, timescales, distributed agency, and institutions.
- Cognition is not synonymous with consciousness; recognizing the cognitive nonconscious reconfigures understandings of agency, machine “thought,” and human–machine homologies.
- We live in interdependent cognitive assemblages, which create new capabilities alongside new vulnerabilities, ethical dilemmas, and legal challenges.
- Cultural practices (new media, literature, asemic art) both illustrate and can resist techno-linguistic capture.
- Scholarly and regulatory work is urgently needed to manage the social, legal, and ethical consequences of cognitive assemblages.
Speakers and sources featured
Presenters / onsite participants
- N. Katherine Hayles — main speaker (author of How We Became Posthuman and Unthought: The Power of the Cognitive Nonconscious)
- Jim English — director of the Wolf Humanities Center (introducer)
- Stewart Varner — managing director of the Price Lab for Digital Humanities (co-sponsor mentioned)
- Emily Wilson — introducer (classicist, translator)
Referenced authors, theorists, and works
- Cleanth Brooks (The Well Wrought Urn)
- John Guillory (Cultural Capital)
- Sharon Marcus and Stephen Best
- Rosi Braidotti
- Gilles Deleuze & Félix Guattari (A Thousand Plateaus)
- Niklas Luhmann (via Cary Wolfe)
- Cary Wolfe (posthumanities)
- David Eagleman (brain time)
- Terrence Deacon (The Symbolic Species)
- Alan Turing (Turing test)
- Edward Snowden (surveillance revelations)
- Mireille Hildebrandt (law and the ends of law)
- Nigel Thrift (technological unconscious)
- The Silent History (Eli Horowitz et al.)
- Asemic writing traditions and the Voynich Manuscript
- Transhumanist ideas (critically referenced)
Audience
- Multiple unnamed audience members participated in the Q&A.