Summary of "DAY 3 (Part 1) The BCI & Neurotechnology Spring School 2026"
Main ideas & concepts (what the video teaches)
- BCI / neurotechnology spring school overview
- The DAY 3 (Part 1) session covers the pipeline from signals → meaning, including:
- EEG-based feature extraction
- band power
- event-related potentials (ERPs)
- artifact handling
- The event is global and large-scale, framed as the “Woodstock of neurotechnology,” emphasizing shared learning across many countries and institutions.
- How EEG signal processing is done in practice (Johannes Kunvald + Sebastian Seagal Lightner)
- A practical real-time EEG processing pipeline is demonstrated using g.tec gpipe (a Python SDK for real-time signal processing and BCI).
- The talk moves from preprocessing basics toward feature extraction and ERP/event analysis.
- Key EEG feature extraction methods
- Band power extraction (e.g., alpha power) in real time:
- Band-pass filtering to isolate a frequency range
- Squaring to convert to power-like measures
- Moving average to average over a short window
- Log transform to stabilize statistical behavior (helpful for classification assumptions)
- Normalization using broadband power so power changes reflect meaningful attention/oscillations rather than artifacts or non-stationarity
- Evoked/event-related potentials (ERPs)
- A stimulus presentation/paradigm generates triggers
- Use trigger nodes aligned to stimulus events (soft triggers)
- Extract and average trial-locked ERP waveforms; discuss what happens when trials contain artifacts
- Artifact handling
- Artifacts can’t simply be “ignored”; they must be managed through:
- prevention (setup, careful behavior)
- trial rejection (for epoched data)
- signal correction/removal (for continuous data)
- The demo introduces OSCAR, a real-time artifact removal tool integrated into gpipe with ~250 ms delay
- Historical evolution of “feature extraction → decoding” in BCIs (Reinhold “Randy” Sher)
- Core framing:
- BCIs decode intent from brain measurements
- performance is constrained not just by classifiers, but heavily by the features/representations chosen
- Historical phases:
- Synchrony-based EEG features (e.g., band power like alpha; ERP averaging)
- Better spatial information (e.g., ECoG features like high-gamma and spatial patterns)
- More invasive recordings (spiking, multi-unit, LFP) reveal non-stationarity and “stability crises”
- Shift toward latent state models: neural activity treated as observations generated from an underlying low-dimensional latent state (a standard formulation is sketched after this list)
- Main message:
- “Better classifiers won’t fix a wrong question.” Representation and stability matter.
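For concreteness, a standard latent-state formulation (an illustrative textbook form, not necessarily the notation used in the talk) is the linear-Gaussian state-space model, in which high-dimensional neural observations \( y_t \) are generated from a low-dimensional latent state \( x_t \):

```latex
\begin{aligned}
x_t &= A\,x_{t-1} + w_t, & w_t &\sim \mathcal{N}(0, Q) \\
y_t &= C\,x_t + v_t,     & v_t &\sim \mathcal{N}(0, R)
\end{aligned}
```

Decoding then targets \( x_t \) rather than the raw observations, which is one way to cope with recording non-stationarity.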
- Auditory neurotechnology signal processing & selective auditory attention (Adrian Mai)
- Auditory attention decoding can use:
- evoked/ERP activity
- ongoing EEG (e.g., regression-based speech-envelope tracking)
- EMG around the ears (post-auricular muscle activity can carry attention information)
- Auditory evoked potentials:
- ABR (auditory brainstem response): early waves
- middle latency responses
- late cortical responses
- Improving ABR SNR:
- stimulus design using chirps
- PCA for broadband ABR extraction
- wavelet transforms for time-frequency representations
- ITPC (inter-trial phase coherence) to distinguish whether attention changes power or phase consistency
- Extracting auditory evoked potentials from ongoing speech:
- use acoustic edge triggers
- Interpretable deep learning for EEG feature extraction & classification (JP / Alex “Osachi”)
- Focus on interpretable neural decoding using:
- physiologically meaningful spatial and temporal filtering
- learned weights to interpret which brain regions/dynamics contribute
- Key point:
- avoid shortcut learning (e.g., models using artifacts/muscle activity instead of brain activity)
- separate “measurement physics” from “neural/physiology dynamics” via architecture and interpretability methods
- Practical future/ethics & clinical neuromodulation context
- Neuroethics for industry-academia partnerships (Tristan Macintosh)
- Why neuroethics matters: trust, safety, accountability, regulation readiness
- Common partnership risks:
- competing incentives/timelines
- IP ownership disputes
- presumed trust
- inadequate post-trial access/support
- communication failures/expectation gaps
- Proposed response strategy: three-part approach (intrapersonal, interpersonal, operational)
- Clinical epilepsy neuromodulation biomarkers (Jonathan Parker / JP)
- Challenge in DBS programming:
- seizure freedom isn’t immediate
- clinicians lack fast biomarkers
- seizure counts alone are insufficient due to rhythms and long-term effects
- Goal: early EEG biomarkers to guide settings
- Inner speech decoding (Erin Kun)
- Inner speech as a more comfortable alternative to attempted speech or miming
- System uses Utah arrays to decode phone probabilities, then a language model generates text
- Discusses representational differences across:
- attempted speech vs imagined (inner) speech vs listening
- Safeguards against decoding unintended inner thoughts
- Demonstrates decoding speed improvements and autonomy-preserving strategies (e.g., training-time “silence,” keyword gating)
- Auditory attention “hearing aid”-style closed-loop concept (Nema Mascarani)
- Decodes what the user intends to attend to, then remixes audio in real time
- Emphasizes behavioral usability beyond accuracy:
- switching attention capability
- subjective preference
- reduced listening effort
- safety controls (e.g., a way to turn the system off, since suppressing the wrong talker can frustrate users)
- Envisions integrating attention-aware signals into foundation models for scene understanding
- High-performance BCI encoding for practical noninvasive use (Chingqing / Nadia / Nema)
- Themes include:
- robustness to weak/noisy signals
- fatigue reduction
- multimodal paradigms (visual, auditory, tactile)
- generalization/personalization (small calibration, transfer learning, meta-learning)
- scaling sensor arrays (e.g., “ultra-high density EEG”)
Detailed methodology / instructions presented
A) Real-time EEG preprocessing pipeline (g.tec gpipe demo)
Setup
- Use a real-time pipeline including:
- EEG amplifier node (example: PCI Core 8)
- visualization scope node (time-series scope)
Step 1: High-pass filter
- Add a 1 Hz high-pass filter to remove DC offset from DC-coupled amplifiers.
Step 2: Notch filter for line noise
- Add a notch (band-stop) filter for power-line interference:
- centered at 50 Hz with a narrow stop band (±2 Hz margin)
- note: in the US/Canada, this would be 60 Hz
Step 3: Low-pass filter
- Add a low-pass filter for standard EEG viewing/analysis:
- cutoff at 30 Hz, giving an overall 1–30 Hz band (the exact range can be debated, but it is used here for dry-electrode demos)
Outcome
- Improves readability by removing offset, reducing line noise, and keeping artifact/blink visibility manageable for demonstration.
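For readers following along outside gpipe, the same three-stage chain can be sketched offline with SciPy (the filter orders, the 250 Hz sampling rate, and zero-phase filtering are assumptions of this sketch, not details from the demo):

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 250  # Hz; assumed sampling rate for this sketch

def preprocess(eeg: np.ndarray) -> np.ndarray:
    """Offline equivalent of the demo chain: high-pass -> notch -> low-pass.

    eeg: array of shape (n_samples,) or (n_channels, n_samples).
    """
    # Step 1: 1 Hz high-pass to remove the DC offset of DC-coupled amplifiers
    b, a = butter(4, 1.0, btype="highpass", fs=FS)
    x = filtfilt(b, a, eeg, axis=-1)
    # Step 2: 50 Hz notch (60 Hz in the US/Canada); Q = f0 / bandwidth,
    # so Q = 50 / 4 approximates the ±2 Hz stop band from the demo
    b, a = iirnotch(50.0, Q=12.5, fs=FS)
    x = filtfilt(b, a, x, axis=-1)
    # Step 3: 30 Hz low-pass for standard EEG viewing
    b, a = butter(4, 30.0, btype="lowpass", fs=FS)
    return filtfilt(b, a, x, axis=-1)
```

Note that filtfilt is zero-phase and therefore offline-only; a real-time node would apply a causal filter instead (e.g., scipy.signal.lfilter with carried filter state).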
B) Real-time band power extraction (example: alpha power)
Preprocessing chain
- Band-pass to target range:
- example: 1–30 Hz broadband reference
- then isolate the alpha band (typically 8–12 Hz)
Compute power from band-limited EEG
- Narrow to the desired band (alpha, beta, etc.)
- Square the band signal (power-like transformation) using an equation node:
- \( x \rightarrow x^2 \)
Temporal averaging
- Apply a moving average over a short window:
- example: 0.5 s
- with sampling rate 250 Hz → 0.5 s = 125 samples
Stabilize distribution
- Apply log transform to stabilize variance/uncertainty:
- squared + averaged power values are non-stationary
- log makes them more comparable across levels
Normalization (important caveat)
- Normalize alpha power using broadband power to remove confounds/artifacts:
- broadband pipeline: square → moving average → log
- normalized alpha:
- in the log domain, subtract broadband power from alpha power
- (equivalent to division in linear domain)
- Goal: preserve meaningful modulation while reducing artifact-driven changes
Visualization / validation
- Use multiple scopes to inspect:
- raw EEG
- band-passed EEG (alpha)
- squared alpha
- averaged alpha power
- log-transformed alpha power
- normalized alpha power
- Demonstrate modulation by asking the subject to open/close eyes.
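A minimal offline sketch of this chain in Python (the demo builds it node-by-node in gpipe; the filter order, the placeholder signal, and the epsilon inside the log are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # Hz

def band_log_power(eeg, low, high, win_s=0.5):
    """Band-pass -> square -> moving average -> log, for one channel."""
    b, a = butter(4, [low, high], btype="bandpass", fs=FS)
    banded = filtfilt(b, a, eeg)
    power = banded ** 2                        # squaring -> power-like measure
    win = int(win_s * FS)                      # 0.5 s at 250 Hz = 125 samples
    smoothed = np.convolve(power, np.ones(win) / win, mode="same")
    return np.log(smoothed + 1e-12)            # log stabilizes the distribution

eeg = np.random.randn(30 * FS)                 # placeholder single-channel signal
alpha_log = band_log_power(eeg, 8.0, 12.0)     # alpha band
broad_log = band_log_power(eeg, 1.0, 30.0)     # broadband reference
alpha_norm = alpha_log - broad_log             # log-domain subtraction == linear-domain division
```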
C) Evoked potentials / ERPs extraction with triggered epochs (visual attention task)
Stimulus presentation
- Use a paradigm presenter to display stimuli, e.g.:
- checkerboards with different spatial frequencies
- faces
Triggers
- Use soft triggers emitted by the paradigm presenter that include:
- timing + target value
- Use trigger nodes:
- one per stimulus class
- Trigger nodes feed triggered signals into a trigger scope for ERP visualization
ERP computation / trial handling principles
- Early trials may contain movement artifacts.
- Without artifact rejection:
- contaminated trials show unusually high amplitudes
- later trials may converge once artifacts diminish
ERP interpretation guidance
- Compare ERP shapes across electrodes (e.g., central vs occipital channels).
- Interpret using differences between attended stimulus types (e.g., face-evoked ERP vs checkerboard).
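A hedged sketch of trigger-locked epoching and averaging for one stimulus class (the epoch window and the baseline correction are illustrative choices, not values stated in the demo):

```python
import numpy as np

FS = 250  # Hz, assumed

def average_erp(eeg, trigger_samples, tmin=-0.1, tmax=0.5):
    """Cut trigger-locked epochs for one stimulus class and average them.

    eeg: preprocessed single channel, shape (n_samples,)
    trigger_samples: sample indices emitted by the paradigm presenter
    """
    pre, post = int(-tmin * FS), int(tmax * FS)
    epochs = np.stack([eeg[t - pre : t + post]
                       for t in trigger_samples
                       if t - pre >= 0 and t + post <= len(eeg)])
    # Baseline-correct each trial on the pre-stimulus interval
    epochs = epochs - epochs[:, :pre].mean(axis=1, keepdims=True)
    return epochs, epochs.mean(axis=0)  # single trials and the averaged ERP
```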
D) Artifact handling (OSCAR integrated into gpipe)
Conceptual rules
- Prefer avoiding artifacts at the source.
- If needed:
- for epoched data: trial rejection
- for continuous data: correction/removal
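As a generic illustration of the epoched-data rule (this is simple peak-to-peak rejection, not OSCAR, which corrects continuous data; the 100 µV threshold is an assumption):

```python
import numpy as np

def reject_trials(epochs, ptp_max=100.0):
    """Drop epochs whose peak-to-peak amplitude exceeds a threshold.

    epochs: (n_trials, n_samples); ptp_max in the signal's units (e.g., µV).
    """
    ptp = epochs.max(axis=1) - epochs.min(axis=1)
    keep = ptp < ptp_max
    return epochs[keep], keep  # clean epochs plus a boolean mask for bookkeeping
```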
OSCAR usage
- Enable OSCAR via a flag in the gpipe pipeline (example):
enableOscar=true/false
- OSCAR typically requires a short stable period (“a couple of seconds of good EEG”) before effective correction begins
- After enabling, demonstrate reduced artifact effects for:
- blinks
- head movements
- electrode touching
- etc.
Deployment
- OSCAR is described as rolled out across multiple systems (including PCI cores, Nelus, hybrid black) and integrated into gpipe/GIS architecture.
E) Auditory attention decoding approaches (Adrian Mai / Nema Mascarani)
Evoked potential approach
- Use unpredictable auditory oddball sequences (standards vs deviants, random interstimulus intervals)
- Attend to one ear/source; ignore the other
- Extract attention markers such as N1 enhancement from auditory evoked potentials
Ongoing EEG / speech regression approach
- Model EEG as the convolution of stimulus features (e.g., the speech envelope) with a temporal response function (TRF)
- Use stimulus reconstruction as an inverse approach
- Compare attended vs ignored reconstruction/TRFs
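A minimal forward-TRF estimate via ridge regression (the lag range, regularization strength, and z-scored inputs are assumptions of this sketch):

```python
import numpy as np

def ridge_trf(stimulus, eeg, fs=250, max_lag_s=0.4, lam=1e3):
    """Forward TRF: model EEG as the lagged speech envelope weighted by a TRF.

    stimulus, eeg: (n_samples,), time-aligned, assumed z-scored.
    """
    n_lags = int(max_lag_s * fs)
    # Design matrix X[t, k] = stimulus[t - k] for lags 0 .. max_lag_s
    X = np.stack([np.roll(stimulus, k) for k in range(n_lags)], axis=1)
    X[:n_lags, :] = 0.0  # discard samples that wrapped around
    # Ridge solution: w = (X'X + lam*I)^-1 X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)
```

Stimulus reconstruction swaps the roles: lagged EEG predicts the envelope, and the attended talker is the one whose envelope reconstructs best.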
Single-trial mechanisms via time-frequency analysis
- Use wavelets for time-frequency power/phase analysis
- Use ITPC to determine whether attention changes:
- power, or
- phase consistency (jitter reduction)
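Given complex time-frequency coefficients (e.g., from Morlet wavelets), ITPC is the magnitude of the trial-averaged unit phasor; a short sketch:

```python
import numpy as np

def itpc(tf_coeffs):
    """Inter-trial phase coherence from complex time-frequency coefficients.

    tf_coeffs: complex array of shape (n_trials, n_freqs, n_times).
    Returns values in [0, 1]: 0 = random phase, 1 = perfect phase locking.
    """
    phasors = tf_coeffs / np.abs(tf_coeffs)  # keep phase, discard power
    return np.abs(phasors.mean(axis=0))      # magnitude of the trial-averaged phasor
```

Because power is discarded before averaging, ITPC isolates phase-consistency effects from the power effects seen in the same wavelet decomposition.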
Speech edge-event segmentation
- Compute the derivative of the speech envelope
- Detect edges via threshold crossing
- Segment EEG around these acoustic edge triggers to extract evoked responses from ongoing speech
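A sketch of the edge-trigger procedure (the threshold heuristic and refractory period are assumptions):

```python
import numpy as np

def edge_triggers(envelope, fs=250, thresh=None, refractory_s=0.2):
    """Derive acoustic-edge trigger samples from a speech envelope."""
    d = np.diff(envelope)                      # derivative of the envelope
    if thresh is None:
        thresh = d.mean() + 2 * d.std()        # heuristic threshold (assumption)
    # Upward threshold crossings
    crossings = np.where((d[:-1] < thresh) & (d[1:] >= thresh))[0] + 1
    triggers, last = [], -np.inf
    for c in crossings:                        # refractory period: one trigger per edge
        if c - last >= refractory_s * fs:
            triggers.append(c)
            last = c
    return np.asarray(triggers, dtype=int)
```

The returned sample indices can feed the same trigger-locked epoching routine sketched in section C.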
F) Neuroethics strategy for industry-academia partnerships (Tristan Macintosh)
Problem categories identified
- competing values/incentives
- timeline mismatches
- IP ownership disputes
- presumed trust
- post-trial access/support expectations
- expectation-setting and communication failures
Response framework (3 domains)
- Intrapersonal (within individuals/teams)
- test assumptions
- run internal audits
- pause and dialogue about tradeoffs
- Interpersonal (stakeholder-to-stakeholder)
- perspective taking
- explicit communication and relationship-building
- transparency
- meaningful engagement of patients/clinicians early
- Operational (process and governance)
- explicit policies and procedures
- clarity in contracts (timeframes, IP, support obligations)
- planned responsibility assignment for long-term device support
G) Inner speech BCI safeguards / autonomy (Erin Kun)
Key concept
- Differentiate:
- attempted speech
- inner speech
- listening
Safeguard strategies described
- For attempted speech BCIs (avoid decoding inner speech):
- include imagined trials labeled as “silence”
- For inner speech BCIs:
- use keyword detection / gating so decoding starts only when the user intends it
Training targets
- Use language models driven by RNN-decoded phone probabilities
Autonomy consideration
- Monitor and reduce decoding of unintended private inner utterances.
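A toy sketch of the keyword-gating idea (the keyword, the text-level interface, and the function names are hypothetical; the real system gates decoding of RNN phone probabilities before the language model):

```python
KEYWORD = "wake up"  # hypothetical gating phrase

def gated_stream(decoded_chunks):
    """Emit decoded text only after the gating keyword has been detected.

    decoded_chunks: iterable of text hypotheses from the language model
    (which itself runs on RNN-decoded phone probabilities).
    """
    armed = False
    for chunk in decoded_chunks:
        if not armed:
            armed = KEYWORD in chunk.lower()  # gate stays closed until the keyword
            continue
        yield chunk                           # decoding is surfaced only when intended
```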
Speakers / sources featured (as named in subtitles)
- Christoff (host/introducer; appears multiple times)
- Johannes Kunvald
- Sebastian Seagal Lightner
- Reinhold “Randy” Sher
- Adrian Mai
- Alex Osachi
- Nadia Lamon
- Nema Mascarani
- Erin Kun
- Tristan Macintosh
- JP / Jonathan Parker
- Chingqing
Category: Educational