This repository has been modernized and is now maintained by Emir Hamurcu.
The codebase has been refactored to follow modern Python packaging standards with improved modularity.
This repository contains the Python SDK code for the Brain Computer Interface (BCI) developed by the company `MindAffect <https://mindaffect.nl>`_.
When installed with the right hardware, you can do things like `shown in this video <https://youtu.be/MVuQzaqDkKI>`_.
Full documentation, including the `tutorials <https://mindaffect-bci.readthedocs.io/en/latest/tutorials.html>`_, is available at https://mindaffect-bci.readthedocs.io/.
#. Clone or download this repository::

       git clone https://github.com/mindaffect/pymindaffectBCI
       cd pymindaffectBCI

#. Install the package::

       pip install -e .

   Alternatively, install the released version directly from PyPI::

       pip install --upgrade mindaffectBCI

Try the off-line analysis on-line on Binder:
.. image:: https://mybinder.org/badge_logo.svg
   :target: https://mybinder.org/v2/gh/mindaffect/pymindaffectBCI/pip_test
Try off-line analysis of multiple datasets on `Kaggle <https://www.kaggle.com/mindaffect/mindaffectbci>`_.
You can run a quick test of the installation, without any additional hardware, by running::
python3 -m mindaffectBCI.online_bci --acquisition fakedata
Essentially, this runs the SDK test code, which simulates a fake EEG source and then runs the full BCI sequence: decoder discovery, calibration and prediction.
If all is successfully installed then you should see a window like this open up.
If you now press 2, you should see a flickering grid of "buttons" like the one below. A random button will briefly flash green (it's the target), then rapidly flicker, and eventually turn blue (to indicate it has been selected).
If all this works then you have successfully installed the mindaffectBCI python software. You should now ensure your hardware (display, amplifier) is correctly configured before jumping into BCI control.
For rapid visual stimulation BCI (like the noisetagging BCI), it is very important that the visual flicker be displayed accurately. However, as the graphics performance of computers varies widely it is hard to know in advance if a particular configuration is accurate enough. To help with this we also provide a graphics performance checker, which will validate that your graphics system is correctly configured. You can run this with::
python3 -m mindaffectBCI.presentation.framerate_check
As this runs it will show, in a window, your current graphics frame-rate and, more importantly, the variability in the frame times. For good BCI performance this jitter should be less than 1 ms. If you see jitter greater than this, you should probably adjust your graphics card settings. The most important setting to check is that `vsync <https://en.wikipedia.org/wiki/Screen_tearing#Vertical_synchronization>`_ is turned on. Many graphics cards turn this off by default, as it (in theory) gives higher frame rates for gaming. However, for our system frame-rate is less important than exact timing, so always turn vsync on for visual Brain-Computer Interfaces!
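To get a feel for what the checker measures, here is an illustrative sketch (not the package's own implementation) that records the interval between successive frames with ``pyglet`` and reports the mean frame time and the jitter::

    import time
    import pyglet

    window = pyglet.window.Window(width=320, height=240, caption="frame-time jitter")
    frame_times = []                       # seconds between consecutive frames
    last = time.perf_counter()

    def tick(dt):
        # record the wall-clock interval since the previous frame
        global last
        now = time.perf_counter()
        frame_times.append(now - last)
        last = now
        if len(frame_times) >= 300:        # roughly 5 seconds at 60 Hz, then stop
            pyglet.app.exit()

    @window.event
    def on_draw():
        window.clear()

    pyglet.clock.schedule(tick)            # called once per clock tick (roughly once per frame)
    pyglet.app.run()

    mean_ms = 1000 * sum(frame_times) / len(frame_times)
    jitter_ms = 1000 * (max(frame_times) - min(frame_times))
    print(f"mean frame time {mean_ms:.2f} ms, jitter {jitter_ms:.2f} ms (aim for < 1 ms)")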
If you have:

#. installed `pyglet <https://pyglet.org>`_, e.g. using ``pip3 install pyglet``,
#. installed `brainflow <https://brainflow.org>`_, e.g. using ``pip3 install brainflow``,
#. connected an `openBCI ganglion <https://shop.openbci.com>`_, and
#. followed the `MindAffect headset layout.pdf <https://github.com/mindaffect/Headset/blob/master/MindAffect%20headset%20layout.pdf>`_ to attach the electrodes to the back of your head,
Then you can jump directly to trying a fully functional simple letter matrix BCI using::
python3 -m mindaffectBCI.online_bci
Note: for more information on how to run an on-line BCI, including using other supported amplifiers, see our complete `on-line documentation <https://mindaffect-bci.readthedocs.io>`_ and in particular our `tutorials section <https://mindaffect-bci.readthedocs.io/en/latest/tutorials.html>`_.
If you run into an issue, you can either raise an issue directly on the project's `github page <https://github.com/mindaffect/pymindaffectBCI>`_, or contact the developers on `gitter <https://gitter.im/mindaffect>`_ -- to complain, compliment, or just chat:
.. image:: https://badges.gitter.im/mindaffect/unitymindaffectBCI.svg
   :target: https://gitter.im/mindaffect/pymindaffectBCI?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
This repository follows a modern src-based layout with modular organization::

    PyMindAffectBCI/
    ├── src/mindaffectBCI/              # Main package
    │   ├── utopia/                     # Network communication
    │   │   ├── utopiaclient.py         # Low-level messaging
    │   │   └── utopiaController.py     # High-level API
    │   ├── stimulus/                   # Stimulus sequence generation
    │   │   └── stimseq.py
    │   ├── acquisition/                # EEG hardware drivers
    │   ├── presentation/               # UI modules
    │   │   └── selectionMatrix.py      # Selection interface
    │   ├── output/                     # Action handlers
    │   ├── decoder/                    # BCI decoder & analysis
    │   │   └── decoder.py              # Online decoder
    │   ├── hub/                        # Central hub
    │   ├── configs/                    # Configuration files (.json)
    │   ├── resources/
    │   │   └── stimuli/                # Stimulus sequences (.txt)
    │   ├── noisetag.py                 # Main BCI API
    │   └── online_bci.py               # Entry point
    ├── examples/                       # Example scripts
    │   └── utilities/
    ├── docs/                           # Documentation
    └── tests/                          # Test files
- noisetag.py - Main API for developing BCI-controlled UIs
- utopia/ - Communication with MindAffect decoder
- stimulus/ - Stimulus codebook management
- acquisition/ - EEG hardware interface drivers
- presentation/ - User interface examples
- decoder/ - Open-source BCI decoder
Main high-level API for developing BCI applications. Provides:
- Noisetag class: Main interface for BCI control
- Finite State Machine (FSM) for BCI workflow management
- Automatic calibration and prediction phases
- Event handling and selection callbacks
- Simple integration for UI developers
Example Usage::

    from mindaffectBCI import Noisetag

    nt = Noisetag()
    nt.connect()                 # Connect to decoder
    nt.setnumActiveObjIDs(9)     # 9 selectable objects
    nt.startCalibration()        # Run calibration
    nt.startPrediction()         # Start prediction mode

Low-level networking and message protocol:
- Message serialization/deserialization
- Socket communication with hub
- Message types: StimulusEvent, DataPacket, Selection, etc.
- Timestamp synchronization
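For orientation, a minimal sketch of using the low-level client directly is shown below. It assumes the ``UtopiaClient`` and ``StimulusEvent`` classes and the ``autoconnect``/``sendMessage``/``getNewMessages`` methods of the original pymindaffectBCI client are preserved under the new ``utopia`` sub-package; adjust the import path if your layout differs::

    # Sketch only: names assume the original utopiaclient API is unchanged.
    from mindaffectBCI.utopia.utopiaclient import UtopiaClient, StimulusEvent

    client = UtopiaClient()
    client.autoconnect()                          # locate and connect to the hub

    # mark objects 1-4 as (on, off, on, off) at the current client timestamp
    ts = client.getTimeStamp()
    client.sendMessage(StimulusEvent(ts, [1, 2, 3, 4], [1, 0, 1, 0]))

    # poll for whatever the decoder sent back (predictions, selections, ...)
    for msg in client.getNewMessages():
        print(msg)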
High-level controller for decoder interaction:
- Simplified API for sending/receiving messages
- Event handlers for predictions and selections
- Automatic connection management
- Signal quality monitoring
Example Usage::

    from mindaffectBCI.utopia import UtopiaController

    uc = UtopiaController()
    uc.autoconnect()
    uc.sendStimulusEvent([1, 0, 1, 0], timestamp=None)
    msgs = uc.getNewMessages()

Manages stimulus sequences (codebooks):
- Loads pre-defined stimulus patterns (.txt files)
- Generates random/pseudo-random sequences
- Supports various modulation codes (m-sequence, Gold codes)
- Frame-rate adaptive sequencing
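To illustrate the kind of codes these modules work with, the sketch below generates a maximal-length (m-) sequence from a linear feedback shift register. It is a self-contained illustration of the idea, not how ``stimseq.py`` itself is implemented::

    def m_sequence(taps, state):
        """Generate one period of an m-sequence from a Fibonacci LFSR.

        taps  - feedback tap positions (1-indexed, output end last)
        state - non-zero initial register contents, e.g. [1, 0, 0, 0, 0, 0]
        """
        n = len(state)
        seq = []
        for _ in range(2 ** n - 1):           # a maximal LFSR repeats after 2^n - 1 bits
            seq.append(state[-1])             # output the last register bit
            feedback = 0
            for t in taps:
                feedback ^= state[t - 1]
            state = [feedback] + state[:-1]   # shift right, insert feedback at the front
        return seq

    # x^6 + x^5 + 1 is primitive, so taps (6, 5) give a 63-bit m-sequence
    code = m_sequence(taps=(6, 5), state=[1, 0, 0, 0, 0, 0])
    print(len(code), code[:16])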
Multiple EEG device drivers:
- ``utopia_brainflow.py`` - BrainFlow supported devices (OpenBCI, etc.)
- ``utopia_lsl.py`` - Lab Streaming Layer (LSL) interface
- ``utopia_fakedata.py`` - Simulated data for testing
- ``utopia_eego.py`` - ANT Neuro eego devices
- And 10+ more device-specific drivers
All drivers follow the same pattern:
- Connect to hardware
- Stream data packets to hub
- Handle trigger injection for testing
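In skeleton form that pattern looks roughly like the following, where ``read_samples`` and ``forward_to_hub`` are placeholder names for illustration, not the package's actual driver API::

    import random
    import time

    def read_samples(block_size, n_channels=4):
        """Placeholder for a hardware read -- here it just produces noise, like fakedata."""
        return [[random.gauss(0, 1) for _ in range(n_channels)] for _ in range(block_size)]

    def forward_to_hub(samples):
        """Placeholder for wrapping samples into a data packet and sending it to the hub."""
        print(f"sending {len(samples)} samples x {len(samples[0])} channels")

    def run_driver(fs=250, block_size=25, n_blocks=10):
        """Generic driver loop: pull a block from the amplifier, forward it, repeat."""
        for _ in range(n_blocks):
            forward_to_hub(read_samples(block_size))
            time.sleep(block_size / fs)   # pace the loop at roughly the amplifier's rate

    if __name__ == "__main__":
        run_driver()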
Main selection interface with:
- Grid of selectable objects (letters, symbols, images)
- Configurable stimulus rendering
- Calibration and prediction visualization
- Feedback animations
Other presentation modules:

- ``colorwheel.py`` - Circular color selector
- ``tictactoe.py`` - BCI-controlled game
- ``framerate_check.py`` - Display validation tool
Real-time BCI decoder:
- Multi-channel EEG processing
- Canonical Correlation Analysis (CCA)
- Template matching algorithms
- Online adaptation
- Prediction generation
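To give a feel for the CCA step, here is a self-contained sketch on synthetic data using scikit-learn's ``CCA``. It is a deliberate simplification for illustration only; the real decoder also models the temporal response to each flash::

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)

    # synthetic data: 600 time samples, 8 EEG channels, 4 candidate stimulus codes
    n_samples, n_channels, n_codes = 600, 8, 4
    codes = rng.integers(0, 2, size=(n_samples, n_codes)).astype(float)

    # fake EEG: every channel carries a scaled copy of code 0 plus noise
    mixing = rng.normal(size=(1, n_channels))
    eeg = codes[:, [0]] @ mixing + 0.5 * rng.normal(size=(n_samples, n_channels))

    # fit one CCA per candidate code and score how well it correlates with the EEG
    scores = []
    for k in range(n_codes):
        cca = CCA(n_components=1)
        eeg_c, code_c = cca.fit_transform(eeg, codes[:, [k]])
        scores.append(abs(np.corrcoef(eeg_c[:, 0], code_c[:, 0])[0, 1]))

    print("correlation per code:", np.round(scores, 2))
    print("best matching code  :", int(np.argmax(scores)))   # expect 0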
Jupyter notebook for:
- Post-hoc data analysis
- Performance evaluation
- Algorithm parameter tuning
- Visualization of results
JSON configuration files for different BCI setups:
- ``online_bci.json`` - Default online BCI config
- ``noisetag_bci.json`` - Noisetag-specific settings
- ``debug.json`` - Debug mode configuration
- Device-specific configs (``raspberry_pi_gpio.json``, etc.)
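Assuming the original command line is preserved, a specific configuration file can presumably be selected with the ``--config_file`` option; treat the flag name as an assumption and check ``python3 -m mindaffectBCI.online_bci --help`` if it has changed::

    python3 -m mindaffectBCI.online_bci --config_file debug.json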
Pre-computed stimulus sequences:
- ``mgold_61_6521_psk_60hz.txt`` - Gold code sequence for 60Hz displays
- ``mgold_65_6532_psk_60hz.txt`` - Alternative Gold code
- ``rc5x5.txt`` - 5x5 matrix codes
- ``svep.txt`` - SSVEP (Steady-State Visual Evoked Potential) patterns
Run with simulated data::

    python3 -m mindaffectBCI.online_bci --acquisition fakedata

Or drive the BCI from Python with the ``Noisetag`` API::

    from mindaffectBCI import Noisetag

    # Initialize
    nt = Noisetag()
    nt.connect()

    # Configure for 9 objects
    nt.setnumActiveObjIDs(9)

    # Run calibration
    print("Starting calibration...")
    nt.startCalibration()

    # Run prediction
    print("Starting prediction...")
    nt.startPrediction()

    # Get selection
    selection = nt.getLastSelection()
    print(f"Selected: {selection}")

For a custom UI, integrate the same API into your own rendering loop::

    from mindaffectBCI import Noisetag
    import pyglet

    nt = Noisetag()
    nt.connect()

    # Your custom rendering loop
    while True:
        # Get stimulus state
        stimulus_state = nt.getStimulusState()

        # Render your UI based on stimulus_state
        render_ui(stimulus_state)

        # Send stimulus event
        nt.sendStimulusState()
- `docs <docs/>`_ -- contains the documentation.

  - `source <docs/source>`_ -- contains the source for the documentation; in particular, this directory contains the jupyter notebooks for the tutorials on how to use the mindaffectBCI.

    - `online_bci.ipynb <docs/source/quickstart.ipynb>`_ -- this `jupyter <https://jupyter.org/>`_ notebook contains the code to run a complete on-line noise-tagging BCI.