Elliott John Mitchell — Personal Portfolio
# sidebar
alt = Elliott John Mitchell
Elliott John Mitchell
Neuroinformatics Researcher
Show Contacts
Email
elliott.mitchell10@gmail.com
Phone
+44 7883 338618
Degree
BA (Hons) Music: Production, Performance and Enterprise — First Class Honours, University of Westminster (2024)
Location
Rotherham, UK
# navbar
About
Resume
Portfolio
Blog
Contact
# about
About Me
I am a neuroinformatics researcher specialising in brain-computer interfaces, neural dynamics, and perceptually aligned AI systems. My work bridges computational neuroscience and deep learning to build AI that perceives temporal structure through oscillatory dynamics — the same mechanisms humans use to process rhythm, music, and motion.
As founder of DedAI-Neurodynamics, I developed a computational framework implementing Neural Resonance Theory models (ASHLE, GrFNN) for real-time rhythm perception, achieving 94.37% Phase-Locking Value retention in a trained TCN surrogate with sub-50 ms latency. I replicated the musician/non-musician entrainment difference (p=0.0158) and extended the framework to cross-modal visual perception (r=0.91). My research was presented at Queen Mary University of London's DMRN+18 conference, and I have consulted with Dr Marcus Pearce (QMUL) on music cognition methodology.
I hold a First Class Honours degree in Music: Production, Performance and Enterprise from the University of Westminster, where I conducted EEG research examining differential emotional processing between musicians and non-musicians under ethics approval ETH2324-0744. I am applying to Queen Mary University of London's MSc Sound and Music Computing (AI and Music Data Science stream) for September 2026, with the intention of continuing to doctoral research in neural dynamics and perceptually grounded AI.
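For context, the Phase-Locking Value (PLV) cited throughout this page is the standard measure of phase synchrony between two signals: the magnitude of the time-averaged phase-difference vector. A minimal NumPy sketch of the canonical definition (illustrative only, not the project's internal implementation):

    import numpy as np

    def plv(phase_a, phase_b):
        # PLV = |mean(exp(i * (phase_a - phase_b)))|
        # 1.0 = perfect phase locking; 0.0 = no consistent phase relationship.
        return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

In practice the instantaneous phases would come from a Hilbert transform of band-limited signals; "94.37% PLV retention" then reads as the surrogate preserving that fraction of the reference model's phase locking.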
## service
Research Interests
alt = Neural Dynamics Icon
Neural Dynamics
Oscillatory entrainment and synchronisation mechanisms underlying the perception of temporal structure in auditory and visual stimuli.
alt = BCI Icon
Brain-Computer Interfaces
Real-time EEG signal processing for closed-loop systems that map cognitive biomarkers to computational parameters.
alt = Cross-modal Icon
Cross-Modal Perception
Multimodal signal processing and self-supervised learning for cross-modal representation between auditory and visual domains.
alt = Perceptual AI Icon
Perceptually Aligned AI
Building AI systems grounded in biophysical models of human perception using physics-informed neural networks.
## testimonials
Publications & Presentations
alt = DedAI Paper
DedAI: AI-Driven Music Composition (2024)
Advanced AI-Driven Music Composition Informed by EEG-Based Emotional Analysis. Undergraduate Research Paper, University of Westminster.
alt = EEG Study
EEG Study: Musicians vs Non-Musicians (2024)
Measuring Music's Emotional Impact with EEG: A Study on Musicians and Non-Musicians. Undergraduate Research Thesis, University of Westminster. Supervisor: Dr Jasmine Taylor.
alt = DMRN+18
DMRN+18 @ Queen Mary (2023)
Orchestrating Emotions: The Intersection of AI, Neuroscience & Music. 20-minute invited talk at the Digital Music Research Network Conference, Queen Mary University of London.
## clients
Key Research Metrics
94.37% — PLV Retention
<50 ms — Real-time Latency
r=0.91 — Cross-modal Correlation
p=0.0158 — Musician/Non-Musician Difference
14 — EEG Channels
# resume
Resume
Education
BA (Hons) Music: Production, Performance and Enterprise — First Class Honours
University of Westminster | 2021 – 2024
Final Major Project (73%): Early version of DedAI — AI-driven music composition system using real-time EEG signal processing.
Research Project (74%): Quasi-experimental EEG study on emotional processing of music in musicians vs non-musicians (supervisor: Dr Jasmine Taylor).
Relevant modules: Advanced Audio Production, Music Production Studio and Live, Recording Techniques, Creative Identities and Making Digital Content, The Freelance Music Professional.
Self-directed study: Digital Signal Processing, Computational Neuroscience, Machine Learning, Neural Resonance Theory.
Research Experience
DedAI-Neurodynamics — Founder & Lead Researcher
2023 – Present
Developed a computational framework bridging Neural Resonance Theory and deep learning to build AI systems that perceive temporal structure through oscillatory dynamics. The project implements biophysically grounded models for rhythm perception and translates EEG-derived emotional states into generative music synthesis.
- Implemented ASHLE (Adaptive Synchronisation with Hebbian Learning and Elasticity) and GrFNN (Gradient Frequency Neural Network) models for oscillatory entrainment and beat perception
- Trained a Temporal Convolutional Network (TCN) surrogate with physics-informed loss achieving 94.37% Phase-Locking Value retention, enabling real-time inference while preserving neural entrainment dynamics
- Built a closed-loop Brain-Computer Music Interface (BCMI) using the Emotiv EPOC X (14-channel EEG), mapping cognitive biomarkers to oscillator parameters in real time (<50 ms latency)
- Developed PyNRT — an open-source Python toolkit for simulating Hopf oscillators, gradient frequency neural networks, and adaptive oscillator models (a minimal Hopf sketch follows this list)
- Extended framework to cross-modal visual perception, demonstrating video-to-audio entrainment where motion energy drives ASHLE rhythm synthesis (r=0.91 correlation for periodic stimuli)
- Created digital twin simulations of EEG hardware for algorithm prototyping without physical sensors
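As referenced in the PyNRT entry above, here is a minimal illustration of the kind of oscillator these models build on: a forced Hopf oscillator in its canonical normal form. The function name and parameter values are illustrative assumptions, not PyNRT's actual API:

    import numpy as np

    def forced_hopf(f_osc, f_stim, alpha=1.0, beta=-1.0, amp=1.0,
                    dt=0.001, duration=10.0):
        # Euler integration of dz/dt = z*(alpha + i*2*pi*f_osc + beta*|z|^2) + F(t),
        # the Hopf normal form driven by a periodic stimulus F(t).
        n = int(duration / dt)
        t = np.arange(n) * dt
        z = np.empty(n, dtype=complex)
        z[0] = 0.1
        for k in range(n - 1):
            F = amp * np.exp(2j * np.pi * f_stim * t[k])  # periodic forcing
            dz = z[k] * (alpha + 2j * np.pi * f_osc + beta * abs(z[k]) ** 2) + F
            z[k + 1] = z[k] + dt * dz
        return t, z

    # With a slightly detuned stimulus the oscillator entrains: after a
    # transient, np.angle(z) locks to the forcing phase, and the PLV between
    # them (see the sketch in About) approaches 1.
    t, z = forced_hopf(f_osc=2.0, f_stim=2.1)

GrFNN models extend this idea to a gradient of such oscillators spanning a frequency range, coupled to the stimulus and to each other.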
EEG Study: Musicians vs Non-Musicians — Primary Investigator
2023 – 2024
Conducted a mixed-methods quasi-experimental study examining differential emotional processing of music between musicians and non-musicians using EEG. Grounded in Meyer's Expectancy Theory and Juslin & Västfjäll's BRECVEMA model of music-induced emotions.
- Collected and analysed 14-channel EEG data from 6 participants using the Emotiv EPOC X in naturalistic listening conditions; processed with MNE-Python, ICA artifact rejection, and FFT band power analysis (a pipeline sketch follows this list)
- Found significant beta wave (12–30 Hz) differences in frontal regions (AF3, AF4, F3, F4, F7, F8) indicating heightened cognitive engagement in musicians during musical expectation violation (p=0.0158)
- Created visualisations including topographical brain maps, radar graphs, and hierarchical edge bundling diagrams using PyVis and Matplotlib
- Consulted with Dr Marcus Pearce (QMUL) on methodology and theoretical framework; discussion informed integration of schematic and dynamic expectation mechanisms
- Obtained full ethics approval (ETH2324-0744) from the University of Westminster Research Ethics Committee
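The preprocessing and band-power pipeline referenced above, condensed into an MNE-Python sketch (the filename, filter settings, and random seed are illustrative assumptions, not the study's exact script):

    import mne
    from mne.preprocessing import ICA
    from mne.time_frequency import psd_array_welch

    # Hypothetical export of one Emotiv EPOC X (14-channel) recording.
    raw = mne.io.read_raw_edf("participant_01.edf", preload=True)
    raw.filter(l_freq=1.0, h_freq=45.0)          # band-limit before ICA

    ica = ICA(n_components=14, random_state=97)
    ica.fit(raw)
    # ...ocular/muscular components would be inspected and excluded here...
    ica.apply(raw)

    # Beta-band (12-30 Hz) power over the frontal electrodes reported above.
    frontal = ["AF3", "AF4", "F3", "F4", "F7", "F8"]
    data = raw.get_data(picks=frontal)
    psds, freqs = psd_array_welch(data, sfreq=raw.info["sfreq"],
                                  fmin=12.0, fmax=30.0)
    beta_power = psds.mean(axis=-1)              # one value per frontal channel

Group comparisons (musicians vs non-musicians) would then run on beta_power across participants and conditions.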
Professional Experience
Pinnacle Crew — Freelance AV Crew / Technical Assistant
2021 – 2024
Assisted with AV setup, signal routing, and system configuration for live events at London venues. Worked to tight turnarounds across multiple cross-functional crews, gaining practical exposure to live sound, staging, and event production workflows.
Macoral Services — AI & Web Development Consultant
2022 – 2024
Developed and deployed a custom AI chatbot and quote calculator, cutting client response times by approximately 30%. Translated complex technical requirements into accessible business tools for non-technical stakeholders.
Technical Skills
Programming: Python (PyTorch, NumPy, SciPy, MNE-Python), C++ (real-time DSP), JavaScript/TypeScript, MATLAB
Signal Processing: EEG acquisition, spectral analysis (FFT, wavelets), ICA, bandpass filtering, Phase-Locking Value computation, event-related potentials, oscillatory dynamics modelling
Machine Learning: Temporal Convolutional Networks (TCNs), Physics-Informed Neural Networks (PINNs), self-supervised learning, multimodal representation learning, time-series classification, generative models
Tools & Platforms: Git, Docker, Jupyter, VS Code, React, Three.js, Node.js, Emotiv SDK, OpenBCI, PyVis, Matplotlib, librosa, soundfile
# portfolio
Portfolio
All
Research
Presentations
Software
Select category
All
Research
Presentations
Software
alt = DedAI Research Paper
DedAI: Advanced AI-Driven Music Composition Informed by EEG-Based Emotional Analysis
Research
alt = DMRN+18 Talk at Queen Mary University of London
DMRN+18: Orchestrating Emotions — The Intersection of AI, Neuroscience & Music
Presentations
alt = DedAI Platform
DedAI-Neurodynamics — Neural Music Generation with EEG Integration
Software
alt = Live Demo
Live Demo: Real-time EEG to Music Generation
Presentations
alt = PyNRT Toolkit
PyNRT — Python Neural Resonance Theory Toolkit
Software
# blog
Blog & Updates
alt = DedAI Website
Project
2024
DedAI-Neurodynamics Project Website
Updates on neural music generation research and beta access for the DedAI platform.
# contact
Contact
Get in Touch
I am applying to QMUL's MSc Sound and Music Computing for September 2026 and am open to research collaborations, conversations, and opportunities in neural dynamics, brain-computer interfaces, and perceptually aligned AI. Feel free to get in touch.
Email: elliott.mitchell10@gmail.com
LinkedIn: linkedin.com/in/dedeye
GitHub: github.com/HawkSP
Location: Rotherham, UK
Full name
Email address
Your Message
Send Message
Academic References
Hussein Boon — Principal Lecturer, University of Westminster
Supervisor for Final Major Project; advised on DedAI development and recommended submission to DMRN+18.
Dr Jasmine Taylor — Senior Lecturer, Westminster School of Arts
Supervised EEG research thesis: "Measuring Music's Emotional Impact with EEG: A Study on Musicians and Non-Musicians."