
Submitted by Richard Arlett on Wed, 01/10/2025 - 15:42
Dr Fatemeh Geranmayeh is an MRC-funded Clinician Scientist at the Department of Brain Sciences, Imperial College London, where she leads the Clinical Language and Cognition (CLC) group. She is also an Honorary Consultant Neurologist at Imperial College Healthcare NHS Trust. She obtained a First Class Honours BSc degree in Neuroscience in 2004, for which she also received the Goldberg-Schachmann and Freda Becker Award from the University of London. She obtained her medical degree in 2006 and was awarded an Academic Clinical Fellowship in Neurology in 2008, funded by the NIHR. She subsequently completed her PhD through a Wellcome Trust Research Training Fellowship in 2015, then undertook a post-doctoral clinical fellowship at Imperial College London before completing her clinical training in Neurology.
Her current research investigates the mechanisms of recovery of language and cognitive functions in patients with cerebrovascular disease and vascular dementia. She uses advanced neuroimaging, artificial intelligence and brain stimulation to study changes in behaviour and neural networks after brain injury. She is the lead for the IC3 study, in which she investigates novel blood biomarkers of cognitive recovery after stroke.
Dr Geranmayeh runs a specialist vascular cognitive clinic. She has a clinical interest in post-stroke cognitive impairment, vascular dementia and language impairment secondary to neurodegenerative diseases. She is co-director of VIDA (Vascular and Immune Contributions to DementiA), a multi-institutional Doctoral Training Centre funded by the Alzheimer’s Society. She is also a member of The Clinical Academic Training Forum (CATF), a UK-wide group representing key stakeholders in training the future clinical academic workforce.
Fatemeh was in conversation with Shrankhla Pandey, a PhD student at the Department of Computer Science and Technology.
Could you share the key questions or challenges your work seeks to address?
My main research focuses on predicting vascular cognitive outcomes following vascular injury, which can take various forms. This includes clinically manifest strokes as well as vascular dementia resulting from progressive, subclinical small vessel disease. I’m particularly interested in predicting outcome trajectories using cognitive and language-based measures.
Currently, my work involves identifying multimodal predictive biomarkers. To this end, we combine detailed linguistic and cognitive assessments with imaging and blood biomarkers to forecast outcomes after stroke. This research is being conducted in a clinical setting as part of a study at Imperial called the IC3 Study. It’s a prospective, longitudinal study of patients with stroke. We recruit participants acutely, sometimes even hyper-acutely, ideally within the first few days post-stroke. We then follow them over time, assessing their cognitive, linguistic, imaging, and blood profiles at baseline, and again at 3, 6, and 12 months post-stroke. The goal is to use these data to predict outcome trajectories.
The IC3 study started in 2021. Due to COVID, we could not see patients in person, so I spent the first year of the programme developing online cognitive batteries that would allow us to assess patients remotely. One arm of my research was redirected towards developing remote monitoring of speech, language, and cognition in patients with stroke. You can read more about the study, the online batteries, and their application in stroke patients in these two publications: Gruia et al., 2025 – Mitigating the impact of motor impairment on self-administered digital tests: a longitudinal cohort study in stroke, and Gruia et al., 2024 – Online monitoring technology for deep phenotyping of cognitive impairment after stroke.
In hindsight, it was a blessing in disguise, because we now have a cheap and scalable method to collect patient data. Most of our assessments are validated for automation and don’t need manual work from a psychologist or researcher. For language assessment, we are in the process of developing a framework to derive a clinically meaningful automated outcome measure from online linguistic batteries. I am also actively working on developing a reliable automatic speech recognition (ASR) system for pathological speech. We have good ASR systems, such as Siri, for the population falling within the normal bell curve, but they are not well suited to the deviations we observe in pathological speech, such as phonological errors, neologisms and certain dysfluencies. You can read more about the foundation model for pathological speech in these publications: Latent representation encoding and multimodal biomarkers for post-stroke speech assessment; Sanguedolce et al., 2025 – SONIVA: Speech recOgNItion Validation in Aphasia; and Sanguedolce et al., 2024 – Universal speech disorder recognition: Towards a foundation model for cross-pathology generalisation.
What shaped your journey into language science?
The Integrated Neuroscience BSc during my medical degree offered me a chance to carry out my first science experiment. The Academic Foundation Training Programme and subsequent Academic Clinical Fellowship program gave me the opportunity to pursue a PhD in parallel to my medical training. During my postdoctoral fellowship, I worked on post-stroke aphasia recovery. At that time, it was becoming increasingly evident that to understand language recovery, you actually need to understand the health of the whole brain—how it’s structured and how other non-language cognitive domains support language. That’s when my interest expanded to studying cognition and language in parallel.
What influences have been pivotal in guiding you?
The late Professor Richard Wise had a monumental influence on my journey in language science. I worked with him during my PhD, which I started in 2011 at Imperial College London. His pioneering work with early fMRI and PET imaging in patients with language impairment—particularly after stroke—and his research on the localization and recovery of language was truly groundbreaking. Working with him sparked my interest in cognition and language sciences. I’ve had a few mentors along the way and having their support—especially during my postdoctoral fellowship—was incredibly important.
What does your day-to-day look like?
I would describe my day-to-day as variable and chaotic; those are probably the two words I’d choose. Clinical commitments, academic events, lab meetings, supervising and co-supervising postgraduate and PhD students, writing and reviewing grants, and personal commitments all come together to form a colour-coded rainbow pattern in my calendar.
Clinically, I run a specialist cognitive clinic for patients with vascular neurological disease, called the Post-Stroke Vascular Cognitive Clinic, and I have general neurology cover commitments. Academically, I develop and supervise graduate and postgraduate research projects. I’m currently helping the university design the curriculum for a new Master of Science course. At the moment, I’m supervising two PhD students and co-supervising three more.
Do you use large language models (LLMs) in your day-to-day?
I use them for administrative purposes and to get a quick overview of unfamiliar but well-established concepts. I don’t find them to be reliable or trustworthy for state-of-the-art research concepts.
I first became interested in them for ASR, and I saw the potential, which got me excited! But using them for pathological speech was disappointing. I still continue to use them in certain contexts, as they can improve efficiency, but anything that needs to be reliable and factual has to be double-checked. More broadly, I believe LLMs raise societal and ethical concerns.
What societal changes do you envision your research contributing to, especially in terms of early intervention and personalised medicine?
We know that speech is a functional biomarker: not just an indicator of the presence of pathology, but a measure of disease progression. In the coming 2–3 years I’d like to see some movement towards clinical translation and applications of speech as a functional biomarker in brain disease. The blue-sky ambition I have for my research is using people's day-to-day speech to give them a prediction of their health in a safe and secure way. It could work like a health check-up, alongside the routine check-ups one gets nowadays. That could give you a risk prediction for, let's say, vascular cognitive impairment and perhaps other neurodegenerative causes of dementia. This is important because it's in that middle life stage that one can actually intervene to reduce the risk of dementia, with better blood pressure or cardiovascular risk factor control.
The biggest hurdle to realising this blue-sky ambition is data security. We know speech is classed as personally identifiable data under GDPR. We can now use speech to log into our bank as a security measure; it's as sensitive as a fingerprint. So safeguarding that data is going to be challenging. And patients are, quite rightly, cautious about passively allowing their speech to be monitored, or about access being given to unaccountable third-party companies. As clinicians, we would be best placed to safeguard that data, similar to how we safeguard other health data. Of course, we would still need to be fully regulated, but that would be the main challenge.
Beyond diagnosis and monitoring, I’d also like to see speech recognition applied in rehabilitation after stroke.
What is your ambition for the field of language sciences as we look toward 2050?
By 2050, I’d like to see the field move toward developing clinically validated, scalable, and accessible tools, for instance a speech-based therapy bot, that can provide meaningful insights and support for patients. Such a bot would operate within a framework set by a clinician or speech therapist, adapting its approach to the patient’s needs, informed by clinical judgment.
We know that patients often don’t receive the level of intensive therapy that evidence suggests is effective. That level of intensity is rarely achievable within current healthcare systems, whether in the NHS or globally. If we could create a tool that delivers therapy in a way that’s engaging, accessible, and tailored, it could make a real difference. This bot wouldn’t just help with diagnostics; it would deliver therapy, monitor speech continuously, and adjust support as needed. It would need to be validated across languages, not just English, so it could serve people globally. As I mentioned earlier, data security remains a challenge, and such methods need to be developed under a strict regulatory framework.