
AI Spots Hidden Brain Lesions in Kids with Epilepsy

  • Finds hard-to-see epilepsy-causing brain changes
  • Helps children who don’t respond to seizure meds
  • Not ready for clinics yet — still in testing

This tool could help more kids get the right diagnosis faster.

A 6-year-old has seizures. Doctors try two, then three medications. Nothing works. The family is told, “We can’t find anything on the MRI.” They feel stuck. But what if the problem isn’t absent — it’s just invisible to the human eye?

That’s the reality for many children with drug-resistant epilepsy. Their seizures come from tiny brain abnormalities called focal cortical dysplasias (FCDs). These are small wiring errors in the brain’s outer layer. They’re like a short circuit in a house — but hidden behind the wall.

And they’re surprisingly common.

FCDs are one of the top causes of hard-to-treat epilepsy in kids. Up to 1 in 3 children with ongoing seizures have them. Yet, in as many as half of these cases, standard MRIs miss the lesion. The scan comes back “normal” — even when something is wrong.

That delay can last years. Kids stay on meds that don’t work. Seizures keep happening. Development slows. Families grow frustrated.

Surgery can stop seizures — but only if doctors know where to look. And if the lesion isn’t seen, it can’t be removed.

Right now, finding these hidden spots depends on expert radiologists spending hours staring at brain scans. It’s like searching for a typo in a 1,000-page book — with no spellcheck.

The hidden culprit

For years, doctors assumed if an MRI looked normal, there was no structural cause. But we now know that’s not always true.

Many FCDs are too small, too flat, or in tricky spots to show up clearly. They blend in like camouflage.

But here’s the twist: AI might be able to see what humans can’t.

Two new deep-learning tools — MELD Graph and 3D-nnUNet — were built to find these invisible lesions. They were trained on thousands of brain scans, learning the subtle patterns of FCDs.

Now, for the first time, researchers tested them outside their original labs — in real-world pediatric cases.

What scientists didn’t expect

These tools don’t “see” like radiologists. Instead, they analyze the brain like a 3D map of textures and signals.

Think of it like this: A radiologist looks for a pothole on a road. The AI checks the entire road surface for tiny cracks, bumps, and uneven wear — even if no hole is visible yet.

It uses two types of standard MRI images — T1 and FLAIR — the same ones hospitals already take. No special machines needed.

The AI scans each millimeter of the brain, flagging areas that look “off” compared to healthy brains. It’s like a spellcheck for brain structure.

The study looked at 71 children with epilepsy — 35 with MRI-positive FCDs and 36 with “normal” scans. All had standard 3D brain MRIs.

Both AI tools analyzed the scans without any extra input. Then, a pediatric neuroradiologist reviewed every flagged area.

Each finding was labeled: true hit, false alarm, or missed lesion.

They found something big

At the lesion level, both tools were highly precise. That means when they flagged a problem, they were usually right.

3D-nnUNet was the most accurate — 91% of its alerts were real lesions. MELD Graph was close behind at 85%.

But here’s the catch: neither caught every lesion. MELD Graph found just over half (52%), while 3D-nnUNet found 48%.

In patient terms, MELD Graph spotted FCDs in 63% of kids who had them. 3D-nnUNet found 54%.

But 3D-nnUNet had far fewer false alarms — especially in kids with “normal” MRIs. It wrongly flagged a lesion in only 14% of patients. MELD Graph did so in 53%.
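The figures above reflect two standard metrics: precision (when the tool flags something, how often is it real?) and sensitivity (of all real lesions, how many does the tool catch?). A minimal sketch of the arithmetic, using made-up counts for illustration rather than the study's actual data:

```python
# Precision vs. sensitivity: the two metrics behind the figures above.
# The counts below are illustrative only, not taken from the study.

def precision(true_hits: int, false_alarms: int) -> float:
    """Of everything the tool flagged, what fraction were real lesions?"""
    return true_hits / (true_hits + false_alarms)

def sensitivity(true_hits: int, missed: int) -> float:
    """Of all real lesions, what fraction did the tool flag?"""
    return true_hits / (true_hits + missed)

# Hypothetical tool: 20 areas flagged, 18 of them real lesions,
# out of 35 lesions present in total (so 17 were missed).
print(round(precision(18, 2), 2))     # 0.9  -> high precision: alerts are trustworthy
print(round(sensitivity(18, 17), 2))  # 0.51 -> moderate sensitivity: many lesions missed
```

A tool can score high on one metric and low on the other, which is exactly the trade-off the two AI models show: both are precise, but neither catches every lesion.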

To be clear: these are diagnostic tools, not treatments, and they aren’t ready for patients yet.

Why this changes things

High precision is key. It means doctors can trust the AI’s alerts. A false alarm could lead to unnecessary tests or stress. But if the tool is rarely wrong when it speaks up, it becomes a reliable second pair of eyes.

Imagine a radiologist reviewing a scan. The AI quietly highlights a suspicious spot. The doctor zooms in. Suddenly, a faint abnormality becomes visible — one they might have missed.

That’s the goal: not to replace doctors, but to support them.

What experts say

These results confirm that AI can play a real role in epilepsy care — but only as a tool, not a replacement.

The models work best when image quality is high. Poor scans lead to more misses and false alarms.

Experts stress that MRI protocols need to be optimized, especially for kids. Motion, resolution, and scan settings all affect results.

If your child has uncontrolled seizures and a “normal” MRI, this research offers hope — but not an immediate solution.

These tools are not yet available in hospitals. They’re still being tested.

You don’t need to ask for AI. But you should ask: “Was the MRI reviewed by a pediatric neuroradiologist?” and “Could this be a subtle FCD?”

Some centers already use advanced imaging or expert review. Surgery may still be an option, even if the lesion wasn’t seen at first.

Talk to your neurologist about next steps. A second MRI or specialized review might help.

More testing is needed — in more hospitals, with more diverse patients.

Researchers must improve sensitivity so fewer lesions are missed. Better MRI quality will help.

One day, AI could be built into routine scans, quietly checking every brain for hidden issues.

For now, it’s a promising step — not the finish line.
