
Doctors Rethink How to Spot Blood Clots in Stroke Patients

  • New analysis questions accuracy of all current clot risk tools
  • Could help prevent deadly clots in stroke survivors
  • Not ready for hospitals yet — still in review

This could change how doctors protect stroke patients from dangerous clots.

It starts with a small ache in the leg. A little swelling. Nothing too serious — or so it seems. Then, suddenly, a sharp pain. Trouble breathing. A second trip to the hospital. For some stroke survivors, this is real. They beat the stroke, only to face a new threat: blood clots in their veins.

These clots, called venous thromboembolism (VTE), can form when patients are weak or immobile after a stroke. They often start in the legs but can travel to the lungs and become deadly. Doctors want to catch high-risk patients early. But a new review suggests the tools they’re using might not be as reliable as we thought.

About 800,000 people in the U.S. have a stroke each year. That’s one every 40 seconds. Many survive, but recovery is fragile. One big danger? Blood clots. Up to 1 in 5 stroke patients may develop a VTE during recovery.

Right now, doctors use risk scores, which work like checklists, to estimate who's most likely to get a clot. These tools look at age, mobility, medical history, and other factors. If the score is high, doctors may prescribe blood thinners.
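To make the checklist idea concrete, here is a minimal sketch of how a points-based risk score works. The factors, point values, and treatment threshold below are hypothetical, invented purely for illustration; they are not taken from any of the reviewed models.

```python
# Hypothetical checklist-style VTE risk score (illustration only).
# Factors, point values, and the threshold are invented; they are
# NOT from any published model.

def vte_risk_score(age: int, days_immobile: int, prior_clot: bool, active_cancer: bool) -> int:
    """Add up points for each risk factor, as a typical checklist does."""
    score = 0
    if age >= 70:
        score += 2           # older age raises clot risk
    if days_immobile >= 3:
        score += 3           # prolonged immobility raises risk
    if prior_clot:
        score += 3           # a previous clot raises risk
    if active_cancer:
        score += 2           # active cancer raises risk
    return score

score = vte_risk_score(age=75, days_immobile=5, prior_clot=False, active_cancer=False)
print(score)                                      # 5
print("high risk" if score >= 4 else "low risk")  # above threshold: consider blood thinners
```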

But here’s the problem: not all tools work the same. Some may miss high-risk patients. Others may over-treat low-risk ones. And giving blood thinners to the wrong person can cause dangerous bleeding.

We need tools that are both accurate and safe. That’s why this review matters.

The surprising shift

For years, researchers have built different models to predict clot risk. Each one claims to help doctors make better choices. Some even report high accuracy.

But here’s the twist: when scientists looked closely at seven of these models, they found a major flaw. Every single one had a high risk of bias. That means the way they were built could skew results — like a scale that’s not properly calibrated.

What’s different this time? This isn’t just about one flawed study. It’s a red flag for the entire field.

What scientists didn’t expect

Even though the models claimed good performance — with accuracy scores as high as 97% — none had been tested in new, independent groups of patients. This step, called external validation, is critical. It’s like testing a weather app in a new city. If it only worked in one place, would you trust it?

Without this check, we can’t be sure any of these tools work in real hospitals.

To be clear, none of these prediction tools is ready for everyday hospital use yet.

Think of the body like a highway. Blood flows smoothly when traffic moves at the right speed. After a stroke, patients may lie still for days. That’s like a traffic jam. The longer the backup, the higher the chance a clot will form — like a pileup on the interstate.

Doctors want a GPS for the bloodstream — a tool that warns when a “jam” is likely. The best tools would use real-time data: how well the patient moves, their vital signs, lab results.

But most current models are built using small, specific groups. That’s like designing GPS using only drivers from one town. It might work there — but not everywhere.

Researchers screened more than 2,700 records published through September 2025 and found seven models designed to predict VTE in stroke patients. They pooled data from these studies to estimate average clot rates and model accuracy.

The models were tested on different groups, with clot rates ranging from 10% to nearly 40%. The average risk was about 21%.
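For a rough picture of what "pooling" means, here is a simple sketch that averages clot rates across studies, weighting each by its size. The study sizes and clot counts are hypothetical; real systematic reviews use formal meta-analysis methods (such as random-effects models), not this plain weighted average.

```python
# Rough sketch of pooling clot rates across studies, weighted by study size.
# Study sizes and clot counts are hypothetical; real systematic reviews use
# random-effects meta-analysis, not a plain weighted average.
studies = [          # (patients, clots observed)
    (120, 14),       # 11.7% clot rate
    (80, 30),        # 37.5%
    (250, 48),       # 19.2%
]
patients = sum(n for n, _ in studies)
clots = sum(c for _, c in studies)
print(f"pooled clot rate: {clots / patients:.1%}")   # about 20.4%
```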

The models appeared to work well at first glance. On average, they could correctly identify high-risk patients about 87% of the time. That sounds strong, far better than a coin flip.

But that number hides a big problem. These results came from the same groups used to build the models. It’s like grading your own test. You might give yourself an A — but would an outside teacher agree?

Worse, none of the seven models has been tested independently. No other hospital or research team has confirmed whether they really work in everyday care.

And the PROBAST tool — a standard checklist for risk models — flagged serious flaws in all seven. Issues included biased patient selection, poor data reporting, and overfitting (when a model works too perfectly for one group but fails elsewhere).
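A tiny simulation makes the self-grading and overfitting problem visible. In this sketch (simulated data, not figures from the review), a model fitted to pure noise looks nearly perfect on the patients it was built on, yet performs no better than chance on new ones.

```python
# Why self-testing inflates accuracy: fit a model to random noise, then
# score it on its own training data vs. fresh data. Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients, n_predictors = 60, 40               # few patients, many predictors: easy to overfit
X_train = rng.normal(size=(n_patients, n_predictors))
y_train = rng.integers(0, 2, size=n_patients)   # random outcomes: no real signal to find
X_new = rng.normal(size=(n_patients, n_predictors))
y_new = rng.integers(0, 2, size=n_patients)

model = LogisticRegression(C=1e6, max_iter=5000).fit(X_train, y_train)
print("accuracy on its own data:", model.score(X_train, y_train))  # close to 1.0
print("accuracy on new patients:", model.score(X_new, y_new))      # near 0.5, a coin flip
```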

This review doesn’t say these models are useless. It says we need to be cautious. Predicting VTE risk is important — but only if the tool is trustworthy.

Right now, the best path may not be building more models, but testing the ones we already have. Science moves forward not just by creating new tools, but by checking if they hold up under pressure.

If you or a loved one has had a stroke, this doesn’t change care today. Doctors will still use their best judgment — along with existing tools and guidelines — to decide on blood thinners.

But it does mean that better tools are needed. And it highlights why research transparency matters. Patients deserve methods that are tested, verified, and reliable.

Don’t stop prescribed treatments. But do ask your doctor: How sure are we about my clot risk? That conversation could lead to safer, more personalized care.

The big gap

The biggest limitation? All models were at high risk of bias. Most were small, single-center studies. Some didn’t even report basic details like age or stroke type. Without this, we can’t know who the tool really works for.

Also, the pooled data had wide confidence intervals — meaning the true clot rate could be much higher or lower than 21%. That uncertainty makes it hard to act on the results.
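For readers who want the arithmetic, this short sketch shows why small studies produce wide confidence intervals around a rate like 21%. Only the 21% figure comes from the review; the study sizes are hypothetical, and the interval uses a simple normal approximation.

```python
# 95% confidence interval for a 21% clot rate at different study sizes,
# using a simple normal approximation. Study sizes are hypothetical.
import math

def rate_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

for n in (50, 200, 2000):
    lo, hi = rate_ci(0.21, n)
    print(f"n={n:4d}: 95% CI {lo:.1%} to {hi:.1%}")
# n=  50: about 9.7% to 32.3%  -> too wide to act on
# n= 200: about 15.4% to 26.6%
# n=2000: about 19.2% to 22.9%
```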

What happens next

The road ahead starts with validation. Researchers need to take existing models and test them in new hospitals, different countries, and diverse patient groups. Only then can we know which tools truly help.

New models should follow stricter design rules — clear data, independent testing, and open reporting. This review isn’t the end. It’s a call to build better.

Real-world testing is the next step — but no model is ready for broad use until it’s independently proven.
