
Oncology Journals Set New Rules for AI Writing Tools


You are reading a medical study, trying to understand a new treatment. You trust the authors. But what if a computer wrote most of it?

This is no longer a hypothetical question. Artificial intelligence is now writing parts of scientific papers. It can draft text, fix grammar, and even summarize data. But who is responsible if the information is wrong?

A new study looked at the top cancer journals in the world. The researchers wanted to see how these journals are handling AI. The rules are changing fast.

The Rise of the Machine Writer

Generative AI tools are everywhere. They are changing how doctors and scientists write. This creates a big problem for medical publishing.

Medical research must be accurate. Lives depend on it. If AI makes a mistake, who fixes it? The journal needs clear rules to protect readers.

This study reviewed 60 high-impact oncology journals. These are the most respected sources for cancer research. The researchers checked their policies from 2020 to 2025.

They found that most journals are taking a strong stand. But there is still a lot of confusion.

Who Gets the Credit?

For years, the rules were simple. Only humans could be authors. AI is blurring that line.

The study found that 96.7% of journals (58 out of 60) ban AI from being listed as an author. This is a huge number. It means the journals are saying: The computer does not get credit.

This is important because authorship means responsibility. If a human signs a paper, they are accountable for the data. AI cannot be accountable. By banning AI authorship, journals keep humans in the driver’s seat.

But here’s the twist. While AI cannot be an author, almost all journals allow it to help.

The "Helpful Assistant" Rule

Think of AI like a spell-checker on steroids. Most journals are okay with that.

The study found that 96.7% of journals allow AI for specific tasks. These tasks are usually limited to language editing and formatting. It’s like using a calculator to check your math. The AI is a tool, not a thinker.

However, there is a strict limit. Journals discourage using AI to generate original content or interpret results. You cannot ask a computer to "find the meaning" of the data. That is still the human’s job.

This creates a clear boundary. AI can polish the writing, but it cannot do the science.

The Enforcement Gap

Here is where things get tricky. Having a rule is one thing. Enforcing it is another.

The study found a major weak spot. Only 35% of journals (21 out of 60) have specific enforcement provisions. This means if a researcher secretly uses AI to write a whole paper, the journal might not have a clear penalty.

This is a big concern. Without enforcement, rules are just suggestions.

Different publishers handle this differently. Elsevier, Springer Nature, and the American Association for Cancer Research (AACR) all have universal disclosure rules. They require authors to say exactly how they used AI. But enforcement varies.

How AI Fits Into the Paper

Imagine driving with a new GPS system. The human is the driver, and AI is the GPS. The GPS suggests a route, but the driver decides where to go.

In oncology publishing, the "driver" is the researcher. The AI might suggest better wording or fix a typo. But the researcher must check every fact. They must ensure the science is sound.

The study showed that most journals want this "human-in-the-loop" approach. They want transparency. If AI helped write the paper, the reader should know.

What the Study Looked At

The researchers did a systematic audit. They checked the "Instructions for Authors" on 60 top oncology journal websites.

They looked for four things:

  1. Authorship: Can AI be an author?
  2. Disclosure: Must authors admit they used AI?
  3. Permissible Uses: What can AI actually do?
  4. Enforcement: What happens if you break the rules?

They analyzed documents published between January 2020 and March 2025. This covers the recent explosion of AI tools like ChatGPT.

The Results in Plain English

The findings show a strong consensus on the basics. Almost everyone agrees on authorship and disclosure. But the details vary.

  • Authorship: 96.7% say no to AI authors.
  • Disclosure: 96.7% require authors to report AI use.
  • Permissible Uses: 96.7% allow editing and formatting.
  • Enforcement: Only 35% have clear rules for breaking these policies.

This means the medical community agrees on the principles. But the practical application is still messy.

But There’s a Catch

The rules are not the same everywhere. One journal might ask you to disclose AI use in the methods section. Another might want it in the acknowledgments.

This inconsistency is confusing for researchers. It also makes it hard for readers to compare papers. If the standards are different, how do we know what we are reading?

You might wonder, "I’m not a scientist, so why does this matter?"

When you read news about a new cancer drug, that story starts as a scientific paper. If AI helps write that paper, you deserve to know. More importantly, you need to know that a human expert verified every claim.

This study pushes for a "minimum dataset" of rules. This would mean every journal uses the same basic standards. It protects the integrity of the information you read.

If you are a patient or a caregiver, you are reading medical news to make decisions. Trust is key.

Right now, top journals are taking AI seriously. They are ensuring that human doctors remain responsible for the science. You can trust that the papers in these journals have human oversight.

The Limitations

This study only looked at oncology journals. The rules might be different in other fields like cardiology or neurology.

Also, the study is a snapshot in time. AI technology changes weekly. Journals are still playing catch-up. The policies reviewed here might be outdated by next year.

The authors of the study propose a cross-publisher "AI Policy Minimum Dataset." This is a fancy way of saying: Let’s all agree on the same basic rules.

They want standardized disclosures and clear enforcement. This will help keep medical publishing honest and transparent.

Next, other medical fields will likely follow oncology’s lead. We can expect more specific guidelines on how AI can be used in research. The goal is to use the tool safely without losing the human touch.
