
Building AI Systems in Healthcare Where Hallucinations Are Not an Option

The Owner Press by The Owner Press
July 25, 2025
in Newswire


Dr. Venkat Srinivasan, Ph.D., Founder & Chair at Gyan AI

As a technologist and entrepreneur who has spent many years architecting enterprise-grade AI systems across highly regulated industries, I have seen firsthand the chasm between AI's promise and its practical risks, especially in domains like healthcare, where trust is not optional and the margin for error is razor-thin. Nowhere is the cost of a hallucinated answer higher than at a patient's bedside.

When an AI system confidently presents false information, whether in clinical decision support, documentation, or diagnostics, the consequences can be immediate and irreversible. As AI becomes more embedded in care delivery, healthcare leaders must move beyond the hype and confront a hard truth: not all AI is 'fit for purpose'. And unless we redesign these systems from the ground up, with verifiability, traceability, and zero hallucination as defaults, we risk doing more harm than good.

Hallucinations: A Hidden Risk in Plain Sight

And yet, there is no doubt that large language models (LLMs) have opened new frontiers for healthcare, enabling everything from patient triage to administrative automation. But they come with an underestimated flaw: hallucinations. These are fabricated outputs, statements delivered with confidence but with no factual basis.

The risks are not theoretical. In a widely cited study, ChatGPT produced convincing but entirely fictitious PubMed citations for genetic conditions. Stanford researchers found that even retrieval-augmented models like GPT-4 with web access made unsupported clinical assertions in nearly one-third of cases. The implications? Misdiagnoses, incorrect treatment recommendations, or flawed documentation.

Healthcare, more than any other domain, cannot afford these failures. As ECRI recently noted in naming poor AI governance among its top patient safety concerns, unverified outputs in clinical contexts can lead to injury or death, not just inefficiency.

Redefining the Architecture of Trustworthy AI

Building AI systems for environments where human lives are at stake demands an architectural shift: away from generalized, probabilistic models and toward systems engineered for precision, provenance, and accountability.

This shift, in my view, rests on five foundational pillars:

(a) "Explainability" and Transparency

AI outputs in healthcare settings must be understandable not just to engineers but to clinicians and patients. When a model suggests a diagnosis, it must also explain how it reached that conclusion, highlighting the relevant clinical factors or reference materials. Without this, trust cannot exist.

The FDA has repeatedly emphasized that explainability is essential to patient-centered AI. It is not just a compliance feature; it is a safeguard.

(b) Source Traceability and Grounding

Every output of a clinical AI system should be traceable to a verified, high-integrity source: peer-reviewed literature, licensed medical databases, or the patient's structured records. In systems we have designed, answers are never generated in isolation; they are grounded in curated, auditable knowledge, with every claim backed by a source you can check. This kind of design is the most effective antidote to hallucinations.
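As a rough illustration of this grounding discipline, the sketch below answers a question only when it can attach a citation from a curated corpus, and abstains otherwise. The `Passage` type, the keyword-overlap `supports` check, and the source IDs are placeholders of my own; a real clinical system would use a validated retriever and verifier, not this toy relevance test.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source_id: str  # e.g. a PubMed ID or an internal record locator
    text: str

def supports(question: str, passage: Passage) -> bool:
    # Toy relevance check: real systems use a trained retriever/verifier.
    q_terms = set(question.lower().split())
    return len(q_terms & set(passage.text.lower().split())) >= 2

def grounded_answer(question: str, corpus: list[Passage]) -> dict:
    """Return an answer only when it can cite curated evidence; otherwise abstain."""
    evidence = [p for p in corpus if supports(question, p)]
    if not evidence:
        # Abstaining is safer than emitting an unsupported (hallucinated) claim.
        return {"answer": None, "citations": [], "abstained": True}
    return {"answer": evidence[0].text,
            "citations": [p.source_id for p in evidence],
            "abstained": False}
```

The design point is the refusal path: when no inspectable source backs the claim, the system says nothing rather than something plausible.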

(c) Privacy by Design

In healthcare, compliance is not an option; it is a necessity. Every component of an AI system must be HIPAA-aware, with end-to-end encryption, stringent access controls, and de-identification practices baked in. This is why leaders must demand more than privacy policies: they need provable, system-level safeguards that stand up to regulatory scrutiny.
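To make "baked in" concrete, here is a deliberately minimal de-identification sketch that strips a few obvious identifier patterns before text leaves a trusted boundary. The patterns and placeholder labels are illustrative assumptions only; HIPAA-grade de-identification relies on validated tooling and expert review, not a handful of regexes.

```python
import re

# Toy PHI scrubber for illustration only. Production de-identification
# must cover the full range of identifiers, not just these patterns.
PHI_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "mrn":   re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}

def deidentify(note: str) -> str:
    """Replace recognizable identifiers with typed placeholders before the
    text ever reaches a model, a prompt log, or a vendor API."""
    for label, pattern in PHI_PATTERNS.items():
        note = pattern.sub(f"[{label.upper()}]", note)
    return note
```

The point is architectural: scrubbing happens as a system-level step on the data path, not as a policy document that downstream components may or may not honor.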

(d) Auditability and Continuous Validation

AI models must log every input and output, every version change, and every downstream impact. Just as clinical labs are audited, so too should AI tools be monitored for accuracy drift, adverse events, or unexpected outcomes. This is not just about defending decisions; it is also about improving them over time.
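One way to picture this is an append-only audit record per inference, paired with a simple drift check against a validated baseline. The field names and the 5% tolerance below are assumptions for illustration, not a standard.

```python
import hashlib
import time

def audit_record(model_version: str, prompt: str, output: str,
                 citations: list[str]) -> dict:
    """One append-only audit entry per inference: enough to reconstruct
    what was asked, what was answered, and which model version answered."""
    return {
        "ts": time.time(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
        "citations": citations,
    }

def accuracy_drift(baseline: list[bool], recent: list[bool],
                   tolerance: float = 0.05) -> bool:
    """Flag the model for review when recent accuracy falls more than
    `tolerance` below the validated baseline (like a lab QC check)."""
    base = sum(baseline) / len(baseline)
    now = sum(recent) / len(recent)
    return (base - now) > tolerance
```

Hashing the input keeps the log verifiable without storing raw patient text in yet another system, while the drift check turns "continuous validation" into a routine, automatable comparison.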

(e) Human Oversight and Organizational Governance

No AI should be deployed in a vacuum. Multidisciplinary oversight, combining clinical, technical, legal, and operational leadership, is essential. This is not about bureaucracy; it is about responsible governance. Institutions should formalize approval workflows, set thresholds for human review, and continuously evaluate AI's real-world performance.
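A minimal sketch of such a human-review threshold might look like the routing rule below. The `review_threshold` value and the rule that uncited answers never auto-release are illustrative policy choices, the kind a governance board, not an engineering team, would set.

```python
def route(output: dict, confidence: float,
          review_threshold: float = 0.9) -> str:
    """Route each model output either to automatic release or to a
    clinician's review queue, per institutional policy."""
    if not output.get("citations"):
        return "human_review"  # ungrounded answers never auto-release
    if confidence < review_threshold:
        return "human_review"  # low confidence requires a human in the loop
    return "auto_release"
```

Encoding the policy as an explicit, auditable rule is what makes "thresholds for human review" enforceable rather than aspirational.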

An Executive Framework for Responsible AI Adoption

For healthcare executives, the path forward with AI models should begin with questions: Is this model explainable, and to which practitioners or audiences? Can every output be tied to a trusted, inspectable source? Does it meet HIPAA and broader ethical standards for data use? Can its behavior be audited, interrogated, and improved over time? Who is responsible for its decisions, and who is accountable when it fails?

These questions should also be embedded into procurement frameworks, vendor assessments, and internal deployment protocols. Stakeholders in the healthcare ecosystem can start with low-risk applications, such as administrative documentation or patient engagement, but should design with future clinical use in mind. They should insist on solutions that are deliberately designed for zero hallucination rather than retrofitted for it.

And most importantly, any AI integration should include investment in clinician education and involvement. AI that operates without clinical context is not merely ineffective; it is dangerous.

From Probability to Precision

It is clear to me that the age of 'speculative AI' in healthcare is ending. What comes next must be defined by rigor, restraint, and responsibility. We do not need more tools that impress; we need accountable systems that can be trusted.

Enterprises in healthcare should reject models that treat hallucination as an acceptable side effect. Instead, they should look to systems purpose-built for high-stakes environments, where every output is explainable, every answer traceable, and every design choice made with the patient in mind.

In summary, if the cost of being wrong is high, as it certainly is in healthcare, your AI system should never be the cause.


About Dr. Venkat Srinivasan, Ph.D.

Dr. Venkat Srinivasan, PhD, is Founder & Chair of Gyan AI and a technologist with decades of experience in enterprise AI and healthcare. Gyan is a fundamentally new AI architecture built for enterprises with low or zero tolerance for hallucinations, IP risks, or energy-hungry models. Where trust, precision, and accountability are critical, Gyan ensures every insight is explainable and traceable to reliable sources, with full data privacy at its core.



© 2024 The Owner Press | All Rights Reserved

Welcome Back!

Login to your account below

Forgotten Password? Sign Up

Create New Account!

Fill the forms bellow to register

All fields are required. Log In

Retrieve your password

Please enter your username or email address to reset your password.

Log In
No Result
View All Result
  • Newswire
  • People and Stories
  • SMB Press Releases
  • Login
  • Sign Up

© 2024 The Owner Press | All Rights Reserved