From Governance to Enablement: How Healthcare CIOs Can Stop Killing AI Innovation

Tony Pastorino, Director of Healthcare Practice, Resultant

AI governance in healthcare has a branding problem. The word “governance” alone is enough to make half the room shut down. I’ve seen it happen. You put that word on a meeting invite, and suddenly everyone assumes this is the conversation where someone tells them what they can’t do. That framing kills innovation before it starts.

I spent nearly a decade leading data and analytics teams at the largest healthcare system in Indiana, and we eventually stopped calling it governance altogether. We started referring to it as data enablement, and that small shift in language changed the entire tone of how people engaged with the work. Where governance sounds like a wall, enablement sounds like a door. Healthcare CIOs who want their organizations to adopt and benefit from AI need to think carefully about how they use oversight as an enabling mechanism.

The groundwork isn’t new, but the scope is

There’s a common narrative right now that AI came out of nowhere and organizations are scrambling to build frameworks from scratch. That’s only partially true. Healthcare has been grappling with AI and machine learning for longer than most industries give it credit for. Imaging and radiology have used it for years. The same goes for medical device vendors applying predictive modeling to device data. What’s changed is the surface area. Generative AI has expanded the challenge significantly, touching clinical workflows, administrative operations, and patient-facing tools in ways earlier models never did.

What health systems need to figure out now is whether the compliance frameworks they built for imaging and device data hold up against a much broader set of AI capabilities.

Start with an enablement team

If your organization doesn’t have a cross-functional team dedicated to responsible AI adoption, that’s Priority Number One. Full stop. This team should include privacy and security experts, patient ethics specialists, data scientists, and licensed healthcare providers. The technical people need to understand the guardrails they’re operating within, and the clinical and ethics representatives need to understand what the technology can actually do. That cross-pollination is where responsible innovation happens.

I’ve sat in rooms where every person at the table was a technologist, and the conversation moves fast but misses critical blind spots around patient impact. I’ve also been in rooms where compliance dominated, and every idea died on the vine. The right team has both perspectives, and neither side gets veto power. They define what responsible looks like together, and that definition gets applied consistently as new capabilities emerge.

Don’t lead with compliance 

Here’s where I think a lot of organizations get this wrong. They build a governance framework and then tell people to run every idea through it before they even get to brainstorm. Governance should enter the conversation at step two or three, once you’ve identified a high-value idea and need to figure out how to implement it responsibly. Put simply, innovation should come first. Let your people generate ideas freely. If you tell people they can only innovate inside a predefined box, you’re going to get incremental thinking at best. Incremental thinking isn’t going to solve the operational challenges health systems are facing right now. The organizations that separate idea generation from compliance assessment are the ones that will actually move.

What about Protected Health Information?

The PHI question often feels more daunting than it needs to be. Under HIPAA's Safe Harbor de-identification standard, there are 18 categories of identifiers that constitute protected health information: names, geographic data smaller than a state, dates tied to individuals, Social Security numbers, medical record numbers, and so on.

The practical approach is straightforward. Determine whether any of those 18 field types are involved. If they are, validate that your team or vendor is following established de-identification standards and that there’s no viable path back to re-identification. Ask the question directly: what data are you training on, and how is identifiable information being protected?
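That screening step can be sketched in a few lines. The column names and the category map below are illustrative assumptions for the sake of the sketch, not a complete HIPAA schema; a real implementation would cover all 18 identifier categories and your organization's actual field naming.

```python
# Illustrative map of (hypothetical) dataset column names to the HIPAA
# Safe Harbor identifier categories they may fall under. Incomplete by
# design: a production version would cover all 18 categories.
PHI_FIELD_CATEGORIES = {
    "patient_name": "names",
    "street_address": "geographic data smaller than a state",
    "date_of_birth": "dates tied to an individual",
    "ssn": "Social Security numbers",
    "mrn": "medical record numbers",
}

def screen_dataset(columns):
    """Return the PHI categories implicated by a dataset's columns.

    An empty result means none of the flagged field types are present.
    A non-empty result means de-identification must be validated with
    the team or vendor before the data is used for AI training.
    """
    return {col: PHI_FIELD_CATEGORIES[col]
            for col in columns if col in PHI_FIELD_CATEGORIES}

flagged = screen_dataset(["mrn", "date_of_birth", "lab_result_value"])
# "lab_result_value" is not an identifier category, so only the first
# two columns come back flagged for de-identification review.
```

The point of a check like this is not to be exhaustive on its own; it surfaces the right question early, so the "what are you training on?" conversation with a vendor happens before data moves, not after.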

Beyond de-identification, CIOs should be asking whether the AI’s output is explainable. Physicians have to be able to articulate the reasoning behind clinical decisions to their patients. Black-box decision support that can’t be explained is a liability, both legally and in terms of patient trust.

I’ve always used a simple gut check when evaluating whether an AI initiative is on the right track from a compliance standpoint: ask yourself whether what you’re doing puts you at even a medium risk of a news article about patient data being misused or leaked. If you look at every initiative through that lens, it tends to self-correct without needing a 50-page compliance manual for every project.

No health system wants to be the one explaining a data breach on the evening news. That healthy fear, channeled productively, is actually one of the best compliance mechanisms available.

Moving fast while staying compliant 

For CIOs feeling pressured to accelerate AI adoption faster than their compliance framework allows, the advice is simple: don’t put the compliance burden on your idea generators. Let them keep generating. Take the high-value ideas and see how they line up with your existing framework. If the framework needs to flex, adjust it. But don’t slow down the people whose job it is to think creatively about how AI can improve care, reduce costs, and give providers more time to do what they got into medicine to do.

The health systems that will lead in this next era are those that have figured out how to move responsibly without treating every new idea as a threat.

About Tony Pastorino

Tony Pastorino is the commercial health strategy director at Resultant. He brings deep healthcare domain expertise to his role, having led various components of information services at the largest Indiana healthcare system for nearly ten years. Today, Tony leads delivery on healthcare client engagements and the advancement of Resultant's commercial healthcare practice. He is passionate about helping healthcare systems move beyond basic trend analysis and extract value from their data investments through predictive and prescriptive analytics.

Source: hitconsultant.net
