As artificial intelligence continues to accelerate across industries, few sectors are undergoing more transformation than healthcare. At USC Gould School of Law, academics are exploring how these changes demand not only technological understanding, but also rigorous legal and ethical scrutiny.
In a recent “Lunch and Learn” session hosted by USC Gould lecturer Nazanin Tondravi, who serves as Director of Regulatory Affairs at Memorial Healthcare System, legal experts and compliance professionals explored the complex and evolving relationship between AI and healthcare — from the promise of innovation to the pressing realities of regulation, privacy and ethics. The conversation took a nuanced look at how legal professionals can help guide responsible AI implementation in health systems today.
Read a summary of the conversation below, or listen in podcast format:
Transcript
Speaker 1:
Okay, let’s, let’s unpack this. We’re diving deep into something that’s really transforming, well, pretty much every sector. But right now we’re focusing specifically on its explosive growth and implications in health care, and we’re talking about artificial intelligence. That’s right. And our source for this deep dive is a recent lunch and learn session. It featured an expert from Memorial Healthcare System who really laid out some key points.
Speaker 2:
Yeah. So our mission here is to pull out the most important pieces from that discussion. We want to help you understand not just how AI is being used, you know, in healthcare today, but also the critical stuff around it: implementation, compliance, the bigger questions it raises. Exactly. Because AI isn’t some, like, far-off future concept anymore. It’s here, and it’s rapidly becoming part of the health care system you interact with.
Speaker 1:
Totally. Think about how it might, say, streamline your hospital visit, or maybe help your doctor analyze a scan more effectively. Right. It’s moving beyond just simple automation into really complex areas, things that are poised to revolutionize medicine as we know it. So where do we even start? With the impact? The source material painted a pretty broad picture, didn’t it?
Speaker 2:
AI’s potential across the whole healthcare spectrum. Yeah, what really stood out was just the sheer range of areas they mentioned where AI can make a difference. It’s not just, you know, one thing. They talked about accelerating drug discovery, for instance, dramatically speeding up that process of finding new treatments. That’s huge, isn’t it? And on the clinical side, they really emphasized diagnostics.
Speaker 1:
Yeah. You know, AI is getting incredibly good at reading lab results, interpreting medical images, sometimes potentially faster or even with greater accuracy than a human alone. Absolutely. And it goes beyond that into optimizing the, the operational side of healthcare. Like, imagine AI helping hospitals run more efficiently. That could mean you get seen faster or the system saves significant money. And the promise of truly personalized medicine.
Speaker 2:
Tailoring treatments, maybe for complex diseases like cancer, right? Based on analyzing your unique data against just vast amounts of research. AI’s power there is really its ability to sift through and find patterns in these mountains of data that would be, well, impossible for humans. We’re talking medical records, images, genomic data, even real-time vital signs from monitors.
Speaker 1:
So it helps healthcare professionals make more informed decisions, maybe automate some tasks that really bog them down. Yeah. Frees them up. So just to be clear, though, the point isn’t that AI replaces that essential human element of care, right? The doctor listening, the nurse providing comfort. No, not at all. It was presented very much as a powerful tool.
Speaker 2:
Another layer of insight, if you will, another set of sophisticated eyes to assist those on the front lines, augmenting human capability, not substituting for human judgment and, compassion. Exactly. Which kind of brings us to what’s happening right now. The experts shared examples of AI already in practice, actually making a tangible difference. Yeah. What were some of those? Well, we’re seeing real-world applications, like detecting early signs of certain cancers from scans, or identifying eyesight issues for diabetics much sooner than traditional methods might allow.
Speaker 1:
Okay, so real clinical benefits. And on the efficiency side? Yeah, AI is already being used to make hospital workflows smoother. It’s also playing a role in personalizing treatment approaches based on patient data, like we touched on. Even simple things like patient engagement. Right. Think about those chatbots you sometimes interact with on a health care provider’s website. They’re often AI-powered, providing initial info or guiding you.
Speaker 2:
Okay, but getting AI actually implemented in health care settings, that sounds complex. It requires careful planning, I imagine. Oh, absolutely. The source really highlighted that. You have to start by clearly identifying the specific problem you’re trying to solve with AI. It’s not just tech for tech’s sake, right? It’s a tool for a defined task. And implementation usually starts small; pilot programs are crucial, you know, to test effectiveness and integration.
Speaker 1:
A major hurdle is just getting AI to talk seamlessly with all the existing systems, the electronic medical records, different vendor platforms, without creating more work for staff who are already stretched thin. Exactly. The goal is to reduce burden, not add a new technical headache. So if a pilot works out, then maybe they roll it out in phases, like one department or one hospital in a big network.
Speaker 2:
Precisely. A phased rollout is typical. And the driving force behind a lot of this, as the expert pointed out, is freeing up clinicians. Speeding up those back-end administrative tasks, notes, scheduling, managing appointments, allows doctors and nurses to spend more valuable time directly with patients, which could be key to chipping away at those frustratingly long wait times.
Speaker 1:
For a specialist, for example. That’s a huge potential outcome. Yeah, something that impacts everyone. And some big players are already deep into this, right? The source mentioned some names. Yeah. Academic medical centers and places like the Mayo Clinic and Cleveland Clinic were highlighted as being really at the forefront. They’re actively integrating AI into diagnostics, treatment planning and research, showing how it can work as that kind of second opinion or streamline processes to ultimately benefit the patient.
Speaker 2:
Okay, so let’s shift gears a bit. We’ve talked about the exciting potential, the current uses. But now let’s get into the, the absolutely critical foundation needed for doing this responsibly: compliance. Yeah, this is crucial. And it adds entirely new layers to an already complex area in health care. So how do you approach that? Well, the experts stressed the importance of applying well-established principles.
Speaker 1:
You know, the standard elements of an effective compliance program, like written policies, a compliance officer, training. Exactly. Those things apply directly to AI. You’re basically building AI implementation on that existing compliance bedrock. You don’t throw it out; you extend your existing framework to cover AI. Makes sense. And what about specific regulations? Well, HIPAA is obviously top of mind.
Speaker 2:
AI systems often process just vast amounts of health data, and even if it’s initially de-identified, protecting that information is paramount. The source specifically flagged concerns around data security, especially when using third-party AI vendors, right? Vendor breaches are a huge risk. And it’s not just privacy, is it? The FDA gets involved, too. Yes. When AI is used in ways that impact clinical decision making or diagnosis, the FDA actually classifies it as a medical device.
Speaker 1:
And that comes with its own whole set of rigorous regulatory requirements and rules. Wow. Okay. So multiple agencies, multiple layers of rules. And the pace and specifics of those regulations can even be influenced by broader factors. The political environment, for instance, was noted as potentially impacting how quickly agencies can finalize these rules. But regardless of the regulatory pace, training and education for the workforce sound critical.
Speaker 2:
Absolutely paramount. Staff need to really understand the privacy implications when they’re using these powerful new tools, especially if they come from outside vendors. It raises the big question, doesn’t it? How do you make sure everyone using the AI understands the data they’re handling and the rules that apply, particularly with external software? That’s the challenge. And it leads us right into the broader set of legal, privacy and, well, other significant considerations, like data privacy with vendors.
Speaker 1:
That’s a huge piece. Making sure any third party handling patient data via AI meets those strict HIPAA standards. Definitely. And there’s a fascinating challenge around data that’s initially de-identified. What happens if the AI analyzes it and finds something critical, something that means you have to re-identify the patient to notify them? How do you handle that legally and compliantly?
Speaker 2:
That’s tricky. And the source pointed out that some places are already trying to create frameworks, right? California. Yeah. California’s Attorney General’s office, for example, has already issued guidance specifically on AI use in health care. So things are starting to move. And what about the patient perspective? Consent and transparency must be huge here. Vital components. Patients really need to be informed when and how AI is being used in their care.
Speaker 1:
The source gave a good practical example, didn’t it? Something about radiology reports? Yeah. Adding language to scan reports that basically says AI was used alongside the radiologist’s expert review. It manages expectations, builds trust. You’re being upfront. And trust is key, especially when AI introduces complexities like the potential for bias in the models themselves. That’s a huge ethical dimension, right?
Speaker 2:
If the data used to train the AI reflects existing health care disparities, the AI could inadvertently just perpetuate or even, like, amplify them. That’s a critical point the expert raised. Building transparency with both staff and patients seems essential to avoid suspicion and build confidence that AI is a helpful tool, not some mysterious black box. Absolutely. And then there’s the really significant, kind of open question of accountability, meaning who’s responsible if something goes wrong?
Speaker 1:
Exactly. If an AI-assisted diagnosis or treatment plan leads to a bad outcome, who bears responsibility? The AI developer? The clinician who used the tool? The hospital? The source highlighted that there’s currently very little legal precedent here. And while disclaimers exist on these tools, their actual legal weight in the real world is largely untested. That lack of clear legal guidance sounds like a major challenge right now.
Speaker 2:
It really is. And it absolutely underscores the, the non-negotiable need for human oversight, doesn’t it? Completely. Humans must be in control: control of the data fed into AI, what output is shared, and ultimately how AI recommendations inform patient care decisions. It can’t be fully automated decision making in critical areas. The Q&A from that session reinforced this too, right?
Speaker 1:
The need for human oversight and clarity with patients. Yes, exactly. Explaining how AI is assisting, maybe it’s handling notes or helping with scheduling, so patients understand it’s enhancing the doctor’s ability to care for them, not, you know, replacing that core relationship. It’s about positioning AI correctly. Yeah. As a tool to enhance human capacity and improve efficiency, ultimately for the patient’s benefit.
Speaker 2:
Precisely. So looking ahead, the landscape is just incredibly dynamic. AI tech and its applications in healthcare are literally changing day by day. And a major challenge we’ve touched on is that lack of established legal precedent. As the expert put it, nobody wants to be the textbook example for when things go wrong legally with health care AI. Right, it’s uncharted legal territory.
Speaker 1:
But navigating this really requires collaboration. It’s similar to how compliance work is often a team effort across an organization. Makes sense. Yeah. And policy and advocacy are moving fast, too. Groups are forming coalitions specifically focused on health care AI frameworks. CHAI was mentioned in the source, for example. It’s really framed as an era of significant opportunity, a chance to fundamentally improve how we do health care.
Speaker 2:
So if we pull together the key takeaways from the expert, from this whole deep dive, what should people keep in mind? Well, first, be proactive. Understand what AI tools are out there and how they might already be interacting with your systems or systems you use. Look at the whole picture, right? Where can AI genuinely integrate to improve workflows?
Speaker 1:
Free up clinicians, actually enhance patient care? Absolutely. And prioritize data privacy and security like never before. If you’re using vendors for AI, you have to scrutinize their HIPAA compliance, their security rigor. And advocate for responsible use. Position AI as that tool to enhance human expertise and care, not replace it. Definitely. This is clearly just the very beginning of this journey.
Speaker 2:
Health care operations, and especially compliance, will have to continuously adapt and evolve right alongside AI technology. You can see that need for expertise growing, too. The expert’s own background, the mention of specialized programs like law and AI, health care compliance or privacy law, it really highlights that need. Yeah, it shows that understanding these complexities and building expertise is becoming crucial for navigating the future of health care.
Speaker 1:
Okay, so there you have it. We’ve taken a look at the potential impact, the current practical uses, the absolutely critical world of compliance and regulation, and those significant legal, privacy and related considerations surrounding AI in health care, all informed by that lunch and learn session. AI is undeniably becoming a more integrated part of the health care experience for all of us, but its success, its beneficial adoption, fundamentally depends on responsible implementation, robust compliance, full transparency and continuous human oversight.
Speaker 2:
Absolutely. So maybe here’s a final thought for you, the listener, to mull over. Given the breathtaking speed of AI development and the current lack of legal precedent we talked about, how can we ensure that the incredible potential to improve health and health care access through AI is truly balanced with that fundamental need to maintain trust, ensure equity in care, and keep the human patient squarely at the center of the system?
Speaker 1:
What specific questions should you be prepared to ask about AI the next time you encounter it in your own health care journey? Something to think about as this field keeps evolving so quickly. Definitely food for thought. Thanks for joining us on this deep dive.
The Expanding Role of AI in Healthcare
AI is no longer a futuristic concept in medicine; it is actively reshaping clinical practice, operational systems and patient care. From accelerating drug discovery to interpreting medical images and optimizing hospital workflows, AI offers enormous potential to improve healthcare efficiency and outcomes. As the expert panel emphasized, AI’s value lies in augmenting human judgment, not replacing it. This distinction is essential to how we structure legal frameworks and compliance measures around its use.
Implementation: From Pilots to Practice
Deploying AI in real-world healthcare settings is no small feat. Effective integration demands clarity of purpose, thoughtful pilot testing and systems that communicate seamlessly with existing electronic medical records. The goal, as outlined by the panelists, is not to introduce complexity, but to reduce administrative burdens so that clinicians can focus on patient care. As more academic medical centers pioneer these integrations, legal and compliance professionals play a pivotal role in guiding policy and mitigating risk.
Compliance and Regulatory Foundations
Session speakers underscored the importance of building AI systems on established healthcare compliance principles: clear policies, training programs, dedicated compliance officers and robust data governance. AI tools, particularly those used in diagnostics, often fall under FDA medical device regulations and involve significant HIPAA-related concerns. Especially when third-party vendors are involved, ensuring data security and regulatory adherence becomes paramount.
California and other states are already issuing AI-specific healthcare guidance, while national regulatory bodies are evaluating how to keep pace. As the discussion highlighted, the legal landscape is shifting rapidly, with little precedent to rely on — making proactive legal planning and continuous oversight indispensable.
Ethical Challenges and Patient Trust
One of the most pressing ethical challenges is AI bias, especially if training data reflects preexisting disparities. Panelists stressed the importance of transparency — with both patients and providers — regarding how AI is being used. For example, adding simple disclaimers in radiology reports can enhance trust by clarifying that AI supported, but did not replace, physician decision-making.
Crucially, accountability remains a gray area. If AI contributes to a poor outcome, who is liable? With legal precedent still forming, this question demands urgent attention from lawmakers, educators, and the broader legal community.
A Call for Legal Expertise
The intersection of law, technology and healthcare is not just a niche — it’s a vital frontier for the legal profession. That’s why academic offerings at USC Gould School of Law include specialized certificates and degree programs in law and artificial intelligence, health care compliance and privacy law. For example, Gould offers a Law and Artificial Intelligence certificate, a Law and Regulation of Artificial Intelligence minor for undergraduates, and a Master of Science in Innovation Economics, Law and Regulation (MIELR) with coursework that explores big data and machine-learning innovation through the lens of antitrust, privacy, data security and intellectual property law. These programs prepare professionals to lead in this evolving space, ensuring innovation proceeds with integrity and compliance.
Looking Ahead
AI in healthcare holds extraordinary promise, but its responsible use depends on clear legal frameworks, institutional trust and ongoing human oversight. The role of the legal community is not just to react to change — but to shape it by asking critical questions: How do we safeguard patient rights in an AI-driven system? What frameworks will ensure equity and fairness? And how can we ensure AI remains a tool to elevate, rather than replace, the human touch at the heart of care?