AI in Healthcare Needs More Than Just Ethics—It Needs a Full Operational Framework
- Katarzyna Celińska
- 20 hours ago
- 2 min read
The National Academy of Medicine has published a powerful and timely AI Code of Conduct for Health and Medicine, a comprehensive framework designed to guide the ethical and safe development, implementation, and oversight of AI in one of the most sensitive and life-impacting sectors—healthcare.
AI holds enormous promise in improving diagnostics, patient care, system efficiency, and population health—but without deliberate governance, it could just as easily harm the most vulnerable. The Code of Conduct offers six key commitments and ten detailed principles that tackle challenges head-on:
🔹 Advance humanity
🔹 Ensure equity
🔹 Engage impacted individuals
🔹 Improve workforce well-being
🔹 Monitor performance
🔹 Innovate and learn
Each principle aligns with critical areas like privacy, cybersecurity, explainability, bias mitigation, data integrity, and accountability, ensuring both AI developers and healthcare providers implement technology with human-centered values.

NAM doesn’t stop at high-level values—it maps the Code to real-life roles across the AI ecosystem:
🔹 Developers must ensure data quality and algorithmic transparency
🔹 Healthcare providers must adapt AI safely within workflows
🔹 Federal agencies are urged to create strong evaluation and certification programs
Crucially, the framework integrates well with existing global initiatives, including the EU AI Act, the OECD AI Principles, and WHO guidance on AI for health, offering a truly interoperable model for global healthcare innovation governance.
Implementing a robust framework like this is essential, especially in healthcare—where AI decisions can directly impact the safety and lives of millions. While laws like the EU AI Act provide a foundation, they’re not enough on their own.
What’s impressive about the NAM Code is that it doesn't just cover cybersecurity and privacy—it also tackles how models operate, whether data is appropriate, how results are derived, and how to prevent biased or invalid outputs. These are real, practical issues I see in the field every day. In healthcare, the stakes are too high to wing it. Organizations must ensure data integrity, operational validation, and ethical application of AI—not just to comply with regulation, but to protect lives and deliver trustworthy care.
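To make the idea of operational validation concrete, here is a minimal, purely illustrative sketch of a pre-use validation gate for AI output in a clinical pipeline. Every name, threshold, and version list below is hypothetical—this is not part of the NAM Code, just one way an organization might catch invalid outputs before they reach a clinician.

```python
# Illustrative sketch only: a minimal validation gate for an AI model's
# output in a clinical pipeline. All names, thresholds, and version
# lists are hypothetical assumptions, not taken from the NAM Code.

from dataclasses import dataclass

@dataclass
class Prediction:
    patient_id: str
    risk_score: float   # model output, expected to lie in [0.0, 1.0]
    model_version: str

# Hypothetical list of model versions cleared through an internal
# evaluation/certification process.
APPROVED_MODEL_VERSIONS = {"v2.1", "v2.2"}

def validate_prediction(pred: Prediction) -> list[str]:
    """Return a list of validation failures; an empty list means the
    prediction may proceed to clinician review."""
    failures = []
    if pred.model_version not in APPROVED_MODEL_VERSIONS:
        failures.append("model version not certified for clinical use")
    if not (0.0 <= pred.risk_score <= 1.0):
        failures.append("risk score outside valid range")
    if not pred.patient_id:
        failures.append("missing patient identifier")
    return failures

# Example: an out-of-range score is flagged before it reaches a clinician.
bad = Prediction(patient_id="p-001", risk_score=1.7, model_version="v2.1")
print(validate_prediction(bad))  # → ['risk score outside valid range']
```

A real deployment would go much further—schema checks on input data, drift monitoring, audit logging—but even a simple gate like this reflects the Code's point that validation must be operational, not just aspirational.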
Read the Full Report: Link
Author: Sebastian Burgemejster