Yesenia Merino, Alessandra Bazzano and Anne Glauber on the promise and pitfalls of AI in public health, plus four things you can do as the technology evolves.
Artificial intelligence (AI) technology has seen widespread adoption in public health data spaces — and in seemingly every corner of our lives. This presents public health with an opportunity to thoughtfully consider how AI will be used and how it will shape the ways we care for people. It’s no longer a question of whether we will integrate AI into our research, teaching and practice, but how. Public health has a duty to lead conversations about how this technology can be used responsibly and ethically.
Opportunities
Supporting the public health workforce
AI tools can help public health professionals manage heavy workloads, especially in agencies facing growing responsibilities and limited resources. When used thoughtfully, AI can automate parts of routine workflows, support disease surveillance and strengthen health promotion and education. With human oversight, these tools can improve efficiency while preserving professional judgment and expertise. Rather than replacing people, responsible AI use can reinforce foundational public health capabilities, allowing teams to focus on complex decisions, community relationships and preventive action.
Innovative public health solutions
By expanding access to research, data and evidence-based information, AI has the potential to power new tools and interventions, particularly for communities that struggle to find or apply relevant health information. Faculty at the Gillings School are developing AI-driven language models, chatbots, data systems and screening tools to address real-world health challenges. Supported by the Center for AI and Public Health, these efforts bring interdisciplinary expertise to responsible development and application. AI also enables rapid prototyping, helping researchers test ideas, assess usability and gather feedback before making larger investments.
Meaningful policy discussions
As AI capabilities expand, public health leaders can shape policies that promote transparency, accountability and ethical innovation. By setting guardrails, experts help protect communities while encouraging responsible use across health care, research and society.
Challenges
Environmental impacts
AI systems rely on data centers that consume large amounts of electricity and water. These facilities are often built in rural areas where land is less expensive, shifting environmental and economic burdens onto nearby communities. Residents may experience increased infrastructure strain and resource costs long before shortages become visible, raising concerns about long-term sustainability and environmental justice.
Loss of skills and expertise
Although AI is often described as an assistant, effective oversight depends on strong foundational knowledge. Emerging research suggests that excessive reliance on AI, especially during learning and training, can hinder the development of critical thinking and problem-solving skills in both children and adults. Delegating complex tasks to AI can create “cognitive debt,” reducing people’s ability to evaluate information, detect errors and apply contextual understanding in professional and academic settings.
Hallucinations and bias
AI tools can produce confident but incorrect outputs, including fabricated information known as hallucinations. In addition, AI systems may reflect biases embedded in their training data and design. Without intentional, inclusive human oversight, these tools risk reinforcing systemic inequities and perpetuating harm to underserved communities.
What can you do?
- Remember that AI use is a choice. AI remains optional in many settings, and people may have valid reasons to limit or avoid its use.
- Use AI intentionally. Clear, thoughtful prompting improves outputs while reducing environmental and resource costs.
- Maintain human judgment. AI works best as a support tool, not a replacement for expertise or critical thinking.
- Stay engaged and curious. Ongoing discussion, learning and inclusive leadership are essential as AI continues to evolve.

Yesenia Merino, Assistant Professor of Health Behavior

Alessandra Bazzano, Chair and Professor of Maternal and Child Health

Anne Glauber, Director of Innovation