Critical Considerations for Generative AI Use in Healthcare
Barbee Mooneyhan of Woebot Health on the Need for Strong AI Guardrails
Generative AI holds great potential for many applications in healthcare, but it's critical to establish a strong framework before deploying it, said Barbee Mooneyhan, vice president of security, IT and privacy at Woebot Health, a provider of AI-driven online mental health services.
"The use cases are endless. But we can't just implement it in healthcare because we're working with patients' lives," she said.
"It's important to consider in all of this that we have to have good guardrails," she said in an interview with Information Security Media Group conducted during the recent HIMSS cyber forum in Boston.
The company has not yet implemented generative AI, although it is studying it and will carefully test it, Mooneyhan said. Woebot Health's AI-driven mental health services do use a natural language processor that's paired with "curated content with clinical oversight," she said.
For any generative AI application in healthcare, "we have to understand what it's doing. We have to verify and test the outputs - and we have a lot of testing to do."
In the interview, Mooneyhan also discusses:
- The steps Woebot Health is taking to protect consumers' sensitive data, including having strict controls over privileged information;
- Potential insider and external security and privacy threats involving generative AI in healthcare;
- The most promising uses for generative AI in healthcare.
Mooneyhan started her IT career in 2002 while studying psychology at the University of Tennessee, Knoxville. Since then, she has built and matured multiple organizations' IT, privacy, incident response, product security, threat hunting, vulnerability management, penetration testing, GRC and security awareness programs. She is also a leader in the professional organization Women in Cybersecurity, or WiCyS.