
Artificial intelligence (AI) and machine learning (ML) stand to transform outcomes for stakeholders across the healthcare industry more directly than nearly any other technology since the discovery of antibiotics. Able to ingest and learn from massive amounts of data, including patient information, clinical data, billing, and financials, these technologies can bring time and cost savings to every part of a healthcare organization. They enable everything from identifying inefficiencies on the administrative side of medicine to predictive analytics for preventive patient care and the real-time use of IoT medical device data for early identification of medical conditions.
It makes sense, then, that healthcare tech vendors are pouring their efforts into finding ever more powerful ways to leverage varied datasets in their developing technologies, and today's solutions likely only hint at what AI-based healthcare will eventually make possible. With so many vendors and solutions to choose from, however, the challenge for any organization looking to implement AI in this industry is not simply a matter of selecting the right platform capabilities. The real problems lie in the usability of the data itself and the risks surrounding it.
Risks Associated with AI in Healthcare
Because each hospital group, medical office, laboratory, insurance provider, billing company, or other component of a healthcare ecosystem has its own systems, applications, and platforms, data must be normalized before it can be used in an AI platform. Every step added to the data pipeline, however, carries risk, especially when it involves an outside data processor acting as a third-party vendor. Not only does patient data raise HIPAA concerns, requiring legal protections, Business Associate Agreements, and other policies and procedures, but every dataset in a healthcare ecosystem, even one containing no protected health information, is a high-value target in the eyes of cybercriminals.
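To make the normalization step concrete, here is a minimal sketch in Python, using hypothetical source systems and field names, of how records from two different systems might be mapped onto a common schema before reaching an AI pipeline. A production pipeline would of course add de-identification, validation, and audit logging, and would typically build on interoperability standards such as HL7 or FHIR rather than ad hoc field maps.

```python
from datetime import datetime

# Hypothetical field mappings: each source system names the same
# concepts differently, so we map everything onto one common schema.
FIELD_MAPS = {
    "hospital_emr": {"pt_id": "patient_id", "dob": "birth_date", "dx": "diagnosis_code"},
    "billing_sys": {"member_no": "patient_id", "birthdate": "birth_date", "icd10": "diagnosis_code"},
}

# Each source also formats dates differently.
DATE_FORMATS = {"hospital_emr": "%m/%d/%Y", "billing_sys": "%Y-%m-%d"}

def normalize(record: dict, source: str) -> dict:
    """Map one source-specific record onto the common schema."""
    mapping = FIELD_MAPS[source]
    out = {common: record[src] for src, common in mapping.items() if src in record}
    # Convert dates to ISO 8601 so downstream models see a single format.
    if "birth_date" in out:
        out["birth_date"] = (
            datetime.strptime(out["birth_date"], DATE_FORMATS[source]).date().isoformat()
        )
    return out

# Two records that describe the same facts in different shapes...
print(normalize({"pt_id": "A123", "dob": "07/04/1980", "dx": "E11.9"}, "hospital_emr"))
print(normalize({"member_no": "B456", "birthdate": "1975-01-31", "icd10": "I10"}, "billing_sys"))
# ...come out with identical field names and date formats, which is what
# lets a single model consume data from both systems.
```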
According to Black Book Market Research, healthcare is currently the number one target for attackers, with reported breaches costing the industry a total of $4 billion in 2019. Why would attackers favor healthcare data over vast silos of credit card information? Healthcare data has a much stronger upstream and downstream impact than other types of data, so attackers can bank on an organization's willingness to pay a ransom to get it back. They also know that stolen health records fetch a higher price on the black market than credit card numbers do. It's no wonder, then, that the more forward-thinking health tech providers are taking a security-first approach to everything they do.
Managing Risks
Verinovum is one such organization, providing the data curation necessary to create clean, actionable data that can then be leveraged in powerful AI technologies. From the very beginning, when Mike Noshay, co-founder and Chief Customer Officer, was getting Verinovum off the ground, he worked hand-in-hand with security expert Jerald Dawkins, Ph.D., founder of True Digital Security.
“Verinovum has always taken a security-first approach, recognizing the risk that comes with providing value in this industry. It seems like, for some, security ends up becoming a barrier to advancements with revolutionary technologies, such as AI in healthcare,” says Dawkins. “The fact is, security challenges are real, but they are also manageable if properly mitigated.”
“The issue I often see is that the risks are not managed appropriately, and as a result, a technology that could otherwise lead to tremendous advancements in healthcare ends up becoming just another avenue for attack,” Dawkins continues. “I often say security is a team sport — that’s very much the case in healthcare technology. Cybersecurity experts and healthcare innovators must work as a team to manage risk accordingly and ensure that, ultimately, patient outcomes are impacted for the better by securely deployed, emerging technologies.”
Noshay comments, “A comprehensive security program was top of mind from day one. Jerry and his team helped us ensure our metaphorical windows and doors were locked tightly and helped shape a robust security program to guide sound internal practices. Much of this work was the foundation of our SOC 2 and HITRUST certifications.”
Tips for Vendor Selection
As healthcare organizations seek to implement more powerful, AI-based technologies, they should weigh carefully what evidence AI tech vendors are willing to share about their security validation. Vet vendors and their offerings carefully, using criteria such as the following:
- When evaluating an application or platform, look for evidence that it has been securely architected and that robust annual security assessments and penetration tests are performed to identify and remediate vulnerabilities.
- Ensure the vendor has detailed information readily available around their compliance, security certifications, and security controls. If they hesitate, pass the buck, or tell you that this is privileged information, you have your answer and should move on. Anyone who is implementing security thoughtfully will have a solid list of answers, certifications, and attestations at the ready.
- Find out who their vendors are, and inquire into those partners' security practices. For example, if the AI vendor houses their application or platform in a public cloud and tells you that the cloud provider is responsible for all security controls, you have a problem: under the shared responsibility model, the cloud provider secures the underlying infrastructure, while the vendor remains responsible for securing its own applications, configurations, and data. Go beyond a simple inquiry, though: contact any partner who would touch or house your organization's valuable data, and put them through the same thorough vetting.
These steps may add a little extra time to your decision-making process, but rest assured they will cost far less in time, money, and reputation than a serious security incident would.
While AI technologies offer tremendous potential for the entire healthcare industry, the key to leveraging their capabilities is taking the necessary steps to normalize, curate, and secure your data. Ensuring you have usable, protected datasets will let you leverage emerging AI technologies without putting yourself, your organization, or any patients whose data you hold in harm's way.