Patient Dignity and Artificial Intelligence (AI) Prosperity

Privacy Plus+

Privacy, Technology and Perspective

Yes, people in the U.S. could have both, with a suggested new HIPAA provision. This week, we're addressing whether the U.S. can have meaningful privacy protection and advance the nation's leadership in AI at the same time. We think that it can, but only if individual patients are incentivized. It may be radical, but with a little legislation, people in the U.S. can choose to lead the medical AI race and, at the same time, enjoy both the results and financial rewards.

There is wide agreement that AI will soon be vital to U.S. leadership in the world, especially as 5G systems become more available. But in order to realize its full value, AI needs giant pools of data. 

This means that in any field involving personal data (healthcare, for instance), countries with no privacy protections have a built-in advantage in AI development. Lacking privacy boundaries or processes, it’s easier for them to aggregate ocean-sized pools of personal data to use for state-directed purposes like AI development. Combine that data-pool “resource” with massive state funding, and AI development can leap ahead. 

Look at China and its healthcare system. Wired magazine has reported that with no equivalent of HIPAA or the GDPR, China is systematically entering its citizens’ health histories — all billion-plus of them — into a giant database across which AI can work.  The implications of that for medical research and advancement in the future, and hence world leadership in medical research, are clear. For a good article on how this works, see the following link:

https://www.wired.com/story/health-care-data-lax-rules-help-china-prosper-ai/

So how do we leap ahead of this in the U.S., while still maintaining the dignity of patients? 

Much work is already underway in the U.S., but the privacy risks are evident.  For example, recently, the Mayo Clinic announced that it selected Google as its “strategic partner,” both in storing the hospital’s patient data and unleashing Google’s AI across those patient records. A link to Mayo Clinic’s announcement follows:

https://newsnetwork.mayoclinic.org/discussion/mayo-clinic-selects-google-as-strategic-partner-for-health-care-innovation-cloud-computing/

Wired has already written about how this choice by Mayo Clinic may become a patient’s nightmare, in part, because de-identified medical records can conceivably be re-identified, and therefore an individual’s protected health information (“PHI”), if processed through AI, could re-identify that person. Note, however, that it is reported that Google’s agreement with Mayo Clinic prohibits Google from combining Mayo Clinic’s data with other data Google has. The Wired article is available by following this link:

https://www.wired.com/story/ai-could-reinvent-medicineor-become-a-patients-nightmare/

We have previously written about, and warned of, the risk that de-identified data can be re-identified when subjected to big-data analytics. Indeed, we have suggested that it may be time to regulate scrubbed data in the following post:

https://www.hoschmorris.com/privacy-plus-news/privacy-plus-july-13-2019

So now the question becomes: how, specifically, can clinical data (whether PHI or de-identified data) be regulated to maximize medical research and innovation while minimizing privacy concerns? We are thinking about something different. Rather than restricting medical-AI research to the data of one hospital system at a time (like Google's reported contract with the Mayo Clinic), let's consider creating a giant U.S. pool of medical data, to be used to engage private industry in AI and medical research while at the same time honoring patient dignity. Here, we hearken back to last week's post, which addressed the concepts of "data dignity" and "inverse privacy" and suggested that people should be paid for their personal data:

https://www.hoschmorris.com/privacy-plus-news/data-dignity-and-inverse-privacy

Specifically, then, we would propose that a new provision be added to HIPAA, along the following lines:

·      With patient notice and consent (subject to a clear and specific privacy notice),

·      Incentivize individuals to authorize the direct submission of their medical records into a National Institutes of Health database (i.e., a big national pool),

·      Limit access to registered, credentialed researchers,

·      Restrict use to medical/technological research and development, and

·      If results are commercialized (e.g., a pharmaceutical company uses the research to develop a diabetes cure), then require (1) a royalty back to fund further development of the database, and (2) that participating patients' dignity be honored by paying them for the use of information about them, in the form of a significant price discount on the commercialized products or services for each person whose PHI was part of the research and who would benefit from the results.

Of course, this opens many questions about how, and by whom, analytics could be deployed over the entire database, but it's time to start that conversation. Unless there is such a database across which AI can work, the U.S. will fall behind in the medical-research race. Rather than falling behind, each person in the U.S. could choose to join the race by agreeing to submit his or her medical records into a national database restricted to medical research, leading not only to medical advances but to real value and money for each American.

Hosch & Morris, PLLC is a Dallas-based boutique law firm dedicated to data protection, privacy, the Internet and technology. Open the Future℠.
