1.5 Cybersecurity, AI, and Healthcare
There is no doubt that in this era of technological advance and innovation, data is the currency that powers everything. Data can take any number of forms, but behind closed doors it is the driving force of the decisions being made. Even in the case of AI, this holds true. Gibian discusses in his article how having more data allows AI to improve and become more accurate, even surpassing human performance [5]. We see how data drives ML to make better-informed decisions and produce better results. However, what happens when that data is corrupted or unavailable? How do we even know whether it is reliable? The answer to these questions centers on cybersecurity: securing our devices and protecting our data.
In the traditional sense, cybersecurity focuses largely on protecting data through access controls, encryption, and other methods. The idea is that when one machine sends data, only those who are supposed to see the data can see it, and it is not modified in transit between sender and receiver. To put things into a physical perspective, think of mailing a letter. If you send a postcard to someone, there is nothing stopping anyone who handles it from reading what you wrote on the card. If instead the letter is placed in an envelope, outsiders can no longer freely read what you wrote. To take it one step further, what if there is a concern that someone might open the envelope and read the letter? Perhaps in this case the letter is placed in a locked box and sent to the receiver, who holds the key (this is essentially how encryption works). The more important and private the letter, the more protection is placed on it, ensuring that only the intended receiver reads it.
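To make the locked-box analogy concrete, the sketch below uses symmetric encryption, in which sender and receiver share a single key. It is a minimal illustration only: it assumes the third-party Python `cryptography` package and an invented message, and is not the mechanism of any particular healthcare system.

```python
# Minimal sketch of the "locked box" idea with symmetric encryption.
# Assumes the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

# Sender and receiver share one secret key (the "key to the box").
key = Fernet.generate_key()
box = Fernet(key)

# The sender locks the letter before transmission.
letter = b"Patient report: results within normal range."
sealed = box.encrypt(letter)          # unreadable to anyone without the key

# Only a holder of the same key can open the box and read the original letter.
opened = Fernet(key).decrypt(sealed)
assert opened == letter
```

Without the key, the sealed message is just opaque bytes, which is exactly the property that makes granting an AI pipeline routine access to such data a deliberate design decision rather than an afterthought.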
This closed‐off system poses a problem for AI cybersecurity. Gibian explains that for AI systems to function as designed, they need access to steady streams of data [5]. Going back to the letter analogy, bringing AI into the mix means the algorithms would need a special way to get inside the box, read the letter, and extract information from it. If someone were to break in and learn about these access paths, they could potentially exploit them by altering the data or causing the algorithm to behave other than intended.
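One common defense against silent data alteration is to attach an integrity tag to each record so that tampering is detected before the data ever reaches a model. The sketch below shows the idea with an HMAC from the Python standard library; the shared secret, field names, and values are illustrative assumptions.

```python
# Sketch: detecting tampered records with an integrity tag (HMAC).
# Standard library only; names and values are illustrative.
import hmac
import hashlib

SECRET = b"shared-secret-key"  # assumption: data source and AI pipeline share this key

def tag(record: bytes) -> bytes:
    """Compute an integrity tag for a data record."""
    return hmac.new(SECRET, record, hashlib.sha256).digest()

record = b'{"patient_id": 42, "glucose_mg_dl": 95}'
record_tag = tag(record)

# If an attacker alters the record in transit, the tag no longer matches,
# and the pipeline can refuse to train or predict on it.
tampered = b'{"patient_id": 42, "glucose_mg_dl": 950}'
print(hmac.compare_digest(tag(record), record_tag))    # True  -> data intact
print(hmac.compare_digest(tag(tampered), record_tag))  # False -> data altered
```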
Whenever an innovation in tech disrupts the status quo, we see a similar pattern. Moisejevs explains in his article how, after the PC, Mac, and smartphone were developed, usage rose rapidly as the devices became more popular and more uses were found for them; shortly afterward, however, malware for these systems grew just as quickly [6]. ML and AI are transitioning from infancy to the growth phase, which means that over the next few years we will continue to see more applications of AI in healthcare products, both in sheer numbers and in the depth of the roles they play. However, as AI deployments increase, so too will malware, and in a healthcare setting this could have devastating effects on patients.
When these concerns about malware are applied to healthcare, it is best to view them in two categories, similar to how AI itself is divided. There is the digital side, which deals with data, patterns, and ML, and there is the physical side. On the digital side, the primary concern is the protection of data, because every decision made via AI stems from having reliable data. To give an example, many HCPs use AI to help diagnose patients. If a patient were to come in and have various scans and tests performed, unreliable data could cause the patient to be misdiagnosed, or possibly not diagnosed at all. Another example comes from the EMR. If a patient takes a chronic medication and the data is corrupted, the system might misinterpret the pattern and conclude that the patient just picked up their medication from the pharmacy when in fact they are due for a refill. If this happens, it may cause issues for the patient because insurance will not pay for another refill, since according to the EMR the patient still has plenty of medication.
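A hypothetical sketch of the refill scenario shows how a single corrupted field can flip the decision an automated system makes. The dates, field names, and eligibility rule below are illustrative assumptions, not the logic of any real EMR or pharmacy system.

```python
# Hypothetical sketch: a corrupted "last fill" date flips a refill decision.
# All names, dates, and the eligibility rule are illustrative.
from datetime import date, timedelta

def refill_due(last_fill: date, days_supply: int, today: date) -> bool:
    """A refill is due once the previous supply is nearly exhausted."""
    return today >= last_fill + timedelta(days=days_supply - 2)

today = date(2021, 6, 30)

# Correct record: the patient filled a 30-day supply about a month ago.
print(refill_due(date(2021, 6, 1), 30, today))   # True  -> refill approved

# Corrupted record claiming a fill only days ago: the refill is wrongly denied.
print(refill_due(date(2021, 6, 27), 30, today))  # False -> patient left without medication
```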
There are just as many problems for physical AI in healthcare. Surgical robots performing procedures on their own were discussed previously; depending on the procedure, a compromise could have immediate, catastrophic consequences for the patient. The same applies to carebots. Imagine that a carebot responsible for ensuring a patient is okay is hit with ransomware: suddenly the carebot is held hostage and will cease to function unless someone pays a ransom.
The key takeaway is that cybersecurity is a critical aspect of this growing AI field. Without securing AI, users, patients, and HCPs are placed at an unnecessarily high risk. In healthcare, the path from a cyber threat to people's lives being at stake is the shortest. While other fields may experience this as well, healthcare, by its nature, will experience it more often and with less time to respond. It is imperative that new designs and innovations in this field are built with cybersecurity from day 1, rather than having it added on later.