
Doctor 2.0 and i-Patient: information technology in medicine and its influence on the physician-patient relation

Abstract

Δεῖ γὰρ μέτρου τινὸς στοχάσασθαι· μέτρον δὲ, οὐδὲ σταθμὸν, οὐδὲ ἀριθμὸν οὐδένα ἄλλον, πρὸς ὃ ἀναϕέρων εἴσῃ τὸ ἀκριβὲς, οὐκ ἂν εὑροίης ἄλλ’ ἢ τοῦ σώματος τὴν αἴσθησιν· διὸ ἔργον οὕτω καταμαθεῖν ἀκριβέως, ὥστε σμικρὰ ἁμαρτάνειν ἔνθα ἢ ἔνθα· κἂν ἐγὼ τοῦτον τὸν ἰητρὸν ἰσχυρῶς ἐπαινέοιμι τὸν σμικρὰ ἁμαρτάνοντα. Τὸ δ’ ἀκριβὲς ὀλιγάκις ἐστὶ κατιδεῖν.

[One must aim at some measure; yet you will find no measure, no weight, no number by reference to which you could know what is exact, other than the feeling of the body. Hence it is hard to acquire knowledge so precise as to err only a little one way or the other; and I would greatly praise the physician who makes only small mistakes. Exactness is rarely to be seen.]

(Hippocrates, De Prisca medicina 9)




In a recent autobiographical novel, a surgeon gives an account of his first specialist experiences, including the doubts raised by the uncertainty inherent in medical practice, which led him to investigate artificial intelligence (hereafter AI) systems applied to diagnosis and therapy.1

Reading the book led us to reflect on ambiguity in medicine, on the possibility of reducing it with information technology tools, and, more generally, on the changes that these tools are introducing in the physician-patient relation.

The clinical case described in the book concerns a 77-year-old construction worker referred to a vascular surgeon for a left carotid stenosis. The symptomatology reported by the patient, which had arisen three years earlier, consisted of episodes of aphasia lasting a few seconds and regressing spontaneously, leaving no trace.

The endarterectomy was performed successfully; however, the patient experienced severe complications: the night after surgery he suffered an acute cardiac arrhythmia and underwent emergency treatment at the Arrhythmology Unit. He was eventually discharged and died a few months later of unknown causes.

Following this experience, the surgeon felt overwhelmed by doubts about medicine’s actual ability to formulate an appropriate diagnosis and therapy for each individual patient.

Medicine, as hitherto practised, seemed to him marked by an uncertainty too wide and deep, to be overcome through the new possibilities offered by AI-based systems.

Hence the author’s deep self-criticism, together with his need to find a deductive method that would make clinical work less uncertain and variable, transferring the classification of diseases (ontology) and their logical relations (algorithms) to digital tools able to compute and formulate diagnoses or therapies beyond the limits of the human brain. Such tools would avoid omitting any of the possible causes of a disease or making wrong or belated logical links.

Such fundamental doubts call for a broader and deeper analysis of the processes underlying diagnostic and therapeutic pathways, in order to better understand AI’s worth and potential. Medicine’s limits are structural and inherent in the very method applied to assessing patients’ health and deciding on the best treatment to help them.

The laws of traditional physics are general and deterministic, leading to precise evaluations, whereas biomedical knowledge is empirical and probabilistic, allowing neither an exact explanation of pathophysiological processes nor their accurate prevention. In mathematics and logic, controlled deductions can be drawn from axioms; in medicine, by contrast, there are only biomedical data and apparent pieces of information that are somehow linked and tend to recur. The method of knowledge in medicine can therefore only rest on uncertainty or, better, on its mathematical equivalent: probability.

Every living being belongs to the class of complex systems, governed by rules that are independent and full of exceptions. The quantity and complexity of medical information and of the relations among its elements force physicians, and the systems in which they work, to manage the countless aspects of a disease without truly controlling them all, up to the limits of their own physical and mental capacities (the number of variables a normal brain can hold in mind to solve a complex problem is at most four).2

The diagnostic process begins with the collection of symptoms and signs, through a complex method involving an articulated interaction between physician and patient. It is not a detached, objective observation: it requires knowing how to empathize with the patient and his/her relatives, listening carefully and encouraging - through an open and skillful dialogue - the identification of all symptoms and signs, even those not evident to the patient himself.

The collected symptoms and signs are then linked (on the basis of rules/algorithms) and classified into one or more diseases compatible with the syndromic picture, and the probability of each is estimated. Only rarely are symptoms and/or signs indicative of a single disease and thus definable as pathognomonic.
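By way of illustration only, the following minimal Python sketch shows the probabilistic step just described: assumed disease priors are combined with assumed symptom likelihoods through Bayes’ theorem to rank candidate diagnoses. Every name and number here (disease_A, disease_B, the prevalences and likelihoods) is invented for the example; this is not clinical data and not the method of any system discussed in this article.

# Illustrative sketch: ranking candidate diseases from observed findings
# using Bayes' theorem with hypothetical priors and symptom likelihoods.

PRIORS = {"disease_A": 0.01, "disease_B": 0.05, "other": 0.94}  # assumed prevalences

# P(finding | disease), treated as conditionally independent (naive Bayes assumption)
LIKELIHOODS = {
    "disease_A": {"transient_aphasia": 0.60, "carotid_bruit": 0.40},
    "disease_B": {"transient_aphasia": 0.20, "carotid_bruit": 0.05},
    "other": {"transient_aphasia": 0.01, "carotid_bruit": 0.02},
}

def posterior(findings):
    """Return P(disease | findings), normalized over the candidate diseases."""
    unnormalized = {}
    for disease, prior in PRIORS.items():
        p = prior
        for finding in findings:
            p *= LIKELIHOODS[disease].get(finding, 0.001)  # small default for unlisted findings
        unnormalized[disease] = p
    total = sum(unnormalized.values())
    return {d: p / total for d, p in unnormalized.items()}

for disease, prob in sorted(posterior(["transient_aphasia", "carotid_bruit"]).items(),
                            key=lambda item: -item[1]):
    print(f"{disease}: {prob:.3f}")

Even in such a toy example the output is a ranking of probabilities, not a single certain answer - exactly the residual uncertainty discussed in the following paragraphs.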

AI systems can reduce some errors inherent in this diagnostic path: neglecting information and incurring logical errors (gaps in algorithms) typical of the human mind; however, they cannot remove the uncertainty linked to the collection of symptoms (garbage in, garbage out), nor that linked to a scenario compatible with more than one disease, as happens in the majority of cases.

The physician has to know symptoms, signs and their relations in order to investigate and gather them in the initial phase; this indispensable learning process could be compromised when the diagnosis is run by AI. Moreover, during the diagnostic process the physician can draw on non-algorithmic activities - the so-called clinical eye and intuition - that cannot always be rationally expressed.3

Once the candidate diseases have been ranked by probability, the decision on treatment is based on the knowledge provided by evidence-based medicine (with its different degrees of uncertainty and corresponding strength of recommendation) and on human choices (the value-related evaluations of patient and physician). In fact, the treatment of a disease yields results of a probabilistic nature, only in line with what is statistically expected from randomized controlled clinical trials. In the literature, results are generally expressed as the percentage reduction in the risk of the treated event, with its confidence interval, and as the smallest number of patients to treat in order to obtain one favorable outcome (the number needed to treat).
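As a purely numerical illustration of these measures (the event rates below are hypothetical and not taken from any study cited here), the absolute and relative risk reduction and the number needed to treat can be derived from trial event rates as follows:

# Hypothetical event rates, for illustration only
control_event_rate = 0.10   # 10% of untreated patients experience the event
treated_event_rate = 0.07   # 7% of treated patients experience the event

absolute_risk_reduction = control_event_rate - treated_event_rate        # 0.03
relative_risk_reduction = absolute_risk_reduction / control_event_rate   # 0.30, i.e. 30%
number_needed_to_treat = 1 / absolute_risk_reduction                     # about 33 patients

print(f"ARR = {absolute_risk_reduction:.1%}")
print(f"RRR = {relative_risk_reduction:.0%}")
print(f"NNT = {number_needed_to_treat:.0f} patients treated to avoid one event")

These figures, and the confidence intervals that accompany them in published trials, express an average expectation over a treated population; they do not identify the individual patients who will benefit, as the next paragraph points out.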

In this regard, however, physicians can only foresee the risk reduction produced by an adequate treatment (how many), not which of those patients will benefit (who). In this phase of medical activity, too, we must acknowledge that an inevitable uncertainty persists even with AI.

In sum, AI as developed so far has the advantage of reducing the residual risk connected to the high volume of information that must be managed and to the complexity of the logical rules to follow.4 It can in no way remove or reduce the uncertainties inherent in the fundamental phase of symptom and sign collection, in the multiplicity of compatible diagnoses, and in the essential therapy program, which is both subjective (the choice) and probabilistic (the results). In this regard, one ought to avoid creating false illusions, which would only lengthen the list of unjustified expectations.

An additional consideration is required with regard to AI’s potential and limits in medicine. Even the most advanced AI systems (i.e., deep learning based on artificial neural networks) deliver performance competitive with that of humans, yet, as of now, they do not appear able to propose new solutions different from those already known or attempted (Table 1). Relying excessively on these systems could be extremely harmful, as the knowledge and experience on whose basis physicians discover and deliver new explanations and solutions could begin to wither away - for example, in the collection of symptoms and signs.5

Beyond the future developments of AI in the medical field, it is important to consider the dangers and damage of the current use of information technology in medicine, with a possible consequent worsening of clinical practice (progressive delegation of diagnostic-therapeutic responsibility to a machine, loss of human skills, reduction of the time devoted to the patient in favor of the machine, an increasingly reductionist rather than empathic approach, etc.).6

It has long been acknowledged that a strictly specialist approach increases the possibility of clinical error. An approach based mainly on information technology can likewise lead to reductionism, with a non-holistic attitude towards the patient and neglect of his/her overall subjective needs.

To highlight this latter risk, in 2008 Abraham Verghese, Professor of Theory and Practice of Medicine at Stanford University, coined the term i-Patient: The i-Patient’s blood counts and emanations are tracked and trended like a Dow Jones Index, and pop-up flags remind caregivers to feed or bleed. I-Patients are handily discussed (or ‘cardflipped’) in the bunker, while the real patients keep the beds warm and ensure that the folders bearing their names stay alive on the computer.7

It is in fact common experience today for physicians and residents to spend more than 40-50% of their working time at the computer: filing documents, reviewing clinical records, booking examinations, downloading laboratory reports or issuing electronic prescriptions. The traditional bedside visit has been replaced by briefings and by the analysis of information, data and images on a distant computer screen.

These changes have already affected medical practice to such an extent that the competences required of medical residents are no longer the traditional ones - the skills needed to take an adequate history, evaluate symptoms precisely and search accurately for clinical signs - but now include the ability to manage electronic documents, admissions and discharges. Alongside the i-Patient we shall thus have a computerized doctor, Doctor 2.0, who will probably be increasingly distracted by interference during the face-to-face meeting with his patient.8

Patients themselves could be led to believe that what they complain of is essentially reducible to the information and numbers produced by sophisticated diagnostic technology, creating the illusion that their health can be assessed with certainty and that any health problem can be solved, regardless of their interaction with a physician willing to treat them with competence and care (Table 2).

The new technology is already very much part of routine work in many medical sectors and will become even more valuable and decisive. We ought to beware of its obsessive and dehumanizing use: it could work against the patient’s need to be cared for and diminish his involvement in medical decisions, two primary human needs.

References

1. D Zaccagnini. Moving boxes. Roma: L’asino d’oro; 2015.
2. GS Halford, R Baker, JE McCredden, JD Bain. How many variables can humans process? Psychol Sci 2005;16:70-6.
3. K Gödel. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme. Monatsh Math Phys 1931;38:173-98.
4. A Mazzone. iPhone or smartphone support diagnosis in internal medicine. Ital J Med 2015;9:513.
5. PC Austin, MM Mamdani, DN Juurlink, JE Hux. Testing multiple statistical hypotheses resulted in spurious associations: a study of astrological signs and health. J Clin Epidemiol 2006;59:964-9.
6. DI Rosenthal, A Verghese. Meaning and the nature of physicians’ work. N Engl J Med 2016;375:1813-5.
7. A Verghese. Culture shock - patient as icon, icon as patient. N Engl J Med 2008;359:2748-51.
8. MW Friedberg, PG Chen, KR Van Busum. Factors affecting physician professional satisfaction and their implications for patient care, health systems, and health policy. Santa Monica, CA: RAND; 2013.

Table 1.

Diffusion of artificial intelligence systems.

IBM is conducting research in the field of oncology (Watson Oncology at the Memorial Sloan Kettering Cancer Center and the Cleveland Clinic) and is studying with CVS Health possible applications of AI in the treatment of chronic diseases. As of now, more than 50 hospitals on five continents appear to have agreements with IBM for the use of Watson in the therapy of patients with cancer (https://www.statnews.com/2017/09/05/watson-ibm-cancer/).
Another oncology project is being run by Microsoft’s Hanover, which, in collaboration with the Oregon Health & Science University’s Knight Cancer Institute, applies AI to establish the most suitable pharmacological treatment options for each individual patient.
The British National Health Service uses Google’s DeepMind platform, through the collection and analysis of large databases, to identify health risks and to develop computerized algorithms for detecting neoplastic tissue.
Other companies developing AI in medicine include Lumiata, which employs these systems to identify patients at high risk of specific pathologies and to develop treatment options; Predictive Medical Technologies, which uses intensive care data to classify patients by their risk of adverse cardiac events; and Modernizing Medicine, which applies physicians’ knowledge and clinical records to formulate therapeutic programs.
Table 2.

Artificial intelligence and empathy.

Should Algorithms and Robots Mimic Empathy? Robots telling jokes and chatbots acting as life coaches sound astounding and terrifying at the same time. Extensive research is going on lately in the field of applying human features, emotions, gestures, and reactions to digital technology; and it raises thousands of questions. Could not only smart, but emotional algorithms or robots appear also in healthcare soon? Would there be a place or need for them? How would it impact the patient-doctor relationship or social interactions in general? (https://medicalfuturist.com/algorithms-robots-mimic-empathy/)

Copyright (c) 2018 Francesco D’Amore, Florindo Pirone

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
 