Fernandes and collaborators; 2019 11
Objective: Provide results that can support medical research into the decision-making process in clinical bioethics, particularly in cases of euthanasia.
Design: Cross-sectional
Results: Data processed by feature selection methods were used to build models capable of predicting the euthanasia decision using ML and an eye tracker. Statistical experiments showed that the predictive model based on the multilayer perceptron (MLP) algorithm performed best. Interesting patterns and rules for bioethical decision-making were extracted from simulations with the MLP models. Some participants made a rational decision, respecting the code of ethics of nursing professionals and the Brazilian penal code, under which euthanasia is considered homicide. Others weighed the emotional aspect, linking the decision to the patient's suffering.
Conclusion: The strong performance of the predictive model demonstrates that the proposed approach can be used to test scientific hypotheses relating visual attention to decision-making, verifying the extent to which vision is a determining factor in decision-making, particularly in clinical bioethics when dealing with end-of-life issues.
Silva, Lehoux, Hagemeister; 2018 12
Objective: Assess whether an innovation qualifies as responsible innovation in health using a tool developed in three stages (screening, assessment and classification), and discuss the political aspects of using the tool.
Design: Prospective
Results: The screening and assessment tool for responsible innovation in health was judged by experts; after a second round of comments, consensus was reached on 16 of the 20 questions regarding the importance, clarity and adequacy of the tool's structure. The sustainability of health systems is harmed by the way health innovations are currently designed and brought to market. Consensus was reached on most of the tool's criteria, attributes and scales. Future use of the tool can contribute to the development of innovations that deliver greater social value.
Conclusion: The tool will help fill an important knowledge and policy gap by clarifying early-stage decisions made by innovation stakeholders such as investors, technology developers, research funding agencies and policymakers.
Lysaght and collaborators; 2019 13
Objective: Analyze an ethics framework for big data in health and research, demonstrating how decision-making about the development and implementation of AI-assisted support systems in health can be based on it in an ethical and responsible way 5.
Design: Case study
Results: Clinical decision support systems (CDSS) are programs that generate health information. Those that use ML and AI are complex, which is why physicians must be trained to improve their clinical decision-making skills when using this type of resource. CDSS AI algorithms can reinforce social biases, but they can also bring benefits such as more efficient public health systems. Data inclusion and analysis happen on an ongoing basis, contributing information about appropriate future practice. Feeding the CDSS with patient data may create conflicts over the dual role of physicians: care and research. The substantive values listed are professional integrity and fairness; the procedural values are transparency and accountability. AI-assisted CDSS must be explainable. The final decision must rest with the health professional, bearing in mind that there may be moral judgments the program is incapable of making. The case study addresses the use of these programs in an intensive care unit. Despite its financial benefits, the tool could raise ethical issues such as prioritizing economics over health, distrust of its recommendations and concerns about responsibility.
Conclusion: Given the rising costs of healthcare, the development and implementation of AI assistance in clinical decision-making is likely to be unavoidable. Values of professional integrity and responsibility will play a more prominent role at the level of patient care, whereas values of fairness and the potential for harm to groups must be balanced against imperatives of public benefit at the societal level. Transparency affects trust both in the medical profession and in health systems.
Cawthorne, Robbins-Van Wynsberghe; 2020 14
Objective: Create an ethical framework to be applied during the design, development, implementation and evaluation of drones in public health.
Design: Descriptive
Results: The hierarchy of values used consists of ethical principles, human values, standards and design requirements. The framework was built on the four principles of bioethics plus a fifth principle from AI ethics: explainability. Beneficence in the field of health drones translates into values of human (and non-human animal) well-being, human jobs and skills, and environmental sustainability. Nonmaleficence encompasses privacy, safety, security, tranquility, human jobs and skills, and environmental sustainability. Autonomy includes free will, human values, responsibility and trust. Justice includes the equitable distribution of benefits and harms. The adoption of health drones can lead to a reduction in local health infrastructure, reducing in-person care; however, drones can also connect people in remote places to modern services. Explainability concerns the ease with which systems can be understood. An ethical framework is especially useful for those with limited experience in technology ethics.
Conclusion: Ethical principles are abstract and need further contextualization and specification for reflection. The creation of this ethical framework reinforces the value of integrating ethics into practice and serves as a model for design and development in drone and non-drone domains. The framework helped identify and refine potential benefits and mitigate risks.
Antes and collaborators; 2021 15
Objective: Develop a new measure and assess openness toward, and the extent of concerns and perceived benefits regarding, AI-based health technologies in a sample of adults in the United States.
Design: Cross-sectional
Results: Participants were moderately open to AI-based healthcare technologies, with variation depending on the type of application. Trust in the healthcare system and in technology was the strongest and most consistent correlate of openness, concern and perceived benefit. Older participants were less open to the technologies, and men were more open than women. Full-time employment was associated with greater openness and less concern. The two technologies that made predictions about serious illness (heart attack risk and the probability of surviving cancer) were the most favorably evaluated.
Conclusion: Participants' openness appears tenuous, suggesting that early promotion strategies and experiences with new AI technologies can strongly influence opinions on the subject. Addressing trust may be necessary to foster acceptance of these innovations in healthcare.
Batlle and collaborators; 2021 16
Objective: Understand best practices for sharing patient data among healthcare institutions.
Design: Exploratory
Results: A working group identified five broad domains of activities important for collaboration using patient data: privacy, informed consent, standardization of data elements, vendor contracts, and data evaluation. The methods and ethical understanding of commonly used legal frameworks for these purposes were presented, along with data-flow designs that can help inform how permissions are created and revoked. Careful preparation and annotation of datasets must be described when discussing anonymity and de-identification in the interest of privacy, with the technical difficulties pointed out. The volume of data required to prepare AI algorithms is very high, so the premise is that such data are prepared securely and in a way that can be shared with their owners (the patients).
Conclusion: Creating a data-sharing relationship involves ethical and information technology complexity. Patient anonymity and privacy maintain trust and protect the entities seeking to share data safely.
Green and collaborators; 2021 17
Objective: Develop tools integrated into digital health systems to support shared decision-making and optimize preparation for treatment in chronic kidney disease.
Design: Randomized clinical trial
Results: Using the tools, 243 (24%) of 1,032 patients in four nephrology clinics were identified as being at high risk of progressing to kidney failure within two years. Kidney transition specialists enrolled 117 (48%) of the high-risk patients by the end of the first year of the study. Nurses used the app for 100% of patients to document 287 planning steps for renal replacement therapy. All kidney transition specialists (100%) rated the tool's ease of use and usefulness, agreeing or strongly agreeing with all items.
Conclusion: Nurses reported that the tools facilitated their navigation activities, the identification of patients who need support, the rapid identification of patients who need shared and informed decision-making, and those patients' preparation for renal replacement therapy.
Martinho, Kroesen, Chorus; 2021 18
Objective: Obtain information about patterns of reasoning and moral opinions about health AI from people involved in medical practice.
Design: Cross-sectional
Results: Based on physicians' responses to questions about ethics in health AI, four main perspectives were identified: (1) AI is a useful tool: let physicians do what they are trained to do; (2) rules and regulations are crucial: private companies are all about money; (3) ethics is enough: private companies can be trusted; (4) explainable AI tools: learning is necessary and inevitable. All perspectives hold that physicians should participate in the design of AI health technologies, contributing to explainability. Physicians are more concerned with the role of large companies in healthcare and less aware of, or concerned with, issues such as equity, bias and health inequalities.
Conclusion: Each perspective provides valuable and often contrasting insights into ethical issues that must be operationalized and taken into consideration in the design and development of AI in health.
Shen and collaborators; 2021 19
Objective: Analyze research with highly portable magnetic resonance imaging (MRI) in remote and resource-limited international settings in order to create ethical and legal guidance in a complex global landscape.
Design: Cross-sectional
Results: It is necessary to ensure that local communities are partners in the research enterprise and to guarantee the local social value of the research. Field MRI studies must take responsibility for the safety of participants and everyone around them. Attention must be paid to data privacy and security regulations, both local and international. It is necessary to determine whether the sample on which the AI model was trained was diverse across a range of factors, so that predictions are more accurate. Exam results need to be communicated to participants in a clear and informative way; with incidental findings, a challenging issue is how to provide clinical support and referral in these remote communities.
Conclusion: More affordable and portable MRI scanners provide opportunities to address unmet research needs and health inequities in remote and resource-limited international settings. Local communities must be continuous partners in the co-creation of knowledge. Research must produce local value to justify the risks and minimize the possibility of abuse.
Spiegel, Barker, Kistnasamy; 2021 20
Objective: Describe and evaluate the application of AI in the development of computer-aided diagnosis to support more efficient adjudication of claims by former gold miners with occupational lung disease in Southern Africa.
Design: Cross-sectional
Results: The results were mapped to the principles of bioethics. Beneficence: AI could provide more consistent judgment than multiple professionals with varying skill levels. Nonmaleficence: maintain data privacy and security and prevent certain biases from infiltrating decision-making systems. Autonomy: AI can erode professional skills as professionals come to rely more on the technology; protocols must be established and professionals trained so that false negatives and false positives are identified. Justice: where market demand for investment in AI is weak and public institutions have not responded to its use, failure to deploy the technology can perpetuate inequalities, which highlights the importance of the timely use of innovations to benefit those in need.
Conclusion: Efforts to overcome the technical challenges of applying AI must be accompanied from the outset by efforts to ensure its ethical use.
Stahl and collaborators; 2021 21
Objective: Theoretically capture and empirically measure the benefits and drawbacks of AI for human progress, beyond principles for machine learning; counterbalance the technical and economic benefits of AI against its legal, social and ethical aspects.
Design: Multidimensional approach
Results: The AI ethics discourse is discussed in three streams. (1) Issues arising from ML applications: it is difficult to predict the degree to which data will be used for the stated purpose, since a personal profile can lead to classification for purposes other than the original one. (2) Social and political issues of a digital society: these systems require access to large amounts of data for training and validation, which raises distrust related to machine autonomy, the replacement of humans by machines, injustice in the distribution of costs and benefits, data control, and their consequences. (3) Metaphysical questions about the nature of reality and humanity: these concern what machines should be allowed to decide autonomously. Economic consequences, employment, justice, freedom, human contact, individual autonomy, inequality, integrity, property, military use, power asymmetry, responsibility and sustainability fall into this category.
Conclusion: There is currently no consensus among the various approaches to governance and information security. Human rights legislation can resolve many social and ethical issues. The complexity of contexts and scenarios pluralizes the ways of approaching them and requires scholars and professionals to keep an overview. Attention should focus on the current use of AI and ML and, to some degree, on broader socio-technical systems.