Train clinical AI to reason like a team of doctors

Visitors to an interactive AI exhibition at the German Museum of Technology in Berlin use virtual-reality glasses to view an image of the brain. Credit: Jens Kalane/DPA/Alamy

Amid the surge of excitement that followed the launch of the chatbot ChatGPT in November 2022, governments around the world have been racing to craft policies that foster the development of artificial intelligence (AI) while ensuring that the technology remains safe and trustworthy. In February, many provisions of the European Union's AI Act, the world's first comprehensive AI regulation, came into force, banning the deployment of certain applications, such as automated systems that claim to predict criminality or emotions from facial features.

Most AI systems will not face an outright ban; instead, they will be regulated on a risk-based scale, from high to low. Fierce debate is expected over which systems the act classifies as 'high risk', a category that will attract the strictest oversight. Further guidance from the EU is due in August, but the high-risk category will probably capture many AI-driven clinical tools, given the potential harms of biased or erroneous predictions in a medical setting.

Clinical AI, if deployed with care, could improve access to health care and its outcomes: by streamlining hospital administrative processes (such as patient scheduling and clinicians' note-taking), supporting diagnosis (such as spotting abnormalities in X-rays) and devising treatment plans tailored to individual patients. But these benefits come with risks. For example, the decisions of an AI-driven system cannot always be explained, which limits the scope for real-time human oversight.

This matters, because such oversight is explicitly mandated by the act. High-risk systems must be transparent and designed so that their overseers can understand the systems' limitations and duly monitor their use (see go.nature.com/3dtgh4x).

By default, compliance will be assessed against a set of harmonized AI standards, but these are still being developed. (Meeting these standards will not be mandatory, but doing so is expected to be the preferred way for most organizations to demonstrate compliance.) As yet, however, few technological approaches exist that can satisfy these forthcoming legal requirements.

Here, we suggest that emerging AI methods, modelled on the standard practice of multidisciplinary medical teams, which communicate across disciplinary boundaries using broad, shared concepts, can support such oversight. This dynamic provides a useful blueprint for the next generation of health-focused AI systems: ones that health professionals can trust and that meet the EU's regulatory expectations.

Collaborating with AI

Clinical decisions, especially those involved in managing people with complex conditions, typically take many sources of information into account, from electronic health records and lifestyle factors to blood tests, radiology scans and pathology results. By contrast, clinical training is highly specialized, and few individuals can interpret multiple types of fine-grained specialist medical data (such as both radiology and pathology). The care of people with complex conditions, such as cancer, is therefore usually managed through multidisciplinary team meetings (known as tumour boards in the United States), in which all of the relevant clinical specialties are represented.

Because they bring together doctors from various specialties, multidisciplinary team meetings do not dwell on the raw features of each type of data, because that knowledge is not shared by the whole team. Instead, team members communicate by referring to intermediate 'concepts' that are widely understood. For example, when justifying a proposed course of treatment for a tumour, team members might refer to aspects of the disease, such as the tumour's location, the cancer's stage or grade and the presence of particular patterns of molecular markers. They will also discuss characteristics of the patient, including their age, the presence of other diseases or conditions, body-mass index and frailty.

These concepts, which are interpretable, high-level summaries of raw data, are the basic building blocks of human reasoning: the language of clinical discussion. They also feature in national clinical guidelines for selecting treatments for patients.

Notably, this process of deliberating in a language of shared concepts is designed to facilitate transparency and collective oversight, in a way that parallels the intentions of the EU act. For clinical AI to comply with the act and to earn doctors' trust, we argue that it should mirror this mode of clinical decision-making. Clinical AI, like the doctors in multidisciplinary teams, should use well-defined concepts to justify its predictions, rather than merely stating how probable they are.

An explainability crisis

There are two typical approaches to making an AI system's decision-making process interpretable1. One involves designing the model with built-in rules, ensuring transparency from the outset. For example, a tool for detecting pneumonia from X-rays might proceed by assessing lung opacity, assigning a severity score and classifying the case on the basis of pre-specified thresholds, laying out its reasoning for doctors. The second approach involves analysing a model's decision after the fact ('post hoc'). This can be done with techniques such as saliency maps, which highlight the regions of an X-ray that most influenced the model's prediction.
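As a concrete illustration of the first, rule-based approach, here is a minimal sketch in Python. The opacity score, thresholds and severity labels are hypothetical stand-ins, not values from any validated clinical tool.

    # A minimal rule-based classifier for pneumonia severity.
    # The opacity score, thresholds and labels are hypothetical,
    # not taken from any validated clinical tool.

    def classify_pneumonia(lung_opacity: float) -> str:
        """Map a lung-opacity score in [0, 1] to a severity class
        using pre-specified thresholds, so that every output can be
        traced to an explicit rule."""
        if lung_opacity < 0.2:
            return "no pneumonia suspected"
        if lung_opacity < 0.5:
            return "mild: recommend routine review"
        return "severe: flag for urgent review"

    print(classify_pneumonia(0.35))  # -> mild: recommend routine review

The appeal of such a design is that the overseeing doctor can inspect each threshold directly; its weakness, discussed below, is that real clinical relationships rarely reduce to rules this simple.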

Both approaches, however, have serious limitations2. To see why, consider an AI tool trained to help dermatologists determine whether a mole on the skin is benign or malignant. For each new patient, a post hoc explainability method might highlight the pixels in the image of the mole that mattered most to the model's prediction. This can expose clearly faulty logic, for example by highlighting pixels in the image that are unrelated to the mole itself (such as pen marks or other annotations made by doctors)3.
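A gradient-based saliency map of the kind described here can be sketched in a few lines of PyTorch; the untrained toy network and random image below are placeholders for a trained dermatology classifier and a real photograph of a mole.

    # A minimal gradient-based saliency map in PyTorch. The untrained
    # toy network and random image are placeholders for a trained
    # dermatology classifier and a real photograph of a mole.
    import torch
    from torch import nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))  # toy classifier
    image = torch.rand(1, 1, 64, 64, requires_grad=True)        # stand-in image

    score = model(image)[0].max()  # score of the predicted class
    score.backward()               # gradients of the score w.r.t. each pixel

    # Pixels with the largest absolute gradients influenced the prediction
    # most -- potentially spurious artefacts, such as pen marks, rather
    # than the mole itself.
    saliency = image.grad.abs().squeeze()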

The European Parliament in Brussels adopted the AI Act last March. Credit: Geert Vanden Wijngaert/AP Photo/Alamy

Even when the mole itself is highlighted, it can be difficult2,4 for the overseeing doctor, even a highly experienced one, to tell whether the highlighted cluster of pixels is clinically meaningful or merely correlated with the diagnosis. In such cases, using the AI tool can place an extra cognitive burden on the doctor.

Rule-based design, meanwhile, restricts the AI model to learning relationships that strictly conform to known principles or causal mechanisms. Yet the tasks for which AI is likely to be most clinically useful do not always map onto simple decision procedures, and might involve causal mechanisms that combine in complex or counterintuitive ways. Rule-based models will underperform in precisely those cases in which a doctor might need the most help.

Unlike these methods, when dermatologists explain a diagnosis to a colleague or a patient, they do not tend to talk about pixels or causal structures. Instead, they draw on readily understood high-level concepts, such as a mole's asymmetry, irregular borders and uneven colour, to support their diagnosis. And doctors place greater trust in AI tools that justify their recommendations with such high-level concepts5.

In recent years, explainable-AI methods have been developed that can capture this kind of conceptual reasoning and help to support group decisions. Concept bottleneck models (CBMs) are a promising example6. These are trained not only to learn the outcomes of interest (such as a diagnosis or course of treatment), but also to encode important intermediate concepts (such as a tumour's stage or grade) that are meaningful to human overseers. Such models can provide both an overall prediction and a set of human-understandable concepts, learnt from data, that justify the model's recommendation and support discussion between decision-makers.
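In outline, a CBM chains two networks: one maps the raw input to predicted concepts, and a second maps only those concepts to the final outcome, with both stages supervised during training. The PyTorch sketch below is schematic; the feature, concept and class dimensions are arbitrary toy values.

    # A schematic concept bottleneck model (CBM) in PyTorch. Feature,
    # concept and class dimensions are arbitrary toy values.
    import torch
    from torch import nn
    import torch.nn.functional as F

    N_FEATURES, N_CONCEPTS, N_CLASSES = 128, 4, 3

    # Stage 1: raw input -> human-interpretable concepts
    # (e.g. tumour stage, grade, presence of a molecular marker).
    concept_net = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(),
                                nn.Linear(64, N_CONCEPTS), nn.Sigmoid())

    # Stage 2: the final prediction depends ONLY on the concepts,
    # so every recommendation is justified in concept terms.
    label_net = nn.Linear(N_CONCEPTS, N_CLASSES)

    def cbm_loss(x, true_concepts, true_label):
        concepts = concept_net(x)
        logits = label_net(concepts)
        # Supervise the intermediate concepts as well as the outcome,
        # keeping the bottleneck meaningful to human overseers.
        return (F.binary_cross_entropy(concepts, true_concepts)
                + F.cross_entropy(logits, true_label))

Because the label network sees nothing but the concepts, the model's justification for any prediction is, by construction, expressed in the same shared vocabulary that a multidisciplinary team would use.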

This type of AI could be particularly useful for tackling complex problems that require distinct data types to be integrated. Moreover, it is well suited to regulatory compliance under the EU AI Act, because it provides transparency in a manner that is specifically designed to facilitate human oversight. For example, if a CBM gets an important clinical concept wrong for a particular patient (such as predicting an incorrect tumour stage), the clinical team overseeing the tool knows not to rely on the resulting prediction.

Moreover, because of how CBMs are trained, such errors can be corrected at the concept level on the spot by the clinical team, allowing the model to 'take advice'7 and to revise both its prediction and its justification in light of the doctors' input. Indeed, CBMs can be trained to anticipate such human interventions and use them to improve the model's performance over time.
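Concept-level intervention of this sort amounts to overwriting a mispredicted concept with the clinician's value and re-running only the final stage of the model. A minimal sketch, reusing the hypothetical concept_net and label_net from the CBM example above:

    # Concept-level intervention ("taking advice"), reusing the toy
    # concept_net and label_net above. Corrections come from clinicians.
    def intervene(x, corrections):
        with torch.no_grad():
            concepts = concept_net(x)          # model's concept predictions
            for idx, value in corrections.items():
                concepts[:, idx] = value       # overwrite with expert input
            return label_net(concepts)         # revised downstream prediction

    x = torch.rand(1, N_FEATURES)
    # For example, the team corrects concept 0 (say, a mispredicted
    # tumour stage) before the final prediction is recomputed:
    revised_logits = intervene(x, {0: 1.0})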
