
One of the privileges of being at the Massachusetts Institute of Technology (MIT) in Cambridge is witnessing the future, from progress in quantum computing and sustainable energy production to the design of new antibiotics. Do I understand everything deeply? No, but I can wrap my head around it when I am asked to create a picture to document the research.
The joy of being a scientific photographer is that I must learn about the things I document, so as to produce communicative and credible images, which amount to a form of data, for the researchers who welcome me into their laboratories.
But now, with generative artificial intelligence (GenAI) tools widely available, many questions must be asked. Will there come a point at which someone, with a few keystrokes and prompts, can create a “visual” for their research, as I do with my camera, and consider that image a record of the work? Will researchers, journals and readers be able to detect generated images and understand that they do not truly document the work? Finally, from a personal point of view, will there still be a place for a scientific photographer like me, whose job is to communicate research? Here’s what I discovered while experimenting with AI image generators.
Reality and representation
First, let’s acquaint ourselves with the differences between a photograph, in which each pixel corresponds to photons in the real world, and a GenAI visual, which is created using a diffusion model: a complex computation that generates something that looks real but might never have existed.
To explore these differences, I decided to try GenAI models from Midjourney and OpenAI to reproduce the work shown in one of my best-known scientific images, with the help of Gaël McGill, a scientific visualizer at Harvard University in Cambridge, Massachusetts.
In 1997, Moungi Bawendi, a chemist at MIT, asked me to take a picture of his nanocrystals, known as quantum dots. When excited with ultraviolet (UV) light, these crystals fluoresce at different wavelengths depending on their size. Bawendi, who later shared a Nobel prize for this work, did not like the first image (see “Three views”), in which I laid the vials flat on the laboratory bench and photographed them from above. You can tell that this is how I positioned them, because you can see air bubbles in the tubes. I, for my part, thought the bubbles made the picture more interesting.
The second iteration was used on the November 1997 cover of the Journal of Physical Chemistry B (see “Three views”). That image provides a direct record of the research and highlights the importance of collaborating with the scientist, an essential part of my work.
To create a comparable image in DALL-E, I used the prompt: “Create a photo of Moungi Bawendi’s nanocrystals in vials against a black background, fluorescing at different wavelengths, depending on their size, when excited with UV light.”
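For readers curious about the mechanics, the same kind of prompt can also be submitted programmatically. Below is a minimal sketch using OpenAI’s Python client; the model name, image size and other parameters are illustrative assumptions, not a record of how the image discussed here was produced.

```python
# A minimal sketch of submitting an image-generation prompt to OpenAI's API.
# Assumes the `openai` Python package (v1+) is installed and that the
# OPENAI_API_KEY environment variable is set. The model and size are
# illustrative assumptions, not the settings used for the image in this piece.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "Create a photo of Moungi Bawendi's nanocrystals in vials against a "
    "black background, fluorescing at different wavelengths, depending on "
    "their size, when excited with UV light."
)

response = client.images.generate(
    model="dall-e-3",   # assumed model; any available image model would do
    prompt=prompt,
    n=1,
    size="1024x1024",
)

# The API returns a URL (or base64 data) pointing to the generated image.
print(response.data[0].url)
```

Each run returns a different picture, which underscores the point: the output is a fresh generation, not a record of anything in the real world.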
People might find the image produced by the program attractive (see “Three views”), but it comes nowhere near the reality captured in the original photograph. DALL-E rendered the quantum dots as bead-like spheres. Presumably, the algorithm found the phrase “quantum dots” in the data set of the AI model on which it is based and used that information in place of the word “nanocrystals”.
More worrying is the fact that each vial contains dots of different colors, which would mean that the samples contain a mixture of materials fluorescing across a range of wavelengths; this is inaccurate. Moreover, some dots appear to be lying on the surface of the table. Was that an aesthetic decision made by the model? I find the resulting visual remarkable (see Supplementary information).
My experiments with AI tools mostly produced cartoon-like images that cannot pass as reality, let alone as documentation, but there will come a time when they can. In conversations with colleagues in research and computer science, all agreed that we must have clear standards on what is and is not allowed. In my opinion, GenAI visuals should never be permitted as documentation.
Manipulation versus AI
The rise of AI means that we need to clarify three fundamental issues in visual communication: the difference between illustration and documentation, the ethics of image manipulation, and the desperate need for visual-communication training for scientists and engineers.
Decisions about how to frame a picture, what to include or leave out, are already a manipulation of reality. The tools people choose to use are part of that manipulation, too. Each digital camera creates a distinctive image: an Apple iPhone enhances the colors of a picture differently from a Samsung phone. Likewise, the near-infrared images produced by the James Webb Space Telescope are, by design, different from the optical views from the Hubble Space Telescope, yet the two complement each other.
To take the point further, the colors we see in all of those stunning images of the Universe are enhanced, giving us further departures from reality. Through this lens, it is clear that humans have, in fact, been generating artificial images for years, without necessarily describing them that way. Nevertheless, there is a crucial difference between using software to enhance a photograph of reality and creating a “reality” from trained data sets.
As a scientific photographer, I fully understand the difference between an illustration and a documentary image, but I am less confident that AI programs can make that distinction. An illustration or figure is a representation of something: it is personally interpreted, and it visually describes a concept or structure using symbols, colors, shapes and so on. A documentary photograph, or an image made with a scanning or transmission electron microscope, is created with photons or electrons and thus represents the object, even if it is not the object itself. The difference between the two lies in intent.
With an illustration, the intent is to depict and clarify the work; GenAI visuals are likely to excel at this task. But with a documentary image, the intent is to bring us as close to reality as possible. Both are, in essence, already a form of manipulation or artificial generation, and herein lies the importance of defining and discussing their ethics before we embrace GenAI tools.
Publishers have moved to restrict GenAI-created images (see Nature 626, 697–698; 2024) but, frankly, AI programs will eventually be able to circumvent such safeguards. There are ongoing efforts to find ways of tracing an image’s provenance and documenting any processing of the original. For example, the forensic photography community, through the Coalition for Content Provenance and Authenticity, provides technical specifications to camera manufacturers for tracing an image’s provenance by keeping an in-camera record of any processing. One can imagine, though, that not all manufacturers will come on board.
The scientific community still has time to create a system of transparency and to formulate guidelines for AI-generated images. At a minimum, every GenAI visual should be clearly labeled as such, the process and tools used to create it should be disclosed and, wherever possible, credit should be given to any source images on which the AI engine drew. Attributing those sources, however, is a challenge.
Two articles have raised an important problem, highlighting the privacy violations and potential copyright infringements involved in using diffusion models (N. Carlini et al. preprint at arXiv https://doi.org/grqmsb (2023); and see go.nature.com/4jqyevn). Credit is possible only in a closed system (which diffusion models are not), in which the training data are fully known and documented. For example, Springer Nature, which publishes Nature (Nature is editorially independent of its publisher), recently included an exception in its policy for Google DeepMind’s AlphaFold program to cover this kind of use (for models trained on a specific set of scientific data). However, people should bear in mind that AlphaFold is not a GenAI tool that creates images: it generates structural models (a data format) that are then turned into images by people, not by GenAI tools.
Fortunately, efforts are under way to address these issues. Creators can now attach a form of tamper-evident metadata called Content Credentials which, as Adobe explains in its guide, helps them to get proper recognition and enhances transparency in the content-creation process (see go.nature.com/3wx92ng).
Ethical standards
For years, I have suggested that scientists need training in the ethics of visual communication, and the availability of easy-to-use AI image-creation programs adds urgency to this discussion.
For example, I remember one experience with an engineer who altered a picture I had made of their research and wanted to publish it alongside their submitted article (see Supplementary information). The researcher had not considered that altering the image was, in effect, the same as altering their data, because they had never been taught the basic ethics of image processing and visual communication.