Nature
Q: You’ve mentioned that as soon as a photograph is taken, the image may be considered “manipulated.” There are ways you’ve manipulated your own images to create a visual that more successfully communicates the intended message. Where is the line between acceptable and unacceptable manipulation?
A: In the broadest sense, the decisions made about how to frame and structure the content of an image, along with which tools are used to create the image, are already a manipulation of reality. We need to remember that the image is merely a representation of the thing, and not the thing itself. Decisions must be made when creating the image. The critical issue is not to manipulate the data, and in the case of most images, the data is the structure. For example, for an image I made some time ago, I digitally deleted the petri dish in which a yeast colony was growing, to bring attention to the stunning morphology of the colony. The data in the image is the morphology of the colony. I didn’t manipulate that data. However, I always indicate in the text if I have done something to an image. I discuss the idea of image enhancement in my handbook, “The Visual Elements, Photography.”
Q: What can researchers do to ensure that their research is communicated accurately and ethically?
A: With the advent of AI, I see three main issues concerning visual representation: the difference between illustration and documentation, the ethics around digital manipulation, and a continuing need for researchers to be trained in visual communication. For years, I have been trying to develop a visual literacy program for the current and upcoming classes of science and engineering researchers. MIT has a communication requirement which mostly addresses writing, but what about the visual, which is no longer tangential to a journal submission? I would bet that most readers of scientific articles go right to the figures after they read the abstract.
We need to require students to learn how to critically look at a published graph or image and decide whether there is something strange going on with it. We need to discuss the ethics of “nudging” an image to look a certain predetermined way. I describe in the article an incident in which a student altered one of my images (without asking me) to match what the student wanted to communicate visually. I didn’t permit it, of course, and was disappointed that the ethics of such an alteration were not considered. We need to develop, at the very least, conversations on campus and, even better, create a visual literacy requirement alongside the writing requirement.
Q: Generative AI isn’t going away. What do you see as the future of communicating science visually?
A: For the article, I decided that a powerful way to question the use of AI in generating images was by example. I used one of the diffusion models to create an image with the following prompt:
“Create a photograph of Moungi Bawendi’s nano crystals in vials against a black background, fluorescing at different wavelengths, depending on their size, when excited with UV light.”
The results of my AI experimentation were often cartoon-like images that could hardly pass as reality, let alone documentation, but there will be a time when they will. In conversations with colleagues in the research and computer-science communities, all agree that we should have clear standards on what is and isn’t allowed. And most importantly, a GenAI visual should never be allowed as documentation.
But AI-generated visuals will, in fact, be useful for illustration purposes. If an AI-generated visual is to be submitted to a journal (or, for that matter, shown in a presentation), I believe the researcher MUST
- clearly label if an image was created by an AI model;
- indicate what model was used;
- include what prompt was used; and
- include the image, if there is one, that was used to help the prompt.