
A Value-Oriented Investigation of Photoshop’s Generative Fill

Ian P. Swift, University of Illinois Chicago, Chicago, Illinois, USA, iswift@uic.edu, and Debaleena Chattopadhyay, University of Illinois Chicago, Chicago, Illinois, USA, debchatt@uic.edu
(2024)
Abstract.

The creative industry is both concerned and enthusiastic about how generative AI will reshape creativity. How might these tools interact with the workflow values of creative artists? In this paper, we adopt a value-sensitive design framework to examine how generative AI, particularly Photoshop’s Generative Fill (GF), helps or hinders creative professionals’ values. We obtained 566 unique posts about GF from online forums for creative professionals who use Photoshop in their current work practices. We conducted a reflexive thematic analysis focusing on usefulness, ease of use, and user values. Users found GF useful for doing touch-ups, expanding images, and generating composite images. GF helped users’ value of productivity by making work efficient but created a value tension around creativity: it helped reduce barriers to creativity but hindered the ability to distinguish ‘human’ from algorithmic art. Furthermore, GF hindered the lived experiences that shape creativity and the honed, prideful skills of creative work.

Value sensitive design, generative AI, generative fill, values
journalyear: 2024; copyright: rights retained; conference: Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, April 23–28, 2023, Hamburg, Germany; booktitle: Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI EA ’23), April 23–28, 2023, Hamburg, Germany; doi: 10.1145/3544549.3585697; isbn: 978-1-4503-9422-2/23/04; ccs: Human-centered computing, Empirical studies in HCI

1. Introduction

In the last few years, generative models have had a tremendous impact on the world, including but not limited to creative professionals (Jiang et al., 2023; Bender et al., 2021; Hacker et al., 2023). For example, AI has impacted healthcare (Zhang and Kamel Boulos, 2023), mental health support (Jo et al., 2023), software engineering (Weisz et al., 2022), and education (Baidoo-Anu and Ansah, 2023), to name just a few domains. Creative professionals are currently bombarded with a variety of new generative tools, and questions remain about how creative users are adapting their work practices (Zamfirescu-Pereira et al., 2023; Gmeiner et al., 2023). Research on scientifically advancing generative models (Chung and Adar, 2023; Feng et al., 2023; Brade et al., 2023; Lawton et al., 2023) and on their practical use as tools is occurring in parallel. In practice, AI tools in creative work such as DALL-E, Stable Diffusion, Firefly, and Generative Fill have garnered both enthusiasm and concern among creative professionals (De Cremer et al., 2023; Franzen, 2023). However, as the technological landscape evolves rapidly, several sociotechnical questions have emerged and remain unanswered. A major question is how these models and tools will align with creative professionals’ values in their workflows.

AI-generated art is having a growing impact on the creative world, appearing most prominently in the form of text-to-image (T2I) generation: the transformation of a text prompt into a generated image. Such text prompts require prompt engineering, which is often a struggle. One study showed that individuals were challenged by systematic prompt design (Zamfirescu-Pereira et al., 2023): users experienced difficulties both with generating prompts and with evaluating their effectiveness. Accordingly, several works have emerged on how to improve the prompt engineering process. To address struggles with T2I prompt engineering, the tool RePrompt was designed for refining T2I prompts (Wang et al., 2023). RePrompt automates the process of transforming user-generated prompts by adding and removing parts of speech to increase the emotional expression of image generation. A similar work is the Promptify tool (Brade et al., 2023), which allows users to specify subject and style information and generate a more thorough image prompt. Other tools address the T2I method through further abstractions. PromptPaint offers an alternative approach borrowing from painting concepts (Chung and Adar, 2023). This tool introduces vectors between discrete semantics (interpolating between “cat” and “dog”), adds a directional semantic (shifting “dog” towards “fluffy”), and supports interventions during the generation process, all while drawing on art-creation metaphors (mixing paints, layering paints, etc.). Another T2I tool, Reframer, allows the user to prompt the AI for a drawing created with strokes and then add or modify strokes alongside the AI (Lawton et al., 2023).
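
To make the T2I workflow concrete, the following minimal sketch shows what a single text-to-image call looks like using the open-source diffusers library with a Stable Diffusion checkpoint (one of the models mentioned above). The model identifier, prompt, and output filename are illustrative assumptions on our part and are not drawn from the cited works.

# Minimal T2I sketch: a text prompt is transformed into a generated image.
# Assumes the `diffusers` and `torch` packages are installed and a GPU is available.
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint; any compatible Stable Diffusion model could be substituted.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The prompt below is exactly the kind of text that prompt-engineering tools
# such as RePrompt or Promptify help users refine.
prompt = "a misty rainforest at dawn, soft light, 35mm photograph"
image = pipe(prompt).images[0]
image.save("rainforest.png")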

The experience of creative professionals with generative models is an important factor in the advent of these models, as is their effect on society at large. One research group found that self-identified “creative professionals” were largely not worried about the existence of AI tools (Inie et al., 2023); a reason for their excitement was the potential change in productivity, a finding confirmed in our study. A focus on the relationship between writers and AI revealed nuances in how writers relate to assistance from other writers versus AI assistance (Gero et al., 2023). Similar to our work, these authors looked into the values of creative professionals, noting that creative writers value intention, authenticity, and creativity, and that the values of authenticity and creativity in particular shaped whether a writer would consider using a computer for support. The existence of such tools has also led to a variety of ethical concerns: anthropomorphizing AI is problematic in suggesting that the image generator deserves as much credit for the result as a human creator; there is concern about using generative AI to forge the style of artists; and, among other concerns, the relationship between copyright law and training image generators remains uncertain (Jiang et al., 2023). Beyond creative work, concerns about generative AI are prevalent across the field. For example, one set of authors worries that (1) training such large generative models comes at a substantial cost in money and carbon emissions, and (2) by training on large sets of data from across the internet, these models will amplify hegemonic worldviews, negatively impacting marginalized populations (Bender et al., 2021). The ethical landscape of generative AI clearly remains an important issue, which is why our work offers another perspective, incorporating moral and ethical considerations as part of our process.

In this paper, we discuss results from an empirical study of Photoshop’s Generative Fill (GF), adopting a value sensitive design (VSD) approach (Friedman, 1996). VSD defines values as what is important to people in their lives, with an emphasis on ethics and morality, e.g., autonomy, ownership, and usability. We focus on currently active creative professionals, particularly Photoshop users, and examine how they interacted with GF and what value relationships are manifested through those interactions. Specifically, we examined the following research question:

How does generative fill (GF) being useful for certain tasks, and making certain tasks easy to accomplish, help or hinder users in embodying their values?

Results from our study identified specific ways in which GF’s usefulness in users’ work practices and its ease of use helped or hindered creators’ values. For example, GF users found the tool useful for doing touch-ups (e.g., removing blemishes), expanding images (e.g., changing dimensions or adding space for text), and generating and tailoring prototypical images (i.e., images based on an idea in the user’s mind, e.g., stock photos). Consequently, GF helped users with their value of productivity by making work practices efficient. However, a value tension was created around creativity: GF helped reduce barriers to being more creative, but as a result hindered the value of distinguishability of ‘authentic’ art created by a human without any AI assistance. Specifically, users lamented that people may gradually lose appreciation for creativity and creative work.

2. Methods

We collected data about GF use from online forums, including posts on Reddit (r/photography, r/graphic_design, and r/photoshop) and on the website DPReview. Data collection occurred on November 28th, 2023, using the Google search API, yielding 2,666 unique posts from threads that mentioned GF. We filtered out posts that were not relevant to our research question through a multi-step process. First, we used a randomized BERTopic model to identify posts that did not contain information relevant to generative fill. Next, the remaining posts were manually reviewed by the first author and eliminated if determined not relevant. When the relevance of a post was not immediately clear, we examined contextual information, including tracing the thread of “replied to” posts starting at a given post and reviewing all posts by a particular user. A final corpus of 566 posts was then used for qualitative analysis.
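
As an illustration of the first, topic-model-based filtering step, the sketch below shows how such a relevance filter could be implemented with the BERTopic library. The keyword list, minimum topic size, and placeholder post list are our own illustrative assumptions, not the exact configuration used in the study.

# Illustrative sketch of topic-based relevance filtering with BERTopic.
from bertopic import BERTopic

# Placeholder; replace with the full list of collected forum posts
# (BERTopic needs a reasonably large corpus to fit meaningful topics).
posts = ["...post text 1...", "...post text 2...", "...post text 3..."]

# Fit a topic model over all collected posts.
topic_model = BERTopic(min_topic_size=15)  # illustrative minimum topic size
topics, _ = topic_model.fit_transform(posts)

# Keep topics whose top words suggest the posts discuss Generative Fill.
RELEVANT_TERMS = {"generative", "fill", "genfill", "firefly"}  # illustrative keyword set
relevant_topics = set()
for topic_id in topic_model.get_topic_info()["Topic"]:
    if topic_id == -1:  # -1 is BERTopic's outlier topic
        continue
    top_words = {word for word, _ in topic_model.get_topic(topic_id)}
    if top_words & RELEVANT_TERMS:
        relevant_topics.add(topic_id)

# Posts assigned to a relevant topic proceed to manual review.
candidate_posts = [post for post, topic in zip(posts, topics) if topic in relevant_topics]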

We used reflexive thematic analysis to analyze data from the 566 posts (Braun and Clarke, 2021, 2012), with ATLAS.ti as the coding tool. A subset of the posts was open coded by the first author and then discussed and iterated upon in group data analysis sessions to define the scope of all subsequent analyses: different uses of GF, perceived ease of using GF, and user values embodied in GF’s use and usability. With that scope in mind, we focused later analysis (selective coding) on the ease and difficulty of using different features of GF, the useful and non-useful aspects of GF, and the user values manifested in users’ posts. After over two months of iterative coding, we identified four overarching themes that interweave to answer our research question.

3. Results

Following the thematic analysis of our data, we identified four overarching themes that address our research question about how different user values are embodied through GF use and usability. Some values were supported, some were hindered, and some competed against each other (a value tension (Friedman, 1996)). Primarily, some users found that generative fill was useful for performing touch-ups to images (such as removing blemishes), expanding images, and generating prototypical images (borrowing the concept of idealized prototypes from psychology (Rosch, 1973)). These relationships helped and hindered users’ values in a variety of ways. The useful features (touch-ups, etc.) reinforced users’ value of productivity by replacing complex repetitive tasks with simpler, streamlined commands. Simultaneously, the same useful aspects created a value tension within the spectrum of creativity, both enabling users to do more with less and adding the doubt of “Was this made by an AI?”. The change in methods hindered both lived experience and a sense of accomplishment. Finally, generative fill inherently put designers in a personal and interpersonal value tension in which content regulation is either too loose or too restrictive. We conclude that the impact of this technology on creative workers is notable and that the resolution of value tensions may be found not only in the careful design of future tools but also in the broader response by society to sociotechnical progress.

3.1. Generative fill is useful

The three tasks that we observed users describing as easy and useful, in ways that impacted their values, are performing touch-ups, generating prototypical content, and expanding images. Performing touch-ups means making a small change, such as removing a tattoo or removing power lines, to better showcase the subject of a picture. Generating prototypical content is when a user has an image in mind and needs to create a corresponding image to work with. Expanding images means changing the dimensions of an image, providing space for marketing text, or changing the background. Before discussing the value implications of these observations, we first describe each task in more detail.

3.1.1. Performing Touch-ups

Without generative fill, touch-ups are complicated, often involve several tools, and can take a significant amount of time (Chen, 2010). With GF, the process is simplified. One user commented on the speed-up, saying, “30 min work in 30 seconds.” We found users describing the difference with and without the tool:

“The time I spent doing selections, clone stamp, repainting, color match…. I can now do in seconds.”

Additionally, we found that users described how they could perform the task in a wide variety of cases. This speaks to the versatility of the method. For example, one user listed several ways in which they could perform touch-ups with the tool:

“I like to use gen fill to clean up movie and book covers, e.g. remove text labels, logos, watermarks.”

3.1.2. Generating Prototypical Content

Generating prototypical content is an ideal use case for text-to-image generative models. GF replaces the need for a creator to search through existing (stock) photographs until they find one that meets their needs. Surprisingly, the reaction of users to this particular task was mixed. For example, while some users approved of GF’s ability to generate new images,

“With generative fill I can start with an empty frame and zero stock and end with something quite presentable.”

It was also the case that other users struggled with building up from a blank canvas or generating new images,

“If you’re just trying to generate art from nothing, then I can see how you’d think it’s useless. It’s just not there yet.”

Workarounds to this problem included limiting the size of edits and using the tool as a “word brush”. However, it is worth mentioning that there have been a number of research initiatives into redesigning the T2I prompting mechanism (Wang et al., 2023; Brade et al., 2023; Chung and Adar, 2023), indicating that perhaps the current selection and text prompt mechanism could be further evaluated for usability.

3.1.3. Expanding Images

The ability to extend images received a wealth of positive reactions, being referred to as “jaw-dropping”, “godly”, and a “game-changer”. Using the technology simply involved selecting the area into which to expand the image and entering a prompt (or leaving it blank). A user describes the process as follows:

“Say I’m developing a vertical poster, and the image I want to use across the full height is horizontal. You can frame the image how you want it and generate the gaps in art.”

The user feedback on expanding images was consistent. With touch-ups or prototypical images, users occasionally raised concerns such as, “It’s been useful for cleanup work but even then, it hasn’t done anything I can’t do myself with honestly more predictable results.” Image extension, on the other hand, appears to lack the same divisiveness, apart from concerns that apply to all uses of the technology, such as limitations on the resolution of the images that can be generated.

3.2. Generative fill helps with productivity

We define productivity as “being able to accomplish tasks with less time or effort.” We observed this to be a key value among users of generative fill, an observation which has been repeatedly noted elsewhere (Rao, 2024; Inie et al., 2023). Users who valued productivity were concerned with effectively getting the desired output. One user described how GF replaced complex and time-consuming operations with a quicker and easier workflow:

“It speeds my workflow tremendously eliminating a ton of clone stamp/healing brush manipulation that is, frankly, a waste of my time.”

Additionally, users often associate productivity with material gains, such as increased earning potential or decreased work hours. One user describes their experience of both gains, resulting from their increase in productivity:

“AI has literally turned my 10 hour day into a 2 hour day. And my work has improved to the point where I’m making more money in 2 hours than I was in 10”

3.3. Creativity: helped or hindered?

Making the technology easier to learn and work with, or reducing barriers to entry, is another value we observed among stakeholders in the technology. In fact, a number of users explicitly highlighted this lowering of barriers as a value. Two users spoke to this, saying,

“It’s a creative medium, people will create what they want to and this just makes it easier for many to manipulate images in lots of ways.”

and

“This tool just cuts down the time spent on it dramatically, as well as opening up the ability to do it to people who don’t have years experience mastering every little trick.”

However, in contrast with the benefits to less experienced users, there were also concerns that AI art is indistinguishable from human work. We observed users who highlighted that such increases in creative capacity lead to indistinguishability, cheapening the value of their work,

“From a hobbyist standpoint everyone’s “art” will look the same. If everything from rough idea to finished work is done by a program what’s the fucking point? This is stupid.”

One user spoke to the duality of the problem,

“It both terrifies and impresses me equally. Photography and static art in general as we know it is going to change in one instant more than ever before.”

In VSD this is known as a value tension, where multiple values are in opposition but could potentially be made to coexist (Friedman, 1996). Since these two values are in tension, resolving the tension would mean finding a solution that addresses the concerns around the authenticity of human art while still preserving the creative opportunities afforded by GF.

3.4. The changing craft: a feeling of loss

Values related to what can emerge from the existence of the tool tell only part of the story. The aspects of life that are lost with the emergence of new technology are also a key part of the user experience.

The experience of the “Lived Body” is potentially lost if technologies like GF replace the need for “in the world” work. The “Lived Body” comes from the work of Merleau-Ponty (Merleau-Ponty et al., 2013). In his phenomenological philosophy, he considers the experiences of the body, such as the quality of the air, the smells and sounds, and the overall feeling of being in a place, as essential components of how we perceive the world. Being present in an environment with a camera is one instance of this. Some users claim that the inspiration of being in a location provides its own value beyond the quality of the image. One user recounts why this is important when it is possible,

“[I]f I were to be doing a photoshoot and wanted it to be in the rainforest and could afford it, I’d want to go to the freaking rainforest. Not just for how it looks, but the feeling and inspiration that comes with shooting on location.”

Simultaneously, GF also has the potential to make older modes of work obsolete, which would lead to a loss of pride in craft. Practitioners claim that they enjoy the manual work (one user says “I enjoy retouching” and another says that the new technique “sounds really boring”). We observed loss of pride in craft as the notion that people took pride in, and enjoyed, their method of work. One user laments the new method of work,

“Quite frankly I don’t feel the same sense or accomplishment upon completion either because it’s just so easy.”

3.5. Respect for dignity and privacy and freedom of expression

Finally, we found a value tension around what GF is capable of and how it is regulated. In particular, the question is how the ethical decision of whether to allow or prohibit the creation of certain content should be made. Adobe addressed this issue directly in the technology by creating “guidelines” that block the use of GF if it determines the user is attempting to make illicit content.

The two values clearly conflict with each other concerning explicit content. The respectful use of technology, that is, not creating unsettling content or manipulating someone’s person without their consent, maintains individuals’ sense of dignity and privacy. Simultaneously, there is the belief that an individual should be free to create content without interference if they are using the tool for legitimate means.

Adobe’s guideline restrictions prevent unethical use of the technology by blocking the creation of images that could be considered explicit. As a specific example of how this brought values into tension, we look at two users who approached the issue from different perspectives. The first spoke to the risk of the feature being used to create fake pornographic images of people, which should be censored,

“I think it would be a very bad idea to have a feature in Photoshop that allows you to easily make high-quality fake nude images of people.”

While this is a valid case for censorship, censorship sometimes also applies when the use case is legitimate. For example, how do you censor nudity in the case of classical art, which often works with nude models? One user comments on their frustration with being censored repeatedly as they try to go about their work,

“I know a lot of us work with nudity and it’s extremely annoying that we are getting an insane amount of guidelines violation messages when working on those pictures.”

We found that a large number of users were bothered by guideline violations (the current solution to the problem of illicit content), so it would seem reasonable to expect that a more nuanced solution is still necessary.

4. Future Work

The value of creative arts to society is fundamental (Milbrandt, 2010; Stuckey and Nobel, 2010). Creativity is a uniquely human quality (Balter, 2009). Today, however, generative AI applications can produce new content in the form of text, images, audio, and video, or a combination of those. While some creators are experimenting with these tools to augment their creative work, others are alarmed at how the tools can impact their lives and livelihoods, e.g., by creating unfair competition (Walton, 2024). Some have even speculated about a future “techlash” against algorithmically generated content, where people begin to value authentic creativity more and are willing to pay a premium for “human-made” work (De Cremer et al., 2023). This new value, the ability to create, recognize, and appreciate art made by a human without any AI assistance, is then shaped not by how the technology affects creators or consumers per se, but by the position the technology places them in. Thus, it is important to understand how the work practices of creative professionals are evolving alongside generative AI tools and which of their values are being helped, hindered, or placed in tension.

In this paper, we reported results from a preliminary study on how users’ values are embodied in the ways they find Photoshop’s Generative Fill (GF) meaningful to their workflows. As a future step, it is important to understand how individual differences among creative professionals influence their value perceptions when using generative models in their work practices. For example, how would age, experience, and expertise influence how people use generative tools in creative work? And, as a result, how would workforce training, skill-building, hiring, and mentoring practices change? Would prompt engineering be a required skill in graphic design in the future? How would people’s lived experiences play a role in how they use generative models in creative work practices? On the other hand, it will also be interesting to study how these tools can help people who are not in the creative professions express their creativity better, and, when doing so, how these tools can uphold their values, like becoming an expert with colors or mastering portrait photography.

References

  • Baidoo-Anu and Ansah (2023) David Baidoo-Anu and Leticia Owusu Ansah. 2023. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI 7, 1 (2023), 52–62.
  • Balter (2009) Michael Balter. 2009. On the origin of art and symbolism. Science 323, 5915 (2009), 709–711.
  • Bender et al. (2021) Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big?. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency. 610–623.
  • Brade et al. (2023) Stephen Brade, Bryan Wang, Mauricio Sousa, Sageev Oore, and Tovi Grossman. 2023. Promptify: Text-to-image generation through interactive prompt exploration with large language models. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. 1–14.
  • Braun and Clarke (2012) Virginia Braun and Victoria Clarke. 2012. Thematic analysis. American Psychological Association.
  • Braun and Clarke (2021) Virginia Braun and Victoria Clarke. 2021. One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qualitative research in psychology 18, 3 (2021), 328–352.
  • Chen (2010) Yi Chen. 2010. Photoshop touch up 101: Smooth, Brighten & polish skin textures. Retrieved from https://photoble.com/photoshop-tutorials/photoshop-touch-up-101-smooth-brighten-polish-skin-textures/
  • Chung and Adar (2023) John Joon Young Chung and Eytan Adar. 2023. PromptPaint: Steering Text-to-Image Generation Through Paint Medium-like Interactions. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. 1–17.
  • De Cremer et al. (2023) David De Cremer, Nicola Morini Bianzino, and Ben Falk. 2023. How generative AI could disrupt creative work. Retrieved from https://hbr.org/2023/04/how-generative-ai-could-disrupt-creative-work
  • Feng et al. (2023) Yingchaojie Feng, Xingbo Wang, Kam Kwai Wong, Sijia Wang, Yuhong Lu, Minfeng Zhu, Baicheng Wang, and Wei Chen. 2023. Promptmagician: Interactive prompt engineering for text-to-image creation. IEEE Transactions on Visualization and Computer Graphics (2023).
  • Franzen (2023) Carl Franzen. 2023. The copyright case against AI art generators just got stronger with more artists and evidence. Retrieved from https://venturebeat.com/ai/the-copyright-case-against-ai-art-generators-just-got-stronger-with-more-artists-and-evidence/
  • Friedman (1996) Batya Friedman. 1996. Value-sensitive design. interactions 3, 6 (1996), 16–23.
  • Gero et al. (2023) Katy Ilonka Gero, Tao Long, and Lydia B Chilton. 2023. Social dynamics of AI support in creative writing. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–15.
  • Gmeiner et al. (2023) Frederic Gmeiner, Humphrey Yang, Lining Yao, Kenneth Holstein, and Nikolas Martelaro. 2023. Exploring Challenges and Opportunities to Support Designers in Learning to Co-create with AI-based Manufacturing Design Tools. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–20.
  • Hacker et al. (2023) Philipp Hacker, Andreas Engel, and Marco Mauer. 2023. Regulating ChatGPT and other large generative AI models. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. 1112–1123.
  • Inie et al. (2023) Nanna Inie, Jeanette Falk, and Steve Tanimoto. 2023. Designing Participatory AI: Creative Professionals’ Worries and Expectations about Generative AI. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. 1–8.
  • Jiang et al. (2023) Harry H Jiang, Lauren Brown, Jessica Cheng, Mehtab Khan, Abhishek Gupta, Deja Workman, Alex Hanna, Johnathan Flowers, and Timnit Gebru. 2023. AI Art and its Impact on Artists. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. 363–374.
  • Jo et al. (2023) Eunkyung Jo, Daniel A Epstein, Hyunhoon Jung, and Young-Ho Kim. 2023. Understanding the benefits and challenges of deploying conversational AI leveraging large language models for public health intervention. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–16.
  • Lawton et al. (2023) Tomas Lawton, Francisco J Ibarrola, Dan Ventura, and Kazjon Grace. 2023. Drawing with Reframer: Emergence and Control in Co-Creative AI. In Proceedings of the 28th International Conference on Intelligent User Interfaces. 264–277.
  • Merleau-Ponty et al. (2013) Maurice Merleau-Ponty, Donald Landes, Taylor Carman, and Claude Lefort. 2013. Phenomenology of perception. Routledge.
  • Milbrandt (2010) Melody K Milbrandt. 2010. Understanding the role of art in social movements and transformation. Journal of Art for Life 1, 1 (2010).
  • Rao (2024) Srinivas Rao. 2024. 21 Keys to Creative Productivity. Retrieved from https://unmistakablecreative.com/creative-productivity/
  • Rosch (1973) Eleanor H Rosch. 1973. Natural categories. Cognitive psychology 4, 3 (1973), 328–350.
  • Stuckey and Nobel (2010) Heather L Stuckey and Jeremy Nobel. 2010. The connection between art, healing, and public health: A review of current literature. American journal of public health 100, 2 (2010), 254–263.
  • Walton (2024) Adele Walton. 2024. Creative Workers Say Livelihoods threatened by Generative AI: Computer Weekly. Retrieved from https://www.computerweekly.com/feature/The-threat-of-generative-AI-to-creative-work-and-workers
  • Wang et al. (2023) Yunlong Wang, Shuyuan Shen, and Brian Y Lim. 2023. RePrompt: Automatic Prompt Editing to Refine AI-Generative Art Towards Precise Expressions. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–29.
  • Weisz et al. (2022) Justin D Weisz, Michael Muller, Steven I Ross, Fernando Martinez, Stephanie Houde, Mayank Agarwal, Kartik Talamadupula, and John T Richards. 2022. Better together? an evaluation of ai-supported code translation. In 27th International Conference on Intelligent User Interfaces. 369–391.
  • Zamfirescu-Pereira et al. (2023) JD Zamfirescu-Pereira, Richmond Y Wong, Bjoern Hartmann, and Qian Yang. 2023. Why Johnny can’t prompt: how non-AI experts try (and fail) to design LLM prompts. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–21.
  • Zhang and Kamel Boulos (2023) Peng Zhang and Maged N Kamel Boulos. 2023. Generative AI in medicine and healthcare: Promises, opportunities and challenges. Future Internet 15, 9 (2023), 286.