With the recent advancements in Large Language Models (LLMs), web developers increasingly apply their code-generation capabilities to website design. However, since these models are trained on existing designerly knowledge, they may inadvertently replicate bad or even illegal practices, especially deceptive designs (DD).
Computer scientists at the Technical University of Darmstadt and Humboldt University of Berlin, both in Germany, and at the University of Glasgow in Scotland examined whether users can accidentally create DD for a fictitious webshop using GPT-4. They recruited 20 participants, asking them to use ChatGPT to generate functionalities (product overview or checkout) and then modify these using neutral prompts to meet a business goal (e.g., “increase the likelihood of us selling our product”). The researchers found that all 20 generated websites contained at least one DD pattern (mean: 5, max: 9), with GPT-4 providing no warnings.
When reflecting on the designs, only four participants expressed concerns, while most considered the outcomes satisfactory and not morally problematic, despite the potential ethical and legal implications for end users and for those adopting ChatGPT’s recommendations.
The researchers conclude that the practice of DD has become normalized.
The group has posted their research on the arXiv preprint server.
“Inadvertently”? Can we please force every journalist in the world to sit through a 5-minute overview of how LLMs work?
Can we do the same with CEOs and politicians, please?
And, ideally, subscribers to this community? There are so many weird takes and misunderstandings about this stuff.
That’s just straight out of the abstract of the paper, no journalists involved.