Creativity often emerges from the interplay of disparate ideas—a phenomenon known as combinational creativity. Traditionally, tools like brainstorming, mind mapping, and analogical thinking have guided this process. Generative Artificial Intelligence (AI) introduces new avenues: large language models (LLMs) offer abstract conceptual blending, while text-to-image (T2I) and text-to-3D (T2-3D) models turn text prompts into vivid visuals or spatial forms. Yet despite their growing use, little research has clarified how these tools function across different stages of creativity. Without a clear framework, designers are left guessing which AI tool fits best. Given this uncertainty, in-depth studies are needed to evaluate how AI models with different output dimensions contribute to the creative process.
A research team from Imperial College London, the University of Exeter, and Zhejiang University has tackled this gap. Their new study (DOI: 10.1016/j.daai.2025.100006), published in May 2025 in Design and Artificial Intelligence, investigates how generative AI models with different dimensional outputs support combinational creativity. Through two empirical studies involving expert and student designers, the team compared the performance of LLMs, T2I, and T2-3D models across ideation, visualization, and prototyping tasks. The results provide a practical framework for optimizing human-AI collaboration in real-world creative settings.
To map AI's creative potential, the researchers first asked expert designers to apply each AI type to six combinational tasks—including splicing, fusion, and deformation. LLMs performed best in language-based combinations such as interpolation and replacement but struggled with spatial tasks. In contrast, T2I and T2-3D models excelled at visual manipulations, with 3D models especially adept at physical deformation. In a second study, 24 design students each used one AI type to complete a chair design challenge. Those using LLMs generated more conceptual ideas during early, divergent phases, but their outputs lacked visual clarity. T2I models helped externalize these ideas into sketches, while T2-3D tools offered robust support for building and evaluating physical prototypes. The results suggest that each AI type offers unique strengths, and the key lies in aligning the right tool with the right phase of the creative process.
“Understanding how different generative AI models influence creativity allows us to be more intentional in their application,” said Prof. Peter Childs, co-author and design engineering expert at Imperial College London. “Our findings suggest that large language models are better suited to stimulate early-stage ideation, while text-to-image and text-to-3D tools are ideal for visualizing and validating ideas. This study helps developers and designers align AI capabilities with the creative process rather than using them as one-size-fits-all solutions.”
The study's insights are poised to reshape creative workflows across industries. Designers can now match AI tools to specific phases—LLMs for generating diverse concepts, T2I for rapidly visualizing designs, and T2-3D for translating ideas into functional prototypes. For educators and AI developers, the findings provide a blueprint for building more effective, phase-specific design tools. By focusing on each model’s unique problem-solving capabilities, this research elevates the conversation around human–AI collaboration and paves the way for smarter, more adaptive creative ecosystems.
###
References
DOI
10.1016/j.daai.2025.100006
Original Source URL
https://doi.org/10.1016/j.daai.2025.100006
Funding information
The first and third authors would like to acknowledge the China Scholarship Council (CSC).
About Design and Artificial Intelligence
Design and Artificial Intelligence (DAAI) is a peer-reviewed international journal dedicated to advancing interdisciplinary research at the intersection of design and artificial intelligence (AI). The goal of DAAI is to serve as a leading platform for scholarly exchange and innovation in the integration of design and artificial intelligence through a multidisciplinary approach.