A report by Goldman Sachs stated that AI could replace the equivalent of 300 million full-time jobs, and an open letter by the Future of Life Institute calling for a pause of at least six months on “the training of AI systems more powerful than GPT-4” has so far been signed by more than 33,000 people, including Steve Wozniak, Elon Musk, and Rachel Bronson.
Such a powerful technology should be implemented with caution, but there are myriad ways in which designers can embrace AI successfully to improve everything from visualisation to efficiency. Design and creativity necessarily adapt to emerging technologies, and by viewing AI as a tool rather than a threat, savvy designers can future-proof their practice and unlock a new level of design proficiency.
Just like Blender, Pacdora, Sketch, the Adobe suite, or a pen and paper, AI can be considered another useful tool in a design kit. Perhaps the best way to approach it is to stop thinking of AI in an all-encompassing sense, which is fairly overwhelming given its breadth, and focus specifically on AI-based programmes and platforms with distinct functions.
In the world of image-making, DALL-E 2 by OpenAI and Midjourney are certainly among the most well-known platforms, allowing users to create images from written prompts. DALL-E 2 also offers the option to extend an existing image beyond its original canvas (imagine Girl with a Pearl Earring standing amid a messy studio), add new features to existing artworks (a duck amongst Monet’s Water Lilies, perhaps), and create variations of images (the Mona Lisa in a cityscape, or wearing modern attire). The excitement around image-based generative AI has been very much based upon the idea that anyone can now create art, and because of that there have been concerns about whose work the platforms are being trained on and whether the technology erodes creative talent. However, the likes of Midjourney and DALL-E 2 are also brilliant for creating moodboards, generating ideas, and understanding the broader perceptions of certain words, colours, or products.
Because AI is trained on what already exists, a very generic prompt such as “a packet of granola” will generate a selection of images based on what’s already out there. From there, you can refine and treat it as a sketching process. One image might feature the approximation of a kitchen counter backdrop, and if that homely feel plays into the values of your brand, you iterate on the theme (“a packet of granola sitting upon a rustic wooden kitchen counter”). The inclusion of the word “rustic” then starts to generate a colour palette or approximate a handwritten-style typeface, so you iterate on that, and so on. Just like any design process, you start broad and refine as you go.
If the granola flavours are inspired by the founder’s childhood spent foraging in the Norwegian countryside, a prompt based around Norwegian forests can start to generate supplementary textures and tones to feed into the overall language of the design. You get instant feedback both on your initial ideas and on the general connotations of the key words you’re using. In this context, AI is no longer a lazy shortcut to becoming an artist but a helpful ideation and visualisation tool.
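For designers who would rather script that loop than work through a web interface, a minimal sketch of the same iterative refinement using OpenAI’s Python client might look like the following; the prompts, model choice, and image size are illustrative assumptions, not a prescribed workflow.

```python
# A minimal sketch of iterative prompt refinement, assuming the OpenAI Python
# client (openai>=1.0) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Start broad, then refine as the results suggest a direction --
# these prompts are illustrative, not a prescribed sequence.
prompts = [
    "a packet of granola",
    "a packet of granola sitting upon a rustic wooden kitchen counter",
    "a packet of granola with a handwritten-style label, rustic wooden "
    "kitchen counter, warm earthy colour palette",
]

for i, prompt in enumerate(prompts, start=1):
    result = client.images.generate(
        model="dall-e-2",   # DALL-E 2, as discussed above
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    # Each iteration returns a hosted image URL; review it, then refine the prompt.
    print(f"Iteration {i}: {result.data[0].url}")
```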
DALL-E 2, Midjourney, Stable Diffusion, Firefly, and the myriad other generative AI platforms are incredibly useful for the early stages of ideation. Even text-based applications like ChatGPT can be useful in creating design prompts and directions. A prompt like “describe a refillable shampoo bottle for millennial women” can suggest colours, materials, silhouettes, and even the desired tactility. But where AI tools move from the generically generative to the original and innovative is in the visualisation, rendering, and modelling stage. Imagine you have made some preliminary sketches during an early-stage design meeting. Depending on the complexity of the design, rendering can take hours, even days, so ideas would traditionally need to be thinned out before reaching that stage.
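To make the text-based route concrete, the shampoo-bottle brief above could be sent to a chat model via OpenAI’s Python client roughly as follows; the model name and system prompt are illustrative assumptions rather than a recommended setup.

```python
# A minimal sketch of using a text model for design directions, assuming the
# OpenAI Python client (openai>=1.0); model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are a packaging design assistant. Suggest "
                       "colours, materials, silhouettes, and tactility.",
        },
        {
            "role": "user",
            "content": "Describe a refillable shampoo bottle for millennial women.",
        },
    ],
)

# Treat the reply as a starting brief to interrogate, not a finished direction.
print(response.choices[0].message.content)
```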
Now AI-based platforms like Midjourney, Vizcom, controlnet-scribble, Kaedim, and Scribble Diffusion allow you to render a simple line drawing in just seconds. To improve accuracy, most 2D-to-3D render AI platforms combine sketch input with text prompts and other parameters such as render style, polycount, symmetry, and generation quality. To improve the quality of renders even further, programmes can be used in combination. For instance, a quick render from controlnet-scribble, which is free and open source, can be fed into Kaedim for further refinement and, if a designer wishes, into a traditional 3D rendering programme like Adobe Dimension or Autodesk 3ds Max for final tweaks.
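For the controlnet-scribble step in particular, a minimal sketch using the open-source diffusers library might look like the following; the model checkpoints, file names, and prompt are illustrative assumptions, and the resulting render could then be handed to Kaedim or a traditional 3D package for further refinement.

```python
# A minimal sketch of rendering a line drawing with ControlNet scribble
# conditioning via the open-source diffusers library. Checkpoints, file
# names, and the prompt are assumptions for illustration.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The conditioning image should be a simple scribble-style line drawing.
sketch = Image.open("bottle_sketch.png")

render = pipe(
    "a refillable glass fragrance bottle, studio product render, soft lighting",
    image=sketch,
    num_inference_steps=20,
).images[0]
render.save("bottle_render.png")
```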
By allowing for 3D realisation on a broader scale, these tools let designers explore more possibilities and take leaps with more complex structures or silhouettes that previously might not have been deemed worth the risk. New York-based AI design pioneer Sherry Horowitz calls this approach “rapid ideation” or “rapid prototyping”.
While the examples above are packaging-focused, the approach can be applied to product development too, and one company taking advantage of this rapid prototyping is Unilever. The CPG company has used AI to create performance-boosting enzymes for its home care products. The new enzymes, which are claimed to “fight stains better, use less water and energy, and replace petrochemical-derived ingredients,” were developed in just 18 months, five times faster than previous efforts. Unilever is also using AI to “predict the response of the biological process when the skin is exposed to certain chemicals or ingredients” as a way to make animal testing a thing of the past, and to develop new food products and meat-free proteins.
It can be useful to think of AI as a peer or co-creator, something to assist you in the design process and bounce ideas off. Canva calls its integrated generative AI “a little assistant sitting up to the side that’s there when you need it”, and according to a report from MIT Sloan Management Review, 60% of employees using AI regard it as a coworker, not a job threat.
It can operate on many levels. Dovetail, for instance, could thematically group negative user feedback about a previous product, providing you with a comprehensive overview of the issues to fix in a future iteration. Other text-based generative AI platforms can answer your ‘How Might We’ questions, while Let’s Enhance can improve your images for you, and ChatGPT can even act as a so-called “prompt engineer” for more specific and effective prompts for image generators.
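As a generic illustration of the kind of thematic grouping a tool like Dovetail performs (not how Dovetail itself works internally), feedback could be embedded and clustered in a few lines of Python; the feedback strings, embedding model, and cluster count below are made up for the example.

```python
# A generic illustration of thematic grouping of user feedback using sentence
# embeddings and k-means. Assumes sentence-transformers and scikit-learn are
# installed; the data and cluster count are invented for the example.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

feedback = [
    "The pump jams after a few weeks of use",
    "Refill pouches are hard to pour without spilling",
    "The pump mechanism stopped working",
    "Label peeled off in the shower",
    "Spilled half the refill trying to decant it",
    "The sticker came off after a week",
]

# Embed each comment, then cluster comments with similar meaning together.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(feedback)
labels = KMeans(n_clusters=3, n_init="auto", random_state=0).fit_predict(embeddings)

for cluster in range(3):
    print(f"Theme {cluster}:")
    for text, label in zip(feedback, labels):
        if label == cluster:
            print("  -", text)
```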
Those examples are at the more everyday end of the scale. Australian creative consultancy and design practice Studio Snoop has taken the idea of an AI peer a little more literally by creating Tilly, the “world’s first heart-centric AI design collaborator”. Presented during Milan Design Week 2023, Tilly has been given a human form (AI-generated, of course) and has become a member of “the first human-AI design team”, conceived to push the boundaries of design, conceptually and visually. In order to ensure the outcomes of the collaboration sit within the studio’s remit, Tilly has been trained on “large amounts of data guided by [its] own design principles”. So far, the collaboration has resulted in the creation of a mycelium stool and a hempcrete communal table.
Because AI is trainable, it can be trained to prioritise the most sustainable or efficient designs or strategies. Amanda Talbot, the designer behind Studio Snoop and Tilly, told Dezeen that “Tilly will challenge you on materials… if you try to come up with something that’s actually not great for the environment, she’ll tell you.”
As far back as 2017, Adidas was using generative design to create the ultimate lightweight lattice structure for the 3D-printed sole of its Futurecraft 4D shoe. H&M is using AI to reduce stock and production volumes, redistribute stock to where there is demand, and “improve precision on buying levels,” thereby reducing waste. The likes of Air New Zealand and Qantas are using AI to determine more fuel-efficient flight paths, and per McKinsey, across industries ranging from aerospace to sporting goods, generative algorithms have “reduced part cost by six to 20%, part weight by ten to 50%, and development time by 30 to 50%”.
AI can be used to assess the carbon footprint or environmental commitments of a manufacturer or raw materials supplier, making it easier to choose supply chain partners, and it can help uncover local materials to promote localised design too. For a project called Products of Place, oio partnered with Ikea’s Space10 (now sadly closed) to use ChatGPT to collect data points from agricultural residues, construction waste, and manufacturing offcuts and help identify abundantly available local materials around the world. The outcome was an interactive map, which designers can use to see that Seoul has an abundance of coffee grounds, Brussels has plenty of spent beer grain, Libreville has discarded tyres to spare, and Malmö has excess textile waste. Using this information, generative AI was then used to design a range of plates made from the various site-specific raw materials.
Of course, it may not make sense to make a plate from e-waste or indeed seal waste (the suggestion for Iqaluit, Canada), but the concept can be extrapolated and applied to more realistic scenarios.
Because AI is a tool, it needs an experienced hand to fulfil its potential. Ask a layperson to create a prototype for a glass, refillable fragrance bottle and they may be able to come up with something that roughly resembles one, but it takes extensive design knowledge to be able to create a specific, original vision.
A novice might ask an AI programme to create a “tall, modern refillable glass fragrance bottle”. But a designer would know that they want a clear glass flacon with a gloss black stopper and embossed branding in the style of a 19th century whiskey bottle, or a transparent green-hued fluted bottle with a rectangular base, rounded shoulders, an FEA15 neck size, and a sticker label with floral detailing reminiscent of Kasamatsu Shiro’s woodblock prints. Working in Flair or Product Studio, a creative with extensive experience in creating product imagery would know they want to emulate, for instance, the bold, oversaturated photography style of Guy Bourdin. They would know the aspect ratio, the lighting style and direction, the artistic inspirations, and countless other details that play into creating a specifically tailored product image.
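One way to picture that level of specificity is as a structured brief assembled into a prompt; the fields and values below are illustrative assumptions, not a template from any particular tool.

```python
# An illustrative structured brief for a tailored product image. The fields
# and values are assumptions for the example, not from any specific platform.
brief = {
    "product": "transparent green-hued fluted glass fragrance bottle",
    "details": "rectangular base, rounded shoulders, sticker label with "
               "floral detailing reminiscent of Kasamatsu Shiro woodblock prints",
    "photography": "bold, oversaturated product photography in the style of Guy Bourdin",
    "lighting": "hard directional light from the upper left, deep shadows",
    "aspect_ratio": "4:5",
}

# Flatten the brief into a single prompt string for an image generator.
prompt = (
    f"{brief['product']}, {brief['details']}, {brief['photography']}, "
    f"{brief['lighting']}, aspect ratio {brief['aspect_ratio']}"
)
print(prompt)
```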
As with any data-driven system, the rule of garbage in, garbage out very much applies to AI, so it serves companies to arm their designers with AI rather than replace them with it. Scott Belsky, chief product officer at Adobe, addressed the matter, saying “AI will increase the surface area that creatives can consider and explore before finding even better solutions to pursue and iterate”.
Federico Casalegno, executive vice president of design at Samsung Electronics, which has adopted AI into its design process, has also spoken on the subject, telling a Wall Street Journal forum that the company applies sophisticated AI and machine learning technologies “to empower designers to fully unleash their creativity”, confirming that designers will remain the ones in the driver’s seat. “Technology without humanity is perfection without purpose,” Casalegno said. AI isn't replacing us yet.