
AI, Digital Twins, and the Future of Product Design Processes


The age of computational design has finally arrived. Incorporating data and artificial intelligence (AI) at every stage of a design process transforms our design tools, and perhaps the very definition of what it means to design and engineer. We have started to tap the enormous opportunity in the data flows and computational capabilities available to us. And as access to data—and our ability to extract knowledge from it—grows, we see opportunities for organizations to rethink design processes at a fundamental level.

We see a near future where today’s quantitative change in computational capabilities powers a qualitative shift in design practice. Manipulating geometry will soon no longer be the primary activity that distinguishes design and engineering from other practices. Instead, in the vision below we outline how teams of designers and engineers will work with a range of ‘AI assistants’ to define generative product possibility spaces, each of which encompasses a multitude of possible design solutions. Overall, we see the increased use of AI assistants and data-driven processes as a way to make designers exponentially more productive.

Supporting Design with Many AI Assistants

The potential of AI for design assistance has existed since the earliest days of CAD in the 1960s. It remained more theoretical than practical for decades, largely because the skills for designers to fully take advantage of computational capabilities and data were, until recently, quite rare. Most architects can’t also be expert programmers, statisticians, and data scientists, so the potential has largely remained locked up in our devices—hinted at in exotic solutions, but not part of everyday workflows.

However, as we’ve seen with the various AI-driven tools that have begun to appear in the rest of our lives—the AI that uncannily suggests the correct next word in an email, the map that finds a surprisingly efficient route around a traffic jam—AI assistants are beginning to hit their stride as tools that are accessible to people without a technical AI background.

Mike Kuniavsky and Nik Martelaro talk about how artificial intelligence can support product design.

We don’t see AI assistants monolithically. There isn’t going to be a single unified helper, or a chatbot, or a replacement junior engineer. Many AI design assistants will play different roles across the design process and will complement the skills of human designers. During early stages of a design process, AI assistants can provide quantitative research and collect customer preferences. As a design space is defined, AI assistants can help distill constraints and requirements. AI assistants can then generate many design options and help their human teammates curate and select from those options based on various simulations and evaluations. Powering this constellation of AI assistants is a digital thread that connects data from design, simulation, manufacturing, sales, and usage. Collectively, this computational design system can enable companies to develop more product variations with fewer resources.
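To make this constellation more concrete, here is a minimal sketch in Python of how assistant roles might hand work off through a shared digital thread. The record fields, assistant functions, and values are illustrative assumptions of our own, not a description of any particular product or API.

from dataclasses import dataclass, field

# A toy "digital thread": one shared record that every human and AI
# teammate reads from and appends to as the design progresses.
# (Illustrative only; the field names are assumptions, not a real schema.)
@dataclass
class DigitalThread:
    customer_research: list = field(default_factory=list)
    requirements: dict = field(default_factory=dict)
    design_options: list = field(default_factory=list)
    evaluations: dict = field(default_factory=dict)

def research_assistant(thread):
    # Discover: collect customer preference data (stubbed here).
    thread.customer_research.append({"need": "lighter housing", "importance": 0.8})

def constraint_assistant(thread):
    # Define: distill research into machine-usable requirements.
    thread.requirements["max_mass_kg"] = 1.2

def generative_assistant(thread):
    # Explore: propose candidate designs that reference the shared requirements.
    thread.design_options = [{"id": i, "mass_kg": 0.9 + 0.1 * i} for i in range(5)]

def evaluation_assistant(thread):
    # Evaluate: score each option against the shared requirements.
    limit = thread.requirements["max_mass_kg"]
    thread.evaluations = {d["id"]: d["mass_kg"] <= limit for d in thread.design_options}

thread = DigitalThread()
for assistant in (research_assistant, constraint_assistant,
                  generative_assistant, evaluation_assistant):
    assistant(thread)
print(thread.evaluations)  # which options pass the shared constraint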

How then can companies achieve a transition to AI assistant-driven computational design? From the production perspective, creating small-run product lines is feasible today with CNC machines and vast catalogs of off-the-shelf components that can be algorithmically reassembled. Once a design can be produced with a set of specific manufacturing machines and interchangeable parts, there are minimal barriers to making changes to that design later on.

The real challenge lies in design talent capacity. A design team’s capacity is linear: the number of designs you can produce is directly proportional to the number of designers you have. That capacity doesn’t scale economically without some form of parametric and generative system to support those designers in creating exponentially more designs without hiring exponentially more designers. Currently, though, many design tasks are computerized versions of what we could do by hand in the predigital era (although at far greater cost in time and effort). While computerization has greatly improved the efficiency of traditional practices, the next step in improving design team capacity will be a shift toward computational design practices. Computational design practices specifically leverage the combinatorial capacity of computing to generate designs, sort through libraries of potential components, and support the evaluation of design options at a speed that is not possible for human designers manipulating geometry by hand.
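As a toy illustration of that combinatorial capacity, the sketch below enumerates candidate designs from small component libraries and automatically filters them against a cost constraint. The component names, prices, and budget are invented for the example.

from itertools import product

# Hypothetical component libraries; real catalogs would hold thousands of parts.
enclosures  = [("steel", 12.0), ("aluminum", 18.0), ("polymer", 7.5)]
motors      = [("motor_a", 40.0), ("motor_b", 55.0)]
controllers = [("ctrl_basic", 15.0), ("ctrl_pro", 32.0)]

MAX_COST = 80.0  # assumed per-unit budget constraint

# Three small libraries already yield 3 * 2 * 2 = 12 candidate designs;
# adding one more ten-part library multiplies that to 120, and so on.
candidates = []
for (enc, c1), (mot, c2), (ctl, c3) in product(enclosures, motors, controllers):
    cost = c1 + c2 + c3
    if cost <= MAX_COST:  # automatic constraint check
        candidates.append({"parts": (enc, mot, ctl), "cost": cost})

# Sort the surviving combinations so humans only review the cheapest options.
for design in sorted(candidates, key=lambda d: d["cost"])[:3]:
    print(design)

In practice the libraries, constraints, and evaluation functions would be far richer, but the scaling argument is the same: each added library multiplies the space, while the cost of filtering and sorting stays with the machines.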

Computational Design Process


Inspired by the British Design Council’s Double Diamond process, we define a divergent-convergent computational design process with four primary phases, followed by combined human + AI teams. Discover-Define, the first divergent-convergent stage, focuses on determining the right product to design; Explore-Evaluate focuses on generating solutions and choosing an acceptable one to move into production, so the team can (to paraphrase Bill Buxton) both “get the right design” and “get the design right.” Throughout the process, human and AI teammates act on shared representations (digital twins) of the design. Much as a wall of post-its and a CAD model are different shared representations of a design idea in different phases of current design practice, we see a wide range of shared representations, and roles, throughout this process.

Discover—Quantitative and qualitative data is collected directly from customers and from the market at large. Intelligent research and data collection tools bring together data from human and AI researchers to form a set of data that is used throughout the rest of the design process.

Define—Customer and market data is used to create a set of design requirements and constraints that are suitable for use with a computational generative design tool. Software tools help to manage requirements and help teams negotiate priorities based on upstream customer needs and downstream production systems. The industrial design team creates a style grammar that the generative design system can use to create designs that meet the brand’s aesthetic standards.

Explore—The requirements and constraints are then fed into a generative design system, where a wide space of design alternatives is digitally created with varying levels of fitness to different objective attributes such as weight, power consumption, and manufacturing cost.

Evaluate—After generating a set of possible design options, human and AI teammates then evaluate the digital design options using a suite of simulation tools and design curation and selection systems. The qualitative selection process can be done by the design team or by end customers.

Where our computational design process differs from more traditional practice is that instead of focusing on generating a single canonical design, or a small number of design alternatives, we see teams defining parametric design spaces that delineate a large—perhaps infinite—set of acceptable design options as the final product. We also see automatic evaluation using simulation and quantitative analysis tools (which could include everything from heat distribution to manufacturing cost and compliance with brand guidelines) throughout, at a level not practical in current approaches.
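As a rough sketch of what defining a possibility space, rather than a single design, can look like in code, the example below samples a parametric design space and automatically scores the options. The parameter names, ranges, scoring weights, and stand-in “simulation” are illustrative assumptions of our own.

import random

# A parametric design space: ranges of parameters, not a single geometry.
# (Parameter names and bounds are illustrative assumptions.)
DESIGN_SPACE = {
    "wall_thickness_mm":  (1.0, 4.0),
    "fin_count":          (2, 12),
    "housing_volume_cm3": (150.0, 600.0),
}

def sample_design(space, rng):
    # Draw one concrete instantiation from the possibility space.
    return {
        "wall_thickness_mm": rng.uniform(*space["wall_thickness_mm"]),
        "fin_count": rng.randint(*space["fin_count"]),
        "housing_volume_cm3": rng.uniform(*space["housing_volume_cm3"]),
    }

def evaluate(design):
    # Stand-ins for simulation results: thicker walls and larger housings add
    # mass and cost, while more fins improve (toy) heat dissipation.
    mass = design["wall_thickness_mm"] * design["housing_volume_cm3"] * 0.01
    cooling = design["fin_count"] * 1.5
    cost = 2.0 * design["wall_thickness_mm"] + 0.05 * design["housing_volume_cm3"]
    # Higher cooling is better; lower mass and cost are better.
    return cooling - 0.5 * mass - 0.8 * cost

rng = random.Random(0)
options = [sample_design(DESIGN_SPACE, rng) for _ in range(1000)]
best = sorted(options, key=evaluate, reverse=True)[:5]
for design in best:
    print(round(evaluate(design), 2), design)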

Once the parametric space is defined, end customers or domain specialists select the most relevant instantiation of the design space, which is then produced in much smaller batches than current mass production methods. Perhaps end customers won’t even know they’re manipulating a parametric design space. Perhaps to them the experience is identical to shopping online today, selecting from a wide range of possibilities, but with the results tuned to their specific needs and preferences, just as media streaming services recommend long-tail content.

Additionally, because the designs remain digital throughout the process, the design team can circle back to earlier phases and update parameters based on new data from customers, markets, and manufacturers. With the work of analyzing vast amounts of data, creating geometry, and simulating models done by AI assistants, human team members focus more on understanding their customers, negotiating design requirements, defining a product’s overall style, and curating the best possible designs. This design system can allow for more rapid product iteration and a much wider range of potential product designs. We illustrate this with an example of a customer-facing design tool we prototyped that uses a generative design backend to allow car buyers to customize their wheel design based on their driving data, their aesthetic preferences and the brand’s style guidelines.

Example: Customer Data-Driven Generative Design for Car Wheels

Generative design can be a powerful tool for customizing product design, as Nik Martelaro and Mike Kuniavsky explain.

Car buyers increasingly want vehicles and goods that are customized to their preferences, while automotive companies need to ensure that any custom options meet safety, performance, and brand standards. A customer may love the look of a set of custom wheels, but if those wheels don’t meet safety standards or weigh so much that they hurt the car’s fuel economy, the company can’t sell them. Meanwhile, small-batch manufacturing and automation make it easier to move from design requirements to custom manufactured parts.


We built a demo of this approach by extending the generative design capabilities of Autodesk’s Fusion 360 with customer data. Our generative design pipeline begins with a model of a specific customer’s driving style collected with in-car sensors during a test drive. We also model the customer’s stylistic preferences by asking them to identify wheel styles they like, and develop algorithmic brand guidelines with the carmaker. The tool applies these requirements and preferences, along with safety and manufacturing constraints, as it automatically generates, refines, and evaluates dozens of potential design options. The designs that best match the design, performance, and price targets are presented back to the user. Once the user selects the generated design they like most, the final wheel design can be directly fed into a PLM system where engineers go through the final approvals and tweaks for the custom wheel before manufacturing. With generative design, companies can bring a high-end customization experience to all customers.
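The sketch below shows, in simplified Python, the overall shape of such a pipeline: a driving-style model, customer style preferences, and brand rules feed a generate-and-evaluate loop. Every function and value here is a hypothetical stand-in for illustration, not the Fusion 360 or PLM APIs used in the actual demo.

import random

def driving_style_from_sensors(telemetry):
    # Reduce in-car sensor data to a simple load factor.
    # (Assumption: harder cornering demands a stiffer wheel.)
    return {"load_factor": min(2.0, 1.0 + telemetry["avg_lateral_g"])}

def generate_wheel(style_pref, brand_rules, rng):
    # Propose one wheel within the brand's allowed style grammar.
    return {
        "spokes": rng.choice(brand_rules["allowed_spoke_counts"]),
        "rim_width_mm": rng.uniform(*brand_rules["rim_width_mm"]),
        "finish": style_pref["finish"],
    }

def passes_safety(wheel, driving):
    # Toy safety proxy: stiffer wheels (wider, more spokes) for harder drivers.
    stiffness = wheel["spokes"] * wheel["rim_width_mm"] / 100.0
    return stiffness >= driving["load_factor"]

def style_score(wheel, style_pref):
    # Prefer wheels close to the customer's preferred spoke count.
    return -abs(wheel["spokes"] - style_pref["preferred_spokes"])

telemetry = {"avg_lateral_g": 0.6}  # from the test drive (invented value)
style_pref = {"preferred_spokes": 7, "finish": "matte"}
brand_rules = {"allowed_spoke_counts": [5, 6, 7, 9], "rim_width_mm": (18.0, 26.0)}

driving = driving_style_from_sensors(telemetry)
rng = random.Random(1)
options = [generate_wheel(style_pref, brand_rules, rng) for _ in range(50)]
safe = [w for w in options if passes_safety(w, driving)]
shortlist = sorted(safe, key=lambda w: style_score(w, style_pref), reverse=True)[:3]
print(shortlist)  # candidates a customer, and then PLM engineers, would review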

Managing Data and Roles

As shown in our custom wheel example, enabling a generative design system requires implementing a digital thread that connects data across the design process, from car sensors to the manufacturing line, and makes it available to various CAD, PLM, and customer-facing systems. With this data pipeline in place, human and AI team members can focus on what they are best at while maintaining an open design space that allows for rapid iteration. We identified three principles that we apply throughout the computational design process:

1. Make data shareable across design processes with a digital thread.

2. Augment humans with AI systems that increase productivity and expand design options.

3. Develop usable digital twin representations that both humans and AI can work on.
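To picture the third principle, here is a minimal sketch of a digital twin record that both human and AI teammates could act on through the same interface, keeping every change traceable in the digital thread. The schema and field names are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TwinUpdate:
    author: str       # e.g. "human:industrial-design" or "ai:acoustic-sim"
    note: str
    timestamp: str

@dataclass
class SpeakerTwin:
    # Shared digital-twin representation of one speaker design.
    # (Fields are assumptions for illustration, not a real schema.)
    parameters: dict
    history: list = field(default_factory=list)

    def update(self, author, changes, note):
        # Human and AI teammates modify the twin through the same call,
        # so every change stays traceable in the digital thread.
        self.parameters.update(changes)
        self.history.append(
            TwinUpdate(author, note, datetime.now(timezone.utc).isoformat()))

twin = SpeakerTwin(parameters={"driver_mm": 40, "mic_range_m": 5.0})
twin.update("ai:acoustic-sim", {"mic_range_m": 6.5}, "retuned mic array for range")
twin.update("human:industrial-design", {"driver_mm": 45}, "larger driver for clarity")
print(twin.parameters, len(twin.history), "updates")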

As an example of applying these principles, we can think about a hypothetical design for a personalized smart speaker. Every house has a different acoustic and network environment, and every family has different aesthetic preferences and smart speaker usage behavior. During the Discover phase, human team members act as research directors to define research questions and conduct formative research directly with a wide range of customers. AI research assistants then work from these questions to run market analyses, conduct large-scale surveys, and collect current usage data. Data from all teammates is then managed by a research system that collects, stores, and analyzes it so the team can synthesize needs and preferences across a wide range of potential situations. One emergent need may be that multiple groups want their speaker to hear around corners, or to act as a Wi-Fi repeater.
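A rough sketch of the kind of research system we have in mind: findings from human and AI researchers land in one shared pool, tagged by source, so emergent needs can be surfaced across both. The data and tags below are invented for the example.

from collections import Counter

# Findings contributed by human researchers (interviews) and AI assistants
# (surveys, usage logs). All values are invented for illustration.
findings = [
    {"source": "human:interview", "need": "hear around corners"},
    {"source": "ai:survey",       "need": "hear around corners"},
    {"source": "ai:usage-logs",   "need": "act as wi-fi repeater"},
    {"source": "human:interview", "need": "act as wi-fi repeater"},
    {"source": "ai:survey",       "need": "smaller footprint"},
]

# The "research system": store everything together, then surface needs that
# show up across more than one research channel.
need_counts = Counter(f["need"] for f in findings)
sources_per_need = {
    need: {f["source"].split(":")[0] for f in findings if f["need"] == need}
    for need in need_counts
}
emergent = [need for need, sources in sources_per_need.items() if len(sources) > 1]
print("needs seen in both human and AI research:", emergent)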

Next, the product team works with an AI constraint manager to prioritize and negotiate different product features using downstream production data and upstream customer needs, say to prioritize adding a Wi-Fi repeater, increasing speaker clarity, and ensuring maximal microphone range. The resulting requirements are restructured as a set of constraints (for example, acoustic processing capability, BOM cost, exterior material manufacturability, and estimated assembly time) that become part of the speaker design parameters to be explored in later steps. The human designers then direct a generative design system to explore the parametric design space, generate a large number of possible designs, and use a suite of evaluation functions to prioritize designs that balance the constraints.
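One way to picture the restructured constraint set is as a small specification that downstream generative and evaluation steps can score candidates against. The constraint names, bounds, and weights below are illustrative assumptions, not real targets.

# Negotiated speaker constraints, restructured so a generative system can use
# them. Bounds and weights are illustrative assumptions.
CONSTRAINTS = {
    "bom_cost_usd":     {"max": 35.0, "weight": 1.0},
    "assembly_minutes": {"max": 8.0,  "weight": 0.5},
    "mic_range_m":      {"min": 6.0,  "weight": 2.0},  # prioritized need
    "wifi_repeater":    {"min": 1,    "weight": 2.0},  # 1 = feature included
}

def violation(design, name, spec):
    # How far a candidate design misses one constraint (0.0 means satisfied).
    value = design[name]
    if "max" in spec and value > spec["max"]:
        return value - spec["max"]
    if "min" in spec and value < spec["min"]:
        return spec["min"] - value
    return 0.0

def penalty(design):
    # Weighted sum of violations; lower is better.
    return sum(spec["weight"] * violation(design, name, spec)
               for name, spec in CONSTRAINTS.items())

candidates = [
    {"bom_cost_usd": 33.0, "assembly_minutes": 7.0, "mic_range_m": 6.5, "wifi_repeater": 1},
    {"bom_cost_usd": 29.0, "assembly_minutes": 9.5, "mic_range_m": 5.0, "wifi_repeater": 0},
]
for design in sorted(candidates, key=penalty):
    print(round(penalty(design), 2), design)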

Finally, the teams examine simulations of digital home environments and identify a product subspace in which any configuration an end user may make still meets the requirements of both the users and the company. With this set of feasible designs in hand (imagine one design process generating 20 to 50 different design families), the end customer selects and then configures one of the designs to fit their specific preferences. Because the process is fully digital, changes in markets, manufacturing techniques, supply chains, and user preferences can be incorporated as new data to update the design families over time. This digital adaptability creates a living product line that continually changes to meet the needs of the customer while managing the constraints of product manufacturing.
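The subspace check in this step amounts to a “for all” test: a design family is kept only if every configuration a user might choose meets the requirements in every simulated environment. A toy version, with invented design families, configurations, and environments, follows.

from itertools import product

# Toy design families, user-adjustable configurations, and simulated home
# environments. All values are invented for illustration.
families = {
    "family_a": {"max_volume_db": 85, "wifi_repeater": True},
    "family_b": {"max_volume_db": 78, "wifi_repeater": False},
}
user_configs = [{"volume_db": 70}, {"volume_db": 82}]    # settings users may choose
environments = [{"ambient_db": 45}, {"ambient_db": 60}]  # simulated homes

def meets_requirements(family, config, env):
    # Toy checks: the speaker must be audible over ambient noise without
    # exceeding its hardware limit, and the Wi-Fi repeater need must hold.
    audible = config["volume_db"] - env["ambient_db"] >= 10
    within_limit = config["volume_db"] <= family["max_volume_db"]
    return audible and within_limit and family["wifi_repeater"]

feasible = [
    name for name, family in families.items()
    if all(meets_requirements(family, cfg, env)
           for cfg, env in product(user_configs, environments))
]
print("design families safe to expose to customers:", feasible)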

Conclusion

We recognize that the picture of AI-driven parametric design, abandoning the long-established practice of designers creating detailed geometric descriptions of singular objects, is challenging. Our goal is not to throw out design practice or familiar tools and replace them (or designers!) with software; we’ve seen that fail time and again. We aim to expand what design and engineering can be, and what designers and engineers are capable of, when we bring the full power of data and computation into every stage of the design process, from conception to recycling.

The future of product design involves close collaboration between human designers and AI, according to Mike Kuniavsky and Nik Martelaro.

As designers, engineers, and AI researchers, we personally know how data, AI assistants, and the rapid iteration they enable create and adapt the digital products we’re all familiar with (our apps, sites, and software), and we have begun a concerted program to bring those practices, and versions of those tools, to the design of physical products. In our daily practice as researchers, we see firsthand how various forms of AI (from computer vision and predictive analytics to planning and reasoning) are becoming more accessible and powerful. We believe these different forms of AI can dramatically improve the designer’s ability to create better products for increasingly smaller audiences. We also recognize that there is a significant gap between what we can see is possible and what has been done, so we created this computational design process structure to focus our research efforts. We intend to pursue the development of AI assistants across a wide range of design tasks, using a broad palette of data and AI techniques. If you would like to join us in our research, to collaborate, or to use this framework in your own research and practice, please get in touch.

Mike Kuniavsky leads the Digital Experiences team at Accenture Labs. His team works to invent new user-centered products that combine IoT, AI, and smart materials. His current research is centered on AI-powered design assistants.

Nikolas Martelaro is an assistant professor at Carnegie Mellon University. His research group looks at how to augment designers’ capabilities with new design tools and processes.

Nick Akiona is a researcher at Accenture Labs focused on how technologies like robotics and generative design will impact companies and allow them to operate more efficiently.