Autonomous Geometry Processing Using Machine Learning and Forge

Overview

This article details an autonomous geometry processing application CCTech is building, which uses machine learning algorithms to extract intelligent information from STL meshes. This goal is being achieved in four steps:

  • First, segment the STL mesh into six basic surface types: cylindrical, spherical, planar, conical, B-spline, and torus
  • Next, group these detected surfaces and try to identify high-level features such as holes, taper holes, chamfers, fillets
  • Further on, use this feature-level information to understand parts and assemblies
  • Finally, an AI engine will identify various types of parts and tag them according to their industry

A micro web service has been created that takes STL files as input and gives a fully segmented mesh as output. Another web app will facilitate the object detection functionality from 3D CAD files.

Introduction

Most of our design techniques as engineers date back thousands of years. Perspective drawing was invented in the 1300s, descriptive geometry in 1765, orthographic projection in 1770, 2D CAD in the 1970s, and 3D CAD in the 1980s. As we can see, 3D CAD software came to the market only a few decades ago, while our mainstream manufacturing and architectural industries have been using older forms of engineering drawings for over a century.

The products manufactured and the constructions completed before the wide-scale adoption of 3D CAD are widely in use today. The designs of most of these wonderful creations were either done on paper or have since been digitized into 2D CAD drawing files. We know that in today’s world, 3D CAD files are of utmost importance. They help to carry out renovation work, to perform simulations, and to improve the designs. Hence, it becomes highly important for us to convert these designs into 3D CAD formats.

There are millions of such designs in the world that need to be converted into 3D CAD files. However, organizations can’t afford to deploy CAD designers to carry out this mammoth task manually. If done so, it might take ages to complete. We need some sort of software automation or artificial intelligence to swiftly get this job done.

Let’s first discuss solutions to bring the non-digital design data into digital formats:

  • Designs which are on paper can first be scanned, and then software algorithms or AI can be applied to generate their 3D CAD models

  • 3D scanners can be used to scan and create point data cloud files from the manufactured products or architectural buildings which do not have design data in any format at all

Once we have the design data in any digital format, we can work on solutions to understand the data and create 3D CAD models containing all feature-level information inside them. But what exactly makes 3D CAD models more useful than other formats for storing design data? It is the rich information about features, and the connectivity between those features, present in 3D CAD files that makes them so important, whereas raw files in which design data is stored may carry much less information.

For example, in STL files, the shape of the 3D geometry is created using many triangles. In essence, the file contains only the 3D coordinate values of every vertex of these triangles. You may have figured out by now that the automation solution for generating a feature-rich CAD model will vary largely from one raw design data format to another.
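
To make this concrete, here is a minimal sketch (assuming an ASCII STL and a hypothetical file name, part.stl) of reading the raw triangle data; it shows that nothing beyond vertex coordinates is available in the file:

    # Minimal sketch: read vertex coordinates from an ASCII STL file.
    # "part.stl" is a hypothetical file name; a binary STL would need struct unpacking instead.
    def read_ascii_stl(path):
        triangles = []   # each triangle is a list of three (x, y, z) tuples
        current = []
        with open(path) as f:
            for line in f:
                tokens = line.split()
                if tokens and tokens[0] == "vertex":
                    current.append(tuple(float(v) for v in tokens[1:4]))
                    if len(current) == 3:   # three vertices complete one triangle
                        triangles.append(current)
                        current = []
        return triangles

    triangles = read_ascii_stl("part.stl")
    print(len(triangles), "triangles; no feature or connectivity information beyond raw coordinates")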

However, in this era of automation and artificial intelligence, we should not stop with the generation of 3D CAD models from this low-level design information. Instead, we should bring an aspect of intelligence into the 3D CAD models, such as detecting contextual names for each part present in a 3D CAD file. For example, there should be an AI engine which can read a 3D CAD file and identify the names of all objects present in that CAD model.

Let’s say I upload a CAD model of a house with all its furniture; the AI engine should be able to identify objects like walls, sofa, table, chairs, TV, fridge, etc. Imagine if we start to have 3D CAD models, plus this type of contextual and intelligent information, for every single design ever made in this world; a lot of possibilities open up for such intelligent design data. So, let’s make the design information smarter.

Based on our market understanding, we realized that one of the major needs is to process the 3D scanner data of a real-world geometry and generate a 3D model out of it. We also found that most of the design data available in digital format are in STL or a few other similar formats. Even the output of a 3D scanner (i.e., point cloud data) can be easily transformed into formats like that of STL. In our current work, we start with a target to convert STL files into feature-rich 3D CAD models. In order to achieve this, we decided to take a phased approach.

  • Phase 1.1 – Detect and group triangles that belong to a primitive surface type like cylindrical surface, spherical surface, planar surface, etc.

  • Phase 1.2 – Detect and group a collection of primitive surface types (detected in Phase 1.1) into a feature; e.g., primitive features like chamfer or fillet, or engineering features like hole, taper hole, threading for screw, flange in pipe, etc.

  • Phase 1.3 – Use all the information collected from the previous phases to generate the feature-rich 3D CAD model of the given STL geometry

  • Phase 2 – Create an AI-powered solution that can identify names of 3D objects from CAD files, like that of object detection from images

This article limits its scope to Phases 1.1 and 2 only: identifying primitive surface types from an STL geometry file and detecting objects in 3D CAD models.

Our first attempt was to create a heuristic algorithm with a lot of rules that can classify every triangle of a given STL file into the following surface types:

  • Cylindrical surface

  • Spherical surface

  • Torus surface

  • Conical surface

  • Planar surface

  • Any other generic B-spline surface

It gave good results on our collection of test geometries. But as we kept using it on more and more professional STL files, the accuracy started to drop. We understood that the variations in triangulation from case to case can be enormous, and writing rules for each such variation is simply out of the scope of a heuristic approach to this problem. The lesson we learned from our very first attempt was that we needed to find a solution using artificial intelligence (AI) and machine learning (ML).
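
To give a flavour of the rule-based approach (and of why it becomes brittle), here is a minimal sketch, assuming per-triangle unit normals have already been computed, that groups triangles into a "planar" patch whenever their normals agree within a hand-tuned tolerance; every tolerance like this is another rule to maintain:

    import numpy as np

    # Sketch of one heuristic rule: triangles whose unit normals are nearly parallel
    # are assigned to the same planar patch. ANGLE_TOL must be tuned per data set,
    # which is exactly where rule-based classification starts to break down.
    ANGLE_TOL = np.cos(np.radians(2.0))   # normals within ~2 degrees count as "the same plane"

    def group_planar(normals):
        """normals: (N, 3) array of unit normals; returns a patch label per triangle."""
        labels = -np.ones(len(normals), dtype=int)
        next_label = 0
        for i, n in enumerate(normals):
            if labels[i] != -1:
                continue
            similar = normals @ n > ANGLE_TOL            # triangles with near-identical normals
            labels[np.logical_and(similar, labels == -1)] = next_label
            next_label += 1
        return labels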

Phase 1: Problem Definition

Detect and group triangles from an STL file that belong to a primitive surface type.

[Figure: input STL geometry (left) and segmented output with triangles classified by surface type (right)]

An AI-powered autonomous web application is to be created which will accept an STL file of a 3D geometry (left) and generate a new OBJ file (right) in which all the triangles have been classified into their primitive surface types.

Phase 2: Problem Definition

Identify names of 3D objects from CAD files.

[Figure: object detection from a 3D CAD model]

An AI-powered autonomous web application is to be created which will accept a 3D CAD file and identify each object inside it.

Existing Experimentation

A review of the available literature helped us understand that data scientists around the globe have done commendable work in decomposing complex 3D CAD models into simpler sub-objects. You may be aware of the term “3D semantic segmentation,” which can be described as splitting a 3D geometry based on its connectivity.

[Images: examples of 3D semantic segmentation. Images courtesy of Shu, Z., et al., and Efi Arazi School of Computer Science.]

There is a growing community of researchers in Geometric Deep Learning coming together to facilitate advancements in geometry processing using artificial intelligence, and many research articles and software libraries are emerging in this area. We were inspired by this movement and started applying our knowledge of geometry and AI together.

Challenges in Applying AI to Geometrical Data

Most machine learning is done on structured data sets. We can call a data set structured when every data point in it can be presented with the same number of properties. Let's say we have an image data set of “cats” versus “non-cats.”

And every image in the data set has a resolution of 512 × 768 pixels. So, each image can be described using a fixed number of properties:

512 (pixels horizontally) × 768 (pixels vertically) × 3 (RGB values of each pixel) = 1,179,648 properties

So, the input to the machine learning system will be a 2D matrix of RGB values along with a binary output of 0 or 1. If the image is of a cat, then the output is set to 1; otherwise it is set to 0.
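
As a minimal sketch of that fixed-length encoding (the array below is just a placeholder for a real photo):

    import numpy as np

    # A 512 x 768 RGB image held as an array of pixel intensities.
    image = np.zeros((512, 768, 3), dtype=np.uint8)   # placeholder for a real photo

    # Every image flattens to a vector of the same length,
    # which is what makes the data set "structured".
    x = image.reshape(-1)
    print(x.shape)   # (1179648,) = 512 * 768 * 3

    y = 1   # label: 1 for "cat", 0 for "non-cat"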

Let’s take another example of a simple two-dimensional classification problem. Here all data points are specified using X and Y coordinates. And hence this data set is also an example of structured data.

Today in the machine learning field, we have achieved tremendous success with data sets consisting of images, text, audio, and numbers. All of these are structured data. However, few real-world problems are inherently structured in nature. It is the work of data scientists to convert the unstructured data collected from real-world problems into a structured form so that machine learning algorithms can be applied to it.

In our case, this challenge popped up as the first hurdle in the project. What we figured out was that, just like images are built out of pixels, 3D meshes are built from triangles, edges, and vertices. If we can form structured data out of images, then we can try the same for meshes as well.

We brainstormed for more than a month and came up with an innovative strategy to encode STL data into a machine learnable format.

Phase 1 Work

Data Set Preparation

Machine learning depends heavily on data. If you can’t make sense of your data records, a machine learning model built on them will be nearly useless or perhaps even harmful. That’s why data set preparation is such an important step in the machine learning process. In a nutshell, data preparation is a set of procedures that helps make your data set more suitable for machine learning.

The process for getting data ready for a machine learning algorithm can be summarized in the following steps:

Data Set Collection

Data sets are an integral part of the field of machine learning. Having rich data helps the learning algorithms learn in a more precise manner and makes the ML model more reliable. High-quality labeled training data sets for supervised and semi-supervised learning algorithms are usually difficult and expensive to produce.

In our study, we built our own 3D CAD model database as a test bed for the research. All the CAD models were collected from several mechanical manufacturing enterprises. The models were designed by engineers with mainstream commercial CAD toolkits such as Autodesk Inventor. There are a total of 1,500 models belonging to several generic categories: engines, valves, gears, screws, nuts, wheels, keys, bearing houses, flanges, washers, etc.

These 1,500 models contain a total of 37 million triangles, which are split into three parts: train, validation, and test. The training set and validation set are used to perform model selection and hyperparameter selection, whereas the test set is used to evaluate the final generalization error and compare different classifiers in an unbiased way. The figure below shows a portion of the 3D models in our data set.

[Figure: a portion of the 3D models in the data set]
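
As a rough sketch of how such a split might be carried out at the model level (the 70/15/15 proportions below are an illustrative assumption, not the proportions used in the study):

    from sklearn.model_selection import train_test_split

    # Illustrative split of 1,500 model IDs into train / validation / test sets.
    # Splitting by model (rather than by triangle) keeps all triangles of one part in the same set.
    model_ids = list(range(1500))

    train_ids, rest_ids = train_test_split(model_ids, test_size=0.30, random_state=42)
    val_ids, test_ids = train_test_split(rest_ids, test_size=0.50, random_state=42)

    print(len(train_ids), len(val_ids), len(test_ids))   # 1050 225 225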

Triangle labels in a complete data set are categorized as follows:

[Figure: categories of triangle labels in the data set]

Feature Generation

Feature generation is also known as feature construction, feature extraction, or feature engineering. There are different interpretations of the terms feature generation, construction, extraction, and engineering. Some nearly equivalent, yet differing definitions for these terms are:

  • Construction of features from raw data

  • Creating a mapping to convert original features to new features

  • Creating new features from one or multiple features

It is a process of using domain knowledge of the data to create features that make machine learning algorithms work. Feature engineering is fundamental to the application of machine learning and is both difficult and expensive.

A feature is an attribute or property shared by all of the independent units on which analysis or prediction is to be done. Any attribute could be a feature, as long as it is useful to the model.

According to the specific nature of the problem domain, selecting features that have obvious distinguishable meaning is a critical step in the pipeline of our approach. The features should be invariant to irrelevant deformation, insensitive to noise, and very effective for distinguishing different categories of CAD models.

With the growth of data, it has become clear that the input given to a learning algorithm can have a significant impact on its performance. Therefore, data pre-processing is becoming more and more important. Data pre-processing is a collective name for all methods that aim to ensure the quality of the data. In our data pre-processing we focus on two methods: feature generation and feature selection.

In our case we have only the coordinates of points in three-dimensional space. From these coordinates, certain triangle properties can be derived, such as area, aspect ratio, edge lengths, circumradius, distance of the centroid from all three vertices, internal angles, the triangle normal, valency of the triangle, type of triangle, and many others.

We have also derived features from the above properties, because derived features are a way to inject expert knowledge into the training process and so accelerate it. Some of our derived features are: ratios of angles between normals, ratios of edge lengths, lower and upper quartile values of numerical parameters, distances of the centroid from other triangles, triangle illumination, etc.
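
As a minimal sketch of this kind of feature generation (the project's actual feature set is considerably richer), assuming each triangle is given as three 3D vertices:

    import numpy as np

    def triangle_features(v0, v1, v2):
        """Compute a few simple per-triangle properties from three 3D vertices."""
        v0, v1, v2 = map(np.asarray, (v0, v1, v2))
        edges = [v1 - v0, v2 - v1, v0 - v2]
        lengths = np.array([np.linalg.norm(e) for e in edges])
        cross = np.cross(v1 - v0, v2 - v0)
        area = 0.5 * np.linalg.norm(cross)
        normal = cross / (np.linalg.norm(cross) + 1e-12)   # unit normal
        centroid = (v0 + v1 + v2) / 3.0
        aspect_ratio = lengths.max() / lengths.min()        # one simple derived feature
        return {
            "edge_lengths": lengths,
            "area": area,
            "normal": normal,
            "centroid": centroid,
            "aspect_ratio": aspect_ratio,
        }

    print(triangle_features((0, 0, 0), (1, 0, 0), (0, 1, 0)))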

We wanted to give our ML algorithm knowledge of visual information of all triangles that it will try to categorize. And solid angle was our choice to make that happen. A solid angle is a measure of the amount of the field of view from some particular point that a given object covers. That is, it is a measure of how large the object appears to an observer looking from that point.
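
For a single triangle seen from an observer point, this quantity can be computed with the Van Oosterom-Strackee formula; a minimal sketch follows (the observer point used in the actual pipeline is not specified in this article, so it is passed in as a parameter):

    import numpy as np

    def solid_angle(observer, v0, v1, v2):
        """Solid angle subtended by triangle (v0, v1, v2) at the observer point,
        using the Van Oosterom-Strackee formula."""
        r1, r2, r3 = (np.asarray(v, dtype=float) - np.asarray(observer, dtype=float)
                      for v in (v0, v1, v2))
        l1, l2, l3 = np.linalg.norm(r1), np.linalg.norm(r2), np.linalg.norm(r3)
        numerator = np.dot(r1, np.cross(r2, r3))
        denominator = (l1 * l2 * l3 + np.dot(r1, r2) * l3
                       + np.dot(r1, r3) * l2 + np.dot(r2, r3) * l1)
        return abs(2.0 * np.arctan2(numerator, denominator))

    # Example: a unit right triangle viewed from a point above it.
    print(solid_angle((0, 0, 1), (0, 0, 0), (1, 0, 0), (0, 1, 0)))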

The total count of features added up to 100.

Data Analysis and Visualization

After getting the data set, the next step in the model building workflow is always data analysis. We analyze the data using general statistical tools to obtain numerical values such as count, maximum, minimum, mean, median, standard deviation, and quartile range. These numerical values provide a univariate analysis of the data, which is sometimes tough to interpret from the values alone. Analyzing the data is easier with data visualization tools, as some tools help us relate various features within the data.

Understanding insights from raw files becomes more difficult as the size of the data set increases. For data visualization, we have used different graphs and plots to ease the discovery of patterns in our data. This helped us identify areas that need attention (outliers, for example) and understand the factors that have the most impact on our results. All the data is transformed into some form of plot and analyzed further.
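
A minimal sketch of that kind of univariate analysis, assuming the per-triangle features have been collected into a pandas DataFrame (the file name and column names below are hypothetical):

    import pandas as pd
    import matplotlib.pyplot as plt

    # One row per triangle, one column per generated feature (hypothetical file name).
    features = pd.read_csv("triangle_features.csv")

    # Count, mean, std, min, quartiles, and max for every feature in one call.
    print(features.describe())

    # Quick visual checks: distributions and pairwise relationships
    # ("area" and "aspect_ratio" are assumed column names).
    features["area"].hist(bins=50)
    features.plot.scatter(x="area", y="aspect_ratio")
    plt.show()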

Sandip Jadhav is a successful entrepreneur in the CAD/CAE space. He has co-founded CCTech, Zeus Numerix, Adaptive 3D Technologies, LearnCAx, and recently simulationHub, a cloud-based fluid flow simulation web service. Sandip has led several product development teams in conceptualizing, designing software, and implementation of apps in CAD and simulation space. Sandip is a passionate software developer and loves to tinker with technology.

Vijay Mali is a technology explorer, a visionary, and a product maker. As CTO of the company, he plays a critical role in deciding the technology vision of the company. He also leads the center of excellence (CoE) department at CCTech, which is responsible for exploring new technologies and building a strategy to bring them to everyday designers.

Nem Kumar is director of consulting at CCTech and has been doing product development with companies from the Manufacturing, Oil & Gas, and AEC domains. He has vast experience in desktop as well as cloud software development involving CAD, CAM, complex visualization, mathematics, and geometric algorithms. Nem has been actively working with Autodesk Vertical, Research, and Fusion 360 teams. His current areas of interest are generative modeling and machine learning.

Want more? Download the full class handout to read on.