
OpenAI releases Point-E, an AI that generates 3D models

The next breakthrough to take the AI world by storm might be 3D model generators. This week, OpenAI open sourced Point-E, a machine learning system that creates a 3D object given a text prompt. According to a paper published alongside the code base, Point-E can produce 3D models in one to two minutes on a single Nvidia V100 GPU.

Point-E doesn’t create 3D objects in the traditional sense. Rather, it generates point clouds, or discrete sets of data points in space that represent a 3D shape — hence the cheeky abbreviation. (The “E” in Point-E is short for “efficiency,” because it’s ostensibly faster than previous 3D object generation approaches.) Point clouds are easier to synthesize from a computational standpoint, but they don’t capture an object’s fine-grained shape or texture — a key limitation of Point-E currently.
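The point-cloud representation described above can be sketched minimally. This is a hypothetical illustration of the general idea — an array of XYZ coordinates, optionally paired with per-point colors — not Point-E's actual data format:

```python
import numpy as np

# A point cloud is a discrete set of points in 3D space.
# Point-E outputs *colored* point clouds, so each point
# also carries an RGB value alongside its coordinates.
rng = np.random.default_rng(0)
num_points = 1024

xyz = rng.uniform(-1.0, 1.0, size=(num_points, 3))  # 3D coordinates
rgb = rng.uniform(0.0, 1.0, size=(num_points, 3))   # per-point color

cloud = np.concatenate([xyz, rgb], axis=1)
print(cloud.shape)  # (1024, 6)
```

Because a point cloud is just a flat list of points with no connectivity between them, it is computationally cheap to generate — but, as noted above, it also carries no surface information, which is why fine shape and texture are lost.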

To get around this limitation, the Point-E team trained an additional AI system to convert Point-E’s point clouds to meshes. (Meshes — the collections of vertices, edges and faces that define an object — are commonly used in 3D modeling and design.) But they note in the paper that the model can sometimes miss certain parts of objects, resulting in blocky or distorted shapes.
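The mesh representation defined in the parenthetical above can be made concrete with a toy example — a unit cube, purely for illustration and unrelated to Point-E's converter model:

```python
# A mesh is a collection of vertices, edges, and faces.
# Minimal example: a unit cube with quadrilateral faces.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
]
faces = [
    (0, 1, 2, 3), (4, 5, 6, 7),  # bottom, top
    (0, 1, 5, 4), (2, 3, 7, 6),  # front, back
    (1, 2, 6, 5), (0, 3, 7, 4),  # right, left
]

# Edges are implied by the faces: each consecutive vertex
# pair around a face is one edge (stored in sorted order
# so shared edges are deduplicated).
edges = set()
for f in faces:
    for i in range(len(f)):
        a, b = f[i], f[(i + 1) % len(f)]
        edges.add((min(a, b), max(a, b)))

# For a closed, genus-0 mesh, Euler's formula holds: V - E + F == 2.
print(len(vertices), len(edges), len(faces))  # 8 12 6
```

Unlike a point cloud, the faces define a continuous surface — which is why conversion to a mesh is needed before the output is useful for 3D printing or rendering, and why gaps in the point cloud can surface as the blocky or distorted shapes the paper describes.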

Image Credits: OpenAI

Outside of the mesh-generating model, which stands alone, Point-E consists of two models: a text-to-image model and an image-to-3D model. The text-to-image model, similar to generative art systems like OpenAI’s own DALL-E 2 and Stable Diffusion, was trained on labeled images to understand the associations between words and visual concepts. The image-to-3D model, on the other hand, was fed a set of images paired with 3D objects so that it learned to effectively translate between the two.

When given a text prompt — for example, “a 3D printable gear, a single gear 3 inches in diameter and half inch thick” — Point-E’s text-to-image model generates a synthetic rendered object that’s fed to the image-to-3D model, which then generates a point cloud.
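The two-stage pipeline described above can be sketched with stand-in functions. The names and signatures here are hypothetical illustrations of the data flow, not the real API from OpenAI's code base:

```python
# Hypothetical sketch of Point-E's two-stage pipeline.
# Both functions are illustrative stubs, not the actual models.

def text_to_image(prompt: str) -> str:
    """Stage 1: a diffusion model renders a synthetic image
    of the object described by the prompt."""
    return f"synthetic render of: {prompt}"

def image_to_point_cloud(image: str, num_points: int = 1024) -> list:
    """Stage 2: a second diffusion model produces a colored
    point cloud conditioned on the rendered image."""
    return [(0.0, 0.0, 0.0)] * num_points  # placeholder points

prompt = "a 3D printable gear, 3 inches in diameter and half inch thick"
image = text_to_image(prompt)
cloud = image_to_point_cloud(image)
print(len(cloud))  # 1024
```

The key design point is that the text prompt never conditions the 3D model directly: the intermediate synthetic image is the bridge, which is also why a misread image can yield a shape that no longer matches the original prompt.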

After training the models on a dataset of “several million” 3D objects and associated metadata, Point-E could produce colored point clouds that frequently matched text prompts, the OpenAI researchers say. It’s not perfect — Point-E’s image-to-3D model sometimes fails to understand the image from the text-to-image model, resulting in a shape that doesn’t match the text prompt. Still, it’s orders of magnitude faster than the previous state-of-the-art — at least according to the OpenAI team.

Converting the Point-E point clouds into meshes. Image Credits: OpenAI

“While our method performs worse on this evaluation than state-of-the-art techniques, it produces samples in a small fraction of the time,” they wrote in the paper. “This could make it more practical for certain applications, or could allow for the discovery of higher-quality 3D objects.”

What are the applications, exactly? Well, the OpenAI researchers point out that Point-E’s point clouds could be used to fabricate real-world objects, for example through 3D printing. With the additional mesh-converting model, the system could — once it’s a little more polished — also find its way into game and animation development workflows.

OpenAI might be the latest company to jump into the 3D object generator fray, but — as alluded to earlier — it certainly isn’t the first. Earlier this year, Google released DreamFusion, an expanded version of Dream Fields, a generative 3D system that the company unveiled back in 2021. Unlike Dream Fields, DreamFusion requires no 3D training data, meaning that it can generate 3D representations of objects using only a pretrained 2D image model.

While all eyes are on 2D art generators at present, model-synthesizing AI could be the next big industry disruptor. 3D models are widely used in film and TV, interior design, architecture and various science fields. Architectural firms use them to demo proposed buildings and landscapes, for example, while engineers leverage models as designs of new devices, vehicles and structures.

Point-E failure cases. Image Credits: OpenAI

3D models usually take a while to craft, though — anywhere from several hours to several days. AI like Point-E could change that if the kinks are someday worked out, and make OpenAI a respectable profit doing so.

The question is what sort of intellectual property disputes might arise in time. There’s a large market for 3D models, with several online marketplaces including CGStudio and CreativeMarket allowing artists to sell content they’ve created. If Point-E catches on and its models make their way onto the marketplaces, model artists might protest, pointing to evidence that modern generative AI borrows heavily from its training data — existing 3D models, in Point-E’s case. Like DALL-E 2, Point-E doesn’t credit or cite any of the artists that might’ve influenced its generations.

But OpenAI’s leaving that issue for another day. Neither the Point-E paper nor its GitHub page makes any mention of copyright.

To their credit, the researchers do mention that they expect Point-E to suffer from other problems, like biases inherited from the training data and a lack of safeguards around models that might be used to create “dangerous objects.” That’s perhaps why they’re careful to characterize Point-E as a “starting point” that they hope will inspire “further work” in the field of text-to-3D synthesis.

OpenAI releases Point-E, an AI that generates 3D models by Kyle Wiggers originally published on TechCrunch

