AI-generated 3D models are OpenAI's latest venture. In this article, we look at what the company has released and why it matters for anyone trying to keep up with the technology.
The next breakthrough to take the AI world by storm could be 3D model generators. This week, OpenAI open-sourced Point-E, a machine learning system that generates a 3D object from a text prompt.
According to a paper published alongside the codebase, Point-E can produce 3D models in one to two minutes on a single Nvidia V100 GPU.
Point-E does not create 3D objects in the traditional sense. Instead, it creates point clouds, or discrete sets of data points in space that represent a 3D shape, hence the name.
(The “E” in Point-E stands for “efficiency,” ostensibly because it’s faster than previous methods of generating 3D objects.)
Point clouds are easy to synthesize from a computational point of view, but they do not capture an object's fine-grained shape or texture, which is currently one of Point-E's main limitations.
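To make the idea concrete, a colored point cloud is little more than an array of points, each carrying a position and a color. The sketch below (plain NumPy, not Point-E's actual data structures; the 1,024-point count is an assumption chosen for scale) shows why such clouds are computationally cheap to work with:

```python
import numpy as np

# A point cloud: N discrete points in space, each with an (x, y, z)
# position and an (r, g, b) color in [0, 1].
rng = np.random.default_rng(0)
positions = rng.uniform(-1.0, 1.0, size=(1024, 3))   # xyz coordinates
colors = rng.uniform(0.0, 1.0, size=(1024, 3))       # rgb values
cloud = np.concatenate([positions, colors], axis=1)  # shape (1024, 6)

# Point clouds are cheap to manipulate: translating the whole shape
# is a single vector addition on the position columns. But nothing
# here records surfaces or texture, which is exactly the limitation
# described above.
cloud[:, :3] += np.array([0.0, 0.0, 0.5])
print(cloud.shape)  # (1024, 6)
```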
To get around this limitation, the Point-E team trained an additional AI system to convert Point-E's point clouds into meshes.
Meshes—the collections of vertices, edges, and faces that define an object—are commonly used in 3D modeling and design.
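As a minimal illustration of that vertex/edge/face structure (plain Python, unrelated to Point-E's own code), a tetrahedron can be stored as four vertices plus four triangular faces, with the edges recoverable from the face connectivity:

```python
# A triangle mesh: vertices are 3D points, faces index into the
# vertex list, and edges fall out of the face connectivity.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

def edges_of(faces):
    """Collect the unique undirected edges of a triangle mesh."""
    edges = set()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))
    return sorted(edges)

# A closed tetrahedron has 4 vertices, 6 edges, 4 faces, and
# satisfies Euler's formula V - E + F = 2.
print(len(vertices), len(edges_of(faces)), len(faces))  # 4 6 4
```

Unlike a point cloud, this representation records surfaces explicitly, which is why meshes are the standard currency of 3D modeling and design tools.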
But the team notes in the paper that this conversion model can sometimes miss certain parts of objects, resulting in blocky or distorted shapes.
Apart from the mesh-generating model, which stands on its own, Point-E consists of two models: a text-to-image model and an image-to-3D model.
A text-to-image model, similar to generative art systems such as OpenAI’s DALL-E 2 and Stable Diffusion, was trained on labeled images to understand associations between words and visual concepts.
On the other hand, the image-to-3D model was fed a set of images paired with 3D objects so it would learn to effectively translate between the two.
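The division of labor between the two models can be sketched as a simple two-stage pipeline. The stub functions below are hypothetical placeholders standing in for the real diffusion models, included only to show how the stages compose:

```python
from typing import List, Tuple

Point = Tuple[float, float, float]

def text_to_image(prompt: str) -> str:
    """Stand-in for the text-to-image model: in Point-E this stage
    produces a synthetic rendered view of the described object."""
    return f"<rendered view of: {prompt}>"

def image_to_point_cloud(image: str, n_points: int = 1024) -> List[Point]:
    """Stand-in for the image-to-3D model: conditions on the rendered
    view and emits a point cloud (here, just dummy points)."""
    return [(float(i), 0.0, 0.0) for i in range(n_points)]

def point_e_pipeline(prompt: str) -> List[Point]:
    # Stage 1: text prompt -> synthetic rendered image.
    image = text_to_image(prompt)
    # Stage 2: rendered image -> 3D point cloud.
    return image_to_point_cloud(image)

cloud = point_e_pipeline("a red traffic cone")
print(len(cloud))  # 1024
```

The key design point is that neither stage needs to solve text-to-3D directly: the first model only has to understand language and appearance, and the second only has to lift a single rendered view into 3D.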
When given a text prompt (for example, "a 3D printable gear, a single gear 3 inches in diameter and half an inch thick"), Point-E's text-to-image model first generates a synthetic rendered object. That rendering is fed to the image-to-3D model, which then produces a point cloud.
After training the models on a dataset of several million 3D objects and their associated metadata, Point-E can produce colored point clouds that frequently match text prompts, the OpenAI researchers say.
It's not perfect: Point-E's image-to-3D model sometimes fails to understand the image from the text-to-image model, resulting in a shape that doesn't match the text prompt. Still, it is far faster than the previous state of the art, at least according to the OpenAI team.
“While our method performs worse in this evaluation than state-of-the-art techniques, it produces samples in a small fraction of the time,” they wrote in the paper. “This could make it more practical for certain applications, or could allow for the discovery of higher-quality 3D objects.”
What are the applications?
What exactly are the applications? Well, the OpenAI researchers point out that Point-E's point clouds could be used to fabricate real-world objects, for example through 3D printing.
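Getting a generated point cloud into downstream fabrication or modeling tools typically means writing it out in a standard interchange format such as ASCII PLY. The writer below is a hedged sketch of that step, not code from the Point-E repository, which ships its own I/O helpers:

```python
def write_ply(points, path):
    """Write a colored point cloud to an ASCII PLY file.

    `points` is an iterable of (x, y, z, r, g, b) tuples, with
    colors given as 0-255 integers.
    """
    points = list(points)
    lines = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "end_header",
    ]
    for x, y, z, r, g, b in points:
        lines.append(f"{x} {y} {z} {r} {g} {b}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Hypothetical usage: a one-point red cloud.
write_ply([(0.0, 0.0, 0.0, 255, 0, 0)], "cloud.ply")
```

PLY files like this open directly in common 3D tools such as MeshLab and Blender, which is where a point cloud would then be converted to a printable mesh.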
With the additional mesh conversion model, the system, once polished, could also find its way into game development and animation workflows.
OpenAI may be the latest company to jump into the text-to-3D generator fray, but, as noted earlier, it's certainly not the first.
Earlier this year, Google released DreamFusion, an expanded version of Dream Fields, a 3D system unveiled in 2021.
Unlike Dream Fields, DreamFusion requires no prior training, meaning it can generate 3D representations of objects without 3D data.
While all eyes are on 2D art generators right now, model-synthesizing AI could be the next big industry disruptor.
3D models are widely used in film, television, interior design, architecture and various fields of science.
Architectural firms use them to display, for example, proposed buildings and landscapes, while engineers use models as designs for new devices, vehicles and structures.
Do 3D models take time to produce?
3D models usually take a while to produce, anywhere from a few hours to a few days. AI like Point-E could change that if the kinks are ever worked out, and earn OpenAI a healthy profit in doing so.
The question is what kinds of intellectual property disputes may arise over time. There is a huge market for 3D models, with several online marketplaces, including CGStudio and CreativeMarket, that allow artists to sell content they've created.
If Point-E catches on and its models make their way onto the market, model artists may protest, pointing to evidence that modern generative AI borrows heavily from its training data, which in Point-E's case means existing 3D models.
Like DALL-E 2, Point-E does not credit any of the artists who influenced its generations.
But OpenAI is leaving that issue for another day. Neither the Point-E paper nor the GitHub page makes any mention of copyright.
For their part, the researchers say they expect Point-E to suffer from other problems, such as biases inherited from its training data and a lack of safeguards around models that could be used to create "dangerous objects".
Perhaps that’s why they’re keen to describe Point-E as a “starting point” that they hope will inspire “further work” in the field of text-to-3D synthesis.