Modern video cards render 3D scenes to a flat screen with ease, but they are not fast enough to go the other way and quickly build a three-dimensional model from a 2D image. NVIDIA has tackled this problem with artificial intelligence that can "fill in" the missing parts of two-dimensional pictures, turning them into full three-dimensional models. In the future, the technology could create entire virtual worlds from ordinary drawings in a split second.
According to 4pda.ru, an algorithm called DIB-R can predict how a 2D image would look in three dimensions: it estimates the lighting, texture, and depth of the future volumetric model. According to the developers, the AI works much like human vision, which combines the slightly different images seen by each eye into a single stereoscopic picture.
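The article does not include code, but the core idea behind differentiable rendering — guess the 3D parameters, render them, compare the result with the real photo, and adjust the guess — can be illustrated with a toy sketch. Everything below is an assumption for illustration only, not NVIDIA's DIB-R: a hypothetical sphere "renderer" with just two unknowns (radius and light direction), optimized with finite-difference gradients in NumPy. DIB-R itself uses a neural network and an analytically differentiable rasterizer over full meshes.

```python
import numpy as np

def render(radius, light_x, size=32):
    """Toy 'renderer': a Lambertian-shaded sphere on a size x size pixel grid.

    This stands in for a real differentiable renderer; the parameters
    (radius, light_x) play the role of the predicted shape and lighting.
    """
    ys, xs = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r2 = xs ** 2 + ys ** 2
    mask = r2 < radius ** 2                      # sphere silhouette
    z = np.sqrt(np.clip(radius ** 2 - r2, 0.0, None))
    light = np.array([light_x, 0.0, 1.0])
    light /= np.linalg.norm(light)
    normals = np.stack([xs, ys, z], axis=-1)
    normals /= np.maximum(np.linalg.norm(normals, axis=-1, keepdims=True), 1e-8)
    return np.clip(normals @ light, 0.0, 1.0) * mask

# The "photo": rendered with parameters our optimizer does not know.
target = render(0.7, 0.5)

def loss(p):
    # Pixel-wise error between our current render and the photo.
    return np.mean((render(p[0], p[1]) - target) ** 2)

params = np.array([0.5, 0.0])                    # initial guess: radius, light_x
start = loss(params)

# Analysis by synthesis: nudge the 3D parameters downhill on the image loss.
# (Finite differences stand in for the renderer's analytic gradients.)
eps, lr = 1e-2, 0.3
for _ in range(400):
    grad = np.array([
        (loss(params + [eps, 0]) - loss(params - [eps, 0])) / (2 * eps),
        (loss(params + [0, eps]) - loss(params - [0, eps])) / (2 * eps),
    ])
    params -= lr * grad
```

After optimization the recovered radius and light direction move toward the values that produced the "photo", and the image loss drops well below its starting value — the same feedback loop, scaled up to meshes and learned predictors, is what lets a differentiable renderer recover 3D structure from a single 2D view.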
According to representatives of NVIDIA, the new technology will be in demand not only in the gaming and multimedia industries, but will also help autonomous robots interact safely with their surroundings by letting them "feel" the depth of the space around them.
"Imagine that you could take a photo and have a 3D model built from it. That would mean you could view the scene you photographed from different angles. You could take old photos from your collection, turn them into 3D scenes, and explore them as if you were there," said Sanja Fidler, head of the artificial intelligence research division at NVIDIA.
The algorithm's developers claim that DIB-R can turn 2D images of animals into realistic 3D models in under a second. In the future, the ability to reconstruct 3D worlds from photographs could open up new ways of creating content, including the design of open-world games like Skyrim or Grand Theft Auto.
The release date of a developer version of the algorithm has not yet been announced. You can learn more about DIB-R on the official NVIDIA blog.