NVIDIA System Uses "AI" to Convert Real Life Into a Virtual World
The network operates on high-level descriptions of a scene, such as segmentation maps or edge maps, which indicate where objects are and their general characteristics: whether a particular region of the image contains a car or a building, or where an object's edges lie. The network then fills in the details based on what it learned from videos of real life. The demo lets attendees navigate a virtual urban environment rendered entirely by this neural network, which was trained on video of real urban environments. The conditional generative network learned to approximate the visual dynamics of the world, such as lighting and materials. Because the output is synthetically generated, a scene can be easily edited to remove, modify, or add objects simply by editing its high-level description.
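The data flow described above can be sketched in miniature. The real system is a learned deep conditional generator; the toy stand-in below uses a fixed color lookup table instead of a network, and the class IDs and palette are invented for illustration. It shows the two key ideas: a segmentation map goes in and a rendered image comes out, and "editing the scene" reduces to editing the label map and re-rendering.

```python
import numpy as np

# Hypothetical palette standing in for what the generator has learned:
# each semantic class maps to pixel detail (here, just a flat RGB value).
PALETTE = {
    0: (135, 206, 235),  # "sky"
    1: (105, 105, 105),  # "road"
    2: (178, 34, 34),    # "car"
}

def render(seg_map: np.ndarray) -> np.ndarray:
    """Fill in pixel detail for an (H, W) integer segmentation map."""
    out = np.zeros((*seg_map.shape, 3), dtype=np.uint8)
    for cls, rgb in PALETTE.items():
        out[seg_map == cls] = rgb  # paint every pixel of this class
    return out

# A tiny 2x3 "scene": sky on top, road below, one car pixel in the middle.
seg = np.array([[0, 0, 0],
                [1, 2, 1]])
img = render(seg)

# Removing the car is just relabeling its pixels as road and re-rendering;
# no pixel-level retouching is needed.
seg[seg == 2] = 1
edited = render(seg)
```

The design point is that all semantic decisions live in the cheap-to-edit label map, while the expensive photorealistic detail is synthesized on demand, which is what makes remove/modify/add operations trivial in NVIDIA's demo.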
This has interesting possibilities for creating realistic simulations of the real world on demand, not just for driving simulation but for many other scenarios. A sense of familiarity lets users feel as if they are inside the simulation rather than merely operating in the third person.
There is a short video at https://nwn.blogs.com/nwn/2018/12/nvidia-real-world-virtual-world.html
Watch: NVIDIA System Converts Real Life Into a Virtual World -- But Probably Won't Replace Actual Virtual Worlds
Using a complex system of algorithms which marketers insist on calling "AI" (making my teeth grind in the process), engineers at NVIDIA have made it possible to convert real-world video into a 3D virtual world.