How close are we to truly photorealistic, real-time games?
By Kyle Orland
While this scene from Crysis 2 looks pretty good, in a few decades it's going to look like outdated crap.
Every graphical and technical advance the game industry has seen from Pong to Crysis has been a small step toward the end goal of a real-time, photorealistic 3D world that is truly indistinguishable from a real-world scene. Speaking at the DICE Summit Thursday, Epic Games founder and programmer Tim Sweeney examined the speed and direction of computing improvements and determined that we "might expect, over the course of our lifetime, we'd get to amounts of computing power that come very close to simulating reality."
The bounds for true photorealism are set by the physical limits of the human eye, Sweeney explained: it can only process the equivalent of a 30-megapixel image at about 70 frames per second. Given current trends, monitor display technology should be able to handle that level of detail for a small area in just a few more generations. Projecting it across a larger, 90-degree field of vision would take an 8,000 x 6,000 pixel display, which is still quite far off but "within sight," Sweeney said.
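For a rough sense of scale, here's a back-of-the-envelope calculation of what such a display implies. This is our arithmetic, not Sweeney's, and the four-bytes-per-pixel figure assumes a conventional 8-bit RGBA framebuffer:

    # Back-of-the-envelope figures implied by Sweeney's display numbers.
    # Our own arithmetic, not from the talk.
    width, height = 8000, 6000   # 90-degree field-of-view display
    fps = 70                     # approximate limit of human vision
    bytes_per_pixel = 4          # assumes 8-bit RGBA

    pixels = width * height
    print(f"{pixels / 1e6:.0f} megapixels per frame")          # 48 megapixels
    bandwidth = pixels * fps * bytes_per_pixel / 1e9
    print(f"{bandwidth:.1f} GB/s of raw framebuffer output")   # ~13.4 GB/s

Even before any rendering work happens, simply refreshing a display like that means moving more than 13GB of pixel data every second.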
This level of detail sets an upper limit on the amount of memory and raw processing power we'd need to depict a "good enough" photorealistic scene, Sweeney said. That limit is about 50 times greater than the polygon-processing capabilities of today's top-end hardware, putting it at least two hardware generations away.
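If hardware throughput keeps doubling on a Moore's-law-like cadence of roughly 18 months (our assumption; Sweeney didn't frame a timeline in these terms), a 50-fold gap works out like this:

    import math

    # Closing a 50x performance gap under Moore's-law-style doubling.
    # The 18-month cadence is an assumption, not a figure from the talk.
    gap = 50
    months_per_doubling = 18

    doublings = math.log2(gap)                    # ~5.6 doublings
    years = doublings * months_per_doubling / 12
    print(f"{doublings:.1f} doublings, roughly {years:.0f} years")

That's somewhere in the neighborhood of eight to nine years, which lines up broadly with the "at least two generations" estimate given typical console cycles.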
A light matter
But simply pushing polygons isn't enough to achieve true realism. The ability to trace the subtle interplay of light across various surfaces is also key to creating a realistic scene. Yet the vast majority of current-generation games use a "two-bounce" lighting algorithm of the type found in games going back to 1998's Unreal. We're only now seeing much more convincing "three-bounce" lighting in demos like Samaritan, which Epic showed off at last year's GDC.
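To see why each extra bounce costs so much, consider a toy cost model (our illustration, not Epic's actual algorithm) in which every surface hit spawns a fixed number of secondary rays:

    # Toy cost model for bounce-limited lighting: if each surface hit
    # spawns N secondary rays, the total ray count grows geometrically
    # with the bounce budget. Illustrative only; not Epic's renderer.
    secondary_rays = 8  # rays spawned per bounce; an assumed figure

    for bounces in (1, 2, 3):
        rays = sum(secondary_rays ** b for b in range(bounces + 1))
        print(f"{bounces}-bounce lighting: ~{rays} rays per pixel")

With eight secondary rays per hit, going from two bounces to three balloons the per-pixel workload from 73 rays to 585, which is why the third bounce had to wait for hardware in Samaritan's class.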
And while Samaritan's 2.5 teraflops (that's trillions of floating-point operations per second, for the laymen) is a far cry from the 10 megaflops that were needed to power the original Doom, we're still a good deal short of the 5,000 teraflops Sweeney calculates we'd need to process a fully realistic 3D scene in real time.
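Putting those three milestones side by side makes the remaining gap concrete (the figures are from the talk; the ratios are our arithmetic):

    # Floating-point throughput milestones cited in Sweeney's talk.
    doom_flops      = 10e6     # original Doom: ~10 megaflops
    samaritan_flops = 2.5e12   # Samaritan demo: 2.5 teraflops
    target_flops    = 5000e12  # Sweeney's full-realism estimate

    print(f"Samaritan vs. Doom:   {samaritan_flops / doom_flops:,.0f}x")
    print(f"Target vs. Samaritan: {target_flops / samaritan_flops:,.0f}x")

In other words, graphics hardware has come roughly 250,000-fold since Doom, and still needs another 2,000-fold jump.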
And even then, that would only handle the visual effects we currently understand how to model realistically: things like shadows, skin tones, smoke, and water. Many of the intangible elements of a scene, like natural human movement, speech, and even personality, remain well beyond our ability to model convincingly. "We don't have the algorithms, so even if we had a perfect computer today... we'd be relying not on more computing power, but on innovation in the state of the art algorithms," Sweeney said.
Revolutionary interfaces
Beyond raw computing and algorithmic power, the future may also hold revolutions in the way we interact with virtual environments. Sweeney pointed to upcoming Sony sunglasses with transparent lenses that allow for hands-free image projection in a way that hasn't seemed cool since the '80s. He also predicted that increasing scarcity of real goods may drive up the value of increasingly realistic virtual goods, to the point where the market rivals the $25 trillion worldwide trade in real estate.
Whatever form the interface takes, though, the change brought on by truly realistic real-time modeling is going to be revolutionary, Sweeney said. "When a whole generation of kids is raised with those devices pervasively around them, it's going to change the world," he said. "I see a bright future for computing and its implications on games. I see the ability as developers to exploit another 1,000-fold increase in power on platforms… I think our industry's brightest days are yet to come."
Image courtesy of EA
http://arstechnica.com/gaming/ne ... real-time-games.ars