Another paper from High Performance Graphics 09, this time Intel's Morphological Antialiasing.
http://visual-computing.intel-research.net/publications/mlaa.pdf
A quick summary:
MLAA is designed to reduce aliasing artifacts in displayed images without casting any additional rays. It consists of three main steps:
1. Find discontinuities between pixels in a given image.
2. Identify predefined patterns.
3. Blend colors in the neighborhood of these patterns.
For the sake of simplicity, we first describe the MLAA technique for (binary) black-and-white images, for which these three steps are trivial, and generalize it for color images later on.
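The three steps above can be sketched for the binary case. This is a minimal illustration, not the paper's implementation: step 2 (classifying L-, Z- and U-shaped edge patterns and deriving per-pixel coverage weights from the reconstructed silhouette) is replaced here by a simple fixed-weight blend across each detected discontinuity, and all names are hypothetical.

```python
def find_discontinuities(img):
    """Step 1: mark separation lines between differing pixels.

    Returns two boolean grids: h[y][x] is True when pixel (x, y)
    differs from the pixel below it, v[y][x] when it differs from
    the pixel to its right.
    """
    height, width = len(img), len(img[0])
    h = [[y + 1 < height and img[y][x] != img[y + 1][x]
          for x in range(width)] for y in range(height)]
    v = [[x + 1 < width and img[y][x] != img[y][x + 1]
          for x in range(width)] for y in range(height)]
    return h, v

def blend(img, h, v, weight=0.25):
    """Step 3 (simplified): pull each pixel toward neighbors that
    lie across a discontinuity. The real algorithm instead weights
    by the area a reconstructed silhouette edge cuts off each pixel."""
    height, width = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(height):
        for x in range(width):
            acc, n = float(img[y][x]), 1.0
            if h[y][x]:                      # edge to the pixel below
                acc += weight * img[y + 1][x]; n += weight
            if y > 0 and h[y - 1][x]:        # edge to the pixel above
                acc += weight * img[y - 1][x]; n += weight
            if v[y][x]:                      # edge to the pixel right
                acc += weight * img[y][x + 1]; n += weight
            if x > 0 and v[y][x - 1]:        # edge to the pixel left
                acc += weight * img[y][x - 1]; n += weight
            out[y][x] = acc / n
    return out

# A 4x4 binary image with a staircase silhouette.
img = [[1, 1, 1, 1],
       [1, 1, 1, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0]]
h, v = find_discontinuities(img)
smoothed = blend(img, h, v)
```

After the blend, pixels along the staircase take intermediate values while pixels far from any discontinuity are untouched, which is the visual effect MLAA aims for.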
Morphological antialiasing (MLAA) has a set of unique characteristics distinguishing it from other algorithms. It is completely independent from the rendering pipeline. In effect, it can be used for both rasterization and ray tracing applications, even though we consider it naturally aligned with ray tracing algorithms, for which there is no hardware acceleration available. It represents a single post-processing kernel, which can be used in any ray tracing application without any modifications and, in fact, can be implemented on the GPU even if the main algorithm runs on the CPU.
MLAA, even in its current un-optimized implementation, is reasonably fast, processing about 20M pixels per second on a single 3GHz core. It is embarrassingly parallel and on a multicore machine can be used to achieve better load balancing by processing the final output image in idle threads (either ones that finish rendering or ones that finish building their part of acceleration structure). Even though ray tracing rendering is highly parallelizable as such, creating or updating accelerating structures for dynamic models has certain scalability issues [Wald et al. 2007]. MLAA is well positioned to use these underutilized cycles to achieve zero impact on overall performance.
Among the chief shortcomings of the proposed algorithm is its inability to handle features smaller than the Nyquist limit. This is similar to all other techniques that rely on a single sample per pixel to find features inside an image. We discuss this problem at length in section 3 and also propose a palliative solution which aims at reducing these under-sampling artifacts.
"The upcoming Larrabee chip [Seiler et al. 2008], as well as modern GPU cards, are capable of handling 8-bit data extremely efficiently, so our algorithm will benefit from porting to these architectures."