
Q: Aliased rasterization: why sample the pixel at its center?


Both OpenGL and Direct3D use the pixel's center as the sample point during rasterization (without antialiasing).

For example, here is a quote from the D3D11 rasterization rules:

Any pixel center which falls inside a triangle is drawn

I tried to find out why (0.5, 0.5) is used instead of, say, (0.0, 0.0) or anything else in the range 0.0 - 1.0 for both x and y.

The result might be translated a little, but does it really matter? Does it produce visible artifacts? Maybe it makes some algorithms harder to implement? Or is it just a convention?

Again, I'm not talking about multisampling here.

So what is the reason?


answer1:

This answer mainly focuses on the OP's comment on Cagkan Toptas' answer:

"Thanx for the answer, but my question is: why does it give better results? Does it at all? If yes, what is the explanation?"

It depends on how you define "better" results. From an image quality perspective, it does not change much, as long as the primitives are not specifically aligned (after the projection). Using just one sample at (0,0) instead of (0.5, 0.5) will just shift the scene by half a pixel (on both axes, of course). In the general case of arbitrarily placed primitives, the average error should be the same.

However, if you want "pixel-exact" drawing (e.g. for text, UI, and full-screen post-processing effects), you just have to take the convention of the underlying implementation into account, and either convention would work.

One advantage of the "center at half integers" rule is that you can get the integer pixel coordinates (with respect to the sample locations) of the nearest pixel by a simple floor(floating_point_coords) operation, which is simpler than rounding to the nearest integer.
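As a concrete illustration (the function names below are mine, not from the answer), here is a minimal C++ sketch of that point: with centers at half-integers, pixel i owns the interval [i, i+1), so a plain floor recovers the nearest pixel, while the integer-center convention needs a round-to-nearest instead.

    #include <cmath>

    // "Center at half-integers" rule: pixel i covers [i, i+1), so the pixel
    // whose sample point is nearest to x is simply floor(x).
    int nearestPixelCenterRule(float x) {
        return static_cast<int>(std::floor(x));
    }

    // "Center at integers" rule: pixel i covers [i - 0.5, i + 0.5), so the
    // same lookup needs round-to-nearest, which is slightly more awkward
    // (tie handling, negative coordinates).
    int nearestPixelCornerRule(float x) {
        return static_cast<int>(std::lround(x));
    }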


answer2:

Maybe this is not the answer to your problem, but I'll try to answer your question from a ray-tracing perspective.

In ray tracing, you can get the color of any single point in the scene. But since we have a limited number of pixels, you need to downsample the scene to your screen pixels.

In ray tracing, if you use 1 ray per pixel, we generally choose the center point to create our ray, which gives the most correct render results. In the image below, I try to show the difference between choosing a corner of the pixel and its center. The error gets bigger when your object is far from the rendering screen.

If you use more than one ray for each pixel, let's say 5 rays (4 corners + 1 center), and average the result, you will of course get a more realistic image (it will handle aliasing problems much better). However, it will be slower, as you'd guess.

So it is probably the same idea that OpenGL and DirectX take one sample per pixel instead of multisampling and averaging (for performance reasons), and the center point probably gives the best result.
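To make the "one ray through the pixel center" idea concrete, here is a small C++ sketch; the struct, function name, and pinhole camera model are my assumptions, not part of the answer. Note the +0.5 offsets that place the sample at the center of pixel (px, py).

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Camera-space direction of the primary ray through the *center* of
    // pixel (px, py) on a width x height image, for a pinhole camera with
    // vertical field of view fovY (radians). Normalize before use.
    Vec3 primaryRayDirection(int px, int py, int width, int height, float fovY) {
        float aspect  = float(width) / float(height);
        float tanHalf = std::tan(fovY * 0.5f);
        // Normalized coordinates in (0, 1), sampled at the pixel center.
        float u = (px + 0.5f) / float(width);
        float v = (py + 0.5f) / float(height);
        // Screen space in [-1, 1]; flip v so +y points up.
        float sx = (2.0f * u - 1.0f) * aspect * tanHalf;
        float sy = (1.0f - 2.0f * v) * tanHalf;
        return Vec3{sx, sy, -1.0f};  // camera looks down -z
    }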

EDIT:

For area rasterization, the center of the pixel is used because if the center of the pixel lies inside the area, it is guaranteed that at least 50% of the pixel is inside the shape (except at shape corners). That's why, since the covered proportion is greater than half, that pixel is colored.

For other corner choices there is no such general rule. Let's look at the example image below. The black point (bottom-left corner) is outside the area and would not be drawn, and indeed more than half of that pixel lies outside. However, if you look at the blue point, 80% of the pixel is inside the area, but since its bottom-left corner is outside the area it wouldn't be drawn.
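For reference, here is a rough C++ sketch of the rule quoted from D3D11 ("any pixel center which falls inside a triangle is drawn"): coverage is decided by testing edge functions at (x + 0.5, y + 0.5). The helper names are mine, and real rasterizers add a tie-breaking (top-left) rule for samples that land exactly on shared edges.

    struct Point { float x, y; };

    // Signed area term: positive when p lies to the left of edge a->b
    // (for a counter-clockwise triangle in a y-up coordinate system).
    float edge(const Point& a, const Point& b, const Point& p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // A pixel is covered when its center lies inside the triangle.
    bool pixelCovered(int px, int py, const Point& v0, const Point& v1, const Point& v2) {
        Point c{px + 0.5f, py + 0.5f};  // sample at the pixel center
        float e0 = edge(v0, v1, c);
        float e1 = edge(v1, v2, c);
        float e2 = edge(v2, v0, c);
        return e0 >= 0.0f && e1 >= 0.0f && e2 >= 0.0f;
    }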


opengl  graphics  directx  rendering