
Q: 3D reconstruction based on stereo-rectified edge images

I have two closed-curve, stereo-rectified edge images. Is it possible to find the disparity (along the x-axis, in image coordinates) between the edge images and do a 3D reconstruction, given that I know the camera matrix? I am using MATLAB for the process. I will not be able to use a window-based technique, since window-based matching requires texture and these are binary images. The question is: how can I compute the disparity between the edge images? The images are available at the following links. Left edge image: https://www.dropbox.com/s/g5g22f6b0vge9ct/edge_left.jpg?dl=0 Right edge image: https://www.dropbox.com/s/wjmu3pugldzo2gw/edge_right.jpg?dl=0

Answer 1:

For this type of image, you can easily map each edge pixel in the left image to its counterpart in the right image, and then calculate the disparity for those pixels as usual.

The mapping can be done in various ways, depending on how typical these images are; for example, using a DTW-like approach to match curvatures.

For all other pixels in the image, you just don't have any information.
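A minimal sketch of this matching idea in Python/NumPy (the question uses MATLAB, but the logic translates directly). It assumes the edge pixels of each closed curve have already been ordered along the contour, and it uses the row (y) coordinate as the DTW feature, which relies on the pair being rectified; matching on curvature, as suggested above, would just swap in a different feature sequence:

```python
import numpy as np

def dtw_match(f_left, f_right):
    """Align two 1-D feature sequences with dynamic time warping.

    Returns a list of index pairs (i, j) matching left sample i
    to right sample j along the minimal-cost warping path.
    """
    n, m = len(f_left), len(f_right)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(f_left[i - 1] - f_right[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def disparities(pts_left, pts_right):
    """Disparity (x_left - x_right) for DTW-matched edge pixels.

    pts_*: (N, 2) arrays of (x, y) edge coordinates ordered along
    the contour.  The y coordinate is the matching feature, which
    assumes the pair is rectified (corresponding rows line up).
    """
    path = dtw_match(pts_left[:, 1], pts_right[:, 1])
    return np.array([pts_left[i, 0] - pts_right[j, 0] for i, j in path])
```

Because DTW allows one-to-many matches, a single left pixel may pair with several right pixels where the contours are sampled unevenly; averaging or filtering those duplicates is a reasonable post-processing step.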

Answer 2:

@Photon: Thanks for the suggestion. I did what you suggested: I matched each edge pixel in the left and right images in a DTW-like fashion. But there are some matched pixels whose y-coordinates differ by 1 or 2 pixels, even though the images are properly rectified. So I calculated the depth for those differing edge pixels (up to a 2-pixel difference along the y-axis) by averaging, using a least-squares method. But I ended up with this space curve (https://www.dropbox.com/s/xbg2q009fjji0qd/false_edge.jpg?dl=0) when it actually should have looked like this (https://www.dropbox.com/s/0ib06yvzf3k9dny/true_edge.jpg?dl=0), which was obtained using RGB images. I couldn't think of any other reason why this would happen, since I compared the results by traversing along the 408 edge pixels.
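For reference, once a disparity d is in hand for a rectified pair, depth follows from the standard relation Z = f·B/d, and the full 3-D point comes from back-projecting through the left camera. A hedged NumPy sketch of that step (the names f, B, cx, cy are assumptions; in the question's setting they would be read off the known calibrated camera matrices):

```python
import numpy as np

def depth_from_disparity(d, f, B):
    """Depth for a rectified stereo pair: Z = f * B / d.

    f: focal length in pixels; B: baseline (in the units you want
    depth in).  Non-positive disparities are returned as inf.
    """
    d = np.asarray(d, dtype=float)
    Z = np.full_like(d, np.inf)
    valid = d > 0
    Z[valid] = f * B / d[valid]
    return Z

def reconstruct(pts_left, d, f, B, cx, cy):
    """Back-project matched left-image edge pixels to 3-D.

    pts_left: (N, 2) pixel coordinates (x, y); cx, cy: principal
    point.  Returns (N, 3) points in the left camera frame.
    """
    Z = depth_from_disparity(d, f, B)
    X = (pts_left[:, 0] - cx) * Z / f
    Y = (pts_left[:, 1] - cy) * Z / f
    return np.column_stack([X, Y, Z])
```

On the 1-2 pixel y-disagreement described above: matched pairs that disagree in y after rectification usually indicate residual rectification error or a DTW mismatch, so dropping those pairs (or snapping matches to the same row) before triangulating may be safer than averaging them, and could be one source of the distorted space curve.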

matlab  image-processing  computer-vision