{"id":45264,"date":"2018-07-17T20:11:00","date_gmt":"2018-07-18T02:11:00","guid":{"rendered":"https:\/\/www.realsenseai.com\/uncategorized-cn\/the-basics-of-stereo-depth-vision\/"},"modified":"2018-07-17T20:11:00","modified_gmt":"2018-07-18T02:11:00","slug":"the-basics-of-stereo-depth-vision","status":"publish","type":"post","link":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/","title":{"rendered":"The basics of stereo depth vision"},"content":{"rendered":"\n<div>\n<p><em>By Sergey Dorodnicov, RealSense\u2122 SDK Manager<\/em><\/p>\n\n\n\n<p>In this post, we\u2019ll cover the basics of stereoscopic vision, including block-matching, calibration and rectification, depth from stereo using OpenCV, passive vs. active stereo, and its relation to&nbsp;structured&nbsp;light.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-why-depth\">Why Depth?<\/h3>\n\n\n\n<p>Regular consumer web-cams offer streams of RGB data within the visible spectrum that can be used for object recognition and tracking, as well as basic scene understanding.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Identifying the exact dimensions of physical objects is still a challenge, even using machine learning. This is where <strong>depth cameras<\/strong> can help.<\/li>\n<\/ul>\n\n\n\n<p>Using a depth camera, you can add a brand\u2011new channel of information, with distance to every pixel. This new channel is used just like the others \u2014 for training and image processing, but also for measurement and scene reconstruction.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-stereoscopic-vision\">Stereoscopic Vision<\/h3>\n\n\n\n<p>Depth from Stereo is a classic computer vision algorithm inspired by the human <a href=\"https:\/\/en.wikipedia.org\/wiki\/Binocular_vision\" target=\"_blank\" rel=\"noreferrer noopener\">binocular vision system<\/a>. 
It relies on two parallel view\u2011ports and calculates depth by estimating disparities between matching key\u2011points in the left and right images:<\/p>\n<\/div>\n\n\n\n<div class=\"gb-element-09a6dc32\">\n<img decoding=\"async\" class=\"gb-media-f6dee91e\" alt=\"\" src=\"https:\/\/realsenseai.com\/wp-content\/uploads\/2018\/12\/stereo-ssd-1.png\"\/>\n<\/div>\n\n\n\n<div>\n<p><em>The <strong>Depth from Stereo<\/strong> algorithm finds disparity by matching blocks in the left and right images<\/em><\/p>\n\n\n\n<p>The most naive implementation of this idea is the <strong>SSD (Sum of Squared Differences)<\/strong> <strong>block\u2011matching<\/strong>&nbsp;algorithm:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">import numpy\n \nfx = 942.8        # lens focal length in pixels\nbaseline = 54.8   # distance in mm between the two cameras\ndisparities = 64  # num of disparities to consider\nblock = 15        # block size to match\nunits = 0.001     # depth units\n \n# left and right are the rectified grayscale input images\ndisparity = numpy.zeros(shape=left.shape)\n \nfor i in range(block, left.shape[0] - block - 1):\n    for j in range(block + disparities, left.shape[1] - block - 1):\n        ssd = numpy.empty([disparities, 1])\n \n        # calc SSD at all possible disparities\n        l = left[(i - block):(i + block), (j - block):(j + block)]\n        for d in range(0, disparities):\n            r = right[(i - block):(i + block), (j - d - block):(j - d + block)]\n            ssd[d] = numpy.sum((l[:,:]-r[:,:])**2)\n \n        # select the best match\n        disparity[i, j] = numpy.argmin(ssd)\n \n# Convert disparity to depth\ndepth = numpy.zeros(shape=left.shape).astype(float)\ndepth[disparity &gt; 0] = (fx * baseline) \/ (units * disparity[disparity &gt; 0])<\/pre>\n<\/div>\n\n\n\n<div>\n<img decoding=\"async\" class=\"gb-media-0bdf4948\" alt=\"\" src=\"https:\/\/realsenseai.com\/wp-content\/uploads\/2018\/07\/rectified-768x216-1.png\"\/>\n\n\n\n<p>Rectified image pair used as input to the algorithm<\/p>\n<\/div>\n\n\n\n<div>\n<img loading=\"lazy\" decoding=\"async\" 
width=\"768\" height=\"432\" class=\"gb-media-0645f7c0\" alt=\"\" src=\"https:\/\/realsenseai.com\/wp-content\/uploads\/2018\/07\/ssd-depth-768x432-1.png\" srcset=\"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2018\/07\/ssd-depth-768x432-1.png 768w, https:\/\/www.realsenseai.com\/wp-content\/uploads\/2018\/07\/ssd-depth-768x432-1-300x169.png 300w\" sizes=\"auto, (max-width: 768px) 100vw, 768px\" \/>\n\n\n\n<p>Depth-map using <strong>Intel RealSense D415 stereo camera<\/strong><\/p>\n<\/div>\n\n\n\n<div>\n<img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"360\" class=\"gb-media-257e8c27\" alt=\"\" src=\"https:\/\/realsenseai.com\/wp-content\/uploads\/2018\/07\/realsense-depth.gif\"\/>\n\n\n\n<p>Point-cloud reconstructed using <strong>SSD block-matching<\/strong><\/p>\n\n\n\n<p>There are several challenges that any actual product has to&nbsp;overcome:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Ensuring that the images are in fact coming from two parallel views<\/li>\n\n\n\n<li>Filtering out bad pixels where matching failed due to occlusion<\/li>\n\n\n\n<li>Expanding the range of generated disparities from fixed set of integers to achieve sub\u2011pixel accuracy<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Calibration and Rectification<\/h3>\n\n\n\n<p>Having two exactly parallel view\u2011ports is challenging. While it is possible to generalize the algorithm to any two calibrated cameras (by matching along <a href=\"https:\/\/en.wikipedia.org\/wiki\/Epipolar_geometry\" target=\"_blank\" rel=\"noreferrer noopener\">epipolar lines<\/a>), the more common approach is <a href=\"https:\/\/en.wikipedia.org\/wiki\/Image_rectification\" target=\"_blank\" rel=\"noreferrer noopener\">image rectification<\/a>. 
During this step, the left and right images are re\u2011projected onto a common virtual\u00a0plane:<\/p>\n<\/div>\n\n\n\n<div>\n<img loading=\"lazy\" decoding=\"async\" width=\"768\" height=\"343\" class=\"gb-media-1074262b\" alt=\"\" src=\"https:\/\/realsenseai.com\/wp-content\/uploads\/2018\/07\/calibration_1-768x343-1.jpg\" srcset=\"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2018\/07\/calibration_1-768x343-1.jpg 768w, https:\/\/www.realsenseai.com\/wp-content\/uploads\/2018\/07\/calibration_1-768x343-1-300x134.jpg 300w\" sizes=\"auto, (max-width: 768px) 100vw, 768px\" \/>\n\n\n\n<p>Image Rectification illustrated (Source: <a href=\"https:\/\/en.wikipedia.org\/wiki\/Image_rectification\" target=\"_blank\" rel=\"noreferrer noopener\">Wikipedia<\/a>*)<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Software Stereo<\/h3>\n\n\n\n<p>The <a href=\"https:\/\/opencv.org\/\">OpenCV<\/a> library has everything you need to get started with depth:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/docs.opencv.org\/2.4\/modules\/calib3d\/doc\/camera_calibration_and_3d_reconstruction.html?highlight=calib#calibratecamera\">calibrateCamera<\/a> can be used to generate extrinsic calibration between any two arbitrary view\u2011ports<\/li>\n\n\n\n<li><a href=\"https:\/\/docs.opencv.org\/2.4\/modules\/calib3d\/doc\/camera_calibration_and_3d_reconstruction.html?highlight=calib#stereorectify\">stereoRectify<\/a> will help you rectify the two images prior to depth generation<\/li>\n\n\n\n<li><a href=\"https:\/\/docs.opencv.org\/2.4\/modules\/calib3d\/doc\/camera_calibration_and_3d_reconstruction.html?highlight=calib#stereobm\">StereoBM<\/a> and <a href=\"https:\/\/docs.opencv.org\/2.4\/modules\/calib3d\/doc\/camera_calibration_and_3d_reconstruction.html?highlight=calib#stereosgbm\">StereoSGBM<\/a> can be used for disparity calculation<\/li>\n\n\n\n<li><a 
href=\"https:\/\/docs.opencv.org\/2.4\/modules\/calib3d\/doc\/camera_calibration_and_3d_reconstruction.html?highlight=calib#reprojectimageto3d\">reprojectImageTo3D<\/a> to\u00a0project a disparity image into\u00a03D\u00a0space<\/li>\n<\/ul>\n<\/div>\n\n\n\n<div>\n<img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"360\" class=\"gb-media-cd246fb6\" alt=\"\" src=\"https:\/\/realsenseai.com\/wp-content\/uploads\/2018\/07\/opencv-depth.gif\"\/>\n\n\n\n<p>Point-cloud generated using the OpenCV <strong>StereoBM<\/strong> algorithm<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">import numpy\nimport cv2\n \nfx = 942.8          # lens focal length in pixels\nbaseline = 54.8     # distance in mm between the two cameras\ndisparities = 128   # num of disparities to consider\nblock = 31          # block size to match\nunits = 0.001       # depth units\n \nsbm = cv2.StereoBM_create(numDisparities=disparities,\n                          blockSize=block)\n \n# StereoBM returns a fixed-point disparity map with 4 fractional bits\ndisparity = sbm.compute(left, right).astype(float) \/ 16.0\n \ndepth = numpy.zeros(shape=left.shape).astype(float)\ndepth[disparity &gt; 0] = (fx * baseline) \/ (units * disparity[disparity &gt; 0])<\/pre>\n\n\n\n<p>The average running time of&nbsp;<a href=\"https:\/\/docs.opencv.org\/2.4\/modules\/calib3d\/doc\/camera_calibration_and_3d_reconstruction.html?highlight=calib#stereobm\" target=\"_blank\" rel=\"noreferrer noopener\">StereoBM<\/a> on an Intel Core(TM) i5\u20116600K CPU is around 110\u00a0ms, an effective 9 frames\u2011per\u2011second (FPS).<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Get the full source code\u00a0<a href=\"https:\/\/github.com\/dorodnic\/librealsense\/wiki\/source.zip\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a><\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Passive vs Active Stereo<\/h3>\n\n\n\n<p>The quality of the results you\u2019ll get with this algorithm depends primarily on the density of visually distinguishable points (features) for the algorithm to match. 
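As a quick sanity check on the disparity-to-depth conversion used in the code snippets above, here is a worked example with the same focal length and baseline and a hypothetical (made-up) disparity of 64 pixels:

```python
fx = 942.8        # lens focal length in pixels, as in the snippets above
baseline = 54.8   # distance in mm between the two cameras

# Suppose the best match for a block is found 64 pixels away:
disparity = 64.0
depth_mm = fx * baseline / disparity   # similar-triangles relation
print(round(depth_mm, 1))              # 807.3 (mm)

# Depth is inversely proportional to disparity: nearby objects
# produce large disparities, distant objects small ones.
assert fx * baseline / (2 * disparity) == depth_mm / 2
```

This inverse relationship is also why stereo depth error grows with distance: at large depths a one-pixel disparity change corresponds to a much larger jump in metric depth.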
Any source of texture&nbsp;\u2014 natural or artificial \u2014 will significantly improve the accuracy.<\/p>\n\n\n\n<p>That\u2019s why it\u2019s extremely useful to have an optional <strong>texture projector<\/strong>&nbsp;that can add details outside of the visible spectrum. In addition, you can use this projector as an artificial source of light for nighttime or dark situations.<\/p>\n<\/div>\n\n\n\n<div>\n<img loading=\"lazy\" decoding=\"async\" width=\"768\" height=\"217\" class=\"gb-media-423f109a\" alt=\"\" src=\"https:\/\/realsenseai.com\/wp-content\/uploads\/2018\/07\/with_projector-768x217-1.png\" srcset=\"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2018\/07\/with_projector-768x217-1.png 768w, https:\/\/www.realsenseai.com\/wp-content\/uploads\/2018\/07\/with_projector-768x217-1-300x85.png 300w\" sizes=\"auto, (max-width: 768px) 100vw, 768px\" \/>\n\n\n\n<p>Input images illuminated with the <strong>texture projector<\/strong><\/p>\n<\/div>\n\n\n\n<div>\n<img loading=\"lazy\" decoding=\"async\" width=\"768\" height=\"216\" class=\"gb-media-f44907f1\" alt=\"\" src=\"https:\/\/realsenseai.com\/wp-content\/uploads\/2018\/07\/projector_effect-768x216-1.png\" srcset=\"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2018\/07\/projector_effect-768x216-1.png 768w, https:\/\/www.realsenseai.com\/wp-content\/uploads\/2018\/07\/projector_effect-768x216-1-300x84.png 300w\" sizes=\"auto, (max-width: 768px) 100vw, 768px\" \/>\n\n\n\n<p>Left: OpenCV\u00a0<strong>StereoBM<\/strong>\u00a0without projector. Right:\u00a0<strong>StereoBM<\/strong>\u00a0with projector.<\/p>\n<\/div>\n\n\n\n<div>\n<h3 class=\"wp-block-heading\">Structured\u2011Light Approach<\/h3>\n\n\n\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Structured_light\" target=\"_blank\" rel=\"noreferrer noopener\">Structured-Light<\/a> is an alternative approach to depth from stereo. 
It relies on recognizing a specific projected pattern in&nbsp;a&nbsp;single&nbsp;image.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>For those interested in a structured\u2011light solution, there\u2019s the <a href=\"https:\/\/realsenseai.com\/?page_id=1836\">RealSense SR300<\/a>\u00a0camera.<\/li>\n<\/ul>\n\n\n\n<p>Structured\u2011light solutions do offer certain benefits; however, they are fragile. Any external interference, from the sun or another structured\u2011light device, will prevent users from achieving any&nbsp;depth.<\/p>\n\n\n\n<p>In addition, because a laser projector must illuminate the entire scene, power consumption goes up with range, which often requires a dedicated power&nbsp;source.<\/p>\n\n\n\n<p>Depth from stereo, on the other hand, only benefits from a multi-camera setup and can be used with or without a projector.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">RealSense D400 series depth cameras<\/h3>\n<\/div>\n\n\n\n<div class=\"gb-element-08f3dbdc\">\n<img loading=\"lazy\" decoding=\"async\" width=\"555\" height=\"163\" class=\"gb-media-4ec0e720\" alt=\"\" src=\"https:\/\/realsenseai.com\/wp-content\/uploads\/2018\/07\/realsense-cameras.png\" srcset=\"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2018\/07\/realsense-cameras.png 555w, https:\/\/www.realsenseai.com\/wp-content\/uploads\/2018\/07\/realsense-cameras-300x88.png 300w\" sizes=\"auto, (max-width: 555px) 100vw, 555px\" \/>\n<\/div>\n\n\n\n<div>\n<p>RealSense D400 cameras:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Come fully calibrated, producing hardware\u2011rectified pairs of\u00a0images<\/li>\n\n\n\n<li>Perform all depth calculations at up\u00a0to\u00a090\u00a0FPS<\/li>\n\n\n\n<li>Offer sub\u2011pixel accuracy and high fill-rate<\/li>\n\n\n\n<li>Provide an on\u2011board texture projector for tough lighting conditions<\/li>\n\n\n\n<li>Run on a standard USB 5\u00a0V power source, drawing about 1\u20111.5\u00a0W<\/li>\n\n\n\n<li>Are designed from the ground up to:\n<ul 
class=\"wp-block-list\">\n<li>Address conditions critical to robotic\/drone developers<\/li>\n\n\n\n<li>Overcome the limitations of structured light<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<\/div>\n\n\n\n<div class=\"gb-element-d7c653f6\">\n<img loading=\"lazy\" decoding=\"async\" width=\"768\" height=\"432\" class=\"gb-media-2e23674b\" alt=\"\" src=\"https:\/\/realsenseai.com\/wp-content\/uploads\/2018\/07\/d415-depth-768x432-1.png\" srcset=\"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2018\/07\/d415-depth-768x432-1.png 768w, https:\/\/www.realsenseai.com\/wp-content\/uploads\/2018\/07\/d415-depth-768x432-1-300x169.png 300w\" sizes=\"auto, (max-width: 768px) 100vw, 768px\" \/>\n\n\n\n<p>Depth-map using <strong>Intel RealSense D415 stereo camera<\/strong><\/p>\n<\/div>\n\n\n\n<div class=\"gb-element-84917039\">\n<img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"360\" class=\"gb-media-bf340a4f\" alt=\"\" src=\"https:\/\/realsenseai.com\/wp-content\/uploads\/2018\/07\/realsense-depth.gif\"\/>\n<\/div>\n\n\n\n<div>\n<p>Point-cloud using <strong>RealSense D415 stereo camera<\/strong><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Summary<\/h3>\n\n\n\n<p>Just like OpenCV, RealSense technology offers an open\u2011source, cross\u2011platform set of <a href=\"https:\/\/github.com\/IntelRealSense\/librealsense\">APIs<\/a> for getting depth&nbsp;data.<\/p>\n\n\n\n<p>Check out these resources for more info:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><a href=\"http:\/\/www.electronicdesign.com\/industrial-automation\/new-intel-realsense-cameras-deliver-low-cost-3d-solutions\" target=\"_blank\" rel=\"noreferrer noopener\">electronicdesign \u2013 New RealSense Cameras Deliver Low-Cost 3D Solutions<\/a><\/li>\n\n\n\n<li><a href=\"https:\/\/www.youtube.com\/watch?v=OOVl5dx7Bb8\" target=\"_blank\" rel=\"noreferrer noopener\">Augmented World Expo \u2013 Help Your Embedded Systems See, Navigate, and Understand<\/a><\/li>\n\n\n\n<li><a 
href=\"https:\/\/dev.realsenseai.com\/docs\/whitepapers\">List of whitepapers<\/a> covering RealSense D400 cameras technology<\/li>\n\n\n\n<li><a href=\"https:\/\/realsenseai.com\/?page_id=11\">Buy RealSense stereo depth cameras<\/a><\/li>\n<\/ol>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>In this post, we\u2019ll cover the basics of stereoscopic vision, including block-matching, calibration and rectification, depth from stereo using OpenCV, passive vs. active stereo, and relation to structured light.<\/p>\n","protected":false},"author":10,"featured_media":42853,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"featured_image_focal_point":[],"inline_featured_image":false,"footnotes":""},"categories":[1206],"tags":[905,906,907],"capability_application":[],"industry":[],"class_list":["post-45264","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-whitepapers-cn","tag-depth-from-stereo-cn","tag-stereo-vision-cn","tag-stereoscopic-depth-cn"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v26.7 (Yoast SEO v27.0) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>The basics of stereo depth vision - RealSense<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/\" \/>\n<meta property=\"og:locale\" content=\"zh_CN\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The basics of stereo depth vision\" \/>\n<meta property=\"og:description\" content=\"In this post, we\u2019ll cover the basics of stereoscopic vision, including block-matching, calibration and rectification, depth from stereo using OpenCV, passive vs. 
active stereo, and relation to structured light.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/\" \/>\n<meta property=\"og:site_name\" content=\"RealSense\" \/>\n<meta property=\"article:published_time\" content=\"2018-07-18T02:11:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2026\/01\/intel_realsense_stereo_depth_vision_basics-1024x331-1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"331\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"jaymie.tan@freshwatercreative.ca\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"\u4f5c\u8005\" \/>\n\t<meta name=\"twitter:data1\" content=\"jaymie.tan@freshwatercreative.ca\" \/>\n\t<meta name=\"twitter:label2\" content=\"\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 \u5206\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/\"},\"headline\":\"The basics of stereo depth 
vision\",\"datePublished\":\"2018-07-18T02:11:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/\"},\"wordCount\":965,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.realsenseai.com\/cn\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2026\/01\/intel_realsense_stereo_depth_vision_basics-1024x331-1.jpg\",\"keywords\":[\"Depth from stereo\",\"Stereo vision\",\"Stereoscopic depth\"],\"articleSection\":[\"\u767d\u76ae\u4e66\"],\"inLanguage\":\"zh-Hans\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/\",\"url\":\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/\",\"name\":\"The basics of stereo depth vision - 
RealSense\",\"isPartOf\":{\"@id\":\"https:\/\/www.realsenseai.com\/cn\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2026\/01\/intel_realsense_stereo_depth_vision_basics-1024x331-1.jpg\",\"datePublished\":\"2018-07-18T02:11:00+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#breadcrumb\"},\"inLanguage\":\"zh-Hans\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#primaryimage\",\"url\":\"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2026\/01\/intel_realsense_stereo_depth_vision_basics-1024x331-1.jpg\",\"contentUrl\":\"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2026\/01\/intel_realsense_stereo_depth_vision_basics-1024x331-1.jpg\",\"width\":1024,\"height\":331},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"RealSense\",\"item\":\"https:\/\/www.realsenseai.com\/cn\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"\u65b0\u95fb\u4e0e\u6d1e\u5bdf\",\"item\":\"https:\/\/www.realsenseai.com\/cn\/category\/news-insights-cn\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"\u767d\u76ae\u4e66\",\"item\":\"https:\/\/www.realsenseai.com\/cn\/category\/news-insights-cn\/whit
epapers-cn\/\"},{\"@type\":\"ListItem\",\"position\":4,\"name\":\"The basics of stereo depth vision\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.realsenseai.com\/cn\/#website\",\"url\":\"https:\/\/www.realsenseai.com\/cn\/\",\"name\":\"RealSense\",\"description\":\"Powering Physical AI with Advanced Vision and Perception\",\"publisher\":{\"@id\":\"https:\/\/www.realsenseai.com\/cn\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.realsenseai.com\/cn\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"zh-Hans\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.realsenseai.com\/cn\/#organization\",\"name\":\"RealSense\",\"url\":\"https:\/\/www.realsenseai.com\/cn\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/www.realsenseai.com\/cn\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/realsenseai.com\/wp-content\/uploads\/2025\/07\/realsenseai_logo.jpeg\",\"contentUrl\":\"https:\/\/realsenseai.com\/wp-content\/uploads\/2025\/07\/realsenseai_logo.jpeg\",\"width\":200,\"height\":200,\"caption\":\"RealSense\"},\"image\":{\"@id\":\"https:\/\/www.realsenseai.com\/cn\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/linkedin.com\/company\/realsenseai\/\",\"https:\/\/www.youtube.com\/@IntelRealSense\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.realsenseai.com\/cn\/#\/schema\/person\/41f6ef6561b53e9f1630bdb32696053c\",\"name\":\"jaymie.tan@freshwatercreative.ca\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/www.realsenseai.com\/cn\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/0e2792e544c277e7af150f837b7dc6c0786155f67f34faf8431d3f0a3a573d34?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/0e2792e544c277e7af150f837b7dc6c0786155f67f34faf
8431d3f0a3a573d34?s=96&d=mm&r=g\",\"caption\":\"jaymie.tan@freshwatercreative.ca\"}}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"The basics of stereo depth vision - RealSense","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/","og_locale":"zh_CN","og_type":"article","og_title":"The basics of stereo depth vision","og_description":"In this post, we\u2019ll cover the basics of stereoscopic vision, including block-matching, calibration and rectification, depth from stereo using OpenCV, passive vs. active stereo, and relation to structured light.","og_url":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/","og_site_name":"RealSense","article_published_time":"2018-07-18T02:11:00+00:00","og_image":[{"width":1024,"height":331,"url":"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2026\/01\/intel_realsense_stereo_depth_vision_basics-1024x331-1.jpg","type":"image\/jpeg"}],"author":"jaymie.tan@freshwatercreative.ca","twitter_card":"summary_large_image","twitter_misc":{"\u4f5c\u8005":"jaymie.tan@freshwatercreative.ca","\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4":"6 \u5206"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#article","isPartOf":{"@id":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/"},"headline":"The basics of stereo depth 
vision","datePublished":"2018-07-18T02:11:00+00:00","mainEntityOfPage":{"@id":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/"},"wordCount":965,"commentCount":0,"publisher":{"@id":"https:\/\/www.realsenseai.com\/cn\/#organization"},"image":{"@id":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#primaryimage"},"thumbnailUrl":"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2026\/01\/intel_realsense_stereo_depth_vision_basics-1024x331-1.jpg","keywords":["Depth from stereo","Stereo vision","Stereoscopic depth"],"articleSection":["\u767d\u76ae\u4e66"],"inLanguage":"zh-Hans","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/","url":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/","name":"The basics of stereo depth vision - 
RealSense","isPartOf":{"@id":"https:\/\/www.realsenseai.com\/cn\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#primaryimage"},"image":{"@id":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#primaryimage"},"thumbnailUrl":"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2026\/01\/intel_realsense_stereo_depth_vision_basics-1024x331-1.jpg","datePublished":"2018-07-18T02:11:00+00:00","breadcrumb":{"@id":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#breadcrumb"},"inLanguage":"zh-Hans","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/"]}]},{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#primaryimage","url":"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2026\/01\/intel_realsense_stereo_depth_vision_basics-1024x331-1.jpg","contentUrl":"https:\/\/www.realsenseai.com\/wp-content\/uploads\/2026\/01\/intel_realsense_stereo_depth_vision_basics-1024x331-1.jpg","width":1024,"height":331},{"@type":"BreadcrumbList","@id":"https:\/\/www.realsenseai.com\/cn\/news-insights-cn\/whitepapers-cn\/the-basics-of-stereo-depth-vision\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"RealSense","item":"https:\/\/www.realsenseai.com\/cn\/"},{"@type":"ListItem","position":2,"name":"\u65b0\u95fb\u4e0e\u6d1e\u5bdf","item":"https:\/\/www.realsenseai.com\/cn\/category\/news-insights-cn\/"},{"@type":"ListItem","position":3,"name":"\u767d\u76ae\u4e66","item":"https:\/\/www.realsenseai.com\/cn\/category\/news-insights-cn\/whitepapers-cn\/"},{"@type":"ListItem","position":4,"name":"The basics of stereo depth 
vision"}]},{"@type":"WebSite","@id":"https:\/\/www.realsenseai.com\/cn\/#website","url":"https:\/\/www.realsenseai.com\/cn\/","name":"RealSense","description":"Powering Physical AI with Advanced Vision and Perception","publisher":{"@id":"https:\/\/www.realsenseai.com\/cn\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.realsenseai.com\/cn\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"zh-Hans"},{"@type":"Organization","@id":"https:\/\/www.realsenseai.com\/cn\/#organization","name":"RealSense","url":"https:\/\/www.realsenseai.com\/cn\/","logo":{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"https:\/\/www.realsenseai.com\/cn\/#\/schema\/logo\/image\/","url":"https:\/\/realsenseai.com\/wp-content\/uploads\/2025\/07\/realsenseai_logo.jpeg","contentUrl":"https:\/\/realsenseai.com\/wp-content\/uploads\/2025\/07\/realsenseai_logo.jpeg","width":200,"height":200,"caption":"RealSense"},"image":{"@id":"https:\/\/www.realsenseai.com\/cn\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/linkedin.com\/company\/realsenseai\/","https:\/\/www.youtube.com\/@IntelRealSense"]},{"@type":"Person","@id":"https:\/\/www.realsenseai.com\/cn\/#\/schema\/person\/41f6ef6561b53e9f1630bdb32696053c","name":"jaymie.tan@freshwatercreative.ca","image":{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"https:\/\/www.realsenseai.com\/cn\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/0e2792e544c277e7af150f837b7dc6c0786155f67f34faf8431d3f0a3a573d34?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/0e2792e544c277e7af150f837b7dc6c0786155f67f34faf8431d3f0a3a573d34?s=96&d=mm&r=g","caption":"jaymie.tan@freshwatercreative.ca"}}]}},"_links":{"self":[{"href":"https:\/\/www.realsenseai.com\/cn\/wp-json\/wp\/v2\/posts\/45264","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/
www.realsenseai.com\/cn\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.realsenseai.com\/cn\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.realsenseai.com\/cn\/wp-json\/wp\/v2\/users\/10"}],"replies":[{"embeddable":true,"href":"https:\/\/www.realsenseai.com\/cn\/wp-json\/wp\/v2\/comments?post=45264"}],"version-history":[{"count":0,"href":"https:\/\/www.realsenseai.com\/cn\/wp-json\/wp\/v2\/posts\/45264\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.realsenseai.com\/cn\/wp-json\/wp\/v2\/media\/42853"}],"wp:attachment":[{"href":"https:\/\/www.realsenseai.com\/cn\/wp-json\/wp\/v2\/media?parent=45264"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.realsenseai.com\/cn\/wp-json\/wp\/v2\/categories?post=45264"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.realsenseai.com\/cn\/wp-json\/wp\/v2\/tags?post=45264"},{"taxonomy":"capability_application","embeddable":true,"href":"https:\/\/www.realsenseai.com\/cn\/wp-json\/wp\/v2\/capability_application?post=45264"},{"taxonomy":"industry","embeddable":true,"href":"https:\/\/www.realsenseai.com\/cn\/wp-json\/wp\/v2\/industry?post=45264"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}