
depth inaccuracy #2523

Closed
kubrakaracan opened this issue Oct 12, 2018 · 10 comments

@kubrakaracan

kubrakaracan commented Oct 12, 2018

Required Info
Camera Model: D435
Firmware Version: 05.10.03.00
Operating System & Version: Ubuntu 16.04
Kernel Version (Linux Only): 4.15.0-36-generic
Platform: PC
SDK Version: 2.16.1
Language: C++
Segment: Robot

Hi,
I am trying to conduct an experiment using the D435. The experiment compares the theoretical and actual displacements of my robot, and the D435 provides the actual displacement. However, the error in the z-axis displacement turned out to be ~8% (for example, a stick whose height should measure 14 cm comes out as 13.1 cm). The camera is on a tripod, 80 cm away from my workspace. I also checked the z-distance in the depth-quality tool, and it has an error as well (it reports the distance to the ground as 745 mm when it is actually 800 mm). Basically, my code takes a color image as input and segments colors using a connected-components algorithm. Then I pass the start and end pixels detected in the color image to rs2_deproject_pixel_to_point(). In the deprojection call I use the aligned depth frame's intrinsics and the distance from get_distance(), again with the aligned depth frame. I am conducting a delicate experiment, so I need high accuracy. What can be improved in my code/procedure, or what is wrong? I also reviewed the issues similar to mine and applied their suggestions, but they didn't work... Thank you in advance!
My code's working procedure is as follows:

#include <librealsense2/rs.hpp>   // RealSense SDK
#include <librealsense2/rsutil.h> // rs2_deproject_pixel_to_point
#include <opencv2/opencv.hpp>
#include <iostream>
#include <fstream>
#include "cv-helpers.hpp"         // frame_to_mat / depth_frame_to_meters (SDK OpenCV wrapper examples)

using namespace cv;
using namespace std;

using pixel = std::pair<float, float>;            // image coordinate pair, as in the SDK examples
using segmentPair = std::pair<int, cv::Point2f>;  // assumed definition, inferred from usage below

void run(const Mat &img, int threshval, const rs2::depth_frame& frame, pixel u, pixel v, bool quiet = false);
float dist_3d(const rs2::depth_frame& frame, pixel u, pixel v);

int main(int argc, char * argv[]) try
{
    rs2::pipeline pipe;
    rs2::config cfg;

    cfg.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 848, 480, RS2_FORMAT_BGR8, 30);

    pipe.start(cfg);

    rs2::align align_to(RS2_STREAM_COLOR);
    rs2::frameset frames;
    // Skip the first frames to let auto-exposure settle.
    for (int i = 0; i < 30; i++) frames = pipe.wait_for_frames();

    auto data = align_to.process(frames);

    rs2::video_frame color_frame = data.get_color_frame();
    rs2::depth_frame depth_frame = data.get_depth_frame();

    auto color_mat = frame_to_mat(color_frame);
    auto depth_mat = depth_frame_to_meters(pipe, depth_frame);

    pixel u;
    pixel v;
    Mat m_1 = color_mat.clone();
    cvtColor(m_1, m_1, COLOR_BGR2GRAY); // the stream is BGR8, so convert from BGR, not RGB
    run(m_1, 60, depth_frame, u, v);
    return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function()
              << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception& e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}

void run(const Mat &img, int threshval, const rs2::depth_frame& frame, pixel u, pixel v, bool quiet)
{
    // Binarize the grayscale image and label connected components.
    Mat bw = threshval < 128 ? (img < threshval) : (img > threshval);
    Mat labelImage(img.size(), CV_32S);
    int nLabels = connectedComponents(bw, labelImage, 8, CV_32S);

    // Assign a random color to each label (label 0 is the background).
    vector<Vec3b> colors(nLabels);
    colors[0] = Vec3b(0, 0, 0);
    for (int label = 1; label < nLabels; ++label)
    {
        colors[label] = Vec3b(rand() & 255, rand() & 255, rand() & 255);
    }
    Mat dst(img.size(), CV_8UC3);
    for (int r = 0; r < dst.rows; ++r)
    {
        for (int c = 0; c < dst.cols; ++c)
        {
            int label = labelImage.at<int>(r, c);
            Vec3b &pixel = dst.at<Vec3b>(r, c);
            pixel = colors[label];
        }
    }
    Mat seeMyLabels;
    normalize(labelImage, seeMyLabels, 0, 255, NORM_MINMAX, CV_8U);
    applyColorMap(seeMyLabels, seeMyLabels, COLORMAP_JET);

    // Extract contours of the colored segments and compute their centroids.
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    vector<segmentPair> holder;

    int segment_size_threshold = 50;
    Mat src_gray;
    cvtColor(dst, src_gray, COLOR_BGR2GRAY);
    blur(src_gray, src_gray, Size(3, 3));
    int thresh = 50;
    Mat canny_output;
    Canny(src_gray, canny_output, thresh, thresh * 3, 3);
    findContours(canny_output, contours, hierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0));

    vector<Moments> mu(contours.size());
    for (size_t i = 0; i < contours.size(); i++)
    {
        mu[i] = moments(contours[i], false);
    }
    vector<Point2f> mc(contours.size());
    for (size_t i = 0; i < contours.size(); i++)
    {
        mc[i] = Point2f(static_cast<float>(mu[i].m10 / mu[i].m00), static_cast<float>(mu[i].m01 / mu[i].m00));
    }

    // Draw and number the segments that pass the size threshold.
    Mat drawing(canny_output.size(), CV_8UC3, Scalar(255, 255, 255));
    for (int i = 0; i < (int)contours.size(); i++)
    {
        if (mu[i].m00 >= segment_size_threshold)
        {
            char str[200];
            sprintf(str, "%d", i);
            Scalar color = Scalar(0, 0, 255);
            drawContours(drawing, contours, i, color, 2, 8, hierarchy, 0, Point());
            circle(drawing, mc[i], 4, color, -1, 8, 0);
            putText(drawing, str, mc[i], 1, 2, Scalar(255, 0, 0), 2, 8);
            segmentPair temp;
            temp.first = i;
            temp.second = mc[i];
            holder.push_back(temp);
        }
    }

    // Ask the user which two segment labels to measure between.
    cout << "label 1 " << endl;
    int segmentLabel1;
    cin >> segmentLabel1;
    cout << "label 2 " << endl;
    int segmentLabel2;
    cin >> segmentLabel2;

    float distance;
    for (size_t i = 0; i < holder.size(); i++)
    {
        for (size_t j = 0; j < holder.size(); j++)
        {
            if (segmentLabel1 == holder[i].first && segmentLabel2 == holder[j].first)
            {
                u.first = holder[i].second.x;
                u.second = holder[i].second.y;
                v.first = holder[j].second.x;
                v.second = holder[j].second.y;
                distance = dist_3d(frame, u, v);
                cout << "distance: " << distance * 1000 << " mm" << endl;
            }
        }
    }
}

float dist_3d(const rs2::depth_frame& frame, pixel u, pixel v)
{
    float upixel[2];
    float upoint[3];
    float vpixel[2];
    float vpoint[3];

    upixel[0] = u.first;
    upixel[1] = u.second;
    vpixel[0] = v.first;
    vpixel[1] = v.second;

    // Depth (in meters) at each pixel of the aligned depth frame.
    float udist = frame.get_distance(u.first, u.second);
    float vdist = frame.get_distance(v.first, v.second);

    // Calibration data; after align-to-color, these are the color stream's intrinsics.
    rs2_intrinsics intr = frame.get_profile().as<rs2::video_stream_profile>().get_intrinsics();
    rs2_deproject_pixel_to_point(upoint, &intr, upixel, udist);
    rs2_deproject_pixel_to_point(vpoint, &intr, vpixel, vdist);

    std::ofstream outfile;
    outfile.open("test.txt", std::ios_base::app);
    outfile << "point x: " << (upoint[0] - vpoint[0]) * 1000 << " mm /"
            << " point y: " << (upoint[1] - vpoint[1]) * 1000 << " mm /"
            << " point z: " << (upoint[2] - vpoint[2]) * 1000 << " mm" << endl;

    // Euclidean distance between the two deprojected 3D points.
    return sqrt(pow(upoint[0] - vpoint[0], 2) +
                pow(upoint[1] - vpoint[1], 2) +
                pow(upoint[2] - vpoint[2], 2));
}

@RealSense-Customer-Engineering
Collaborator

[Realsense Customer Engineering Team Comment]
Hi kubrakaracan,

  1. Could you please elaborate on why you use the D435 instead of the D415? The D415 has better accuracy and is probably a better option for you.

  2. This looks like a calibration issue. Could you please run the dynamic calibration to see if the depth accuracy improves?

@kubrakaracan
Author

Thanks for your answers!

  1. My main project requires accuracy in motion; that's why I am using the D435. I also wanted to utilize the camera for the experiment above.
  2. I will try it and let you know. For the time being, rs-depth-quality gives an error of ~4%; do you think that is normal?

@kubrakaracan
Author

In addition to that, do you think I understand the align process correctly? I can measure the height of a stick accurately using rs-measure, yet my code measures it with an error of ~7%. Does this mean that something is wrong with my code?

@ev-mp
Collaborator

ev-mp commented Oct 16, 2018

Hello @kubrakaracan,

Accuracy measurement and profiling is a convoluted topic, so I'll try to address the specific points:

the error in the z-axis displacement turned out to be ~8% (for example, a stick whose height should measure 14 cm comes out as 13.1 cm)

  • The device's (in)accuracy is a function of the range from camera to surface, not of the size of the object. I.e., the actual error is fairly constant for a given range, regardless of whether the object's size is 14 or 140 cm.
    Note that when an object's size is established as the Euclidean distance between two points, where each point has an embedded error, the accumulated error can be twice as big.

it reports the distance to the ground as 745 mm when it is actually 800 mm

  • The depth represents the distance from the physical sensor to the object, not from the camera's case. The origin is basically several mm behind the front panel (per the D435 spec).
    This cannot explain a 55 mm offset, but it can add several mm to the error.
    When using the rs-depth-quality tool, make sure to minimize the skew angle.

  • Aligned depth - when aligning depth to color, the depth coordinates are rounded to the nearest pixel on the RGB grid. So when deprojecting aligned depth back to 3D space, the resulting x and y coordinates will be shifted.
    In other words, the alignment process keeps z but does not preserve the x and y of the original depth.
    This, in turn, scales up the produced error (see the back-of-the-envelope sketch below).
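To get a feel for the magnitude of that shift, here is a hypothetical back-of-the-envelope helper (not SDK code; it just applies the pinhole model x = (u - ppx) * z / fx that rs2_deproject_pixel_to_point uses for undistorted pixels, and the fx value in the example is only a rough estimate for the D435 color stream at 848x480):

// Worst-case lateral (x) shift caused by rounding the depth pixel to the
// nearest color pixel: a pixel error du at depth z maps to dx = du * z / fx
// under the pinhole model x = (u - ppx) * z / fx.
float worst_case_x_shift(float z_meters, float fx)
{
    return 0.5f * z_meters / fx; // meters, for a half-pixel rounding error
}
// Example: at z = 0.8 m with fx of roughly 600 px, dx = 0.5 * 0.8 / 600,
// i.e. about 0.7 mm per endpoint, so up to ~1.3 mm on a two-endpoint measurement.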

In order to mitigate some of the errors produced during the alignment, you may use the following methods (a combined sketch follows the list):

  • Align color to depth (instead of depth to color), and then proceed with the feature extraction in OpenCV on the modified RGB image.

  • Keep using depth-to-RGB alignment but increase the RGB resolution to Full HD, so that the RGB grid will be much denser and the "rounding" error will be more tightly bounded.

  • If the scene is static, as indicated, then the original depth error can be greatly reduced by activating the post-processing filters, especially the temporal one.

  • The following white paper elaborates on the expected performance (accuracy) of the D435 and provides fine-tuning and error-minimization guidelines. Definitely recommended reading.
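A minimal sketch combining the first and third suggestions, assuming SDK 2.16+ (align color to depth so the depth coordinates stay on their native grid, then smooth the static scene with the temporal filter; the stream settings and warm-up count are illustrative, not prescriptive):

#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h> // rs2_deproject_pixel_to_point

int main() try
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 30);
    // Full HD color also helps if you stay with depth-to-color alignment instead.
    cfg.enable_stream(RS2_STREAM_COLOR, 1920, 1080, RS2_FORMAT_BGR8, 30);
    pipe.start(cfg);

    rs2::align align_to_depth(RS2_STREAM_DEPTH); // color is warped onto the depth grid
    rs2::temporal_filter temporal;               // smooths depth over time on a static scene

    rs2::frameset frames;
    for (int i = 0; i < 30; i++) frames = pipe.wait_for_frames(); // warm-up

    frames = align_to_depth.process(frames);
    rs2::depth_frame depth = frames.get_depth_frame();
    depth = temporal.process(depth);             // filtered depth; depth intrinsics unchanged

    // Deproject with the depth stream's own intrinsics - no rounding to the RGB grid.
    rs2_intrinsics intr = depth.get_profile().as<rs2::video_stream_profile>().get_intrinsics();
    // ... run feature extraction on the aligned color frame, then
    // rs2_deproject_pixel_to_point(point, &intr, pixel, depth.get_distance(x, y));
    return EXIT_SUCCESS;
}
catch (const rs2::error &) { return EXIT_FAILURE; }

Note that the temporal filter accumulates history, so on a truly static scene it pays off to feed it several consecutive frames rather than a single one.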

@kubrakaracan
Author

Hi again, I changed the alignment from depth-to-RGB to RGB-to-depth, and the result is shown in the screenshot below. I ran the code a few times, but the result didn't change. What do you think the reason is? Is it related to the latest release of the SDK?

[Screenshot: original_screenshot, 19.10.2018]

@kubrakaracan
Author

I solved the case in the screenshot, but the accuracy is still an issue. Waiting for your answer! Thanks in advance!

@kubrakaracan
Author

You can see the difference between the ground truth and the measured distance in the screenshot:
[Screenshot from 2018-10-23 13:42:49]

@YangJiao1996

Hi @kubrakaracan,
I also have this depth-inaccuracy issue with my D435 camera. Similar to your screenshot, I get a low RMS error but poor Z-accuracy compared to ground truth (~100 mm at 1500 mm). Was your problem solved by the methods mentioned above?

@kubrakaracan
Author

kubrakaracan commented Mar 18, 2019

Hi @YangJiao1996,
The method made my algorithm better, though not perfect. I think it is worth trying.

  • When using the rs-depth-quality tool, make sure to minimize the skew angle.

The bullet above (quoted from the advice earlier in this thread) helped me the most. I built a setup that I am certain is perfectly perpendicular to the ground; hence, the error in the distance reported by rs-depth-quality was mitigated.

@agrunnet
Contributor

agrunnet commented Mar 18, 2019

The absolute accuracy can only be improved with a recalibration.
