
SAVE depth data into file #1485

Closed
martijnkusters opened this issue Apr 5, 2018 · 29 comments

Comments

@martijnkusters

I would like to save the depth data into a file for the D400 camera.
I work in C++. Is there an example available that I can use?

Best Regards,
Martijn

@GitwellAnyohub

I'm basically trying to do the same thing. I've tried some of these examples https://github.com/IntelRealSense/librealsense/tree/master/examples
but they aren't working.

For rs-save-to-disk.cpp, for example, here is what I did (I'm using Ubuntu 16.04):

  1. Installed the SDK from source following these instructions https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md

  2. Saved the rs-save-to-disk.cpp script to my home directory

  3. Copied and pasted this https://github.com/nothings/stb/blob/master/stb_image_write.h into a file called stb_image_write.h, also in my home directory

  4. Ran the following in the terminal

$ g++ -std=c++11 rs-save-to-disk.cpp -lrealsense2

The result is a file called a.out that I see in my home directory (I still have no idea what it is). It doesn't seem like anything worked.

@GitwellAnyohub

@martijnkusters Sorry, I just went to lunch. So the output of the command I ran above is an a.out file that you can run using:

$ ./a.out #just put a ./ in front of the output file and it runs

This produces two PNG files, one depth and one RGB.

@martijnkusters
Author

I can save the depth and RGB as PNG files, but what I want to do is save the raw depth data in a data file format like CSV or TXT.

@dorodnic
Contributor

dorodnic commented Apr 9, 2018

bool save_frame_raw_data(const std::string& filename, rs2::frame frame)
{
    bool ret = false;
    auto image = frame.as<video_frame>();
    if (image)
    {
        std::ofstream outfile(filename.data(), std::ofstream::binary);
        outfile.write(static_cast<const char*>(image.get_data()), image.get_height()*image.get_stride_in_bytes());

        outfile.close();
        ret = true;
    }

    return ret;
}

@martijnkusters
Author

And is there already code to save this depth data in a MATLAB format?

@GitwellAnyohub

@dorodnic can you briefly explain the code and the context in which it has to be inserted?

@dorodnic
Contributor

Of course. rs2::frame is the object the SDK uses to reference a single frame. You can add save_frame_raw_data("1.bin", depth); to rs-capture.cpp:32 to save the depth frame (or change it to color if you want).
Inside the function, we check that the frame indeed comes from a video stream (and not some other supported sensor). Then you can access the underlying pixel byte array using image.get_data() and calculate the size of the frame in bytes using image.get_height()*image.get_stride_in_bytes(). From there it's pretty much copy-paste from Stack Overflow to write the bytes to a file.

@martijnkusters I'm not sure what a MATLAB format is, but there must be ways to import a raw byte array into MATLAB. Generating a giant CSV with depth values in meters is also not very complicated; please let us know if you need help.
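
For readers who want the CSV route spelled out, here is a minimal editorial sketch (not code from this thread). It writes a single Z16 depth frame as comma-separated distances in meters; save_depth_as_csv is a hypothetical helper name, not an SDK function.

#include <librealsense2/rs.hpp>
#include <fstream>
#include <string>

// Hypothetical helper: write one depth frame as a CSV of distances in meters
void save_depth_as_csv(const std::string& filename, const rs2::depth_frame& depth)
{
    std::ofstream out(filename);
    for (int y = 0; y < depth.get_height(); ++y)
    {
        for (int x = 0; x < depth.get_width(); ++x)
        {
            out << depth.get_distance(x, y);                // get_distance() already applies the depth scale
            out << (x + 1 < depth.get_width() ? "," : "\n");
        }
    }
}

Called with the frame returned by get_depth_frame() inside the capture loop, this produces one CSV per frame.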

@GitwellAnyohub

GitwellAnyohub commented Apr 10, 2018

@dorodnic @martijnkusters OK, just to tie things together, I'm pasting the code I'm using here. First, I started with rs-capture.cpp and removed all the OpenGL nonsense because something isn't working there for me (which is fine, I don't need to see the video as it's streaming; I can do that in the downstream analysis). When I run the code below, the camera simply streams (I can tell by the IR):

#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include <iostream>

// Capture Example demonstrates how to
// capture depth and color video streams and render them to the screen
int main(int argc, char * argv[]) try
{
    // Declare depth colorizer for pretty visualization of depth data
    rs2::colorizer color_map;

    // Declare RealSense pipeline, encapsulating the actual device and sensors
    rs2::pipeline pipe;
    // Start streaming with default recommended configuration
    pipe.start();

    while (true) // Application still alive?
    {
        rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera

        rs2::frame depth = color_map(data.get_depth_frame()); // Find and colorize the depth data
        rs2::frame color = data.get_color_frame();            // Find the color data
    }

    return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception & e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}

NOW, I want to simply save the depth stream as a binary file (little endian is preferred, but it doesn't really matter). I want each pixel to be a number that represents distance from the camera. You're saying that to do that I need to copy and paste the code above like so:

#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include <iostream>

// Capture Example demonstrates how to
// capture depth and color video streams and render them to the screen

bool save_frame_raw_data(const std::string& filename, rs2::frame frame) // this function
{
    bool ret = false;
    auto image = frame.as<video_frame>();
    if (image)
    {
        std::ofstream outfile(filename.data(), std::ofstream::binary);
        outfile.write(static_cast<const char*>(image.get_data()), image.get_height()*image.get_stride_in_bytes());
        outfile.close();
        ret = true;
    }

    return ret;
}

int main(int argc, char * argv[]) try
{
    // Declare depth colorizer for pretty visualization of depth data
    rs2::colorizer color_map;

    // Declare RealSense pipeline, encapsulating the actual device and sensors
    rs2::pipeline pipe;
    // Start streaming with default recommended configuration
    pipe.start();

    while (true) // Until I deem it necessary to stop (i.e. until I press Ctrl+C in the command window)
    {
        rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera
        rs2::frame depth = color_map(data.get_depth_frame()); // Find and colorize the depth data
        rs2::frame color = data.get_color_frame();            // Find the color data
        save_frame_raw_data("1.bin", depth);
    }

    return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception & e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}

What I'm doing is copying the function at the top and calling it where you said to insert it. Of course there are errors, because I have no idea what I'm doing and I can't follow variables well in C++. Is this totally wrong? Does the function need to be defined somewhere else?

Here are the errors below:

rockwell@rockwell-asus:~/Behavior_From_Depth_Cameras$ g++ -std=c++11 rs-recorddepth.cpp -lrealsense2
rs-recorddepth.cpp: In function ‘bool save_frame_raw_data(const string&, rs2::frame)’:
rs-recorddepth.cpp:15:28: error: ‘video_frame’ was not declared in this scope
auto image = frame.as<video_frame>();
^
rs-recorddepth.cpp:15:28: note: suggested alternative:
In file included from /usr/include/librealsense2/hpp/rs_sensor.hpp:8:0,
from /usr/include/librealsense2/hpp/rs_device.hpp:8,
from /usr/include/librealsense2/hpp/rs_record_playback.hpp:8,
from /usr/include/librealsense2/hpp/rs_context.hpp:8,
from /usr/include/librealsense2/rs.hpp:9,
from rs-recorddepth.cpp:4:
/usr/include/librealsense2/hpp/rs_frame.hpp:407:11: note: ‘rs2::video_frame’
class video_frame : public frame
^
rs-recorddepth.cpp:15:41: error: no matching function for call to ‘rs2::frame::as()’
auto image = frame.as<video_frame>();
^
In file included from /usr/include/librealsense2/hpp/rs_sensor.hpp:8:0,
from /usr/include/librealsense2/hpp/rs_device.hpp:8,
from /usr/include/librealsense2/hpp/rs_record_playback.hpp:8,
from /usr/include/librealsense2/hpp/rs_context.hpp:8,
from /usr/include/librealsense2/rs.hpp:9,
from rs-recorddepth.cpp:4:
/usr/include/librealsense2/hpp/rs_frame.hpp:361:11: note: candidate: template<class T> T rs2::frame::as() const
T as() const
^
/usr/include/librealsense2/hpp/rs_frame.hpp:361:11: note: template argument deduction/substitution failed:
rs-recorddepth.cpp:15:41: error: template argument 1 is invalid
auto image = frame.as<video_frame>();
^
rs-recorddepth.cpp:18:33: error: variable ‘std::ofstream outfile’ has initializer but incomplete type
std::ofstream outfile(filename.data(), std::ofstream::binary);
^
rs-recorddepth.cpp:18:50: error: incomplete type ‘std::ofstream {aka std::basic_ofstream<char>}’ used in nested name specifier
std::ofstream outfile(filename.data(), std::ofstream::binary);

Also thanks for your help. I know I'm a lot of trouble!

@dorodnic
Contributor

I think adding using namespace rs2; would solve the compilation problem.

To save you some time, I modified rs-capture to dump depth and colour frames to CSV; you can use the code here: rs-capture.cpp
It has the extra benefit of converting depth values to meters and colour pixels to RGB. Hope this will do the trick :)

@GitwellAnyohub

@dorodnic Thanks so much Sergey!! I can't wait to get to my local machine and try it out! I'll let you know how it goes as soon as possible. @martijnkusters once I have this working I can help you with the Matlab part (if you haven't done it already)

@GitwellAnyohub

@dorodnic so something is definitely working! I was able to get multiple depth AND color CSV files. I need to look into it more (I only had 5 minutes at my machine and ran it quickly) and see why it's breaking the output up into multiple files (though this may not be a problem). Thanks so much!

Here is what it looked like:

[screenshot: terminal output from running rs-capture-2.cpp]

@dorodnic
Contributor

I'm appending the frame number to the filename at lines 19 and 44; if you only want to save the last frame, remove the << "_" << frame.get_frame_number() part.
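
For readers who don't open the linked gist, a hedged illustration of that filename pattern (the helper name and the "depth" prefix are assumptions, not a verbatim quote of the file):

#include <librealsense2/rs.hpp>
#include <sstream>
#include <string>

// Hypothetical helper showing the per-frame filename construction
std::string csv_name_for(const rs2::frame& frame)
{
    std::stringstream name;
    name << "depth" << "_" << frame.get_frame_number() << ".csv";
    return name.str(); // drop the << "_" << frame.get_frame_number() part to keep overwriting one file
}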

@martijnkusters
Author

martijnkusters commented Apr 11, 2018

Thank you for the work. I will try the code. In the meantime I have also modified the align example with some lines to capture depth and point clouds and write them into a txt file, to read with MATLAB.

Best Regards,
Martijn Kusters

// Snippet inserted into the rs-align example; width, height, depth_scale, p_depth_frame,
// clipping_dist, other_bpp, p_other_frame, pc and j are defined elsewhere in the modified example
ofstream myfile("c:/tmp/depth" + IntToStr(j) + ".txt");
ofstream myfile2("c:/tmp/pointcloud" + IntToStr(j) + ".txt");
points = pc.calculate(depth_frame);
//const uint16_t* p_cloud_frame = reinterpret_cast<const uint16_t*>(points.get_data());

auto vertices = points.get_vertices();
#pragma omp parallel for schedule(dynamic) // Using OpenMP to try to parallelise the loop
                                           // (note: the threads share myfile/myfile2, so write order is not guaranteed)
for (int y = 0; y < height; y++)
{
    auto depth_pixel_index = y * width;
    for (int x = 0; x < width; x++, ++depth_pixel_index)
    {
        // Get the depth value of the current pixel
        auto pixels_distance = 1000 * depth_scale * p_depth_frame[depth_pixel_index];
        //auto depth = frames.get_depth_frame();
        //points = pc.calculate(depth_frame);

        //auto pointcloudpunt = points;

        //array[depth_pixel_index] = pixels_distance;
        if (myfile.is_open())
        {
            myfile << pixels_distance << " ";
        }

        if (myfile2.is_open())
        {
            myfile2 << vertices[depth_pixel_index].x << " " << vertices[depth_pixel_index].y << " " << vertices[depth_pixel_index].z << " ";
        }

        // Check if the depth value is invalid (<= 0) or greater than the threshold
        if (pixels_distance <= 0.f || pixels_distance > clipping_dist)
        {
            // Calculate the offset in the other frame's buffer to the current pixel
            auto offset = depth_pixel_index * other_bpp;

            // Set pixel to "background" color (0x999999)
            std::memset(&p_other_frame[offset], 0x99, other_bpp);
        }
    }
}
myfile.close();
myfile2.close();

@martijnkusters
Author

Next step is to calibrate the camera.

@GitwellAnyohub

@dorodnic something that I noticed when running the rs-capture.cpp script is that a lot of frames were skipped. Is there anything I can do about that?

@GitwellAnyohub

I'm basically only saving ~10% of the frames. Does it take too long to open, write, and close a file each time?

@dorodnic
Contributor

There are two simple things you can do:

  1. Run cmake .. -DCMAKE_BUILD_TYPE=Release before rebuilding the SDK and the examples, to enable compiler optimisations
  2. Run it on a partition with faster IO, like an SSD

However, saving data to CSV is inherently inefficient.
We worked hard to offer a decent quality recorder with the *.bag format. It should be fairly easy to use (just add config.enable_record_to_file("filename.bag");) and it is reasonably optimised.

You can choose a hybrid approach of recording into a bag file and then extracting CSVs in post-processing; see the sketch after this comment. I will be happy to help with the technical details.
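
A minimal editorial sketch of that hybrid approach (the filename, frame count, and overall structure are assumptions, not from the thread): record a fixed number of framesets into a bag file, then replay the same file as if it were a live camera and extract whatever you need offline.

#include <librealsense2/rs.hpp>
#include <iostream>

int main() try
{
    // Record: the usual pipeline, plus one extra config call
    rs2::config record_cfg;
    record_cfg.enable_record_to_file("session.bag");
    rs2::pipeline recorder;
    recorder.start(record_cfg);
    for (int i = 0; i < 300; ++i)              // capture ~300 framesets, then stop
        recorder.wait_for_frames();
    recorder.stop();

    // Post-process: replay the bag file as if it were a live camera
    rs2::config playback_cfg;
    playback_cfg.enable_device_from_file("session.bag");
    rs2::pipeline playback;
    playback.start(playback_cfg);
    rs2::frameset frames = playback.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();
    // ... convert 'depth' to CSV or raw binary here, with no real-time pressure ...
    return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}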

@GitwellAnyohub

@dorodnic OK, I will try both options (I really hope the second one works, especially since I'm just using my laptop now). I do have a script for extracting the bag files, but it's super slow. In fact, the slowness is one of the reasons I wanted to try writing to disk. Would it be faster to write a binary file instead? Also, I'm not thrilled about doing the bag extraction because it requires me to use ROS, and that's something I could not download without sudo access; I therefore cannot process it on my university's cluster. Is there a very fast way to extract the bag files? If I wanted to do some real-time feature detection, would it be too slow to analyze frames in the bag file?

@GitwellAnyohub

GitwellAnyohub commented Apr 12, 2018

@dorodnic I was talking with some members of my lab about this issue, and it seems like the best solution for us would be to save the depth information as one long string of bytes in a single file. If each pixel is 16 bits, then we can save a really long string of bytes. Since we know the resolution, we can do whatever we want afterwards.

I guess this brings us back to the original code snippet you suggested, however I tried:

#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API

#include <iostream>
#include <fstream>
#include <string>

bool save_frame_raw_data(const std::string& filename, rs2::frame frame)
{
    bool ret = false;
    auto image = frame.as<video_frame>();
    if (image)
    {
        std::ofstream outfile(filename.data(), std::ofstream::binary);
        outfile.write(static_cast<const char*>(image.get_data()), image.get_height()*image.get_stride_in_bytes());

        outfile.close();
        ret = true;
    }

    return ret;
}

// Capture Example demonstrates how to
// capture depth and color video streams and render them to the screen
int main(int argc, char * argv[]) try
{
    rs2::colorizer color_map;

    auto depth_units = rs2::context().query_devices().front()
        .query_sensors().front().get_option(RS2_OPTION_DEPTH_UNITS);

    rs2::pipeline pipe;
    rs2::config myconfig; // I am declaring the configuration object
    myconfig.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 90); // This is the configuration that I want.
    pipe.start(myconfig);

    while (true) // Application still alive?
    {
        rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera
        rs2::frame depth = data.get_depth_frame();   // Get the depth frame
        save_frame_raw_data("1.bin", depth);
    }

    return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception & e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}

and got the following errors:

stream_depth_save_binary.cpp: In function ‘bool save_frame_raw_data(const string&, rs2::frame)’:
stream_depth_save_binary.cpp:13:27: error: ‘video_frame’ was not declared in this scope
auto image = frame.as<video_frame>();
^
stream_depth_save_binary.cpp:13:27: note: suggested alternative:
In file included from /usr/include/librealsense2/hpp/rs_sensor.hpp:8:0,
from /usr/include/librealsense2/hpp/rs_device.hpp:8,
from /usr/include/librealsense2/hpp/rs_record_playback.hpp:8,
from /usr/include/librealsense2/hpp/rs_context.hpp:8,
from /usr/include/librealsense2/rs.hpp:9,
from stream_depth_save_binary.cpp:3:
/usr/include/librealsense2/hpp/rs_frame.hpp:407:11: note: ‘rs2::video_frame’
class video_frame : public frame
^
stream_depth_save_binary.cpp:13:40: error: no matching function for call to ‘rs2::frame::as()’
auto image = frame.as<video_frame>();
^
In file included from /usr/include/librealsense2/hpp/rs_sensor.hpp:8:0,
from /usr/include/librealsense2/hpp/rs_device.hpp:8,
from /usr/include/librealsense2/hpp/rs_record_playback.hpp:8,
from /usr/include/librealsense2/hpp/rs_context.hpp:8,
from /usr/include/librealsense2/rs.hpp:9,
from stream_depth_save_binary.cpp:3:
/usr/include/librealsense2/hpp/rs_frame.hpp:361:11: note: candidate: template<class T> T rs2::frame::as() const
T as() const
^
/usr/include/librealsense2/hpp/rs_frame.hpp:361:11: note: template argument deduction/substitution failed:
stream_depth_save_binary.cpp:13:40: error: template argument 1 is invalid
auto image = frame.as<video_frame>();
^

@GitwellAnyohub

OK, I'm a fool. I just needed to change <video_frame> to rs2::video_frame.

@GitwellAnyohub

OK, so basically I've gotten it to "work". The script:

#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API

#include <iostream>
#include <fstream>
#include <string>

bool save_frame_raw_data(const std::string& filename, rs2::frame frame)
{
    bool ret = false;
    auto image = frame.as<rs2::video_frame>();
    if (image)
    {
        std::ofstream outfile(filename.data(), std::ofstream::binary);
        outfile.write(static_cast<const char*>(image.get_data()), image.get_height()*image.get_stride_in_bytes());

        outfile.close();
        ret = true;
    }

    return ret;
}

// Capture Example demonstrates how to
// capture depth and color video streams and render them to the screen
int main(int argc, char * argv[]) try
{
    rs2::colorizer color_map;

    auto depth_units = rs2::context().query_devices().front()
        .query_sensors().front().get_option(RS2_OPTION_DEPTH_UNITS);

    rs2::pipeline pipe;
    rs2::config myconfig; // I am declaring the configuration object
    myconfig.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 90); // This is the configuration that I want.
    pipe.start(myconfig);

    while (true) // Application still alive?
    {
        rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera
        rs2::frame depth = data.get_depth_frame();   // Get the depth frame
        save_frame_raw_data("1.bin", depth);
    }

    return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception & e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}

This gives me one file called 1.bin containing one frame of data. Now I just need it to record all the frames of data (see the sketch below for one way). Sorry, I'm also using this thread as a living notepad!
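
An editorial sketch (not from the thread) of one way to finish that last step, assuming the same save_frame_raw_data helper as in the post above: derive a distinct filename from the frame number so the loop stops overwriting 1.bin.

#include <librealsense2/rs.hpp>
#include <sstream>
#include <string>

bool save_frame_raw_data(const std::string& filename, rs2::frame frame); // helper from the post above

// Hypothetical replacement for the capture loop: one .bin per depth frame
void save_all_frames(rs2::pipeline& pipe)
{
    while (true)
    {
        rs2::frameset data = pipe.wait_for_frames();
        rs2::frame depth = data.get_depth_frame();

        std::ostringstream name;
        name << "depth_" << depth.get_frame_number() << ".bin";
        save_frame_raw_data(name.str(), depth);
    }
}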

@RealSense-Customer-Engineering
Collaborator

[Realsense Customer Engineering Team Comment]
Hi @GitwellAnyohub,

Thanks for sharing your experience. Do you need more help on this topic?
Thanks.

@RealSense-Customer-Engineering
Collaborator

[Realsense Customer Engineering Team Comment]
Closing this ticket here. Please feel free to create a new ticket if you still have questions.

@ywjia

ywjia commented Sep 11, 2018

I have a question about saving data to file.

Is it possible to save a frameset as a whole, as below:

  rs2::frameset frameset_w = pipe.wait_for_frames();
  save_frameset("1.bin", frameset_w);      

Then, in a separate post-processing step, we can simply do the following to retrieve all the data:

rs2::frameset frameset_r = read_frameset("1.bin");
rs2::video_frame color = frameset_r.get_color_frame();
rs2::depth_frame depth = frameset_r.get_depth_frame();
...

Note that this is different from recording, since here the user can choose which framesets to save.

Thanks.

@gayanbrahmanage

You can use OpenCV:
Mat M;
M.create(height, width, CV_32FC1);
Then save it using the .exr file type (see the sketch below).
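
A hedged sketch of that OpenCV route (save_depth_as_exr is a hypothetical helper, and this assumes OpenCV built with OpenEXR support; it is not part of librealsense): copy the depth frame into a CV_32FC1 Mat in meters and write it with cv::imwrite.

#include <librealsense2/rs.hpp>
#include <opencv2/opencv.hpp>
#include <string>

// Hypothetical helper: depth frame -> 32-bit float Mat (meters) -> .exr on disk
void save_depth_as_exr(const rs2::depth_frame& depth, const std::string& path)
{
    cv::Mat meters;
    meters.create(depth.get_height(), depth.get_width(), CV_32FC1);
    for (int y = 0; y < meters.rows; ++y)
        for (int x = 0; x < meters.cols; ++x)
            meters.at<float>(y, x) = depth.get_distance(x, y); // distance in meters per pixel
    cv::imwrite(path, meters); // e.g. "depth.exr"
}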

@Resays

Resays commented Apr 29, 2019

I'm using RealSense 2.20, and my task is to capture an image with the D415, convert that image to RGB-D data, and convert that data into XYZ for a 3D point cloud demonstration.
Any solution?

@Resays

Resays commented Apr 29, 2019

I'm actually looking to implement this research http://rgbd.cs.princeton.edu/
using RealSense and C++.
Can anyone help? @dorodnic @gayanbrahmanage
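
Not a full answer to the Princeton work, but a minimal editorial sketch of the RGB-D to XYZ step using the SDK's own point-cloud helper (default stream configuration assumed; the printout is only there to show the data is available):

#include <librealsense2/rs.hpp>
#include <iostream>

int main() try
{
    rs2::pipeline pipe;
    pipe.start();                                  // default depth + color configuration
    rs2::pointcloud pc;

    rs2::frameset frames = pipe.wait_for_frames();
    rs2::video_frame color = frames.get_color_frame();
    rs2::depth_frame depth = frames.get_depth_frame();

    pc.map_to(color);                              // associate the texture with the color stream
    rs2::points points = pc.calculate(depth);      // XYZ in meters for every depth pixel

    const rs2::vertex* xyz = points.get_vertices();
    const rs2::texture_coordinate* uv = points.get_texture_coordinates();
    std::cout << points.size() << " points; first vertex: "
              << xyz[0].x << " " << xyz[0].y << " " << xyz[0].z << std::endl;
    // uv[i].u / uv[i].v index into the color frame for each point's RGB
    return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}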

@gayanbrahmanage

@Resays

Resays commented Jun 10, 2019

Here is an example of getting RGB data from a point: how to get XYZ RGB data from RealSense
