I'm working with the Kinect and OpenCV. I already searched this forum but didn't find anything like my problem. I keep the raw depth data from the Kinect (16 bit), store it in a CvMat*, and then pass it to cvGetImage to create an IplImage* from it:

CvMat* depthMetersMat = cvCreateMat( 480, 640, CV_16UC1 );
[...]
cvGetImage(depthMetersMat,temp);

But now I need to work on this image in order to run cvThreshold and find contours. These two functions need an 8-bit image as input. How can I convert the CvMat* depthMetersMat into an 8-bit CvMat*?

1 Answer

The answer that @SSteve gave almost did the trick for me, but the call to convertTo seemed to just saturate the values at 255 instead of actually scaling them down into the 8-bit range. Since @Sirnino didn't specify which behavior was wanted, I thought I'd post the code that does the (linear) scaling, in case anyone else wants to do that.

SSteve's original code:

CvMat* depthMetersMat = cvCreateMat( 480, 640, CV_16UC1 );
cv::Mat m2(depthMetersMat, true);
m2.convertTo(m2, CV_8U);
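
(A side note, and an assumption about your OpenCV version: the cv::Mat(CvMat*, bool) constructor used above comes from the old 2.x C++ interface and was removed in later releases. On newer builds that still ship the legacy C types, cv::cvarrToMat should do the same copying wrap:)

cv::Mat m2 = cv::cvarrToMat(depthMetersMat, true);  // copy the CvMat data into a cv::Mat on newer OpenCV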

To scale the values, you just need to pass the scale factor (1/256, or 0.00390625, for 16-bit to 8-bit scaling) as the third parameter (alpha) in the call to convertTo:

m2.convertTo(m2, CV_8U, 0.00390625);

You can also pass a fourth parameter (beta, a delta) that is added to each value after it is multiplied by alpha. See the docs for more info.
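
For completeness, here is a minimal sketch of how the scaled conversion feeds into the thresholding and contour-finding that the question mentions, using the newer C++ API (OpenCV 3.x/4.x headers). It assumes the 16-bit depth data is already in a cv::Mat named depthMeters, and the threshold value of 128 is just a placeholder:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// depthMeters is assumed to be a CV_16UC1 cv::Mat holding the raw Kinect depth data.
void findDepthContours(const cv::Mat& depthMeters)
{
    // Linearly scale the 16-bit range (0..65535) down to 8 bits (0..255).
    cv::Mat depth8u;
    depthMeters.convertTo(depth8u, CV_8U, 0.00390625);

    // Both threshold and findContours expect an 8-bit, single-channel image.
    cv::Mat mask;
    cv::threshold(depth8u, mask, 128, 255, cv::THRESH_BINARY);  // 128 is a placeholder threshold

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
}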

