I have a pair of cameras separated by some distance, looking at a scene. I have calibrated them using a checkerboard, so I have each camera's intrinsic parameters as well as its pose (rotation and translation), but I am stuck on how to do data/image fusion.
I think I have to do feature detection and feature matching, and then use the triangulated world coordinates to do the fusion, but I am not sure if I am on the right path.
It would be very helpful if anyone could point me to an answer or help me with this.
Or, if I am wrong, what is the right method or steps to follow?
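For reference, here is a minimal sketch of the step after matching: once you have a matched pixel pair and the two projection matrices from your calibration, you can triangulate the 3D world point by linear (DLT) triangulation. The intrinsics and camera geometry below are made-up toy values for illustration, not from any real calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : 3x4 projection matrices (K @ [R | t]) from calibration.
    x1, x2 : matched pixel coordinates (u, v) in each image.
    Returns the 3D point in the common world frame.
    """
    # Each image observation contributes two linear constraints on X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Toy setup (assumed values): identity intrinsics, second camera
# translated one unit along x relative to the first.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Synthesize a matched pair by projecting a known 3D point.
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0)
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0)
x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))
```

In practice you would get the matched pairs from a detector/matcher (e.g. ORB or SIFT with a ratio test) and could use `cv2.triangulatePoints` instead of the hand-rolled DLT; the sketch just shows how the calibration outputs tie the two images together in one world frame.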