How to color a point cloud from image pixels?
I am using a Google Tango tablet to acquire point cloud data and RGB camera images, and I want to create a 3D scan of a room. For that I need to map 2D image pixels onto point cloud points. I will be doing this with many point clouds and their corresponding images, so I need to write a script that takes two inputs, 1. a point cloud and 2. an image taken from the same point in the same direction, and outputs a colored point cloud. How should I approach this, and which platforms are simplest to use?
Here is the math to map a 3D point v to 2D pixel space in the camera image (assuming v already incorporates the extrinsic camera position and orientation; see note at bottom*):
// Project to tangent space.
vec2 imageCoords = v.xy / v.z;

// Apply radial distortion.
float r2 = dot(imageCoords, imageCoords);
float r4 = r2*r2;
float r6 = r2*r4;
imageCoords *= 1.0 + k1*r2 + k2*r4 + k3*r6;

// Map to pixel space.
vec3 pixelCoords = cameraTransform * vec3(imageCoords, 1);
where cameraTransform is the 3x3 matrix:

[ fx  0  cx ]
[  0 fy  cy ]
[  0  0   1 ]
with fx, fy, cx, cy, k1, k2, and k3 coming from TangoCameraIntrinsics.
pixelCoords is declared as a vec3, but it is actually 2D in homogeneous coordinates. The third coordinate is always 1 and can be ignored for practical purposes.
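To tie this together, here is a minimal C++ sketch (my own addition, not from the original answer) of the whole color lookup, assuming the points are already expressed in the color camera frame. The Point3, Color, Image, and Intrinsics types are hypothetical placeholders, not Tango API types:

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical minimal types for illustration only.
struct Point3 { float x, y, z; };
struct Color  { std::uint8_t r, g, b; };
struct Intrinsics { float fx, fy, cx, cy, k1, k2, k3; };
struct Image {
    int width, height;
    std::vector<std::uint8_t> rgb;  // packed RGB, row-major, top-to-bottom
    Color at(int px, int py) const {
        const std::uint8_t* p = &rgb[3 * (static_cast<std::size_t>(py) * width + px)];
        return Color{p[0], p[1], p[2]};
    }
};

// Maps a 3D point (already in the color camera frame) to integer pixel
// coordinates using the projection and radial distortion math above.
// Returns false if the point is behind the camera or outside the image.
bool ProjectToPixel(const Intrinsics& in, const Point3& v,
                    const Image& img, int* px, int* py) {
    if (v.z <= 0.0f) return false;
    // Project to tangent space.
    float u = v.x / v.z;
    float w = v.y / v.z;
    // Apply radial distortion.
    float r2 = u * u + w * w;
    float d = 1.0f + in.k1 * r2 + in.k2 * r2 * r2 + in.k3 * r2 * r2 * r2;
    u *= d;
    w *= d;
    // Map to pixel space (the 3x3 cameraTransform, written out).
    *px = static_cast<int>(in.fx * u + in.cx + 0.5f);
    *py = static_cast<int>(in.fy * w + in.cy + 0.5f);
    return *px >= 0 && *px < img.width && *py >= 0 && *py < img.height;
}

// Colors each point by sampling the image; points that project outside
// the frame keep a default black color.
std::vector<Color> ColorCloud(const Intrinsics& in,
                              const std::vector<Point3>& cloud,
                              const Image& img) {
    std::vector<Color> colors(cloud.size(), Color{0, 0, 0});
    for (std::size_t i = 0; i < cloud.size(); ++i) {
        int px, py;
        if (ProjectToPixel(in, cloud[i], img, &px, &py)) {
            colors[i] = img.at(px, py);
        }
    }
    return colors;
}

A real implementation would also need to convert the Tango camera's YUV image buffer to RGB and deal with occlusion; this sketch ignores both.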
Note that if you want texture coordinates instead of pixel coordinates, another linear transform can be premultiplied onto cameraTransform ahead of time (as can top-to-bottom vs. bottom-to-top scanline addressing).
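As an illustration (my own addition, assuming an image of width x height pixels and GL-style bottom-to-top texture addressing), that premultiplied transform could look like:

[ 1/width     0       0 ]
[    0    -1/height   1 ] * cameraTransform
[    0        0       1 ]

which maps pixel (px, py) to texture coordinates (px/width, 1 - py/height).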
as "platform" (which loosely interpreted "language") simplest, native api seems straightforward way hands on camera pixels, though appears people have succeeded unity , java.
* Points delivered via TangoXYZij already incorporate the depth camera extrinsic transform. Technically, because the current developer tablet shares the same hardware between depth and color image acquisition, you won't be able to get a color image that exactly matches the point cloud unless both the device and the scene are stationary. Fortunately in practice, applications can assume that neither the camera pose nor the scene changes enough in one frame time to significantly affect the color lookup.