1

Topic: Help explain the data gathering

I'm working in a group to build this scanner, and we have finally built it and gotten it to perform some pretty good scans! Our design differs slightly because we are doing it for a school project. We wired our own components to a breadboard and added an LCD screen so that users don't need an external device such as a phone/tablet/PC to operate it. We also added some functionality, such as being able to send the STL output file to an email address so that it is easy to 3D print (again, we are trying to make it user friendly). However, we are having trouble understanding how exactly the software works.

Our understanding so far is that it takes two pictures, one with the laser on and one with it off. Since the camera is centered on the middle of the turntable, the laser line is mapped in the software relative to the center. It uses the same pixels where the laser hit, taken from the image without the line laser, to display the preview image of the scan so far. Then the turntable turns, the next iteration adds its data points next to the previous ones, and this repeats for a 360-degree view of the object. (We feel this is accurate: we took a scan of a sphere with the power to the turntable unplugged, so it scanned without rotating the object, and the result still looked like a sphere.) If there is any flaw in our understanding, please correct us. Now, the thing we can't seem to figure out: how do the line laser and camera work together to determine the depth of the object? We are having trouble understanding the code. We're glad it works, but we would also like to know why and how. If anyone can give us a mathematical explanation or the equations behind this, it would be greatly appreciated, or even post a link to a good explanation. Thank you!

2

Re: Help explain the data gathering

Hi vfasheri21,

I think it's great that you are using ATLAS 3D in an educational environment. As you've surmised, the ImageProcessor::process (https://github.com/hairu/freelss/blob/m … #L119-L326) method is responsible for extracting the pixel locations of the laser from the before and after images. These locations are then passed to the LocationMapper::mapPoints (https://github.com/hairu/freelss/blob/m … p#L51-L129) method. This method maps the 2D pixel locations into 3D points by intersecting camera rays with the plane formed by the laser line. Each pixel is used to define a ray from the focal point of the camera (the camera location), through that pixel on the imaging plane (a focal length away from the focal point), and out into the 3D scene. A ray/plane intersection method (https://github.com/hairu/freelss/blob/m … #L187-L233) is used to map this ray onto the laser plane, but really it's just a nicer way of handling the fringe trigonometry cases. This results in a 3D point on the plane, which we then rotate (https://github.com/hairu/freelss/blob/m … 1098-L1119) according to how much the table has already turned.
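To make the geometry concrete, here is a minimal sketch of that ray/plane intersection and the table rotation. The names (cameraPos, planePoint, pixelToRayDir, etc.) and all of the numbers are my own illustrative choices, not the actual identifiers or calibration values in freelss, and the pixel-to-ray conversion is a simplified pinhole model with no lens distortion:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 a, double s){ return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Convert a detected laser pixel into a ray direction: offset the pixel from
// the image center, scale to physical sensor size, and place it one focal
// length in front of the camera (camera looks down -Z in this sketch).
static Vec3 pixelToRayDir(double px, double py,
                          double imageW, double imageH,
                          double sensorW, double sensorH, double focalLen)
{
    double sx = (px / imageW - 0.5) * sensorW;   // horizontal offset on sensor
    double sy = (0.5 - py / imageH) * sensorH;   // flip image Y to go "up"
    return { sx, sy, -focalLen };
}

// Intersect the ray (origin + t * dir) with the laser plane, given a point on
// the plane and its normal:  t = dot(planePoint - origin, n) / dot(dir, n).
// Returns false when the ray is (nearly) parallel to the plane.
static bool intersectRayPlane(Vec3 origin, Vec3 dir,
                              Vec3 planePoint, Vec3 planeNormal, Vec3 &out)
{
    double denom = dot(dir, planeNormal);
    if (std::fabs(denom) < 1e-9)
        return false;
    double t = dot(sub(planePoint, origin), planeNormal) / denom;
    out = add(origin, scale(dir, t));
    return true;
}

// Rotate a point about the vertical (Y) axis of the turntable by the angle the
// table has already turned, so every profile lands in one coordinate frame.
// The point must already be expressed relative to the turntable center.
static Vec3 rotateAboutTableAxis(Vec3 p, double angleRad)
{
    double c = std::cos(angleRad), s = std::sin(angleRad);
    return { c * p.x + s * p.z, p.y, -s * p.x + c * p.z };
}

int main()
{
    const double kPi = 3.14159265358979323846;

    Vec3 cameraPos   = {0.0, 0.0, 0.0};
    Vec3 planePoint  = {0.0, 0.0, -0.3};   // turntable center, on the laser plane
    Vec3 planeNormal = {0.8, 0.0, 0.6};    // laser aimed in from the side

    // A laser pixel at (900, 540) in a 1920x1080 image, with made-up sensor
    // dimensions and focal length in meters.
    Vec3 dir = pixelToRayDir(900, 540, 1920, 1080, 0.0037, 0.0028, 0.0036);

    Vec3 p;
    if (intersectRayPlane(cameraPos, dir, planePoint, planeNormal, p))
    {
        Vec3 local = sub(p, planePoint);                       // into table frame
        Vec3 world = rotateAboutTableAxis(local, 30.0 * kPi / 180.0);
        std::printf("point: %f %f %f\n", world.x, world.y, world.z);
    }
    return 0;
}
```

Intuitively, the depth comes from triangulation: the camera and the laser view the object from different positions, so the farther away the surface is, the more the laser line appears shifted in the image. Because the laser plane's position is known from calibration, intersecting the pixel's ray with that plane pins down exactly one 3D point, which is the distance you're after.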

- Uriah