Whether a “good” 3D scan requires color depends on how it was captured. Photogrammetry depends on even lighting and consistent color relationships between photographs to triangulate depth, while for laser scanners RGB data is just an accessory: it gets slapped on after the depth data has already been collected.

Color registers two kinds of information in a scan: material difference and light. It distinguishes elements within the otherwise homogeneous surface of a mesh, and, because scans use video or photography to gather color data, it also records the lighting conditions at the moment of the scan.

I am filtering the color data from found models in Autodesk’s 123D Catch library (photogrammetry) and from scans I made myself (infrared/laser scanning) to create different or unexpected spatial effects. By taking the brightness information or the RGB quantities from the colored vertices of a mesh, I can manipulate its normals and warp its form. Different digital lighting scenarios make the brightness data dynamic, pulling it out of a fixed moment in time.
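The brightness-driven warping described above could be sketched roughly as follows. This is my own minimal illustration, not the author's actual pipeline: the Rec. 709 luma weights, the toy flat grid, and the `displace_by_brightness` helper are all assumptions standing in for whatever mesh and color data a real scan would provide.

```python
import numpy as np

def luminance(rgb):
    # Rec. 709 luma weights; rgb values assumed in [0, 1]
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def displace_by_brightness(vertices, normals, colors, scale=0.1):
    """Push each vertex along its normal, scaled by the brightness of its color."""
    offsets = luminance(colors)[:, None] * scale
    return vertices + normals * offsets

# Toy stand-in for a scanned mesh: a flat 3x3 grid of vertices facing +Z.
xs, ys = np.meshgrid(np.linspace(0, 1, 3), np.linspace(0, 1, 3))
vertices = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(9)])
normals = np.tile([0.0, 0.0, 1.0], (9, 1))
# Stand-in for per-vertex colors captured during the scan.
colors = np.random.default_rng(0).random((9, 3))

warped = displace_by_brightness(vertices, normals, colors)
```

Brighter vertices are pushed further along their normals, so the lighting baked into the scan becomes literal relief in the geometry; swapping in a different `colors` array (a relit render, for instance) would warp the same mesh differently, which is what makes the brightness data dynamic rather than fixed.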