identification>isolation
Two scans were performed: a Lidar scan (for a more accurate-to-reality digital model) and a path-based photogrammetry scan. The two registrations were superimposed to reveal the new, digital-only geometries produced by the photogrammetry scan’s misregistrations. Regions were selected for further architectural articulation based on two criteria: areas of extreme difference between the photogrammetry and Lidar models, and each region’s ability to host an architectural intervention, considering factors such as safety, accessibility, existing program, and circulation.
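A minimal sketch of how areas of extreme difference between the two registrations might be located, assuming both point clouds are already aligned in the same coordinate system and exported as plain x y z files; the file names and the 0.25 m threshold are illustrative, not taken from the project.

```python
# Minimal sketch: isolating digital-only "ghost" geometry by measuring how far
# each photogrammetry point strays from the Lidar model. Assumes both
# registrations are aligned and exported as x y z text files (names illustrative).
import numpy as np
from scipy.spatial import cKDTree

photogrammetry = np.loadtxt("kantinen_photogrammetry.xyz")  # (N, 3) points
lidar = np.loadtxt("kantinen_lidar.xyz")                    # (M, 3) points

# For every photogrammetry point, find the distance to its nearest Lidar
# neighbour; large distances mark geometry that exists only in the digital model.
tree = cKDTree(lidar)
deviation, _ = tree.query(photogrammetry, k=1)

# Keep only points that stray more than 0.25 m from the Lidar model
# (threshold chosen for illustration; tune per region).
extreme = photogrammetry[deviation > 0.25]
print(f"{len(extreme)} of {len(photogrammetry)} points are digital-only geometry")

# These outlying points can then be clustered and weighed against safety,
# accessibility, existing program, and circulation to shortlist regions.
```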
The array of points produced by the photogrammetry scan resulted from a broad range of conflicting environmental and software factors, chiefly the mechanics by which the photogrammetry software processes digital photographs into a digital model. Photogrammetry depends on light and shadow to infer surfaces (through its perception of texture), on the form and relative position of perceived objects, and on image metadata (EXIF) such as capture time, aperture, and other camera settings. As a result, “wavy” or “undulating” surfaces appear where the software recognizes a large continuous surface at a given location but lacks sufficient differentiation between light and shadow to resolve its precise boundaries. Many of the other phenomena, such as the appearance of “smoky” objects, were hypothesized to result from lighting variances between images of the same object and from camera misplacement caused by incorrect correspondence between objects that look similar but occupy different real-world locations.
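A minimal sketch of the two dependencies named above, assuming generic tooling rather than the project’s specific photogrammetry software: reading the EXIF metadata the software consumes, and running the kind of feature-correspondence step in which visually similar but physically distinct objects can produce false matches that misplace the camera. File names are illustrative.

```python
# Minimal sketch of photogrammetry's inputs (illustrative file names).
from PIL import Image, ExifTags
import cv2

# 1. Image metadata (EXIF): capture time, aperture, and other camera settings.
exif = Image.open("capture_012.jpg").getexif()
labelled = {ExifTags.TAGS.get(k, k): v for k, v in exif.items()}
print(labelled.get("DateTime"), labelled.get("ApertureValue"))

# 2. Feature correspondence between photographs. Flat or evenly lit surfaces
# yield few keypoints, which is why undifferentiated light and shadow blur
# surface boundaries into "wavy" geometry.
img_a = cv2.imread("capture_012.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("capture_047.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=2000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# If the two photos show different objects that merely look alike, these
# "good" matches are false correspondences and the reconstructed camera
# pose drifts, one hypothesized source of the "smoky" objects.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
print(f"{len(matches)} candidate correspondences between the two captures")
```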
path taken through the Kantinen’s superimposed scans
desirable materializations of digital ghosts
Five regions have been highlighted in white. These regions were selected for their suitability as sites for the construction and testing of an architectural intervention.
what becomes of misplaced transverse views
The photogrammetry registration is highlighted in white and superimposed on the Lidar registration for comparison. Small triangles and their extended rays mark where the processing software believes each image was taken, the angle of capture, and the relationship between each produced point-cloud fragment and its corresponding capture point. The real capture points are marked with blue circles along a dashed line depicting the path taken through the spaces. Section markers designate the extents of the regions identified for further analysis.
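A minimal sketch of how the drawing’s two sets of points could be compared numerically, assuming the software-estimated camera positions and the real capture points along the path are both exported as ordered x y z lists; the file names are illustrative.

```python
# Minimal sketch: per-capture displacement between the software's estimated
# camera positions (triangles) and the real capture points (blue circles).
# Assumes both lists are exported in the same order (file names illustrative).
import numpy as np

estimated = np.loadtxt("estimated_camera_positions.xyz")  # (K, 3), from software
surveyed = np.loadtxt("surveyed_capture_points.xyz")      # (K, 3), along the path

displacement = np.linalg.norm(estimated - surveyed, axis=1)
for i, d in enumerate(displacement):
    print(f"capture {i:03d}: misplaced by {d:.2f} m")

# Large displacements point to captures whose point-cloud fragments
# were misregistered relative to their real capture points.
```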