Implications for facade design?
While the pedestrian will only ever perceive the surfaces of a
building normalized to their location in space, they construct
formal abstractions of the building from past experiences.
As they move through space, their understanding of the facade
runs much deeper than the skin: it includes the vegetation,
the sidewalk, the stairs, etc. The presence of the building acts
on the public, shaping movement and navigation.
Finally, the success of a facade as a navigational device
relies on materials, texture, shape, and form. Stripped of
any of these layers, it does not communicate as successfully.

What does computer vision reveal about how we perceive the built environment?
How does human vision model the environment from images?
Human vision is biased by our location in the environment; we only see surfaces that are normalized to the observer. We are able to differentiate these surfaces by reading value, the
gradation along a surface seen as a subtle spectrum of light and shadow, and contour, the edge of a surface seen as a sharp contrast in value. We are able to triangulate the depth of these
surfaces through the simultaneous interpretation of two images, one from each eye.
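That two-image triangulation can be sketched with the standard stereo relation, depth = focal length × baseline / disparity. This is a minimal illustration only; the focal length, eye separation, and pixel shift below are assumed values, not measurements:

```python
# Minimal sketch of two-view depth triangulation (pinhole stereo model).
# Two horizontally offset viewpoints, like our eyes, see the same point at
# slightly shifted image positions; that shift (the disparity) encodes depth.

def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Triangulated depth: z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: ~0.065 m interocular distance, 1000 px focal length.
# A feature shifted 20 px between the two views lies 3.25 m away.
print(stereo_depth(1000, 0.065, 20))  # 3.25
```

The nearer the surface, the larger the disparity, which is one reason depth judgment from two fixed viewpoints degrades with distance.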
If human vision is limited to the surfaces perceived from a fixed location, how do we understand form?
Walking around an object, we are able to hold a loose collection of images in our mind's eye to create an abstraction of form. Each of us holds a collection of formal abstractions in memory,
built from past experiences of the built environment. Therefore, while we can only see surfaces normalized to the observer, our mind makes inferences with this formal language,
completing the facades we see with formal abstractions.
How does computer vision, specifically photogrammetry, model the environment from image?
Similar to our walk around an object, the computer can simultaneously analyze a collection of images.
The image in our mind's eye is simplified through this process; computer vision, however, transcends these limitations of space and time, allowing for accurate modeling rather than
abstraction. With this photogrammetric modeling it is important to note that the model's accuracy is biased by the method of data collection.
What is the difference between a photogrammetric model created by a human and by a drone?
The fluctuation of the elevation, orientation, and trajectory of the camera (the human eyes) moving through space results in a photogrammetric model that biases the visible faces of
buildings, while the ground is excluded because it is not in focus as we orient ourselves. In contrast, the drone has set parameters for elevation (15 ft), orientation (-30°), and trajectory
(10 mph). This generates a very detailed and accurate photogrammetric model. A further difference is that this model focuses on the ground rather than the buildings.
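As a rough illustration of why fixed flight parameters yield such consistent coverage, one can estimate the drone camera's ground footprint and the photo interval needed for a given forward overlap. The 15 ft altitude and 10 mph speed come from the flight parameters above; the field of view and overlap target are assumptions, and the -30° tilt is ignored here (a nadir approximation):

```python
import math

# Back-of-envelope estimate of drone photo spacing for consistent overlap.
# Assumed values: 60 degree vertical field of view, 80% forward overlap.
# Altitude (15 ft) and speed (10 mph) are the stated flight parameters;
# the -30 degree camera tilt is ignored (nadir approximation).

altitude_ft = 15.0
speed_ft_s = 10 * 5280 / 3600           # 10 mph is about 14.67 ft/s
fov_deg = 60.0                          # assumed camera field of view
overlap = 0.80                          # assumed forward overlap target

footprint_ft = 2 * altitude_ft * math.tan(math.radians(fov_deg / 2))
spacing_ft = footprint_ft * (1 - overlap)   # ground distance between photos
interval_s = spacing_ft / speed_ft_s        # time between shutter triggers

print(round(footprint_ft, 2), round(spacing_ft, 2), round(interval_s, 2))
```

Under these assumptions each frame covers roughly 17 ft of ground and a new photo is needed every few tenths of a second; a walking photographer, with a wandering camera and variable pace, cannot match that regularity.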
What is the difference between human vision and computer vision?
While human vision is limited to the perception of surfaces normalized to the observer we are able to supplement the contour, value, and depth with a collection of formal abstractions from all past
experiences. In contrast, computer vision is limited to the information that it is given; however, it is not limited by space and time and therefore can perceive hundreds of images simultaneously to
generate an accurate holistic understanding of the environment.
What implications does this have for how we construct the built environment?
It is given that the built environment acts on the pedestrian, changing how we move through space. A building placed in an open square will block paths through the center of the space. However,
this study reveals that at the same time this obstacle provides a reference point for gauging distance and functions to guide travelers through space. The mass blocks paths while affording
reference for navigation. At the scale of the city this results in a network of more or less clearly defined pathways through the constructed environment, within which the built elements provide
navigation through these passages. In this complex network, form alone would not be sufficient to provide reference; it is the differentiation of the facade that allows for the construction of
mental maps through space.

DALEY

Markdown and Terminal
Learning Markdown and Terminal was the first step toward
better understanding computer vision. It provided another
way of understanding how we organize our digital
environment. Furthermore, in conjunction with GitHub it
allowed us as a class to quickly share collections of images
and photogrammetric models.


Drone Flight
The drone has set parameters for elevation (15 ft), orientation
(-30°), and trajectory (10 mph). This generates a very detailed
and accurate photogrammetric model. Unlike the models
collected on foot, this model focuses on the ground rather
than the buildings.


McNamara Alumni Center
Texture on the surface of the Photoscan photogrammetry
model gives the facade volume. However, when you rotate
the model it is clear that not enough photos were used to
create a model with depth. Only six photographs were used
here to create a quick study. Future modeling would benefit
from additional photographs.


MPLS Recreation and Wellness Center
The facade of the Minneapolis Recreation and Wellness
Center is composed of both mass and texture. Because it is
defined by the repetition of primary forms, the mesh remains
easy to recognize as the facade of the Recreation Center
even when the texture is removed in Photoscan. When I
asked my peers to identify the mesh, they recognized it
easily.


MPLS Recreation and Wellness Center
Here I began to play with the procedure of collecting the
photos used to generate a photogrammetry model. All the
photos were taken from the same location, facing the same
direction; the subject was the throng of people moving
through the entrance of the Rec Center. The model
generated is a flat surface with ripples starting to form
where people cross the frame.


Weisman Art Museum
The Weisman Art Museum provided an interesting case
study in how computer vision and photogrammetry read
two distinctly different facades of the same building. My
hypothesis here was that the dynamic metal panel west
facade would be difficult for Photoscan to read because
of its formal complexity and reflectivity while the south brick
facade would generate a more accurate model. However,
what I found was that the complex geometry of the west
facade produced a much clearer model than the south.


Pillsbury Hall
A traversal of Pillsbury Hall's north entrance, with
photographs taken every 5 steps, generated a very accurate
photogrammetric model. When the texture is removed it is
easy to see how accurately the surfaces were captured:
in the purple mesh one can perceive the rough sandstone
texture of the building's heavy masonry blocks. The rigorous
documentation and even north light resulted in a highly
accurate Photoscan model.


Human | STSS to Rapson
The model reveals the elements that influence our navigation
of space: trees, people, flag poles, trash cans, and other
landscape elements. These fixtures shape how we navigate
space and can be considered an extension of the facade of
a building, as they play an equal role in guiding us through
space.


Human | Coffman to Rapson
The model reveals that we often perceive only one face of
the buildings that make up the Northrop Mall. Through this
experience, however, we come to understand their mass.
This changes how we imagine the relationship between
pedestrian and facade.


Drone | Circle
Similar to the experiment in which a person remained in a
fixed location taking pictures outward, the model becomes
significantly distorted, warping the elements at the
periphery of the images.


Drone | Line
With the same set parameters for elevation (15 ft), orientation
(-30°), and trajectory (10 mph), the linear flight path
generates a very detailed and accurate photogrammetric
model, again focused on the ground rather than the
buildings.