Stochastic Cities: Four Ways of Arranging 3,750 Images of Chicago

María Urigoitia Villanueva

Reviewed by Shannon Mattern

23 May 2018

The next time you hear someone talking about algorithms, replace the term with “God” and ask yourself if the meaning changes. Our supposedly algorithmic culture is not a material phenomenon so much as a devotional one, a supplication made to the computers people have allowed to replace gods in their minds, even as they simultaneously claim that science has made us impervious to religion. 1

Algoritmos, the Spanish word for algorithms, was the title of one of my mathematics books in middle school. I remember being fascinated by this term whose meaning or power I did not fully grasp: a magic word or spell with which to open a universe of possibility. Twenty years later, in an unexpected twist, the mysteries behind it became the focus of my interest in architecture and design.

According to Google Books Ngram Viewer, use of the word “algorithm” peaked first in 1991 and again in 1996 (around the same year I was using the math book). Then, after a brief decline, its use began to grow, and it has been doing so at a fast pace since 2006. No day goes by when we do not hear about the impacts of algorithms on our lives—most evidently, through our interactions with online platforms, such as Google and Facebook. We live in a time of seemingly limitless trust in these mathematical processes. We allow them to control vast areas of our existence, accepting their outcomes as if they were absolute truths. We treat algorithms as objective tools, with blind faith in their mathematical certainty—a form of belief subconsciously ingrained in human minds since the Enlightenment through the ideal of a mathesis universalis. However, the process that determines so much information—from recommendations on Netflix and Amazon, to ideal commuter routes, to our suitability for jobs or relationships—is anything but objective.

Even though the Greeks already warned us about the perils of assuming technological knowledge to be superior, technology is still framed as fair and faithful in service to human beings, the right approach to any problem we seek to resolve. However, as Ed Finn has stated, “aside from the most simplistic cases, we will never know how algorithms know what they know.” 2 Therefore, trusting them with running our world seems neither rational nor wise. Yet, what Finn calls the computational space of imagination arises in that very universe of the unknown, a place where we can challenge and reinvent our relationship with technology. Instead of looking for the most optimized results, embracing the space of the unknown allows us to see algorithms not as tools that automate design processes, both affirming and limiting our agency, but rather as collaborative others in the context of creative development. The result is a human-machine feedback loop that sparks imagination, not just the standardization most architectural software allows.

Stochastic Cities explores that computational space of imagination by rearranging an aerial image of Chicago using a combination of computer vision and machine-learning algorithms, not with the intention of finding an ideal configuration but as an experiment to query unexpected associations. The original orthoimage represents the city from Division Street in the north to 28th Street in the south, and from Racine Avenue in the west to the end of Navy Pier in the east. It was cut into 3,750 squares following the orthogonal grid so characteristic of Chicago. Each piece was then processed by a classifier: in this case, a neural network trained on the ImageNet dataset, one of the first large-scale image databases available online, which learned to identify the category to which each new observation belonged. This classifier “extracted” the multiple features of each square, variables representing the relations between pixels, which were then interpreted and grouped by a t-SNE (t-Distributed Stochastic Neighbor Embedding) algorithm. These associations were then projected back onto the original grid.
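The cutting step described above can be sketched in a few lines of Python. The 32-pixel tile size and the 75 × 50 grid proportions below are placeholders, not the project's actual resolution; only the count of 3,750 squares comes from the text.

```python
import numpy as np

def cut_into_tiles(image, tile):
    """Cut an orthoimage into square tiles along a regular grid."""
    h, w = image.shape[:2]
    rows, cols = h // tile, w // tile
    tiles = [image[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
             for r in range(rows) for c in range(cols)]
    return np.stack(tiles)

# A stand-in orthoimage: a 75 x 50 grid of 32-pixel RGB tiles -> 3,750 squares.
img = np.zeros((75 * 32, 50 * 32, 3), dtype=np.uint8)
tiles = cut_into_tiles(img, 32)
print(tiles.shape)  # (3750, 32, 32, 3)
```

Each of these tiles would then be fed to the pretrained classifier, whose intermediate activations serve as the feature vectors the t-SNE step consumes.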

A t-SNE algorithm is a technique for dimensionality reduction, which is very useful in the visualization of high-dimensional datasets. It works by reducing the number of random variables under consideration—in this case, the ones provided by the classifier—by allowing the algorithm to find patterns in the data, bringing to the foreground the principal connections present. As its name indicates, the process is stochastic—meaning that one of its functions is initialized randomly. As a result, each run can output a unique visualization. 3 In Stochastic Cities, the process was run four times. Although the overall compositions of the resulting images were distinct, the associations between and among the components followed a consistent logic, perhaps similar to the one we would obtain if we asked a human who had never seen a city from above to organize those 3,750 images.
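The reduction and re-gridding can be sketched as follows. The project's actual tooling is not documented, so this uses scikit-learn's TSNE and a linear-assignment snap to cell centers as stand-ins, with random vectors in place of the classifier's features; the 100-tile, 10 × 10 grid is a toy size.

```python
import numpy as np
from sklearn.manifold import TSNE
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
# Stand-in for the classifier's output: one feature vector per tile.
features = rng.normal(size=(100, 512))

# The stochastic step: t-SNE starts from a random low-dimensional layout,
# so a different random_state can yield a different arrangement.
embedding = TSNE(n_components=2, perplexity=10,
                 init="random", random_state=0).fit_transform(features)

# Project the 2-D embedding back onto a regular grid (10 x 10 here)
# by matching points to cell centers with a linear assignment.
rows = cols = 10
e = (embedding - embedding.min(0)) / (embedding.max(0) - embedding.min(0))
gy, gx = np.meshgrid(np.linspace(0, 1, rows),
                     np.linspace(0, 1, cols), indexing="ij")
cells = np.stack([gx.ravel(), gy.ravel()], axis=1)
cost = ((e[:, None, :] - cells[None, :, :]) ** 2).sum(-1)
tile_idx, cell_idx = linear_sum_assignment(cost)  # one grid cell per tile
print(embedding.shape, cell_idx.shape)  # (100, 2) (100,)
```

Running this with four different `random_state` values mirrors the four iterations of Stochastic Cities: the layouts differ, but tiles with similar features keep landing near one another.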

The four runs generated similar effects, such as a visible shift in scale, perhaps because the parks were not joined as one, like the water, but rather scattered along the shore as hybrid links between natural and built environments. The new images of Chicago appear to be organized by typology, presented as a limitless city that has lost its center. A generic city at first sight, its mix of predictability and arbitrariness in the treatment of metropolitan systems fosters uncanny situations: piers entering the water in an accentuated embrace, the consistent reconstruction of McCormick Place as a vortex of turning roads, impossible connections of diagonals and curves that generate magical in-betweens, and seamless juxtapositions of buildings that erase the mosaic quality of the grid in the sections they occupy.

What does the algorithm know? Not that McCormick Place is the largest convention center in North America, nor that the Chicago River had its flow reversed. It knows only what it has been taught, and that mix of specificity and alternate logic nurtures an unexpected urbanism. Back and forth, through these operations, the margin becomes blurry between what the computer “sees” and what our eyes read in the outputs. This reflexivity is present in the real dynamics of urbanism as well as in the space of computational imagination, where the relationship between humans and machines becomes that of collaborators in a speculative, generative venture.

Looking again at the new images of Chicago, I discovered a choice the algorithm had made, one I had not noticed before and that surprised me. A little square of water, isolated between roads and away from similar units, suffered that same fate in each of the four iterations: a lonely pond that, given the magical rationality of the images, my mind could not explain. I will surely never know why the algorithm chose to place that unit in that way, but I embrace that. In remaining beyond the scope of my understanding, the mysteries of the algorithm make evident the collaborative nature of our shared work.

María Urigoitia Villanueva, Chicago, Iteration 1, Stochastic Cities (2018).

María Urigoitia Villanueva, Chicago, Iteration 2, Stochastic Cities​ (2018).

María Urigoitia Villanueva, Chicago, Iteration 3, Stochastic Cities​ (2018).

María Urigoitia Villanueva, Chicago, Iteration 4, Stochastic Cities​ (2018).



By Shannon Mattern

Visual artists, writers, and performers have long exploited the generative potentials of rules and codes. Just think of Sol LeWitt, Bernd and Hilla Becher, or the Oulipo. Today, creators like Allison Parrish and Darius Kazemi are using algorithms as creative partners or “collaborative others” in their own practices. But what happens when we apply similar modes of cyborg-production in realms of design that shape the material world we live in—a world that has the potential to determine access to opportunity, public health, and equity? What does it mean to make an algorithmically designed chair or hospital or regional coastal resilience plan? What do we do when our design partner won’t, and can’t, articulate the logic by which it determines what’s most salient in the landscape?

We don’t quite know how an algorithm understands a city. Its reading seems primarily formal, and its particular brand of formalism is determined by the types of images—satellite imagery, Street View—that have trained it. Does our algorithm see, can it know, on-the-ground human experiences or ecological forces or historical layers of segregation? When we run images of our city grid through a classifier, how does it determine which features are most salient? How does a “dimensionality reduction” algorithm determine which variables are superfluous? The answers to these questions depend upon what our algorithm thinks a city is, and what it’s for. These are questions about teleology, ontology, and politics—which, when so much is at stake in any form of spatial planning, would ideally precede questions about methodology and creative process.



Ian Bogost, “The Cathedral of Computation,” The Atlantic, January 15, 2015. technology/archive/2015/01/the-cathedral-of-computation/384300/


Ed Finn, What Algorithms Want: Imagination in the Age of Computing (Cambridge, MA: The MIT Press, 2017), 185.


For more detailed information on how the t-SNE algorithm works, refer to Laurens van der Maaten’s website and the papers linked there.


María Urigoitia Villanueva is an architect and artist whose practice focuses on establishing new relationships with machine learning systems. She graduated with honors from the Escuela Técnica Superior de Arquitectura de Madrid in 2013 and subsequently worked with Zhubo Design Ltd. (Shenzhen, Guangdong, China) and estudio.entresitio (Madrid, Spain). While at the latter firm, she collaborated on "Between the Earth and the Sky," the winning entry in Colombia's National Museum of Memory competition (2015). A recipient of a "la Caixa" Foundation fellowship, she earned an MFA at the School of the Art Institute of Chicago with a concentration in Design for Emerging Technologies (2018).

Shannon Mattern is a Professor in the School of Media Studies at The New School in New York. Her writing and teaching focus on archives, libraries, and other media spaces; media infrastructures; spatial epistemologies; and mediated sensation and exhibition. She is the author of three books: The New Downtown Library: Designing with Communities (2006), Deep Mapping the Media City (2015), and Code and Clay, Dirt and Data: 5000 Years of Urban Media (2017), all published by the University of Minnesota Press. Mattern has written several dozen journal articles and book chapters and writes a regular, long-form column about urban data and mediated infrastructures for Places, a journal focusing on architecture, urbanism, and landscape. She contributes to public design and interactive projects and exhibitions and, from 2006 to 2009, directed the 600-student Graduate Program in Media Studies at The New School.