Processing post

In response to Christine Mitchell’s question on the balance between archiving and media archaeology, I see Shannon’s point that the historical conditions of playback sometimes matter less than at other times. Yet Shannon’s other point about the meta-dimension of sound recording, that the sonic archival document is at once a recording of a historical event and a record of its own recording practice, becomes important when you’re talking about, for example, race, sound, and the archive. (Like they’re doing at my school next month –> http://aihr.uva.nl/content/events/events/2018/12/entanglements-of-race.html)
For example, sound registers a bodily reality in addition to semantic meanings. The idea of the grain of the voice then gains a very direct political dimension. Also, the recording apparatus shapes how the sound is recorded in the first place (e.g. a condenser microphone might pick up all sounds in the vicinity or just a single voice, or it might drown out high or low tones). A study of colonial sound archives shows that racialized listening practices span the recording, distribution, and archiving of the document. Maybe in studying sound and the archive in particular, media archaeology and archiving cannot be separate practices?

I’ve been thinking about a theme we’ve touched on a couple of times: what is and isn’t archivable. I wonder what is the un-meta-able? And where metadata is partially a function of query and index, can we have non-text metadata? I imagine a scenario in which a performance is the subject. It happens on a certain date, at a certain place, at a certain time. How does one capture the metadata of the mood of an audience? Potentially, I could have a field recording of an audience conversing moments before a performance as metadata. For me, it also comes back to this question of recordings being a capture of a performance or of an event, and whether these are even separable. Are the noises, the coughs, the hums, even the silences a part of the performance? Who gets to decide this?

And even if that much is figured out, and we know which parts of an event are to be recorded faithfully, how do we faithfully preserve audio? In the readings, we see degradation of the material of the archival medium, and thus degradation on transfer, moving from one material to another.

Makes me think of a piece by a classmate of mine, which was an iteration of the I am Sitting in a Room piece. He recorded his voice, played it back, and recorded the playback almost a dozen times over a variety of media (including recording the playback off of a sound system in a cafe!). What we get in return are the undulations of not only physical space but also the digital de/encoding process.

Digital Vocal Saturation //2018 by Parsons Senior Thesis

Original Machines + Contexts

Taking up Christine Mitchell’s question, “How important is it to access sound through ‘original’ machines and contexts, whether technological, architectural or other?” and Shannon’s point that “while we can emulate certain aspects of an original listening experience, it might be impossible to recreate its climatic conditions, social context, and ‘appropriate modes of listening,’” I wonder what counts as ‘original.’ While, as Shannon notes, this question might not be relevant if differences in recording and playback don’t actually matter for the researcher or listener, I find it interesting that what constitutes an original experience for a given listener might have nothing to do with what constitutes an original experience as understood by the media historian or archivist.

What contributes to someone’s perception of a ‘sonic archival document’ as original? Even if we can’t recreate the social context, could the use of an original machine be sufficient? Is it ever valid to say that there is only one possible original experience for a given recording? Considering that there might be multiple speakers/performers, recorders, and listeners, whose experience is the ‘original’ one we’re trying to emulate?

Data recycling

I found it fascinating to read about the history of machine voice recognition: a development that, in the search for an empirical standard of the American English language, eliminates the richness of its speakers’ origins and, subsequently, the history of the land in which the language is rooted, placing white males as the language’s standard speakers while speakers from different backgrounds go unaccounted for, dismissed as “unrealistic” or “abnormal.”

As this standardization of speech is the basis of current voice recognition and analysis technologies, it was disturbing to read the trial examples and the legal implications the “standard” language has when used to judge “non-standard” citizens. According to this methodology, people with different backgrounds are recognized not as individuals but as interchangeable members of a community.

In the race to develop exciting new products, companies pay more attention to technologies than to the data they use, often employing existing, biased data without much regard for how it was collected, categorized, and scrubbed. We should stop recycling data.

archive(s) : interstice(s)

Addressing the gaps in time-based media collection and preservation

“Inconvenience has its virtues . . . ”
– Rick Prelinger

The difficulty in time-based media is that it is time-based; moving images and sound are bound to time. A specific duration is required to experience these media, unlike traditional material objects, the intake of which is sometimes satisfied with a passing glance – there is no playback mechanism required. Articulating the archive as an “interstice” is my attempt to circumvent the disparate formations of the archive that seem to hold it in a fixed time and space. Emphasizing the interstitial, I align with Jason Farman’s position “that the delay between call and answer has always been an important part of the message” (Delayed Response, 2018).

Rather than proposing yet another taxonomy of the archive, I wish to interrogate this very tendency to separate and organize notions of the archive. In short, my aim is to elevate the ephemerality and mutability in the archive(s). It is also to challenge the Foucauldian episteme that defines the archive as “the system of [a statement-thing’s] functioning” (original emphasis, Archaeology of Knowledge, p. 129). The archive as interstice acknowledges the multitudes of being and knowing the world as much as it acknowledges the myriad constructions of the archive. In this way, we might recognize the archive as a system that is “malfunctioning” or deviating from presupposed spatial orientations. This framework identifies the interstice (the gap, trace, or memory) as the major stakeholder in time-based media archiving.

I want to begin by grounding this rationale with questions that always seem to present themselves in archival epistemologies: What constitutes “collection” or “preservation”? Is it the encasing or the containment of a material object in space? What constitutes materiality? How is it negotiated in “real” or “virtual” space?

This brings me to Ranganathan’s Colon Classification system. In pivoting away from the definition of the archive as such, my hope is to highlight what falls through the cracks, what fails to encode, or what cannot be arranged according to a specific protocol. The “classification” in Ranganathan’s facets favors multiple variables over taxonomy, emphasizing “universal principles inherent in all knowledge.” [1] This thinking led to the development of the following facets:

Personality—what the object is primarily “about”; this is considered the “main facet”
Matter—the material of the object
Energy—the processes or activities that take place in relation to the object
Space—where the object happens or exists
Time—when the object occurs

Ranganathan defined an object as “any concept that a book could be written about.” [1] The limitation here is that the Colon Classification system presupposes objects as publications. Considering the flexibility afforded by the system and Ranganathan’s own acknowledgement that “the library is a living organism,” it stands to reason that library classification systems must accommodate the ever-expanding notion of materiality. [1] With this in mind, we might see the facets working for contemporary time-based media. I’ll use Scott Northrup’s Hämeenkyrö Redux as an example:

Personality: Romance
Matter: Digital video
Energy: Mourning, melancholia
Space: Hämeenkyrö
Time: 2018
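
To make the retrieval logic concrete before moving on, here is a tiny sketch of how such a faceted record might be represented and queried; the dictionary-and-query structure is my own illustration, not Ranganathan’s actual notation.

```python
# A faceted (PMEST) record for the film above; my own illustration.
record = {
    "Personality": ["Romance"],
    "Matter": ["Digital video"],
    "Energy": ["Mourning", "Melancholia"],
    "Space": ["Hämeenkyrö"],
    "Time": ["2018"],
}

def matches(record, **query):
    """Retrieve by any combination of facets rather than one fixed hierarchy."""
    return all(value in record.get(facet, []) for facet, value in query.items())

# Any facet can lead the search; there is no single "correct" shelf.
print(matches(record, Space="Hämeenkyrö", Energy="Mourning"))  # True
print(matches(record, Time="1968"))                            # False
```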

Obviously, this reverse indexing system (where we begin with the object itself rather than the reference terminology) is quite an undertaking, but the motivation behind this organization is the implementation of a user-centered experience rather than a top-down episteme. We see this user-experience ethos working in the Prelinger Archives, where the “taxonomy” functions as a mutable installation (Figure 1). What the Colon Classification system and the Prelinger Archives demonstrate is that both archival collection and organization, in theory and practice, are interstitial in their methodologies and subject to change.

Figure 1: Prelinger Archives

Turning to the interstitial material quality of time-based media, Shannon observes the meta-dimensions of audio recordings. [2] Identifying the imbricate relationship between the recording device and the subject (or object) being recorded, she writes, “any sonic archival document is archiving the historical event and its own recording.” [2] The interstice between the mechanisms of recording and sound itself actively shapes material experience. Unlike traditional text or still photographs, sound “can suggest the material and volumetric properties of both the recorded sounding subject or object and the space in which that recording occurred.” [2] The suggestion of materiality and space is readily apparent in Alvin Lucier’s I am Sitting in a Room. With the removal of the sound’s source (in this case, Lucier’s voice), we are left with reflections of the original sound. The result is the sonic representation of the space itself. (Link: http://www.ubu.com/sound/lucier.html)

The difficulty here is the performative quality of time-based media. This leads me to Shannon’s remark that “preservation necessarily involves transformation.” [2] Considering the expanse of magnetic tape that currently safeguards our digital data, we must come to grips with the fact that some email threads, like many ephemeral films, will inevitably slip through the cracks. But as Rick Prelinger reminds us, “ephemeral films weren’t meant to be kept in the long run.” [3] Rather than merely acquiescing to the contestation in best practices and arguments over what should or shouldn’t be preserved, we might consider leaning into the imminent, fleeting quality of time-based media. Moving beyond static models, we might begin to acknowledge the archive(s) as a mode of creation in addition to a method for collection and preservation.

Recalling the case study I presented above and earlier in the semester, Scott Northrup’s Hämeenkyrö Redux (Figure 2) calls attention to the “ghosts in the reels.” The digital video operates as a memento of a preceding film, Hämeenkyrö mon amour, which vanished from a faulty hard drive. The resulting “redux” was produced with archival vestiges: photographs and videos taken with the artist’s iPhone and his sensorial memory embodied in sonic form. Northrup’s voice-over, which recounts a fleeting romantic interaction, is a reification of immaterial loss. While both iterations of Hämeenkyrö exist precariously as digital artifacts, they demonstrate the notion that the archive is inextricably tied to ephemerality. (Link: https://vimeo.com/289864823/90eecd2691)

Figure 2: Hämeenkyrö Redux, Digital Video, 12min. Scott Northrup, 2018.

The interstitial quality of the archive permeates not only archival research and practices but also library sciences, media archaeology, and media and cultural studies. Circling back to Jason Farman’s emphasis on delay, I want to end with a final loose thread. Reading Timothy Leonido’s “How to Own a Pool and Like It,” I also want to acknowledge how gaps can be manipulated to operate unethically. Leonido recounts the conviction of Edward Lee King, which was substantiated by speech-recognition technology. [4] The “dubious 99 percent accuracy rate” and the questionable research practices in Lawrence Kersta’s voiceprint trials notwithstanding, how can we trust recording apparatuses that can be so easily manipulated? A recent example of egregious misuse of technology that leads to misrepresentation:

I think this is why many of us turned to art practice this semester when illuminating the ways in which we might circumvent nefarious infrastructures and data-collection tactics. Speaking from personal experience, to practice art is to embrace inconvenience.

 

Notes:

[1]  Mike Steckel, “Ranganathan for IAs” (October 7, 2002) http://boxesandarrows.com/ranganathan-for-ias/

[2] Christine Mitchell, “Media Archaeology of Poetry and Sound: A Conversation with Shannon Mattern,” Amodern 4 (2015)

[3] “Prelinger Archives Part 1,” C-SPAN (April 11, 2013) [video]

[4] Timothy Leonido, “How to Own a Pool and Like It,” Triple Canopy (April 2017)

 

Application Post – Photo Collections

||visuals here||

 

Of the readings this week, I would like to focus specifically on the Diana Kamin article, “Mid-Century Visions, Programmed Affinities: The Enduring Challenges of Image Classification,” and compare her insights with a recent example of a large image classification system that has a direct through-line to another broad and pervasive category of photography. I will also highlight my personal user experience exploring the two online archives described in Nina Lager Vestberg’s article “Ordering, Searching, Finding,” as well as the British Library’s Endangered Archives Programme (referenced in Allison Meier’s post “Four Million Images from the World’s Endangered Archives”).

Before getting into those topics, I would like to quickly comment on the other readings for this week, starting with the John Tagg paper, “The Archiving Machine; or, The Camera and the Filing Cabinet.” I did not care for this article! I found the writing overly dense, dry, and hard to get through. The piece succeeds in denoting the major innovation achieved by “the modern vertical file” and its associated cabinet, and I don’t wish to trivialize that by any means. However, I found the self-seriousness of the writing formidable and the conclusion (“one might say that the archive must and must not be the horizon of our future”) dissatisfying. (Tagg, 34) After reading it a second time, I realized that there is a video on Vimeo of the author delivering this paper word for word, which makes it all slightly more digestible but no less annoying. Apologies for this bit of editorial!

However, I appreciated the in-depth background on Alphonse Bertillon, who served as the director of the identification bureau of the Paris police at the end of the 19th century. His use case, the impossible task of sorting through a collection of over 100,000 photographs owned by the police to find a single criminal, highlighted the need for, and the radical innovation of, the classification and organizational structure of a “Bertillon” cabinet. Tagg convincingly conveys the massive impact made by the debut of the modern vertical file at the Chicago World’s Fair in 1893 and the broad changes to systems of organization that followed. This emphasis on the need for “a systematic order” for storage/retrieval and classification, specifically for collections of photographs, is a major theme through this piece and the rest of the readings for this week.

Tagg also briefly lingers on the word and concept of “capture” in terms of systematic/organizational apparatuses, but I followed the concept through to its function in photography specifically — each photograph is a captured moment. Vestberg expands on this in her piece by noting that “every photograph is … also a mini-archive within the archive” (476), which, when a photo is housed in a large, systematized, organized collection of other photographs, strikes me as very true. The process of “capture” in terms of photography and artistic process versus pure documentation is described in Douglas Crimp’s “The Museum’s Old, the Library’s New Subject,” which compares the works of Picasso and Ansel Adams in the context of the Museum of Modern Art’s fiftieth anniversary. The conversation of “taking a picture” versus “making a picture” (Crimp, 71) inspired me to contemplate when the photograph becomes a data element versus a constructed artwork, or when it can serve as both. This is illustrated by the anecdote Crimp describes of coming across Ed Ruscha’s photographic artist’s book Twentysix Gasoline Stations in the transportation section of the NYPL. Crimp concludes that “the fact is there is nowhere for [this book] within the present system of classification” (78), which is answered by Anna-Sophie Springer, who follows from Crimp in her article “Original Sun Pictures: Institutionalization” by stating that Twentysix Gasoline Stations should probably be shelved in the Photography section of the library, as “photography has advanced into a proper, canonical genre.” (Springer, 123)

Moving on to the Diana Kamin article: she illustrates right out of the gate how we are currently drowning in an unorganized mass of digital imagery — “In the year 2015 alone, more photographs were taken than in the history of analog photography combined,” and “1.8 billion pictures are uploaded to social media (our contemporary universal image archive) every day” (Kamin, 311) — and the ongoing challenge of classifying and categorizing photographic imagery. She describes this problem by surveying two approaches to image organization undertaken by two librarians in the mid-20th century: Bernard Karpel, a librarian at MoMA, and Romana Javitz, who headed the Picture Collection at the NYPL.

Without re-hashing and summarizing every point of the article here: Kamin categorizes Karpel’s proposed system of image classification as a model based in the “discourse of affinities,” whereas she situates Javitz’s user-centric approach within the “discourse of the document.” To briefly recap:

Karpel’s insane proposal for a visual system of classification, which he hoped would become as ubiquitous as the Dewey Decimal system, was based on “an ‘objective’ eye that sees [the visual object] without the clouding of context to evaluate only form.” Kamin notes that this perspective was probably bolstered by Karpel’s “long career at MoMA, where aesthetic formalism was the dominant methodological approach for analyzing works of art.” Practically speaking, this proposed visual system sounds like a complete mess to me — one that, Javitz rightly argued, would discount a broad section of the general public who did not have the specialized knowledge required to communicate inquiries into any collection of objects. Karpel proposed a basis of “aesthetic evaluatives (qualities such as tactility, transparency, and multiplicity) [that] would be arranged along the axes of his primary cataloguing tools: ‘affinity’, ‘polarity’, and ‘sequence’, which would address, respectively, formal consonance, dissonance, or relationality between two or more pictures.” (314) Understandably, this very complicated system involved duplicate cataloging cards and is totally impenetrable for a common user.

Javitz’s method for image classification was just the opposite. Kamin describes how she was vehemently opposed to hierarchical structures of organization/classification, as illustrated by Javitz’s avoidance of subheadings as much as possible: whereas previously “Lakes would be found under ‘F’ – Forms of Land and Water – Lakes,” Javitz deemed that Lakes should appear under ‘L.’

It is not hard to understand how this common-sense, user-focused approach to the classification of images won out over the very tricky visual basis of Karpel’s system of affinities. Javitz advocated that photographs be considered as documents first and classified as such. However, there remains a lot of space between these two arguments in our current technological moment. When trying to parse the surely more than two billion images now being uploaded every day, machine-enabled automatic indexing of images is a problem that many are working to solve, and Javitz’s and Karpel’s approaches are both being applied. Kamin notes: “Javitz’s discourse of the document is most frequently encountered in the keywording dominant on the internet (a system in which any image can be tagged with multiple identifiers, or in natural language, by its uploader), while the discourse of affinities is manifest in discussions around pattern recognition and machine vision.” (329)

Both of these approaches are visible in a project by Google that I have found myself thinking a lot about over the course of this semester. The Google Image Labeler was an interesting space on the internet while it was initially active between 2006 and 2011, and it has returned since 2016 in a new form that remains relevant to the conversation about indexing and classifying images.

In its original form, the Image Labeler was set up as a game between two users who would be automatically partnered by the software. A time limit would be set and then a series of images would appear on the screen. Each ‘player’ would then submit text describing the contents of the image, and points would accrue when the two players agreed on terms. There were no prizes! However, this was an active space online, and it was effective in improving Google’s image search function. This was the company’s clever way to ‘gamify’ the very manual keywording and indexing of images that robust search would require.
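
The mechanic can be sketched in a few lines of code (purely illustrative; the actual scoring rules and point values are my assumptions):

```python
# Two players label the same image; points accrue on agreed terms.
def score_round(labels_a, labels_b, off_limits=frozenset()):
    agreed = (set(labels_a) & set(labels_b)) - set(off_limits)
    return agreed, 100 * len(agreed)  # 100 points per match (assumed value)

agreed, points = score_round(["dog", "park", "frisbee"],
                             ["dog", "grass", "frisbee"])
print(sorted(agreed), points)  # ['dog', 'frisbee'] 200
```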

The Image Labeler was shut down in 2011 — among other issues, abuse between players had become pervasive, with users spamming the game with words like ‘abrasives,’ ‘entrepreneurialism,’ and ‘forbearance’ — and it reemerged in a different form in 2016. To this day you can log in to the Labeler and lend a human-based approach to visual indexing. The images that confront users today are confusing and not straightforward to answer. (How do I see fog in a photograph?) Machine learning and approaches based on similar visual forms, much as Karpel envisioned the world, have come a long way in the intervening years, but the current Labeler gives some insight into what types of images the machines are still struggling to classify, and the categories of images that users are most interested in.

Image classification via keywording remains an important and valuable practice in the broad world of stock photography. Stock photo collections are larger and more prevalent than ever. These images surround us in the world, and the measure of success for the end user (usually commercial) depends on choosing the right one. In “Re-use Value,” from Cabinet magazine, Jenny Tobias highlights the importance of keywords in the stock image marketplace. Referring to the image at the top of the article, Tobias writes:

According to the original caption, it is also a Portrait of Otto Bettmann—About 2 Years of Age, With an Umbrella.

Little Otto in his skirts could not have known that he would start collecting images as a teenager, earn a Ph.D. at twenty-five, begin compiling a picture history of civilization while curator of rare books in the Prussian State Art Library in Berlin, flee Nazism to the United States in 1935, and shortly thereafter found the Bettmann Archive, a major purveyor of stock photography. Nor did he know that sixty years after its founding, the Archive would be acquired by Microsoft’s Bill Gates for his Corbis image archive, shipped from lower Manhattan to a climate-controlled Pennsylvania mountain for preservation and digitization, and then redistributed over the Internet, where little Otto can be found today, exactly a century after the photographer snapped the shutter.

The article notes that the only transaction on this photo of Otto was for personal use, not commercial — but the keywords listed here go far beyond what is shown in the actual image, and such is the nature of keywording stock images.

As Vestberg writes in her “Ordering, Searching, Finding” about the user experience of navigating the iconographic digital databases of the Warburg and Conway Libraries, keywording is key for researchers and stock hunters alike — and there are challenges: “Since the keywording system for digital files closely follows the iconographic categories of the analogue files, there are a number of elements in any image that may not be included in the metadata because they have not, for whatever reason, been considered iconographically significant in the process of keywording.” (Vestberg, 477).

In the case of the stock photo of Otto Bettmann, it seems impossible to feed this image to a computer and have the machine know on its own to tag the image with “Prominent persons.” The central difficulty here is summed up neatly by Vestberg: “accounting for what a picture shows is never the same as describing what it depicts.” (478)

I was inspired by this last article and its elucidated challenges of searching these archives for images of “arrows” in order to find representations of “Saint Sebastian,” and vice versa. I decided to explore the Warburg, the Conway, and the Endangered Archives Programme (just for fun) with the search term (keyword) “Magic” and compare results. (I chose Magic because I love the Warburg system of classification, particularly the subsection of “Magic and Science.”) My findings were varied. The Warburg iconographic database returned 696 results that contained ‘magic’ in the metadata, but there were a lot of duplicates. The Conway Library surprisingly had only 23 results, and the EAP came in just one higher than the Warburg with 697 (also containing duplicates). I have prepared the first ~20 results from each of these for you to see, here. I find the visual artifacts from the Warburg to be the most aesthetically satisfying, though this journey of searching and finding through the varied organizational structures of these photo collections was more meaningful than the destination of visual results.

 

 

References:

John Tagg, “The Archiving Machine; or, The Camera and the Filing Cabinet,” Grey Room 47 (Spring 2012): 24-37.

Douglas Crimp, “The Museum’s Old, The Library’s New Subject” in On the Museum’s Ruins (Cambridge, MA: MIT Press, 1993): 66-83.

Anna-Sophie Springer, “Original Sun Pictures: Institutionalization” in Fantasies of the Library, eds. Anna-Sophie Springer & Etienne Turpin (Berlin: K. Verlag, 2014): 119-31.

Diana Kamin, “Mid-Century Visions, Programmed Affinities: The Enduring Challenges of Image Classification,” Journal of Visual Culture 16:3 (2017): 310-36.

Nina Lager Vestberg, “Ordering, Searching, Finding,” Journal of Visual Culture 12:3 (2013): 472-89.

Allison Meier, “Four Million Images from the World’s Endangered Archives,” Hyperallergic (February 23, 2015).

Wikipedia contributors, “Google Image Labeler,” Wikipedia, The Free Encyclopedia (accessed November 20, 2018), https://en.wikipedia.org/w/index.php?title=Google_Image_Labeler&oldid=853941020.

Photo Collections

I’m really drawn to the text that draws out the two lines of thought in photographic archiving strategies. Firstly, I had only recently become aware of MCAD upon discovering my program director is an alum, so it’s interesting to see it appear again. It also makes me think about how much Midwest design and art (and archiving!) is underrepresented against the larger coastal institutions, whose money is old and vast.

I’m also drawn to the idea of archiving images by their visual semantics. As a technologist, I always think about how machines “see” and process information: they are highly semantic! At their core, computers see one small block of color in a long single line of colors. With some math, they see edges, they see gradients.
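
As a rough sketch of that pipeline, assuming NumPy and scikit-image (my choice of libraries) and a hypothetical “photo.jpg”:

```python
# How a machine "sees": pixels as numbers, edges as differences
# between neighboring values.
import numpy as np
from skimage import io, color, filters

image = io.imread("photo.jpg")   # an H x W x 3 array of color blocks
gray = color.rgb2gray(image)     # collapse color to intensity
print(gray.ravel()[:10])         # "a long single line" of values

edges = filters.sobel(gray)      # gradient magnitude: where values change
io.imsave("edges.png", (255 * edges / edges.max()).astype(np.uint8))
```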

The goal of computer vision (CV) is to get context. The largest competition/effort in CV is COCO, or Common Objects in Context: it’s right there in the name! So, at least for machine learning, the visual semantic is the base and necessary component for contextual organization. I guess I’m curious whether machines could implement the discourse of the document, or is that solely a human task? Could the NYPL scan their magazine clippings and be told which heading each belongs under, especially when they have wild headings like Views from Behind?

Escaping Biases

Wolfgang Ernst concludes the short chapter assigned for today with the sentence: “Instead, digital data banks will allow audio-visual sequences to be systematized according to genuinely signal-parametric notions (mediatic rather than narrative topoi), revealing new insights into their informative qualities and aesthetics.” (Ernst, p. 29) These ideas remind me strongly of the Linnaean project: to find intrinsic qualities in the object, picture, or music piece to fulfill the dream of a “true” and objective classification system.

But I think Diana Kamin’s critique is very important in this case: even “under the most machinic of circumstances, the eye of the expert collector is smuggled in, with attendant biases and values” (Kamin, p. 331), due to the curated training sets that are used to enable a machine to “see”. Maybe the underlying question is: will it ever be possible to escape the biases?

Voluminous problems

Back in 2011, I visited an exhibition by photographer Erik Kessels at Foam, a photography museum in Amsterdam. It was an invitation to wander through rooms full of unordered mounds of printed photographs – every photo that had been uploaded to Flickr within a 24-hour period. At that time, the daily upload was around 1 million images. This was just before the total ubiquity of smartphones, before Facebook acquired Instagram, around the start of photo sharing becoming a core component of communication. As a mechanism for appreciating the volume of images generated, the exhibition was both memorable and formidable.

Photo: Erik Kessels, http://www.kesselskramer.com/exhibitions/24-hrs-of-photos

According to some stats from earlier this year: 300 million photos are uploaded to Facebook every day, 95 million images and videos are uploaded to Instagram, and a total of 4.7 trillion photos were stored digitally by the end of 2017. And of course, there’s an exponential curve there.

Photo: Erik Kessels, http://www.kesselskramer.com/exhibitions/24-hrs-of-photos

When I think back to those rooms, and how daunting the volume was even then, I viscerally felt Tagg’s comment about “the danger of being entirely submerged if the other cameras follow suit and the stream becomes a deluge.” There is no solution to the challenge of ordering and archiving such image-based data that does not involve some “archiving machine,” whether submission is to the logics of a filing cabinet or photo recognition AI.

Spiegelman’s “Words: worth a thousand” seems a quaint account of the problem of ordering pre-digital images. The senior librarian’s comment that the “indexing will become more rational when we go to digital storage” seems a radical simplification of whose definition of “rational” will have deciding power over ordering.

Perhaps it is the time of the semester, and having to deal with my own problems of “overaccumulation” of information, but surrendering to the convenient tyranny of AI suddenly seems to make sense. Yes, all ordering, codifying, and archiving will “make us ask what we have lost of our being to archival machines” – but there was a “certain lack of precision” in human ordering of pre-digital photographs too. We have never been in control of our data; some machines just give us the sense that we are.

Processing post

What new types of research, or what new types of statements, can be made from an algorithmically generated (or processual) archive? That’s definitely something worth thinking about, but when Ernst argues that this means the audiovisual archive “can for the first time be organized not just by metadata but according to proper media-inherent-criteria – a sonic and visual memory in its own medium” (28), it also seems, for us, like a step back into thinking of the archive as a transparent source.

He thus places the archival medium, again, somehow outside of politics, as some objective mechanism prior to human perception, as “a genuinely code-mediated look at a well-defined number of information patterns.” (29) I get that the strength of his argument comes from the idea that digitization to some extent strips away medium specificity, at least for the machine, and subjects everything to code. When both sound and images become binary, the archival implication is that human tastes and distinctions become less important. The archival medium becomes the first archaeologist, historian, or researcher, before its human user. I’m not sure “machine objectivity” is exactly what Ernst is after, but I argue we should be wary not to fall back into “algorithm = objectivity,” as Kate Crawford has shown in her talk on face-recognition bias. It’s the same old bias – but now hardcoded.

Application Post

This week’s readings are concerned with the challenges for indexing, storage, and access of photo archives, all of which are complicated by “the constant problem of adequately attending to the different levels of content in any picture.” (Vestberg)

The things depicted are references to pre-existing objects in the world, making every photograph a “mini-archive within the archive,” according to Vestberg. Yet, she adds, those things might have little to do with what the photo means: “Accounting for what a picture shows is never the same as describing what it depicts” (Vestberg).

This complicates not only the work of professional picture archivists, but also the photo management of anyone with a camera on hand — which today is virtually everyone. The problems illustrated in the readings made me better appreciate the mess that is my own photo archive, which I want to talk about today. (Apologies for taking my own work as an example! It seemed to fit when I first started writing, but at this point I feel embarrassed.)

I have owned a camera since 2011 and have since then accumulated over 15,000 pictures, not counting the thousands more taken with mobile phones. This archive consists of mostly “fine art” photography — not in the sense that it’s “fine”, or “art”, but in that its focus is aesthetics, not personal or documentary. I keep my pictures in Adobe Lightroom, a photo management and editing software. By default, Lightroom displays them in a grid reminiscent of a contact sheet from analog times. Pictures are grouped in folders by import date or date taken, if this metadata is available. These folders — perhaps the digital equivalent of physical vertical files — are listed in the sidebar, like a single filing cabinet slid open.

I have never found a good way to organize my collection in this system. To make a long story short, it’s a mess that I have avoided dealing with. Until, two years ago, I wanted to make a photo book. Figuring that organizing the database myself was a lost cause, and unable to rely on any metadata whatsoever, I decided it’d be an interesting experiment to computer-curate the book: to automate all metadata collection and organize the material purely on computable information.

So I used online computer vision services by Google and Microsoft to generate captions and tags for each image. Then, I collected dominant colors by averaging pixel distributions, and applied the Histogram of Oriented Gradients algorithm to quantify image composition. This way I was able to generate about 850 data points for each picture.
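
The extraction might have looked roughly like the sketch below, reconstructed with scikit-image; the library choice, parameters, and filename are my assumptions, not the exact setup described above.

```python
# A rough reconstruction of the feature extraction: dominant color by
# averaging pixels, composition via Histogram of Oriented Gradients.
import numpy as np
from skimage import io, color, transform
from skimage.feature import hog

def image_features(path):
    small = transform.resize(io.imread(path), (128, 128), anti_aliasing=True)

    # Dominant color: average each channel over all pixels.
    dominant_rgb = small.reshape(-1, 3).mean(axis=0)

    # Composition: HOG on the grayscale image.
    composition = hog(color.rgb2gray(small), orientations=8,
                      pixels_per_cell=(32, 32), cells_per_block=(1, 1))

    # The captions and tags from the cloud services would be encoded
    # and appended here to reach the full ~850 values per picture.
    return np.concatenate([dominant_rgb, composition])

print(image_features("IMG_0001.jpg").shape)  # hypothetical filename
```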

Without knowing it, I had tapped into what Kamin calls the “discourse of affinities”: processing and organizing images solely by their formal content rather than their documentary value. At the Minneapolis College of Art and Design in the 1960s (preceded by Warburg’s Mnemosyne Atlas in the 1920s and Malraux’s 1949 Musée Imaginaire), former MoMA librarian Bernard Karpel sought to “force a reorientation of basic library approaches away from the historical and factual formulas to those that can follow the exploitation of the image in semantic and aesthetic terms.” (Kamin)

Kamin notes that “the discourse of affinities is manifest in discussions around pattern recognition and machine vision” today — the very technology I used to complete my project. Here, the generated tags and captions are purely descriptive, as they have to be derived from machine-readable form. And only sometimes, accidentally, do they transcend into something bigger: when the computer gets things wrong.

Assuming that at this point we are all somewhat familiar with the politics of machine learning and computer vision, I want to point out that they become visible in my project, too, and move on to another aspect that I think is relevant.

One of my frustrations with Adobe Lightroom is the rigid organizing structure that does not at all address the aforementioned “problem of adequately attending to the different levels of content in any picture.” (Vestberg) Generally, it seems that photo storage infrastructure is unable to account for complex interrelations between pictures: In an ideal spatial organizing system you would want a photo of a black cat, for example, to be close to other felines as well as animals in general, but also in the proximity of internet memes, the color black, and folklore.

The vertical filing cabinet in the physical archive, and its digital counterpart in Adobe Lightroom, are all limited in their spatial configuration. There are only so many attributes that can be taken into account simultaneously in adjacent files, folders, cabinets — in the physical world with its three dimensions. However, in geometry it doesn’t matter whether you describe a space in three, four, five, six, or more dimensions. So theoretically, my cat picture could be in the animals section in three dimensions but also next to folklore/religion in an additional dimension.

To imagine a fourth dimension, one would have to be able to picture a fourth axis perpendicular to the other three, and to imagine 800 more is unthinkable. Yet, although we can’t see it, I was able to computationally arrange my photo collection in 850-dimensional space, considering all 850 criteria at once.
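
A quick illustration of why the math doesn’t care how many axes there are (my own example, not from the project):

```python
# Euclidean distance uses the same formula in 3 or 850 dimensions.
import numpy as np

rng = np.random.default_rng(0)
a3, b3 = rng.random(3), rng.random(3)          # two points in 3D
a850, b850 = rng.random(850), rng.random(850)  # two points in 850D

print(np.linalg.norm(a3 - b3))      # distance along 3 axes
print(np.linalg.norm(a850 - b850))  # the same formula, 850 axes
```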

If you gloss over the fact that many aspects of a picture’s meaning can’t be derived or represented computationally, a high-dimensional archive possibly enables a superior ordering logic: one where all these simultaneous connections become possible, where formal as well as documentary attributes can be considered side by side and all at once, if only they can be quantified.

But there is a catch. Rendering this archive visible requires reducing all 850 dimensions again to the two or three we can perceive. To do this, I used a dimensionality reduction algorithm called t-SNE. The algorithm calculates the distances between all pictures in high-dimensional space and arranges them in 2D, trying to achieve a similar distribution.
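
A minimal sketch of this step, assuming scikit-learn’s implementation (the algorithm is named above, but no library; “features.npy” is hypothetical):

```python
# Reduce ~850-dimensional picture features to a 2D map of affinities.
import numpy as np
from sklearn.manifold import TSNE

features = np.load("features.npy")  # shape: (n_pictures, 850)

# t-SNE tries to keep high-dimensional neighbors close together in 2D.
xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
print(xy.shape)  # (n_pictures, 2)
```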

The result is a map of affinities where pictures that were close together in high-dimensional space are grouped together in 2D as well. Unfortunately, perfect accuracy is impossible. Just as flat maps of our spherical Earth always distort the globe in some way or another (e.g. rendering Greenland or the poles huge, tiny, or misshapen), t-SNE can never account for all of the spatial relationships in hyper-space at once.

Finally, to get the arrangement into book form, I used another algorithm that attempts to compute the shortest path through all points in the arrangement, thereby defining the page order. The sequence is bound as an accordion book, a 95-foot-long journey across categories, formal characteristics, and descriptions simultaneously. It’s a new way to traverse the collection, but it leaves the viewer unable to see the whole picture, where it all made sense.
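
The exact path algorithm isn’t named above; a greedy nearest-neighbor heuristic like the following sketch is one simple way to approximate such a path:

```python
# Turn the 2D map into a page sequence by always jumping to the
# nearest picture not yet placed (an approximate shortest path).
import numpy as np

def page_order(xy):
    """Visit every point once, always moving to the nearest unvisited one."""
    remaining = list(range(len(xy)))
    order = [remaining.pop(0)]  # arbitrarily start with the first picture
    while remaining:
        last = xy[order[-1]]
        nearest = min(remaining, key=lambda i: np.linalg.norm(xy[i] - last))
        remaining.remove(nearest)
        order.append(nearest)
    return order  # indices into the picture list, i.e. the page order

xy = np.random.rand(12, 2)  # stand-in for the t-SNE coordinates
print(page_order(xy))
```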

Slides
https://goo.gl/MTC5W7

Works Cited

Nina Lager Vestberg, “Ordering, Searching, Finding,” Journal of Visual Culture 12:3 (2013): 472-89.

Diana Kamin, “Mid-Century Visions, Programmed Affinities: The Enduring Challenges of Image Classification,” Journal of Visual Culture 16:3 (2017): 310-36.

 

Processing Post

“Javitz sharply criticized his ideas, cautioning that his approach required a subjective appraisal riddled with personal aesthetic bias that would endanger the objective, impartial study of images.”
This was funny to me because if one visits the NYPL Picture Collection, the categories are quite subjective. For example, for an image of the moon in the night sky, how does one determine whether the picture should fall under Moon or Sky?

The two genealogies of image classification discussed in Kamin’s piece also made me think of the different computer vision techniques used in analyzing images and whether they might fall into either category. For example, I might categorize object detection under Javitz’s line of thought, whereas something like the watershed algorithm (which treats the image almost as a black-and-white topographical map, seeing brighter areas as elevated points) fits better with Karpel’s philosophy. Regardless of this binary categorization, I think the notion that there are many, many ways to analyze an image is interesting and also carries over into computer vision.
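
To make the topographical metaphor concrete, here is a minimal watershed sketch using scikit-image (my choice of library; “photo.jpg” is hypothetical):

```python
# Watershed: treat brightness as elevation and flood from seed markers.
import numpy as np
from skimage import io, color, filters, segmentation

gray = color.rgb2gray(io.imread("photo.jpg"))
elevation = filters.sobel(gray)  # gradients become the "terrain"

# Seed the flooding from clearly dark and clearly bright regions.
markers = np.zeros_like(gray, dtype=int)
markers[gray < 0.3] = 1
markers[gray > 0.7] = 2

labels = segmentation.watershed(elevation, markers)
print(np.unique(labels))  # the flooded regions
```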

Critlib: The Dilemmas of Meeting Theory and Practice

Critical librarianship, or critlib for short, is a term that has emerged in the last few decades in reference to the application of critical social theory to the practices of librarians, cataloguers, archivists, and others concerned with the storage, classification, and accessibility of knowledge. The term has been popularized across social media, specifically Twitter with #critlib, connecting a community of librarians around a particular set of values they bring to their professions. But regardless of whether one actively engages with that online community directly, critlib largely represents the process of problematizing librarianship through theoretically informed practice. Critlib as a term encompasses both the critical theory informing it and the varied practices of rethinking the role and organization of the library.

The emergence of critlib was accompanied by valid criticisms of the uses of critical theory; some argue that it is inaccessible, elitist, and too convoluted to provide a framework to work from in the day-to-day profession.[1] Others argue that the library needs to be an objective place and that the role of the librarian is to help students learn “information literacy concepts and how to apply those concepts to their tasks… we are not paid to subscribe to some abstraction about oppressive power structures or to apply our skill sets to an ambiguous and amorphous idea of ‘social change.’”[2] However, this stance fails to recognize the already non-neutral position of the library in regard to how its structure makes certain resources accessible or not, the implications of its classifications, and the mainstream emphasis on practicality in the library.

Lua Gregory and Shana Higgins look at the history of the organization and expansion of librarianship and its relationship to the rapidly industrializing and competitive characteristics of the Gilded Age (1870-1900) and the Progressive Era (1890-1920). Melvil Dewey, one of the primary founders of the library system and director of the American Library Association from 1887-1905, often used language to describe librarianship that indicated he saw “business practice as the ideal for the organization and practice of librarianship.”[3] The business practices in mind during this time placed emphasis on efficiency (increased mechanization) and saw the intensifying power of corporate entities.

However, even at these early stages of the organization of the library system, resistance to the proposed corporate model of the library came from the Vice Director at the time, Mary Salome Cutler Fairchild. Fairchild’s design of the class “Reading Seminar” at the New York State Library School “inspired women training to become librarians to think more deeply about the implications of their work for their communities, and the historical and cultural contexts of their work.”[4] Despite many responses to a survey dispersed to alumni that indicated the value and importance of theoretical and philosophical training in librarianship, Dewey’s Handbook of the New York State Library School focused largely on practical matters and efficiency. Here, theory is cast aside for practice, specifically practice that is unengaged with looking carefully at the machinations of the corporate model that standardizes library work into a mechanical process, disconnecting the profession from the communities it is meant to serve.

Considering this legacy of the commodification approach to librarianship and the concurrent response of certain librarians arguing for more attention to theoretical work, the modern American library system has been faced with the dilemma of reconciling theory and practice since the early formations of its aims and organization. This dilemma, as Emily Drabinski explains, is to be expected:

“If we understand action and discourse as both produced by and productive of the present, the coincidence of critical and compliance perspectives makes analytic sense. The kairos of contemporary critical approaches is not generic, but emerges from and alongside a kairos of compliance that it contests and resists…Critical perspectives on information literacy instruction represent a reaction against a kairos of compliance.”[5]

Drabinski uses the Greek term Kairos here to refer to qualitative time, marrying ordinal time with social, political, and historical context to a sense of the present.

In 2014, the Framework for Information Literacy for Higher Education (Framework) was offered as a critical alternative to the Information Literacy Competency Standards for Higher Education (Standards), which had been in place since 2000. The Framework aimed to emphasize the importance of local and contextual learning outcomes measured by local and contextual tools, whereas the Standards provided a general set of performance indicators and data reporting tools. Though the Framework has been lauded for making room for the specificities of community context, providing flexibility in the assessment of a library’s value, the actual implementation of the Framework has been a complex process fraught with uncertainties. The Standards, with their generalized approach, provided certain tools that librarians made use of to show the importance of the library in its community in order to secure funding for maintenance.[6] Furthermore, as Alison Hicks points out in “Making the Case for a Sociocultural Perspective on Information Literacy,” the Framework in its effort to provide a contextually based approach has “positioned all disciplinary thinking as emerging from the same core and overarching information literacy concepts rather than, as is the case with a sociocultural perspective, recognizing the individuality and uniqueness of each discipline.”[7] By vaguely alluding to the importance of community knowing without specifying how to engage with it, the Framework also works to homogenize the value of collective and varied experiences in a hazy catch-all.

Critlib as an engagement with both theory and practice is not to be understood as some ideal harmonious meeting of the two, as it clearly comes with its own dilemmas surrounding implementation and engagement. Rather, critlib enables us to consider the way librarianship has been embedded in these dilemmas in the formation of its foundational structure through to the contemporary processes of rethinking the library. Considering the general concern of librarians with accessibility and engagement, critlib aims to meld the self-reflexive thinking of theory with the implementation of effectual practices responsive to community needs.

 

[1] Karen P. Nicholson and Maura Seale, “Introduction,” in The Politics of Theory and the Practice of Critical Librarianship, ed. Karen P. Nicholson and Maura Seale, Sacramento: Library Juice Press (2017): 8.

[2] Eamon Tewell, “The Practice and Promise of Critical Information Literacy: Academic Librarians’ Involvement in Critical Library Instruction,” College and Research Libraries (2017): 37.

[3] Lua Gregory and Shana Higgins, “In Resistance to a Capitalist Past: Emerging Practices of Critical Librarianship,” in The Politics of Theory and the Practice of Critical Librarianship, ed. Karen P. Nicholson and Maura Seale, Sacramento: Library Juice Press (2017): 26.

[4] Gregory and Higgins, 29.

[5] Emily Drabinski, “A Kairos of the Critical: Teaching Critically in a Time of Compliance,” Communications in Information Literacy, 11(1) 2017: 83.

[6] Drabinski, 85.

[7] Alison Hicks, “Making the Case for a Sociocultural Perspective on Information Literacy,” in The Politics of Theory and the Practice of Critical Librarianship, ed. Karen P. Nicholson and Maura Seale, Sacramento: Library Juice Press (2017): 73.

 

Works Cited

Drabinski, Emily. “A Kairos of the Critical: Teaching Critically in a Time of Compliance.”Communications in Information Literacy, 11(1) 2017: 76-94.

Gregory, Lua and Shana Higgins. “In Resistance to a Capitalist Past: Emerging Practices of Critical Librarianship.” in The Politics of Theory and the Practice of Critical Librarianship. ed. Karen P. Nicholson and Maura Seale, Sacramento: Library Juice Press (2017): 21-38.

Hicks, Alison. “Making the Case for a Sociocultural Perspective on Information Literacy.” in The Politics of Theory and the Practice of Critical Librarianship. ed. Karen P. Nicholson and Maura Seale, Sacramento: Library Juice Press (2017): 70-81.

Nicholson, Karen P. and Maura Seale. “Introduction.” in The Politics of Theory and the Practice of Critical Librarianship. ed. Karen P. Nicholson and Maura Seale, Sacramento: Library Juice Press (2017): 1-18.

Tewell, Eamon. “The Practice and Promise of Critical Information Literacy: Academic Librarians’ Involvement in Critical Library Instruction.” College and Research Libraries (2017).

 

Ethics and justice in archiving?

After reading the Caswell interview, I am not sure that ethics and justice are the right language for discussing the archiving of complex and morally ambiguous social issues. She says that for her, social justice and archiving “100% overlap,” and this works when she talks about issues of representation of minority groups. But when the issues get trickier, archiving for social justice, or archiving social injustice, becomes difficult to talk about in terms of rights and wrongs. What does it mean to document contemporary German Islamophobia ethically?

Caswell calls for the ethics of care, in part as a response to the abstract “metaphor” of The Archive. While I understand some of the issues archivists and archival scholars must have with this type of theory, it seems to me that archiving for “a more just world” is a lot more abstract than the Foucauldian notion of the archive. To me, it seems a lot more useful to ask what statements, studies, or claims could arise from the way in which Islamophobia is documented, rather than to consider it in terms of social justice.

 

Acknowledging and Addressing Archival Injustices

I am surprised to read in the Caswell interview that archivists are resistant to online records, and that records require materiality. Perhaps I am missing a key difference between an archive and a record, but that seems to exclude a vast amount of data. As mentioned, it excludes oral and kinetic records, but does it also exclude databases and online records?! What are the characteristics of a dataset that would make it a record? And in thinking about archiving radical movements, I also struggle to see how we can stick only to materiality when entire political struggles are started and maintained through hashtags.

I’m also interested in the ethics of cataloging radical movements. Like the participants in On Our Backs, should protesters be subject to having their dissidence preserved? Is there an intersection of anonymity and the accuracy and authenticity of archives?

And a last thought on radical digital archives: with the advent of deepfakes, professional trolls, and misinformation, what are the ethics of including, for example, false tweets and misinformation? On one hand, presenting that “data” on equal ground with legitimate data is problematic. On the other, those campaigns should be documented as part of fighting these struggles.

I guess this week brought more questions than it did answers…

Processing post

The points brought up in all of these readings/talks/interviews support my thinking that the decolonization of archives is much more complex than an ‘undoing’ of archival injustices. It is not simply a matter of repatriation or ownership, a returning of materials to where they came from. The very methodologies used in colonial archiving practice (for example, as Christen brings up, the viewing of indigenous/colonized peoples as subjects of study rather than as collaborators) have enduring effects on the categorization, preservation, metadata, and dissemination of these artifacts even in today’s context. Moving forward, I would also like to linger on the question of what non-western archival practices look like. Caswell several times in the interview draws a strong dichotomy between western and non-western archival thought, particularly around the notion of subjectivity: “Records are supposed to be impartial, which means that the people creating them should have no notion of how they might wind up in an archives in the future.” This is an important distinction because all of these readings argue that archivists should respect the intended visibility, distribution, and preservation of artifacts at the time of their creation (e.g. the intended illegibility of certain rap lyrics for particular audiences (Doreen St. Félix), or the right to be forgotten).

Mindsets & Toolsets for Self-Archiving

I was struck by the recurring themes of awareness and empowerment, and the efforts to provide communities with tools to archive themselves, that ran throughout the material for this week. The interview with Michelle Caswell provided several examples of this in her own work and in the work of those who have inspired her — all stressing the importance “to use the same language [in archival projects] that communities use to describe themselves.” She builds on this in the following article regarding models that employ “radical empathy” and core tenets of social justice in archival practice. These sentiments are expanded upon in Kimberly Christen’s work with Traditional Knowledge licensing and labeling systems for use in the handling of indigenous cultural digital materials. I was particularly interested in the iconography of the TK labeling system highlighted in this work — using visual cues to potentially expand the reach of the system through educational/social channels. Bergis Jules, in the “Failure to Care” panel discussion, neatly and succinctly articulated these efforts via his interest in the “usability of data archiving tools as a way to diversify the historical record.”

From the same panel discussion, I am also interested in Doreen St. Félix’s comment on the griot as a sort of “ghoulish” figure in West African culture/society. This role of musician/historian/storyteller is another example of embodied archival knowledge via the distribution of oral history. I hadn’t previously considered or known about the darker contexts/associations of this cultural figure.

Lastly: Evan Hill’s article — focusing on “858,” the Mosireen archive project documenting smartphone videos of the 2011 Egyptian protest movement — makes a keen observation in its conclusion: “We say the internet never forgets, but internet freedom isn’t evenly distributed: When tech companies have expanded into parts of the world where information suppression is the norm, they have proven willing to work with local censors. Those censors will be emboldened by new efforts at platform regulation in the US and Europe, just as authoritarian regimes have already enthusiastically repurposed the rhetoric of ‘fake news.’” The subject of intense moderation of major social media and networking platforms is the focus of the film highlighted on this week’s Independent Lens on PBS — The Cleaners, by Hans Block and Moritz Riesewieck. (I haven’t watched it yet!)

Addressing Archival Injustices at Weeksville Heritage Center

For this presentation, I want to take up Kameelah Janan Rasheed’s question, “Is it okay for things not to exist for future generations?” (41:00). The question was particularly fraught for Janan Rasheed, who grappled with her own “archival impulse,” and it was situated within a discussion about Blackness and archives: empowering Black people to create, manage, and dictate the terms of their own archives (Jules), protecting Black cultural production from being made legible or knowable through archives (St. Felix), and using illegibility or invisibility as a means of escape from the surveillance of Blackness (Browne). For me, this question is central to this week’s readings in that they all, in one way or another, argue that addressing the archival injustices that silence, exclude, or overwrite marginalized voices requires involving those communities themselves. They must be able to determine which parts of their stories should or should not exist for their own – and others’ – future generations.

For Michelle Caswell, every step of the production and maintenance of archives should reflect this self-determination. Archivists, then, have a set of obligations: “to center those people who have been marginalized in our appraisal decisions moving forward,” to describe records using “the same languages that communities use to describe themselves,” and to take a “survivor-centered approach…which is centering survivors [in] decision processes” about what is done with archived materials, such as digitization (Cole & Griffith, 24). This suggests that community involvement must be situated not at a superficial level, after the fact of collection or preservation, but from the beginning and throughout.

I think Weeksville Heritage Center (WHC), a “multidisciplinary museum” in Brooklyn, is attempting to operate along these lines. Weeksville was founded by James Weeks in 1838 and would become “one of the largest known independent Black communities in pre-Civil War America” (The Legacy Project). The community flourished through the work of African American entrepreneurs and land investors and because of its residents, who were deeply committed to sustaining their independence (5 of July). WHC’s mission is to continue this work: to “document, preserve and interpret the history of free African American communities in Weeksville…and to create and inspire innovative, contemporary uses of African American history through education, the arts, and civic engagement” (What We Do). One of its main programs is The Legacy Project, described as “[standing] for the freedom and right to know, document, and defend one’s own history,” with the goal of keeping Weeksville’s “legacy alive and vibrant for future generations” (Legacy Project). WHC hosts Legacy Project events at least monthly. Recent examples include “Embodying Archives,” in which participants were invited to explore individual and collective memories carried “genetically, spiritually, and physically” through “performances, discussions, and communal movement,” and “Archives for Black Lives,” a day of intergenerational self-documentation where participants were taught strategies for recording their oral histories and digitizing family photographs.

On November 14th, visual artist Elise Peterson will lead a workshop on digital collage. Peterson’s digital collages incorporate portraits of artists of color into famous Matisse paintings, and some have been displayed on billboards in the United States and Canada. For The Legacy Project, digital collage is a technique “that offers [the] chance to push the visual boundaries of a design, illustration, or art piece,” but it also seems to enact an intervention, whether in a historical narrative, in modes of representation, or in public spaces. The Project is a manifestation of WHC’s interest in supporting the “self-reliance, resourcefulness, transformation, collaboration, celebration, and liberation of Black persons in America.”

Elise Peterson.

Here, WHC seems to align with Caswell’s idea of the work of archives as the work of social justice (Cole & Griffith, 23), as well as her argument, drawing from Geoffrey Yeo’s definition of a record as a “‘persistent representation of human activity that travels across space and time,’” that records need not be material but can also be oral, kinetic, etc. (Cole & Griffith, 23).

By giving Black community members access to archival strategies and technologies, The Legacy Project creates a space of self-determination for Black people within archives, offering them a chance to modify archives by filling silences with newly recorded oral histories, for example, or to build new collaborative archives from newly digitized material. The project is explicit in its orientation to future generations – keeping these spaces open for them and creating a foundation or toolkit for them to use. Importantly, however, only some of the material generated through The Legacy Project is archived at WHC’s Resource Center for Self-Determination and Freedom. Clips of oral histories recorded at the museum are available to the public through the Center’s digital collections, but The Legacy Project, operating alongside and sometimes with the Resource Center, does not aim solely to generate material for these collections, nor to make such material universally accessible; rather, it gives Black communities and families the resources to preserve their materials for their own purposes. Here it is clear that Black communities are centered in the appraisal process, both in the sense that the archive is made by and for them and in the sense that their lives are not simply objects of knowledge: they themselves are subjects involved in knowledge production, entitled to keep their records out of the archive in the first place.

To return to Kameelah Janan Rasheed’s question – “Is it okay for things not to exist for future generations?” – while The Legacy Project’s primary focus may not be archiving records for everyone everywhere, it does have an interest in preserving material for the future generations of its own community. Preservation and stewardship are highly personal in this context, and though the archives produced through the Project’s programs might hold importance beyond the family, it is up to that family to decide whether and how to share them. In this, I also see Doreen St. Felix’s point that we have to acknowledge and accept that there are things we might never be able to access or understand, and that legibility itself can be oppressive (1:07:39).

Janan Rasheed also asked, “What are the limitations of radical visibility?” but I wonder, what are the limitations of radical invisibility? I am thinking here of Joy Buolamwini’s advocacy for more inclusive code and coding practices, which, for her, would involve modifying existing systems and using more inclusive training sets. After encountering facial recognition software that could recognize her white colleague’s face but not her own, Buolamwini formed the Algorithmic Justice League to combat bias in the design and development of algorithms. Does Buolamwini’s interest in greater inclusivity and visibility imply greater legibility? By contrast, Nabil Hassein has written “against Black inclusion in facial recognition,” arguing, “I have no reason to support the development or deployment of technology which makes it easier for the state to recognize and surveil members of my community.”
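(A side note on method: Buolamwini’s argument about training sets rests on disaggregated evaluation, that is, reporting a system’s error rates per demographic group rather than in aggregate, since an average can look acceptable while hiding a near-total failure for one group. Below is a minimal sketch of such an audit in Python; the groups, outcomes, and numbers are entirely invented for illustration and are not drawn from Buolamwini’s studies or any of this week’s sources.)

```python
# Illustrative sketch of a disaggregated accuracy audit.
# All outcomes below are hypothetical; 1 = face detected, 0 = missed.
from collections import defaultdict

results = [
    ("group A", 1), ("group A", 1), ("group A", 1), ("group A", 1), ("group A", 0),
    ("group B", 1), ("group B", 0), ("group B", 0), ("group B", 0), ("group B", 1),
]

totals, hits = defaultdict(int), defaultdict(int)
for group, detected in results:
    totals[group] += 1
    hits[group] += detected

# Aggregate rate: 6/10 = 60%, a single number that hides the disparity...
print(f"aggregate detection rate: {sum(hits.values()) / sum(totals.values()):.0%}")

# ...which only becomes visible per group (80% vs. 40% in this toy data).
for group in sorted(totals):
    print(f"{group}: {hits[group] / totals[group]:.0%}")
```

The point of the sketch is simply that the choice to report per-group rates, rather than one aggregate score, is itself what makes the bias legible.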

Is there a productive space between legibility and illegibility? Should there be? In her book Dark Matters: On the Surveillance of Blackness, Simone Browne argues that “prototypical whiteness,” the “cultural logic that informs much of biometric information technology,” becomes meaningful only through “dark matter”: bodies or body parts that confuse biometric technologies such as facial recognition (Browne, 162). While she recognizes that the exclusion of “dark matter” from the design processes of these technologies risks reproducing existing inequalities, she also wonders whether there is some benefit to remaining “unknown,” or illegible, to them (Browne, 163). She points to the potentiality of this illegibility: “then and now, cultural production, expressive acts, and everyday practices offer moments of living with, refusals, and alternatives to routinized, racializing surveillance” (Browne, 82).

Robin Rhode, Pan’s Opticon, 2008.

I want to end with these images from Robin Rhode’s Pan’s Opticon (2008), which Browne uses in her analysis and on the cover of her book. In them, Browne argues, the subject’s “ocular interrogation” of the Panopticon and “the architecture of surveillance – corners, shadows, reflections, and light – [covers] the wall with dark matter. … [He] is not backed into a corner, but facing it, confronting and returning unverified gazes” (Browne, 59). For Browne, this kind of looking, which she refers to as “disruptive staring” and which bell hooks has called “Black looks,” is a political and transformative act with, I think, potential for archives and beyond (Browne, 58). 

Sources

Simone Browne, Dark Matters: On the Surveillance of Blackness. Duke University Press, 2015. Print.

Bergis Jules, Simone Browne, Kameelah Janan Rasheed, and Doreen St. Felix, “Failures of Care” Panel, Digital Social Memory: Ethics, Privacy, and Representation in Digital Preservation conference, The New Museum, February 4, 2017 {video} (1:08).

Harrison Cole and Zachary Griffith, “Images, Silences, and the Archival Record: An Interview with Michelle Caswell,” disClosure: A Journal of Social Theory 26 (July 2018): 21-27.

Joy Buolamwini, “How I’m Fighting Bias in Algorithms,” TEDxBeaconStreet, November 2017 {video} (8:45).

Kimberly Christen, “Tribal Archives, Traditional Knowledge, and Local Contexts: Why the ‘s’ Matters,” Journal of Western Archives 6:1 (2015): 1-19.

Nabil Hassein, “Against Black Inclusion in Facial Recognition,” Digital Talking Drum, August 15, 2017. Web.