Algorithm Pt. 1
The trick of lenticular film is that an image (appearing holographic, or appearing to move as you move past it) filters through a prismatic sheet laid on top of a print, dividing each pixel, syncopated by twos, threes, or fours, into a different viewing angle. I had seen the trick before; I knew the trick; I wanted to reproduce the trick (but the lenticular sheets I ordered never arrived). Lenticular images have little to do with “true” or “straight” photography, but I still showed up to Heeso Kwon’s exhibition of spirit photography at the Orange County Museum of Art’s Biennial all the same.
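The interlacing itself can be sketched in a few lines. This is a hypothetical illustration, not any printer’s actual pipeline: images here are plain 2D lists of pixel values, and the (invented) `interlace` function simply cycles the output columns through the source images, the “syncopation by twos, threes, or fours” described above.

```python
def interlace(images):
    """Interleave the columns of N equally sized images.

    Column j of the output is taken from image (j % N), so consecutive
    output columns cycle through the sources. Laid under a lenticular
    sheet, each lens refracts a different source toward each viewing
    angle -- walk past, and a different image appears.
    """
    n = len(images)
    height = len(images[0])
    width = len(images[0][0])
    return [
        [images[col % n][row][col] for col in range(width)]
        for row in range(height)
    ]
```

With two one-row source images, the output alternates between them column by column, which is all the “magic” amounts to.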
Spirit photography, which had its heyday when the consuming public did not know how photography worked and Spiritualism (a cultural obsession with communicating with the dead, among other things) rose to a trend, relies on photography’s credibility: its stable relationship between representation and reality. Photography must be true; photography (in scientific applications) reveals the truth that we cannot see...
The genre of spirit photography, which showed an individual with their deceased ancestor, worked through a sort of double exposure. An individual sat for a portrait. The photographer took the photo, then ran back to the darkroom and superimposed another image onto it, double exposed in a way that looked ghastly. “That’s your cousin!” the photographer would say, and the sitter would, probably, gasp. According to the books I have read, spirit photography took off until, over and over, people revealed the magic through rationalizing accounts of its mechanism. “It’s a double exposure!” they said. I like to imagine, at the expense of historical “accuracy,” that a mob of “rational” people with torches and pitchforks threatened to burn down the spirit studios if they did not all get refunds. But the reality, more importantly, is that when the mechanism of the trend was revealed, spirit photography vanished. (It’s worth noting here that spirit photography did not completely vanish, but morphed into “ectoplasm” photography, in which frame-by-frame depictions of people vomiting, lactating, or leaking somehow became depictions of a more material trace of a supernatural reality.)
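Computationally, the darkroom trick is just the accumulation of light: exposing the same plate twice adds the two exposures together. A minimal sketch, assuming grayscale images as 2D lists of 0–255 values (the function name and representation are mine, not any historical process):

```python
def double_expose(base, ghost, ceiling=255):
    """Additively combine two grayscale images, clipped at the ceiling.

    Exposing a plate twice accumulates light, so the second subject
    shows up as a pale, translucent overlay -- bright wherever either
    frame was bright, which is exactly what reads as "ghastly."
    """
    return [
        [min(a + b, ceiling) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(base, ghost)
    ]
```

Where both frames are bright the sum clips to pure white, which is why the superimposed “cousin” tends to glow.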
Importantly: spirit photography was primarily perceived indexically (think of your finger pointing to an event). But, so many thought, if you were able to show the fabrication of this “indexicality,” of photography’s ability to point to something real, then people might lose their belief in the photograph’s truth. (Thinking of Walter Benjamin, Work of Art in the Age of Mechanical Reproduction, Section XI, where he notices that a depiction of a reality immediate and free-from-technology relies on the “thoroughgoing permeation of reality with mechanical equipment.”)
So when Kathy and I walked through the rest of the exhibition (described by one man as “the last generation of material culture”), we noticed that Kwon’s exhibit contained not just lenticularly updated spirit photography, but a note on the artist’s didactic. Adobe Firefly (Adobe’s generative “AI” software) was labelled as a “collaborator” on all of the works. “UGH,” I said. “UGH,” Kathy said. I did not come here for this. So I pointed Kathy over to the lenticular spirit “photographs,” telling her about issues of credibility, and how Kwon’s work seemed to be playful.
If you walked up to the glowing lightbox containing the spirit photo, you would not see the image of a ghoulish monster until you walked to the right. Some visitors had to tell the others in their group to look again: by the time they had passed the image, they could look back and see a creepy, green figure in the middle of an ordinary family photo. Spooky! The photographic work must have been doing something other than straightforwardly depicting a spirit (or green alien monster) that, magically and somehow captured lenticularly, was in Kwon’s childhood home.
Lenticular photos, in my mind, are always sort of playful, because the medium, to me, seems somewhat kitschy. It’s such a commercial gimmick (every advertisement for lenticular film that I’ve seen describes its commercial applications) that the disappearance and appearance of an object in a photo, based on the viewer’s position, is difficult to take seriously. The magic of it is thin. But there is one thing that makes these photographs, with silly monsters that appear and disappear, more interesting. Because Kwon is playing in a spirit photography genre that is already discredited, it is worth noting that the images are old family photos, shot on film, with that nostalgic on-camera flash, and, most importantly, a date-stamp imprinted (probably from the original negative) onto the bottom right corner. The photos drip with “authenticity.” Kwon showed these nostalgic, date-stamped photos in a way that felt fabricated: the filmic, amateur, flash-lit style that feels so self-evidently “real” and “authentic” had been altered and produced (lenticularly imaged, with the addition of an AI monster). Kwon, through play, shows the signifiers of credibility against a genre (which she updates with Adobe’s algorithmic generation) that has already been discredited. Nostalgia is credible, especially with a date-stamp for veracity; the scene that we see is not. The exhibition dealt with credibility.
The room around us contained two spirit photos. The rest, filling the walls, were just “normal” family snapshots scattered up and down them. These were supposedly Kwon’s own family photos, and, given the hint by the didactic, Kathy and I scrutinized each image for hints of algorithmic generation. Some objects looked odd (a refrigerator should not be wrinkled like that; a ceiling should not be blurry and light like that; that stuffed animal is a monstrous blob), but don’t odd things exist in the real world as well? Aren’t there odd artifacts of lenses, lighting, film development, or whatever? (Does AI play on this ambiguity—your brain trying to make a coherent image in spite of your own visual doubt?) In spite of the ambiguity of uncanny generation, one element of each image consistently stood out of place. I turned to Kathy, pointing. “Why is the date-stamp in the middle of the photo?” Looking back, we stood in a room full of images where the date-stamp, the signifier of veracity, revealed the borders of fake-and-real.
Some images contained duplicated-and-distorted date imprints, including one with fragments of unintelligible text overlaying a woman’s Adobe-generated butt. If AI were a collaborator, it was a sloppy one. Kwon left the AI slop in. She revealed AI’s mechanism, its algorithmic hand. Kathy and I laughed. “Look,” we said, pointing at the boundaries of real-and-fake (why are these categories so simplistic now? Photos—especially film photos—are real; Photoshop, now, is sort-of real; and AI is definitely fake, apparently). “Ha ha ha,” we said, I think scaring the people around us, outlining the borders of the “original” photo by pointing. Kwon seemed to be making fun of the didactic, parodying the “imaginative” capacity of Adobe’s software, and condemning it to the same fate as spirit photography. Maybe, once the software’s mechanism is revealed, AI will vanish too, and Kwon seemed to be revealing the mechanism (which, if it were done in a more robust, systematic way, would be against Adobe’s terms of use: keep the magic alive).
And there was one more trick. The curtains, featured in some of the photos, hung from the walls of the exhibition. Two large, meaty curtains stood in front of us. They were strange. But they were also in the images. The curtains were “real,” the photos were probably “real,” and so the generated parts of the images, which extended from those photos, well, those must feel “real” as well. But we had seen the trick: there’s no going back. We were told and shown the fabrication of this reality; we had lost our belief in its “truth;” we no longer believed in its depiction of the facts of memory. Adobe’s Firefly algorithm, as a collaborator, seemed unreliable and sloppy, and Kwon, in my mind, seemed to be making fun of it.
(It’s worth noting that some of my favorite photos have little, or a lot, to do with “truth;” that truth has always been very squirmy in photography; and that just because something is fake and “not true” does not mean it is obsolete and bad. We are not fundamentalists here. But photography is a good site for thinking through representation, reality, authenticity, veracity, credibility, and all those other things that start to turn religious: belief. It’s also worth noting that Kwon used Adobe’s Firefly generation not for veracity but for “imagination,” yet a few people I talked to did not notice that AI was involved and so, naturally, believed the images to be true. More complicatedly: there’s a difference between believing in the intentionality of an algorithm and believing in the truth of the algorithm’s work, but that is too much to get into here.)
As we left the room, wandering through the artifacts of the “last generation of material culture” (a form of youth culture, here), the stark contrast between the AI family photos and the punk-rock, independent, zine-like displays presented itself forcefully. DIY zine publications populated the walls; film photography documented “rebellious” youth cultures; a 16mm video displayed youth exploring California; a collection of older painted artworks, from the twentieth century, was curated by high school students; all of these, including the Lime-mp3 player embedded into a bench, leaned towards a more “material” culture than one digitally generated. So when we passed by glass cases of objects from people’s bedrooms, Kathy, who is older than I am, said that the lives displayed in the objects behind the glass paralleled her own. Kathy was more “indie,” these kids were more “punk,” but the objects, according to Kathy, pointed to a time when subcultures could exist; when subcultures were “actually underground,” and more specific.
I think, if the AI exhibit says anything, it is this: the Adobe software’s capacity to “imagine” and extend our memory will erode the material cultures that give us a particular, yet somehow culturally unified, identity (Kathy, who is indie, identifying both with and against the punk kids). Ominously, the faith in machine learning and generative algorithms contrasts with the material DIY culture (REAL LIFE) displayed in the rest of the exhibit.
Although I could not attend the full biennial, I arrived, beforehand, without Kathy, to see the final participants in a workshop of DIY printmakers. Do It Yourself is what the museum seems to be promoting. “Artificial intelligence” is all of the former and none of the latter: Kwon revealed it to be a sham, so please do it yourself.
Unfortunately, according to Trevor Paglen, there is “going back” to belief even after the algorithmic trick has been revealed. He writes that when people see language performed, even by an “artificial intelligence,” many automatically assume an intentionality behind it (Barthes’ influence here: both everywhere and simultaneously non-existent). If something uses words in a way that a user can understand, Paglen explains, then people will believe that thing must have intended to communicate those meanings (intention is often a matter of belief...). Belief in intelligence, belief in intention, belief in the agency of a program is what sticks.
The “founder” of “AI,” Joseph Weizenbaum, who wrote a program called ELIZA in the mid-1960s, thought he could dispel the belief in an intentionality behind his program. He showed people his computer program, and they talked to it for hours and hours, as if it were something intelligent and intentional. The time was up, though, he decided! He’d reveal what the program was: just a bundle of source code, with no intelligence of its own. Clearly his program was fake, he explained: it was a machine with no intentions; it had no agency; it was just an algorithm. And so he explained his fabrication to people. And they did not believe him. ELIZA was sentient, they maintained. Weizenbaum was wrong, they thought.
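A toy reduction of what “just a bundle of source code” means here: a handful of keyword rules and pronoun reflections. This sketch is illustrative only; it is not Weizenbaum’s actual DOCTOR script, and the rules below are invented:

```python
import re

# First-person words swapped for second-person, so the echo sounds
# addressed back to the speaker.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs; the last rule is a catch-all.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "Why are you {0}?"),
    (re.compile(r"(.*)"), "Please tell me more."),
]

def reflect(fragment):
    """Swap pronouns word by word ('my mother' -> 'your mother')."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    """Return the first matching rule's template, filled with the
    reflected capture. No model of meaning anywhere: just matching."""
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
```

Say “I need my mother” and it answers “Why do you need your mother?” — which is the entire mechanism people refused to stop reading intention into.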
(This whole situation reminds me of a doomsday group in Japan. The founder told his followers that he was clearly mistaken: he had gotten the apocalypse wrong and that the religion was bunk. His followers disagreed. The leader killed himself. The rest of the community carried on his movement.)
Even if Heeso Kwon’s work parodied “artificial intelligence,” a museum curator, in Kwon’s didactic, continued to describe AI as “acting” as a “collaborator,” with the capacity for imagination. Duchamp described art as religious, and here is the evidence: Kwon’s didactic believes in agency where there is none; and, according to Weizenbaum’s finding, that belief will be difficult to dispel, even against Kwon’s parody.