When Olivia Peace originally conceived of “Against Reality” as their senior thesis project for the interactive media and games division at USC, it was designed quite literally to impress.
“Basically the film is playing really, really big on this stretchy material that people can literally press their bodies into, so people would start the experience behind the film, almost literally backstage, and I would tell them, ‘Welcome to the dream space. What’s your intention here?’” recalls Peace, chuckling at the ambition. “And with their bodies, they write their intention as the film is playing, and then [they’d] go back around the other side and it’s actually a church [where] I had a pew and the organ music is blaring and little church pamphlets welcoming them to the ‘Against Reality’ church where they can then sit and watch the film.”
In fact, no matter what your beliefs are, Peace has objectively created a religious experience with the project, transcendent in its quality and now in its form as well: no longer confined to such an elaborate apparatus, it travels as a straightforward short film and will start its global journey at the Toronto Film Festival later this week. Still, even without its physically interactive component, “Against Reality” remains as immersive and all-consuming as it ever was, building off of Peace’s desire to chronicle lucid dreams and drawing on ideas involving the collective consciousness of the internet and artificial intelligence to imagine a natural flow of thoughts unmediated by conventional logic.
Peace offers some gentle guidance with their voice and touches on memories that came back to mind as they worked with what the algorithms were producing, but the transfixing result allows audiences to put themselves into it, as their own experience will no doubt make meaning of the abstractions they see. However, there is an emotional precision that the filmmaker is able to achieve, a skill carried over from their narrative feature “Tahara,” which is among the most promising debuts in recent years. On the eve of “Against Reality” premiering in Toronto, Peace gave an update on their sleep habits and talked about committing their dreams to the screen for all to see, as well as plans for a trilogy and finding the technology that could put specific images to their imagination.
How did this come about?
“Against Reality” is a short film, but also an interactive installation and the version that’s going to TIFF is this short film and it’s literally based on a true story of me teaching myself over the course of the pandemic how to lucid dream. After being one of those people where I was always like, “Oh I can never remember my dreams,” I got really into it.
Did you know the technology was available to convey this or was that something you had to find?
I definitely had to find that along the way. I started my career as a filmmaker as an animator, so I figured maybe I’d use some animation to illustrate it. There was a whole point where I [thought] maybe I’d use some 360 video and film something, but then I was introduced to GANs [Generative Adversarial Networks], specifically the VQ-GAN and CLIP, which are the generative AI that I used to create the piece. As I went, I’m like, “This is exactly like my dreams where it’s this ever-shifting landscape. Things look very realistic, but once you get up close, there are these little issues with the fidelity,” so I loved it. Then I went about teaching myself the GANs.
Is it true this process involves feeding images into a program and then it scopes out the internet to find images?
It’s this really interesting process that’s meant to mimic how we ourselves create new images. It’s trained using these really big data sets that are filled with images from everywhere – Wikipedia, Flickr, wherever you’d like to pull from – and then you type in plain-text words or whole sentences, if you want, about what it is you’re looking for. So you could say, “I want to make an olive tree with a space background,” give it the parameters, and in my case, I was even able to tell it if I would like to move closer to the tree or move farther away over time, and then it goes through all of its image sets, looks at them and tries to make its own version of what it is I just said. So it’s a collaboration between me, the GANs and all of the people that have uploaded things to the internet and labeled them.
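For readers curious what “giving it the parameters” looks like in practice, here is a minimal, hypothetical sketch of the kind of prompt-and-camera schedule a VQ-GAN+CLIP animation run is typically driven by. This is not Peace’s actual pipeline; all names here are illustrative assumptions, and the real generator call is replaced by a print of the parameters each frame would receive.

```python
# Toy sketch (not the artist's actual pipeline): a text prompt plus a
# time-varying "zoom" parameter, interpolated across animation frames.
from dataclasses import dataclass


@dataclass
class Keyframe:
    frame: int    # frame index in the animation
    zoom: float   # 1.0 = wide shot; higher values move "closer" to the subject


def zoom_at(frame, keyframes):
    """Linearly interpolate the zoom value between surrounding keyframes."""
    keyframes = sorted(keyframes, key=lambda k: k.frame)
    if frame <= keyframes[0].frame:
        return keyframes[0].zoom
    for a, b in zip(keyframes, keyframes[1:]):
        if a.frame <= frame <= b.frame:
            t = (frame - a.frame) / (b.frame - a.frame)
            return a.zoom + t * (b.zoom - a.zoom)
    return keyframes[-1].zoom


# "I want to make an olive tree with a space background,"
# then drift closer to the tree over 100 frames.
prompt = "an olive tree with a space background"
schedule = [Keyframe(frame=0, zoom=1.0), Keyframe(frame=100, zoom=2.0)]

for frame in (0, 50, 100):
    # In a real run, each frame's prompt and zoom would be handed to the
    # generator; here we only print the parameters it would receive.
    print(f"frame {frame}: prompt={prompt!r}, zoom={zoom_at(frame, schedule)}")
```

The point of the keyframe structure is that the text stays fixed while the camera parameter changes smoothly, which is what produces the ever-shifting, morphing look Peace describes.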
When you’re exploring the unconscious, how much control over this do you want when you’re typing in the parameters?
A lot of the images I came away with were really happy accidents where I would intend for one thing and it would interpret it as something else and I would just sculpt a little meaning out of that. There were so many times where that would happen, and the process is like lucid dreaming, which is why I chose it, because when I first started my dream practice, it was for me to recover and remember my dreams. I didn’t really expect to be able to get lucid in them. [laughs] That just slowly unfurled over time, and when it came to these images, I [thought] maybe I can make something that’s fun and colorful that could just go with my words. I didn’t expect it to really get under what it is that I’m saying, but that’s just a testament to how many images there are on the internet as well as the strength of the training sets it works off of. Over time, I was able to learn how to speak to these different algorithms better and better each day, so there are some images in there where I’m like, “Yeah, this is from my first try and some that are closer to my last try,” and there’s a huge difference in quality.
Did the flow of the images and the pacing come organically?
It took a while. For the thesis, you have a year-and-a-half to really conceptualize and execute it and it took me till the very last minute. The way that this is even able to play as a film is that I had the GAN iterate on itself, so all these images that you’re seeing I strung together to make this film, and there are thousands of tries to come up with one image. And then I was trying to get them to morph and transition between one another and that was a really cool moment for me.
Then maybe I was living the experience and writing it in my head the entire time, but one day, as it was starting to get down to the deadline – I had a midterm where I needed to turn something in – nothing was coming. I was having horrible writer’s block and then I was driving on the 10 out here in L.A. and suddenly the words were just coming to me, so I literally turned on the voice notes on my iPhone and recorded pretty much the whole audio that you hear. That’s what the narration became. I recorded it one more time, a bit quieter and not on the highway [laughs] – still on the iPhone – and that is the audio we used for the piece.
And you had to create the sound from scratch, right? The images weren’t carrying it.
Our sound designer Justin Enoch did an amazing job – I’m so excited to see audiences get to experience this in surround sound – and it was all created from Justin and I sending voice notes back and forth. I would imitate the sound that I wanted and they would just do it. I think truly [Justin’s] a genius. There are full soundscapes for every scene and the foundation of everything is that Hammond B5 organ, which is this specific type of organ that I would hear in my childhood a lot. My grandpa is from Oxford, North Carolina, and that’s a big Southern staple in Black churches, so I was really happy to get that sound as well.
There’s an anecdote in this about being seen by an eye in the sky pilfering a few grapes at a supermarket when you were young and I couldn’t help but wonder whether you’ve long had these intersecting thoughts about a collective consciousness and technology. Is that crazy to ask?
No, it’s totally not, because I think there’s so much of our humanity that’s embedded within these algorithms. A big issue that I ran into as I was trying to work with the GANs was bias. For instance, it was really hard for me to find images of women that I could use – just any type of women – because it turns out the algorithm is so bogged down by porn. That’s the majority of the labeled images of women that the data sets are pulling from, and I found a bunch of stuff like that, so it was actually really hard for me to render humans because of issues that humans have with other humans.
Yeah, it is. [laughs] And this stuff is moving so fast. I made this film between April and May of 2022 and already the images I’m able to make for parts two and three are on a whole different level. We’re able to get 4K, and now certain GANs are a lot better than others at being able to make people, but at the time, that was a huge issue.
Did you originally envision this as a series or did it feel like there was unfinished business?
I always said if this is “Alice in Wonderland,” this is the part where Alice falls down the hole. This is just the intro – why I decided to teach myself to lucid dream. Part two is going to cover some of my adventures in the dream space, [with] some of the things I was literally able to do and see and uncover while dreaming and then part three is going to cover pitfalls. [laughs]
What are your sleep patterns like these days?
It’s so interesting. I think as I was making this, I was getting really stressed out. I was finishing my graduate school program at USC and “Tahara” was coming out. There was all this change happening and I actually stopped being able to remember my dreams during that period of time because of the stress. I graduated, the film came out, and things are going well, so now I’m kind of easing back into my practice again of having a little sleep schedule and going to sleep at a reasonable time. So we’re getting it back. [laughs]