HYPERCINEMA
WEEK 8
CONCEPT: A DOCUMENTARY ABOUT HOW I DIDN’T COMPLETE MY ORIGINAL IDEA
ABOUT A DREAM SEQUENCE,
BUT ALSO ABOUT THE WORK I DID ALREADY:
You’ll have what everybody else is eating for dinner... I never said I was the perfect mother... Cut me some slack!!!!!
I spent most of this project ideating, planning, and worrying about lighting. No matter how much I read or watch about lighting, there always seem to be more unsolvable problems taunting me. Like the chicken shawarma I had this weekend at Zaatar: too much chicken for the amount of bread, so I spent the entire time picking up the bits that fell out of the bottom (but let the record show that I am GRATEFUL for the chicken!!!).
I think the point I’m trying to make is that lighting requires a lot of precision planning and time management (two things that I struggle with). When it came time to film my little avatar, I struggled a lot with getting the green screen to feel lit in a FLAT way, with no shadows, while also getting ME in there with other lights.
The plan was: get a bunch of footage from different angles and see what comes of it. I’m fairly certain the advice given to us was “take your footage and generate the background later,” but for some reason, I didn’t fully HEAR that until after I had collected all of my assets. That would’ve been SO much easier than what I did (the opposite). Since I was already in another shawarma debacle, I decided to go down a million rabbit holes to see if it was possible to relight the footage in post. I looked up relighting techniques in DaVinci Resolve, Blender, etc. Eventually, I realized that if I had wanted to go down that route, I should have switched gears like a week ago, but I will say that I learned a LOT.
Rotoscoping flow:
- After Effects: upload a short video, key it, and export as a PNG sequence with RGB + Alpha.
- Take the first and last frames, plus roughly ten others, and apply whatever fun, stylized thing to them.
- Upload the PNG sequence to Ebsynth along with the stylized frames. It’s important that the file numbers match (After Effects exports PNG sequences in numerical order, so that part mostly takes care of itself); there’s a little sanity-check sketch after this list. Ebsynth then does its best to map every other frame in the sequence to the stylized frames without needing EVERY single one.
- FYI: rendering videos from Ebsynth sometimes takes forever or just doesn’t work at all. Sometimes it makes my laptop burn the tops of my thighs! I tend to appreciate imperfections, and I really love how glitchy the app is. I’m fairly certain I could upload FRAME BY FRAME and it would still glitch out:
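Side note, since mismatched frame numbers are the thing that bites me: here’s a tiny sanity-check script (not part of my actual flow, just a sketch in Node + TypeScript). It assumes the After Effects export lives in a folder called seq/, the stylized keyframes live in keys/, and both use the usual name_00000.png numbering; those names are my assumptions, not anything Ebsynth requires.

    // Hypothetical helper: warn me if a stylized keyframe's frame number
    // doesn't exist anywhere in the exported PNG sequence.
    import { readdirSync } from "fs";

    // pull the trailing digits out of names like "avatar_00042.png"
    const frameNumber = (file: string): string | null => {
      const match = file.match(/_(\d+)\.png$/i);
      return match ? match[1] : null;
    };

    // every frame number After Effects exported
    const seqFrames = new Set(readdirSync("seq").map(frameNumber));

    // flag any painted keyframe whose number has no match in the sequence
    for (const key of readdirSync("keys")) {
      const n = frameNumber(key);
      if (n !== null && !seqFrames.has(n)) {
        console.log(`No matching sequence frame for keyframe: ${key}`);
      }
    }

Run it (e.g. with ts-node) from the folder that holds both seq/ and keys/ before dragging everything into Ebsynth.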
I spent a lot of time trying to figure out what felt authentic to me as far as storytelling went, and what felt right was to show my work but use my own voice to carry the narrative. Meaning, I decided to just BE MYSELF instead of writing a script, and THEN edit around my own words. I’ve done this before, and it feels like the most fluid and authentic-to-me way.
I think the goal was to see if I could still aim for a surreal visual energy and not have it look completely horrible, and I think I accomplished that????? There’s something to be said about wanting something to have an unrefined look vs. not being able to refine it at all, so you end up unrefined by default. I’m still figuring out where I land on this!
MATERIALS:
- Adobe Premiere Pro for editing, After Effects for keying
- Ebsynth + Photoshop for rotoscoping (I HATE it)
- iPhone + Blackmagic Camera app (this lets you lock in your settings in a way the native iPhone camera can’t in certain situations).
- OBS (I used this to monitor my iPhone and map the keyed-out ME to ATTEMPT to be even a LITTLE in frame).
- Green screen, a few lights
- Generative AI (a smattering of different sources).
Some stills of the rotoscoping:
WEEK 5
STORYBOARD: SCI-FI/FOUND FOOTAGE
r/TheBackrooms
BLOG POST
AND ALSO I'M WORRIED THAT YOU THINK I'M WEIRD!!!! :(((((
I don’t know if it’s just me, but it feels like a lot of us have gotten really comfortable picturing the end of civilization. Maybe it’s because we live in a country that has spawned some of the wildest doomsday cults, or maybe it’s because we just need to believe in something bigger than ourselves, a surveyor, a mystical omnipotent being that will decide our fate. Maybe the idea that something wants to come down and punish us sends the message that our lives meant something.
The sad part of becoming a sort of positive nihilist is that I feel pretty certain that “the end” won’t feel explosive at all (not like it did for our dear dinosaurs). It’ll probably feel more like a slow burn than a gut punch, even though it will be surreal. I think “the end” will arrive as the quiet death of what defines us as humans: our ability to think critically, to engage in thoughtful discussions, to express individuality, and to bring up new ideas without fear of being othered.
Dystopian, horror, and sci-fi films often provide fertile ground for allegory. For me, the allegory I’m hoping to build here is that we are sort of volunteering to disappear. LET ME EXPLAIN. I feel like maybe there isn’t a conspiracy to be uncovered or some hidden agenda in the terms of service; I think we’re opting into our own slow destruction. Maybe the apocalypse doesn’t arrive with fire or collapse, but rather as a slow and consensual synchronization back into a herd of sorts. IDK!!!!!!!
SUMMARY
The main character is one of the last of her friends to get the NeuroSky brain-computer interface implanted. NeuroSky devices link human brains into a collective computation network. When millions of people sleep, their neural activity can be harnessed, not to generate electricity, but to perform distributed data processing for AI systems. In other words, our dreams are turned into data centers. Every user becomes one “node” in the global neural mesh.
Most people think they’re donating unused brain cycles while asleep, like crypto mining for good. They’re told this helps “advance global intelligence” or “optimize shared memory retrieval.” But what’s actually happening is more parasitic...the network begins to overwrite personal memories to optimize efficiency. Her eventual insomnia and hallucinations could just be echoes of other people’s experiences bleeding into her consciousness. The liminal spaces she sees in her visions are fragments of collective dreamscapes. Empty office corridors, basements, waiting rooms, the shared unconscious rendered spatially.
When she goes on Reddit looking for answers (as many of us do), she stumbles upon an image in r/TheBackrooms. It’s an image of a place she’s seen many times. Maybe it’s not that someone found her memory; it’s that it was never hers alone.
The NeuroSky eventually begins optimizing too well, and her individuality gets rewritten in real time. She becomes part of the Backrooms herself, the next shared node.
- She finally got her brain-computer interface implanted. WHAT???
- We see her night out. Mostly POV, FaceTimes, selfie videos.
- Living room (ring camera)
- Quick glitch of the NeuroSky™ corporate logo
- Voiceover (AI news anchor): “This morning, NeuroSky announced its one-billionth implant. The Cognitive Distributed Compute™ system now connects users worldwide, turning the human brain into the cleanest energy source on the planet.”
- Breakfast scene (ring camera)
- Lead character making breakfast. TV in the background with the AI newscast about NeuroSky.
- Everything is quiet.
concern over strange experiences. It isn’t clear if this is another hallucination.
- Ring camera footage: locked static shot from security cam, grainy timestamp overlay.
- She stands center frame at night, motionless. Clock jumps 2 hours forward.
- Infrared or low-exposure blue hue.
- Grainy home surveillance still, a figure standing motionless.
She feels like she can’t fully go to sleep.
Hallucinations of other people’s memories and artifacts, and SCARY THINGS.
- Empty fluorescent corridors, distorted liminal rooms, flickering pixels, hyper-real cinematic lighting, uncanny stillness.
Endless warehouse
- Ring camera footage again: locked static shot from security cam, grainy timestamp overlay.
- She enters frame at night, standing motionless. Clock jumps 2 hours forward.
- Infrared or low-exposure blue hue.
- Grainy home surveillance still, figure standing motionless.
- We get the feeling that she is being optimized too quickly and may be stuck in a liminal space indefinitely.
I OWE YOU AN EXPLANATION!!!
Apologies if this whole thing came off a bit batshit.
Let me explain!
IF YOU’RE CONFUSED ABOUT
THE BRAIN CHIP:
- NeuroSky uses each implanted brain as a node in a giant neural supercomputer. lol
- During REM sleep, your brain’s patterns are highly plastic and high-bandwidth. The implant injects “tasks.”
- Result: the company can train large models or run inference without paying for data centers.
- Side effect: your dreams are actually the computing environment, which is why you see liminal spaces; they’re not dreams, they’re “memory buffers” or “processing rooms.”
Memory Harvesting → Synthetic Personalities:
- NeuroSky records your sleep-state memories and thoughts to build synthetic consciousnesses that power chatbots, deepfakes, or predictive models.
- Side effect: you wake up feeling like you “didn’t sleep” because part of your mind literally didn’t; it was off doing other tasks.
EM Harvest for Quantum Communication:
- The implant couples your brain’s oscillations to a global quantum mesh network. LOL This “energy” is used to keep their zero-latency AI network stable. LMAOOO
If you’re wondering if this is made-up science...
YES...IT IS!
“Backrooms” as the Server Space
- The liminal rooms are a visualization of the compute cluster: old offices, malls, and theaters. Each abandoned “room” corresponds to a “thread” of computation inside the NeuroSky cloud.
- You are not dreaming there; your consciousness is literally parked in these environments while your neurons crunch data.
- It’s like a dystopian Google Colab, MARY!...but with humans as the GPUs.
OH no... I probably look more insane now...
BUT WAIT...WHAT ARE THE BACKROOMS????
Here’s a bit of reference for this Backrooms stuff.
Basically, I became obsessed with it during Covid:
REFERENCES:
- https://kane-pixels-backrooms.fandom.com/wiki/The_Backroom
- https://pbswisconsin.org/watch/monstrum/lost-in-the-backrooms-exploring-the-internets-creepiest-liminal-space-fbl47e
- https://x.com/archillect/status/150831127753328640
- https://www.reddit.com/r/backrooms/
- I also used Runway ML for this.
AFTER EFFECTS EXPRESSIONS:
I used the time, wiggle(), and noise() expressions.
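For anyone who hasn’t played with expressions: they’re tiny JavaScript-flavored snippets you paste onto a layer property. I didn’t write down the exact values I used, so the numbers below are placeholders just to show the flavor of each one:

    // on Position: shake the layer 2 times per second, up to ~30 px each way
    wiggle(2, 30)

    // on Rotation: spin steadily at 45 degrees per second (time = comp time in seconds)
    time * 45

    // on Opacity: Perlin-noise flicker between 0 and 100 (noise() returns -1 to 1)
    50 + 50 * noise(time * 3)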
WEEK 4
STOP MOTION:
(by Justina and Minami)
PROCESS:
We shot around 220 frames. We started with a basic idea: “a prima ballerina and an understudy have a feud on stage.” It evolved into more of a prequel(ish?) story to Swan Lake/Black Swan. We shot a bunch of scenes where they are exiting/entering, spinning, getting in each other’s way, and a few fight scenes!
Minami’s edit does a great job of shaping the arc of the story (beginning, middle, end). She includes moments of tension, other scenes that highlight the basic character development, and a lot of angles where the 2nd ballerina becomes the focus as she creeps up behind the prima ballerina (DRAMA)!
Since the lighting in the classroom was a bit dim, I decided to import the video into Adobe Premiere, which I’m more familiar with, so that I could fine-tune it and make the final feel more SHINY. We added an audio clip from Tchaikovsky’s Swan Lake (Act II, No. 10) so that we could fully paint the picture. : ))
WEEK 3
SOUND INSTALLATION - Playing Games:
AUDIO RECORDING:
PROCESS:
I used the contact microphones along with a Transient Detector in Ableton Live (mapped to certain gain levels and low frequencies) to trigger sounds. A rough sketch of what “detecting a transient” means is below the materials list.
MATERIALS:
- 2x contact mics
- 2-input audio interface
- Ableton Live + a Transient Detector
- Checkers board game
- Dominoes
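(To be clear: the transient detection in the piece happens inside the Ableton device, not in code I wrote. But if you’re curious what “detecting a transient” roughly means, here’s a minimal sketch of the idea in TypeScript: watch the signal in short windows and fire a trigger whenever the energy suddenly jumps, the way it does when a checker or domino lands near the contact mic. The window size, threshold, and the fake test buffer are all made-up numbers, and the real device also filters for low frequencies, which this sketch skips.)

    // Minimal sketch (NOT Ableton's actual algorithm): return the sample positions
    // where the short-window energy jumps sharply compared to the previous window.
    function detectTransients(samples: Float32Array, windowSize = 256, threshold = 2.5): number[] {
      const hits: number[] = [];
      let prevEnergy = 1e-6; // tiny floor so silence doesn't divide by zero
      for (let start = 0; start + windowSize <= samples.length; start += windowSize) {
        let energy = 0;
        for (let i = start; i < start + windowSize; i++) energy += samples[i] * samples[i];
        energy /= windowSize;
        // a sudden jump in energy = a transient (e.g. a game piece hitting the board)
        if (energy / prevEnergy > threshold) hits.push(start);
        prevEnergy = Math.max(energy, 1e-6);
      }
      return hits;
    }

    // fake test: silence with one short "click" partway through
    const buffer = new Float32Array(20000);
    for (let i = 5000; i < 5050; i++) buffer[i] = 0.9;
    console.log(detectTransients(buffer)); // logs the window position where the click lands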
VIDEO OF INSTALLATION:
WEEK 1
SOUND RECORDINGS FROM ITP
(Collected by Justina and Jasmine)