I'm wondering if more people here are involved in editing stereoscopic 360 video for VR headsets. It would be nice to exchange tips on workflow, but also on do's and don'ts in terms of storytelling. For example, I've noticed that many VR films transition from one shot to the next via a fade to black, but my impression is that a traditional cut is not as disruptive as you might think and can be perfectly acceptable.
And is anyone working with ambisonic audio? For that to work, I assume the video would need to be projected onto a skybox in Unity or Unreal. That's a route I'm planning to take anyway, as it opens up the possibility of parenting graphics and subtitles to the camera, which isn't possible when you only play an exported video in a headset.
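To make that last point concrete, here's a rough sketch of what I mean by parenting subtitles to the camera in Unity. The class, field names and offset are just my own placeholders for illustration:

```csharp
using UnityEngine;

// Sketch: attach a subtitle object (e.g. a TextMesh or world-space Canvas)
// to the main camera so it stays in view wherever the viewer looks.
public class SubtitleAttacher : MonoBehaviour
{
    [SerializeField] private Transform subtitle;
    [SerializeField] private Vector3 offset = new Vector3(0f, -0.3f, 1.5f); // slightly below eye level

    void Start()
    {
        // Parent the subtitle to the camera; it then follows head rotation automatically.
        Transform cam = Camera.main.transform;
        subtitle.SetParent(cam, worldPositionStays: false);
        subtitle.localPosition = offset;
        subtitle.localRotation = Quaternion.identity;
    }
}
```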
We run quite a few 360 video studies combined with EEG/neuro data. I wrote a small tool to trigger videos over a TCP connection, and earlier this year we added support for 360 video with ambisonics.
It was developed as a simple, quick tool for our research department and needs some love (and extra visual design), but it does what it needs to do.
It was originally developed to communicate with E-Prime (3.x), but the application can be controlled by any software that can open a TCP connection and send commands over it.
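As a rough illustration of what such an external controller looks like (the host, port and command string below are placeholders; the actual protocol is documented in the repo), any TCP-capable client can trigger playback along these lines:

```csharp
using System.Net.Sockets;
using System.Text;

// Minimal sketch of an external trigger client for the playback tool.
// Address, port and the "PLAY" command are assumptions for illustration only.
class TriggerClient
{
    static void Main()
    {
        using (var client = new TcpClient("127.0.0.1", 5005))      // assumed address/port
        using (NetworkStream stream = client.GetStream())
        {
            byte[] message = Encoding.ASCII.GetBytes("PLAY video01\n");  // assumed command format
            stream.Write(message, 0, message.Length);
        }
    }
}
```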
We play the files on an inverted unlit sphere in Unity, as it's more flexible than a skybox. No subtitle support yet, but that would be an interesting addition to the existing application.
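If you want to try the sphere approach, a minimal sketch of inverting a standard sphere so the video is visible from the inside could look like this (assuming the sphere already has an unlit material driven by a VideoPlayer):

```csharp
using UnityEngine;

// Sketch: flip a sphere's normals and triangle winding so the video material
// on it is rendered on the inside, with the camera placed at its center.
[RequireComponent(typeof(MeshFilter))]
public class InvertSphere : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // Point all normals inward.
        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        mesh.normals = normals;

        // Reverse triangle winding so the inner faces are not back-face culled.
        for (int sub = 0; sub < mesh.subMeshCount; sub++)
        {
            int[] tris = mesh.GetTriangles(sub);
            for (int i = 0; i < tris.Length; i += 3)
            {
                int tmp = tris[i];
                tris[i] = tris[i + 2];
                tris[i + 2] = tmp;
            }
            mesh.SetTriangles(tris, sub);
        }
    }
}
```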
We also set up an export preset in Premiere to output 360 video files with ambisonic audio, as the default Premiere / Media Encoder settings were incompatible with Unity's playback. I can send you the details if that helps.
Feel free to reply here or drop me a DM if you want to chat about this. The link to the Git repo is:
Hi Wilco,
Thank you for your detailed reply (I didn't get a notification, so my response is unfortunately a bit late). Interesting to read! So you do the positioning of the audio in Adobe Premiere? I will look into that.
Right now I have a stereoscopic video projected onto a skybox in Unity and that works pretty well (next on the list are play and pause buttons). I will check out the sphere method. What is the reason you find it to be more flexible? Thanks for the GitHub link as well; I might have a look to see how you've set it up.
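For the play and pause buttons I'm thinking of something simple along these lines, assuming the video runs through Unity's VideoPlayer component (the class and method names are just my own placeholders):

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch: toggle playback of the 360 video from a UI Button or controller input.
public class PlaybackToggle : MonoBehaviour
{
    [SerializeField] private VideoPlayer videoPlayer;

    // Wire this up to a Button's OnClick or an input event.
    public void TogglePlayback()
    {
        if (videoPlayer.isPlaying)
            videoPlayer.Pause();
        else
            videoPlayer.Play();
    }
}
```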
For our current project we only used ambisonic files from the Rode sound library (library.soundfield.com).
But you could also edit the audio using specialist plugins and software, then add it to the video. I know there's a plugin by Rode, there used to be one by Facebook/Oculus Research, and the most popular option I found was Reaper 360.
Unfortunately I haven't found the time to look into this workflow yet, so for now I've only added audio from the Rode library to existing 360 videos.
We like using the sphere as it gives a bit more freedom when swapping between videos, photos, and so on. It also gives us more freedom with rotation and user interactions (something we don't have at the moment, but would like to add in the future).
That said, I don't think one is per se better than the other.
We also don't give any controls to the user; it's all TCP-based, as the user is a silent observer while the researcher controls the playback.
Regarding this specific point, please check the settings under https://xrif.surf.nl/my/preferences/emails, specifically the setting “Email me when I am quoted, replied to, my @username is mentioned, or when there is new activity in my watched categories, tags or topics”
Thanks for the tips! I’ll definitely dive into this after the holidays. I’m familiar with Reaper so that sounds like a good starting point for me, and I will also check the Rode plugin.