Workshop at TVX 2018
360° Video Streaming & Storytelling
From June 26-28, 2018, Fraunhofer FOKUS is organizing a workshop at TVX in Seoul, South Korea. The workshop is planned as a full day, starting with a short tutorial (30 minutes) by the organizers. There will be two main sessions: 360° video streaming and 360° storytelling. After each session, additional time is reserved for discussion.
We encourage short papers (maximum 4 pages) and recommend accompanying the presentation with demos or videos. If you want to contribute to the 360° Video Streaming & Storytelling Workshop at TVX 2018, please submit your short paper via EasyChair, using the SIGCHI template.
The aim of the workshop is to promote the exchange of the latest advances in 360° video streaming & storytelling from both research and industry perspectives.
What we provide:
A 16K equirectangular 360° video is provided here; contributors to the workshop are encouraged to use it so that results are comparable. High-resolution 360° videos are rare; we therefore expect many contributions to take on the challenge of producing a high-resolution field of view.
The video frames are available here for download. Please refer to the README.txt in the download folder for more details.
What is the expected scope:
360° streaming covers the content preparation (e.g. pre-rendering, tiling), delivery, and consumption of 360° video material. For streaming itself, standards such as MPEG-DASH and W3C WebVR play an important role. 360° video can be consumed on any device, whether a head-mounted display (HMD), a TV set, or a second-screen device. We encourage contributions that consider different device types and the challenges that come with them; for example, streaming latency and user input differ between an HMD and a TV set. Streaming contributions should also consider quality metrics that allow comparison between different solutions.

360° storytelling starts with the recording of the video using an array of cameras and useful viewing directions, and ends with the way the 360° video is presented to the user. It can, for example, be enhanced with interactive overlays or voiceovers. The workshop highly appreciates submissions that address this challenge and propose innovative storytelling concepts, tools, and players to guide and direct the viewer in a 360° video on different kinds of playback devices, considering the different input capabilities of these devices (motion, remote control, keyboard, mouse, device orientation, etc.). Innovative application scenarios are also welcome.
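As an illustration of the content-preparation step mentioned above, the following sketch shows how tiled streaming approaches split an equirectangular frame into a uniform grid so that a client can fetch only the tiles covering the current viewport. The grid size (8×4) and frame resolution are illustrative assumptions, not values prescribed by any standard.

```python
# Illustrative sketch of uniform tiling for tiled 360° streaming.
# Assumed, not normative: a 16K equirectangular frame (16384 x 8192)
# split into an 8 x 4 grid of 2048 x 2048 tiles.

def tile_grid(frame_w, frame_h, cols, rows):
    """Return (x, y, w, h) rectangles covering the frame in a cols x rows grid."""
    tile_w, tile_h = frame_w // cols, frame_h // rows
    return [(c * tile_w, r * tile_h, tile_w, tile_h)
            for r in range(rows) for c in range(cols)]

tiles = tile_grid(16384, 8192, 8, 4)
print(len(tiles), tiles[0])  # 32 tiles; first tile at the top-left corner
```

In a DASH-based deployment, each such tile would typically be encoded and offered as an independently decodable stream, with the client requesting only the tiles intersecting the user's field of view.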
Topics of interest:
- 360° streaming technologies, architecture and standards
- 360° playback (TV, HMD, mobile, desktop)
- 360° camera recording
- 360° quality metrics (bandwidth, storage, processing)
- 360° interactive storytelling
Major platforms such as YouTube and Facebook have introduced 360° video streaming. Currently, most 360° videos offer 4K resolution for the equirectangular source content, which results in an SD (standard definition) field of view (FoV) and limits the immersive experience for the user. Bandwidth limitations, end-device constraints, and the lack of higher-resolution 360° cameras prevent higher-quality FoVs from being delivered. At least 16K 360° equirectangular video is required to enable a 4K FoV. Even if 16K 360° content is available, the challenge lies in the efficient delivery and the smooth playback and rendering of the FoV on the various end devices already available today. These include, besides head-mounted displays (HMDs), existing end devices such as TVs, smartphones, and tablets.
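The 16K requirement follows from simple pixel-density arithmetic, sketched below. The 90° horizontal FoV is an assumed, typical HMD value, and 3840/15360 are common 4K/16K equirectangular widths.

```python
# Back-of-the-envelope check: an equirectangular frame spans 360°
# horizontally, so a viewport with a given horizontal FoV receives
# width * (fov / 360) source pixels. (90° FoV is an assumption.)

def fov_width(equirect_width, fov_deg=90):
    return equirect_width * fov_deg // 360

print(fov_width(3840))   # 4K source  -> 960 px, roughly SD
print(fov_width(15360))  # 16K source -> 3840 px, a 4K viewport
```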
Currently deployed 360° video solutions stream the full video to the client. Since the user is only looking at a small part of the 360° video at any point in time, a lot of bandwidth is wasted. Furthermore, these solutions are currently capped at 4K for the original 360° video; as a result, the resulting field of view (FoV) limits the immersive experience for the user. An important part of 360° solutions is the transformation of a projected 360° video to an FoV, which is essentially the 2D viewport of the user. In order to guarantee smooth transitions between FoVs, e.g. when the user wears a head-mounted display (HMD) and turns their head, most solutions rely on client-side transformation (CST). Technologies for CST depend on the targeted devices and platforms. Measurements have shown that this works well for projected 360° videos at 4K resolution; for higher-resolution 360° videos (e.g. 16K), however, CST takes too much time and prevents smooth FoV changes. Another approach is server-side transformation (SST), which performs the transformation on the server and streams only the requested FoV to the client. This enables 360° streaming of 4K FoVs even on devices with limited capabilities, but on the other hand it introduces new latency and scalability challenges.
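The bandwidth waste mentioned above can be roughly quantified by counting the share of equirectangular pixels a single viewport covers. The 90°×90° viewport is an assumed example, and the estimate ignores the oversampling of equirectangular projection toward the poles.

```python
# Rough estimate of wasted bandwidth when the full equirectangular video
# is streamed but only one viewport is watched: a fov_h x fov_v viewport
# covers about (fov_h / 360) * (fov_v / 180) of the frame's pixels.
# (Assumed example values; pole distortion is ignored.)

def visible_fraction(fov_h_deg=90, fov_v_deg=90):
    return (fov_h_deg / 360) * (fov_v_deg / 180)

frac = visible_fraction()
print(f"visible: {frac:.1%}, wasted: {1 - frac:.1%}")
```

Under these assumptions only about one eighth of the transmitted pixels are ever displayed, which is what motivates tiled and server-side (FoV-only) delivery approaches.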
Besides efficient delivery and smooth playback of high-resolution 360° videos, there is currently insufficient support for directing viewer attention, which is required in order to tell a story (direction). Otherwise, viewers get lost and spend only limited time actually watching the content. In addition, this lack of control over the viewer's gaze and focus of attention frustrates content creators, leading them to dismiss the medium as a serious storytelling and communication platform.