360° video with Picture-In-Picture videos
Student Project (1-2 participants)
An equirectangular 360° video is rendered into a field of view by projecting it onto a sphere, which yields a geometrically correct visible image. We now want to add picture-in-picture (PiP) videos to a 360° video. This can be done in three different ways:
1. Put the small PiP video on top of the rendered field of view
–> Issue: results in a HUD-like experience where the PiP is visible independently of the viewing angle
2. Put the PiP video on top of the 360° video before rendering
–> Results in distorted PiP views due to the sphere projection
–> Issue: the PiP stays at a fixed position (angle) in the 360° video
3. Place the PiP videos on a rectangle inside the sphere and then render
–> Eliminates the issues of options 1 and 2
Options 1 and 2 are already done.
- Implement the rendering for option 3 using our 360° video renderer
- OpenGL, C++, Node.js
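A minimal sketch of the math behind option 3, in plain Node.js rather than the OpenGL/C++ renderer itself: a desired placement (yaw/pitch of the PiP center plus an angular size) is converted into 3D corner vertices for a quad inside the unit video sphere. All function names and the chosen radius are illustrative, not part of the existing renderer.

```javascript
// Sketch of option 3: place a PiP quad inside the unit sphere at a given
// viewing direction, so it is rendered like any other scene geometry.

// Convert spherical coordinates (yaw, pitch in radians) to a 3D point on a
// sphere of radius r (y up, -z forward — a common OpenGL convention).
function sphericalToCartesian(yaw, pitch, r) {
  return {
    x: r * Math.cos(pitch) * Math.sin(yaw),
    y: r * Math.sin(pitch),
    z: -r * Math.cos(pitch) * Math.cos(yaw),
  };
}

// Corner vertices of a PiP quad centered at (yaw, pitch) with the given
// angular width/height, placed at radius r inside the video sphere of
// radius 1, so the quad always occludes the background video.
function pipQuadCorners(yaw, pitch, angWidth, angHeight, r = 0.5) {
  const dy = angWidth / 2, dp = angHeight / 2;
  return [
    sphericalToCartesian(yaw - dy, pitch + dp, r), // top-left
    sphericalToCartesian(yaw + dy, pitch + dp, r), // top-right
    sphericalToCartesian(yaw + dy, pitch - dp, r), // bottom-right
    sphericalToCartesian(yaw - dy, pitch - dp, r), // bottom-left
  ];
}
```

Because the quad is real geometry inside the sphere, it is subject to the same view projection as the video, so it neither sticks to the screen (issue 1) nor gets distorted by the equirectangular mapping (issue 2).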
WebVR
Student Project (1-2 participants)
WebVR is a W3C specification that enables developers to build virtual reality (VR) applications using pure web technologies. The API includes interfaces for accessing device sensors and controlling the head-mounted display. WebVR is already implemented in many browsers such as Chrome and Firefox. There are also existing WebVR frameworks, such as A-Frame from Mozilla, which make the development of VR applications easier.
- Evaluate the WebVR specification and related frameworks
- Develop a VR application using WebVR and one of the identified frameworks
- Evaluate your implementation on different browsers that support WebVR
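A tiny sketch of the WebVR 1.1 entry point such an evaluation would start from: `navigator.getVRDisplays()` returns a promise of the available `VRDisplay` objects. The helper below takes the navigator object as a parameter purely so it can be exercised outside a browser; the wrapper name and result shape are illustrative.

```javascript
// Minimal WebVR capability probe (WebVR 1.1 API). In a browser this would
// be called with the real `navigator`; taking it as a parameter makes the
// function testable in plain Node.js with a mock.
function probeWebVR(nav) {
  if (typeof nav.getVRDisplays !== 'function') {
    // Browser does not implement WebVR at all.
    return Promise.resolve({ supported: false, displays: [] });
  }
  return nav.getVRDisplays().then(displays => ({
    supported: true,
    displays: displays.map(d => d.displayName),
  }));
}
```

In a real application, a discovered display would then be entered via `VRDisplay.requestPresent(...)`; frameworks like A-Frame hide this plumbing behind declarative HTML.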
Interactive 360° Video and Storytelling
Student Project (2-3 participants)
360° videos require user interaction by default. For example, touch input can be used on mobile clients to navigate in the video. On head-mounted displays (HMDs), motion sensors that detect head movements are used for navigation, while on TVs the remote control (arrow keys) can be used instead. Beyond this default navigation, additional interactions are very helpful in 360° video to point the user to other views, or to allow content creators to define different stories and paths through the video.
- Develop a concept for Interactive 360° video and Storytelling for three device types: Mobile, TV and HMD
- Develop a proof-of-concept prototype for a specific video as a showcase
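One way to approach such a cross-device concept is a shared camera model that all three input types reduce to: touch drags and arrow keys produce yaw/pitch deltas, while the HMD sensor supplies an absolute pose. The sketch below is a hypothetical starting point; names, step sizes and sensitivity values are assumptions, not an existing API.

```javascript
// Device-independent navigation model: mobile touch, TV remote and HMD
// sensors all drive one shared camera state (yaw/pitch in radians).
const ARROW_STEP = 0.05; // radians per remote-control key press (assumed)

function createCamera() {
  return { yaw: 0, pitch: 0 };
}

function applyDelta(cam, dYaw, dPitch) {
  cam.yaw = (cam.yaw + dYaw) % (2 * Math.PI);
  // Clamp pitch so the viewer cannot flip over the poles.
  cam.pitch = Math.max(-Math.PI / 2, Math.min(Math.PI / 2, cam.pitch + dPitch));
  return cam;
}

// Mobile: touch drag in pixels, scaled by a sensitivity factor.
const onTouchDrag = (cam, dxPx, dyPx, sens = 0.005) =>
  applyDelta(cam, -dxPx * sens, dyPx * sens);

// TV: remote-control arrow keys map to fixed angular steps.
const onArrowKey = (cam, key) => {
  const map = { left: [-ARROW_STEP, 0], right: [ARROW_STEP, 0],
                up: [0, ARROW_STEP], down: [0, -ARROW_STEP] };
  return applyDelta(cam, ...(map[key] ?? [0, 0]));
};

// HMD: the absolute orientation from the motion sensor wins outright.
const onHeadPose = (cam, yaw, pitch) => Object.assign(cam, { yaw, pitch });
```

Storytelling hotspots ("move to this view") could then be expressed as target yaw/pitch values that the same camera state is animated towards, independently of the device.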
Extending a 360° Video Cloud Playout
Student Project (1-3 participants), Bachelor Thesis
The Fraunhofer FOKUS 360° Video Cloud Streaming system enables a high-quality 360° video experience on low-capability devices such as hybrid TVs (HbbTV), or under constrained network connectivity, e.g. on mobile devices. VR glasses are a hot topic right now, and we believe bringing this immersive experience to Smart TVs addresses a much wider audience, since TVs are far more widespread.
This solution allows content providers and broadcasters to deliver an innovative video experience on traditional TV screens. Viewers can experience video content with freely selectable views on their primary video viewing device – the large TV screen. Enriching this technology with a wide range of features for viewers as well as broadcasters will foster acceptance and adoption of this technology.
- Native 360° Video Player
Standard video player implementations use their own buffering strategies to ensure smooth playback. In our 360° video streaming case this behavior is unwanted, because it adds to the delay between user input and the reaction in the video. Hence a player with full buffer control, and therefore low delay, is necessary. This is possible either natively on a Smart TV or web-based with MSE support (using an open-source video player).
- Record user path
Record and play back the viewing path a user has chosen during a session.
- Pausing a 360° Video
Currently it is not possible to pause a 360° video, due to limitations of the buffering strategies and general streaming challenges. Your task is to allow the user to pause the video and resume playback without major latency. As an extension, the user could even change the viewing perspective while the video is paused.
- Cube maps instead of equirectangular video
As source material we use equirectangular videos. These are mapped onto a sphere and rendered in OpenGL. The calculations for a sphere are much more complex than for a cube, so we want to experiment with cube maps to make the rendering process more efficient.
- Programming Languages: C/C++
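The cube-map idea above can be illustrated with the two per-texel lookups side by side. The sketch is in plain JavaScript for a runnable illustration (the actual player targets C/C++ as stated above); the face selection follows the standard OpenGL cube-map convention, while the equirectangular path needs trigonometry for every sample.

```javascript
// Map a view direction to a cube-map face and (u, v) texel coordinates.
// Only comparisons and one divide per texel — no trig needed.
function dirToCubeFace(x, y, z) {
  const ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
  let face, u, v, ma;
  if (ax >= ay && ax >= az) { face = x > 0 ? '+x' : '-x'; ma = ax; u = x > 0 ? -z : z; v = -y; }
  else if (ay >= az)        { face = y > 0 ? '+y' : '-y'; ma = ay; u = x; v = y > 0 ? z : -z; }
  else                      { face = z > 0 ? '+z' : '-z'; ma = az; u = z > 0 ? x : -x; v = -y; }
  // Map from [-1, 1] to [0, 1] texture coordinates.
  return { face, u: (u / ma + 1) / 2, v: (v / ma + 1) / 2 };
}

// For comparison: the equirectangular lookup needs atan2/asin per texel.
function dirToEquirect(x, y, z) {
  const yaw = Math.atan2(x, -z);                     // longitude
  const pitch = Math.asin(y / Math.hypot(x, y, z));  // latitude
  return { u: yaw / (2 * Math.PI) + 0.5, v: pitch / Math.PI + 0.5 };
}
```

In the real renderer this trade-off happens inside the fragment shader (or is avoided entirely by rendering the cube as geometry), but the arithmetic difference is the same.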
Virtual Reality and 360° Video Analytics: User Tracking and View-Field Prediction for Future Video Presentations
Student Project (1-3 participants), Master Thesis
The Fraunhofer FOKUS 360° Video Cloud Streaming solution enables a high-quality 360° video experience on low-capability devices such as hybrid TVs (HbbTV), or under constrained network connectivity, e.g. on mobile devices. In 360° video, the full spherical image is available at every moment for any direction of view, while the spectator can freely change her individual perspective. Thus, for a high-quality partial view of the scene, the necessary source video material becomes quite large.
The capability of view analytics allows content producers and advertisers to get specific information about what viewers are interested in and which part of the scene they are watching, providing detailed feedback about home-user interests that was not previously available outside of limited lab trials.
You shall realize the following tasks:
- View behavior analysis per user (persist the viewed camera angle in a 360° video etc.)
- Reporting of clustered viewing behaviors on a video player with the help of heat map overlays
- Integration into the existing FAME infrastructure
- Live and on-demand reporting for 360° live streams
- Creation of cam tracks, e.g. average viewing hot spots or tracking shots
- Filtering User-Groups: e.g. by IP (region), User-Agent (device) etc.
- Prediction and recommendation of current and future camera tracks in real time
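A first building block for the analysis and heat-map tasks above could look as follows: persisted (yaw, pitch) samples from many sessions are binned on the equirectangular grid, ready to be drawn as a heat-map overlay on the player. This is a hypothetical sketch; function names, the sample format and the bin counts are assumptions.

```javascript
// Aggregate persisted view-angle samples into an equirectangular heat map.
// samples: array of { yaw, pitch } with yaw in [-PI, PI), pitch in [-PI/2, PI/2].
function viewHeatmap(samples, binsX = 36, binsY = 18) {
  const grid = Array.from({ length: binsY }, () => new Array(binsX).fill(0));
  for (const { yaw, pitch } of samples) {
    const bx = Math.min(binsX - 1, Math.floor(((yaw + Math.PI) / (2 * Math.PI)) * binsX));
    const by = Math.min(binsY - 1, Math.floor(((pitch + Math.PI / 2) / Math.PI) * binsY));
    grid[by][bx] += 1;
  }
  return grid;
}

// The hottest bin is a first approximation of an "average viewing hot spot".
function hottestBin(grid) {
  let best = { x: 0, y: 0, count: -1 };
  grid.forEach((row, y) => row.forEach((count, x) => {
    if (count > best.count) best = { x, y, count };
  }));
  return best;
}
```

The same grid, computed per time window, is a natural input for cam-track extraction and for the prediction tasks (e.g. recommending the hottest bin of the next window).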
Requirements:
- Good programming/prototyping skills in HTML/JS as well as in server technologies of your choice
- Optional: high-level understanding of data mining/user profiling
- Creative ideas, analytical skills and the ability to work autonomously
- Video Delivery Technologies
- Predictive Data Mining