Web App with advanced statistics and predictions for sports fantasy
So-called fantasy games are increasingly popular with sports fans. They usually come with advanced statistics to help players make decisions, e.g. whom to line up, buy, or sell. Sometimes these statistics are limited or lack certain features (e.g. predictions). As part of this project, you will develop additional helper tools for a fantasy game.
- Set up a dataset with fantasy sports data
- Aggregate/transform the data
- Build a Web App to visualize the data
- Predict data based on historical data and third-party sources
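The aggregation step could look like the following minimal sketch, using made-up per-matchday scores (all player names and numbers are illustrative, not from a real fantasy dataset):

```python
from collections import defaultdict

# Hypothetical per-matchday fantasy scores: (player, matchday, points).
rows = [("A", 1, 8), ("A", 2, 12), ("B", 1, 5), ("B", 2, 9), ("B", 3, 7)]

# Aggregate to per-player totals and averages as simple decision helpers.
totals, counts = defaultdict(int), defaultdict(int)
for player, _, points in rows:
    totals[player] += points
    counts[player] += 1

averages = {p: totals[p] / counts[p] for p in totals}
print(totals)    # season totals per player
print(averages)  # points per matchday, a crude "form" indicator
```

In practice the same grouping would run over a real dataset (CSV export, API scrape) and feed the visualizations and prediction models described above.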
- Knowledge in web development
- Enthusiastic sports fan
Supervisor: Stefan Pham
Real-Time Streaming Analytics UI for CMCD and SAND
Real-time streaming analytics enables content providers to identify problems on certain platforms or devices. Specifications like Server and Network Assisted DASH (SAND) and Common Media Client Data (CMCD) define the format and the types of metrics sent from a client to a metrics server. A user interface built on top of the collected data helps evaluate and structure the massive amount of information.
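To give a feel for the data format, CMCD can transmit metrics as comma-separated key=value pairs in a `CMCD` query argument. The sketch below parses such a request URL; the URL and values are invented, but the keys (`br` = encoded bitrate in kbps, `bl` = buffer length in ms, `sid` = session id) come from the CMCD vocabulary:

```python
from urllib.parse import urlparse, parse_qs

# Made-up segment request carrying CMCD metrics in the query string.
url = "https://cdn.example.com/seg1.m4s?CMCD=br%3D3200%2Cbl%3D21300%2Csid%3D%226e2fb550%22"

def parse_cmcd(url: str) -> dict:
    """Extract the CMCD key=value pairs from a request URL."""
    raw = parse_qs(urlparse(url).query)["CMCD"][0]
    out = {}
    for pair in raw.split(","):
        if "=" in pair:
            key, value = pair.split("=", 1)
            out[key] = value.strip('"')  # string values are quoted in CMCD
        else:
            out[pair] = True  # boolean keys may appear without a value
    return out

print(parse_cmcd(url))
```

A metrics server would run this kind of extraction per request and forward the results to a time-series store that Grafana or Kibana can query.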
- Understand the specifications: DASH, SAND and CMCD
- Summarize metrics
- Build a real-time analytics UI, e.g. using Grafana/Kibana
- Basic understanding and interest in media streaming
Supervisor: Stefan Pham
New Reference UI for dash.js
The goal of this project is to add a new reference UI that allows the configuration of the various dash.js settings. The existing reference UI can be found here: http://reference.dashif.org/dash.js/nightly/samples/dash-if-reference-player/index.html.
- Understand the principles behind adaptive streaming
- Get familiar with the dash.js API
- Examine the existing reference UI and its features
- Implement a new reference UI supporting all of the parameters from Settings.js
- Use an up-to-date framework like React.js or Vue.js
- Interest in media streaming
- Knowledge in web development
- Ideally: Skills in web design
Deep Encode
Video streaming content differs in complexity and requires title-specific encoding settings to achieve a certain visual quality. With per-title encoding, however, several test encodes are needed, which consumes a large amount of storage and bandwidth. The Deep Encode project uses machine learning models to predict encoding settings, avoiding the computationally heavy test encodes.
- Understand the basic principles of encoding and per-title encoding
- Develop an evaluation framework that compares the performance of different machine learning models
- Investigate how the machine learning models can be improved in order to enhance video quality predictions
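Such an evaluation framework could be organized around a loop like the following minimal sketch, assuming each model outputs a predicted quality score (e.g. VMAF) per test encode. All ground-truth values and "model" outputs below are invented for illustration:

```python
# Compare quality-prediction models by mean absolute error (MAE).

def mae(pred, truth):
    """Mean absolute error between predicted and measured scores."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)

# Hypothetical ground-truth VMAF scores for five test encodes.
truth = [92.0, 88.5, 75.0, 81.2, 95.3]

# Two stand-in "models": a constant baseline and a slightly better predictor.
models = {
    "baseline_mean": [86.4] * len(truth),
    "model_a": [90.0, 87.0, 78.0, 80.0, 94.0],
}

results = {name: mae(pred, truth) for name, pred in models.items()}
for name, err in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name}: MAE = {err:.2f}")
```

A real framework would swap the invented lists for predictions from trained models and could report further metrics (RMSE, per-title worst case) alongside MAE.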
- Interest in video encoding and media streaming
- Familiarity with machine learning
- Basic Python or R skills
- Video encoding/streaming overview: https://github.com/leandromoreira/digital_video_introduction
- Per-title encoding: https://websites.fraunhofer.de/video-dev/per-title-encoding/
- VMAF: https://streaminglearningcenter.com/blogs/collection-of-vmaf-resources.html
Immersive Video Conferencing
This project is about evaluating and developing an enhanced video conferencing application based on open source frameworks and Web technologies. Different approaches to video communication should be explored, developed and evaluated. The target applications are intended to run in Web browsers on desktop and mobile. Therefore, Web technologies like WebRTC and other HTML5-related APIs need to be considered as a foundation for the video conferencing application.
Furthermore, the video conferencing tools should integrate easily into learning management systems such as Moodle (at TU known as ISIS) or ILIAS and link to the given learning media. Therefore, open standards and specifications for learning environments should be used (e.g. Learning Tools Interoperability [LTI] or computer-managed instruction [cmi5]).
The list of tasks below describes dedicated features that can be implemented in different groups. Each group will work only on 1-2 tasks and not on all of them.
- Develop a multiparty video conferencing application based on open source (Web) technologies and frameworks
- Integration of Video Conferencing Tools in learning management systems (LMS) as well as linking the learning media with the video conference (e.g., in terms of presentation slides etc.)
- Explore and evaluate different interfaces for the integration of video services in LMS
- Explore and evaluate different topologies like Mesh, MCU (Multipoint Conferencing Unit) and SFU (Selective Forwarding Unit)
- Explore and evaluate different visualisation modes (Grid, VR/AR, Hybrid, ...) of participants while taking device capabilities into consideration (device type, screen size, ...)
- Explore and evaluate multiscreen and multi-device capabilities by allowing the video conference to be distributed across multiple devices/screens (Example: connect a mobile browser to a desktop browser or TV and split the view across multiple screens).
- Explore concepts for better organisation of video conferences and develop a prototype (can be a mockup) to evaluate the approach (Examples: integrate a meeting agenda, time boxing, auto-generated minutes, better coordination across participants in large meetings).
- Explore and evaluate AI techniques to enhance video conferencing experience while maintaining privacy (Examples: Virtual Backgrounds, Video filters, Subtitles from voice, Noise Reduction)
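As a quick illustration of why the topology choice above matters, the per-client upstream load can be compared with a few lines of Python. The one-stream-per-participant model is a simplification (it ignores simulcast and audio), but it captures the scaling difference:

```python
# Per-client upstream video connections for the topologies named above,
# assuming each participant sends exactly one video stream.

def upstreams(n: int, topology: str) -> int:
    if topology == "mesh":          # every client sends to every other client
        return n - 1
    if topology in ("sfu", "mcu"):  # clients send a single stream to a server
        return 1
    raise ValueError(f"unknown topology: {topology}")

for n in (2, 4, 10):
    print(f"{n} participants: mesh={upstreams(n, 'mesh')}, sfu={upstreams(n, 'sfu')}")
```

Mesh upstream cost grows linearly with the number of participants, which is why server-based topologies (SFU/MCU) dominate for larger meetings, at the price of running and scaling the server.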
- Open Source Video Conferencing tools:
- MediaStream Recording: https://www.w3.org/TR/mediastream-recording/
- Canvas API: https://html.spec.whatwg.org/#the-canvas-element
- WebXR API: https://www.w3.org/TR/webxr/
- A-Frame: https://aframe.io/
- LTI: http://www.imsglobal.org/activity/learning-tools-interoperability
- cmi5: https://xapi.com/cmi5/