
Game Server Technologies

Games and other interactive multimedia applications have mostly been designed around a single thread, with some specific workloads offloaded to other threads. For greater scaling, the game world is simply split into separate parts.

Under the current industry paradigm, a multi-player game typically consists of multiple clients connecting to a server, with only loose coupling between servers.

This project challenges the current paradigm by investigating how interactive multimedia applications can be distributed to multiple threads and even multiple machines in a cluster.

The vision is an environment where workloads can be distributed freely among any number of nodes in a server cluster, as well as among spare resources on the client computers, while still maintaining a consistent state. Maintaining consistency is much more difficult in pure peer-to-peer approaches, where none of the nodes can be trusted.
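
To make the conventional approach concrete, the sketch below shows a toy version of "splitting the world": entities are partitioned into spatial regions, and each region is updated by its own worker thread. The names (Entity, region count, tick) are illustrative assumptions, not part of the project's design; a real system must also keep state consistent for entities that interact across region boundaries, which is precisely the hard part this project addresses.

```python
# Toy illustration of the conventional scaling approach: split the world into
# regions and update each region on its own thread. Not the project's design.
import threading

NUM_REGIONS = 4          # world split into fixed spatial regions (assumption)
WORLD_WIDTH = 1000.0

class Entity:
    def __init__(self, x):
        self.x = x
        self.velocity = 1.0

def region_of(entity):
    """Map an entity to a region index by its position."""
    return int(entity.x / (WORLD_WIDTH / NUM_REGIONS)) % NUM_REGIONS

def update_region(entities, dt):
    """Each worker advances only the entities in its own region.
    Interactions that cross region boundaries are ignored here, which is
    exactly where consistency problems arise in practice."""
    for e in entities:
        e.x += e.velocity * dt

def tick(world, dt=0.016):
    # Partition entities by region, then update the regions in parallel.
    regions = [[] for _ in range(NUM_REGIONS)]
    for e in world:
        regions[region_of(e)].append(e)
    threads = [threading.Thread(target=update_region, args=(r, dt))
               for r in regions]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

world = [Entity(x=float(i * 50)) for i in range(20)]
tick(world)
```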

Audiovisual integration of impaired streams

To understand the combined human experience of audio and video, this research focuses on audiovisual perception and the factors that weaken the integration between the two modalities. With the popularity of online services that provide entertainment on the go, we consider how the impairments typical of such services influence perceptual integration. These impairments include latency, loss, active adaptation, asynchrony and displacement, but we also consider factors such as reverberation, speech intelligibility, and content.

We pursue new knowledge about humans’ perception of audiovisual quality in the context of video streaming and immersive audiovisual systems. Despite the long tradition of objective assessment of auditory and visual quality in telecommunication systems, this body of research has covered only a limited set of situations and conditions. Moreover, objective metrics are currently unable to represent the full range of perceptual mechanisms at work when we process audiovisual content.

In recent years, we have defined common vocabularies of quality attributes and we have worked to understand their relations. We have also looked into each attribute’s importance for scalable video, audiovisual content, and live mixed-reality performances based on multi-modal content. Furthermore, as a step towards more consistent subjective quality evaluation, we have developed robust and cost-efficient assessment methods for audiovisual quality perception.


With the increased relevance of 3D technology in both entertainment and communication platforms, we have moved into a new perceptual reality where several new challenges confront the human senses. The network and hardware resources available to a multimedia system limit the quality of the content it can deliver. Thus, with the exception of large-scale productions (like Hollywood movies), an increasing number of immersive applications depend on imperfect devices for capture and rendering. Moreover, they often rely on immediate communication of content between end users. This is the context for the work we have set out to perform.

So far, our studies on human audiovisual perception have yielded important insights into perceptual diversity and how human attributes can influence audiovisual processing and temporal sensitivity. In 3D-Sense, we aim to extend this work and explore how the perceptual system combines audiovisual sensory information when both audio and video contribute spatial cues in three dimensions. 3D-Sense unites two research areas: cognitive psychology, with its focus on human cognitive and perceptual processes, and multimedia systems, which develops algorithms and protocols to provide the best user experience.
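
As a point of reference for how spatial cues from the two modalities can be combined, the sketch below implements the textbook maximum-likelihood model of cue integration, in which each modality's estimate is weighted by the inverse of its variance. This is a standard model from the perception literature, offered here only as an illustration; it is not necessarily the model used in 3D-Sense.

```python
# Standard maximum-likelihood cue integration: weight each modality's spatial
# estimate by the inverse of its variance, so the more reliable cue dominates.
def integrate_cues(audio_pos, audio_var, visual_pos, visual_var):
    """Fuse two noisy position estimates into one (inverse-variance weighting)."""
    w_audio = (1.0 / audio_var) / (1.0 / audio_var + 1.0 / visual_var)
    w_visual = 1.0 - w_audio
    fused_pos = w_audio * audio_pos + w_visual * visual_pos
    fused_var = 1.0 / (1.0 / audio_var + 1.0 / visual_var)  # never worse than either cue alone
    return fused_pos, fused_var

# Example: vision localizes the source more precisely than hearing, so the
# combined estimate is pulled towards the visual cue.
print(integrate_cues(audio_pos=10.0, audio_var=4.0, visual_pos=2.0, visual_var=1.0))
```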

Mixed Reality Film Production

Few major films have been produced recently without months of post production to correct mistakes, improve quality, or create large parts of their content. This hampers the influence of creative film makers because the end product is in the hands of artistic technicians, who need to interpret and implement the directors’ vision long after filming has ended. 

This way of cooperating was described in the following way:

A director tries to imagine something that he’s not quite sure how it will look and then tries to explain it to someone who implements it three months from now.

The PreP project is committed to emancipating film makers, allowing them to create the mixed reality films they want without losing control over them. We propose a seemingly simple, yet fundamental change in the production process: to move most virtual parts of film scenes out of post production and onto the set, where they can evolve and take part in the creative vision of a director and his crew. Key to this vision is the seamless integration of the content and metadata of a film production in a single unified system, and the ability to collaboratively access and manipulate this data in real time, e.g. in a live preview of the final scene, using real-time tracking or existing post production tools. This allows the crew to quickly produce a new draft of a scene’s look right there on set, and creates an immediacy in working together because everyone instantly sees and manipulates the same image.
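
The sketch below is a minimal, hypothetical illustration of such a unified system: a single shared scene store that every on-set tool reads and writes, with subscribers (such as the live preview) notified the moment a value changes. SceneStore and its methods are names invented for this example, not PreP's actual interface.

```python
# Hypothetical sketch of a unified production data store: all tools share one
# scene state, and any change is pushed immediately to every subscriber.
import threading

class SceneStore:
    def __init__(self):
        self._data = {}
        self._subscribers = []
        self._lock = threading.Lock()

    def subscribe(self, callback):
        """Register a tool (live preview, tracking, post tools) for updates."""
        self._subscribers.append(callback)

    def set(self, key, value):
        """Update a piece of scene metadata and notify every subscriber."""
        with self._lock:
            self._data[key] = value
            subscribers = list(self._subscribers)
        for notify in subscribers:
            notify(key, value)

store = SceneStore()
store.subscribe(lambda k, v: print(f"live preview refresh: {k} -> {v}"))
store.set("virtual_set/lighting", "dusk")
```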

Our approach enables crews to experiment on set and explore the full depth of their creative vision. It reduces costs by avoiding the need to re-shoot scenes months after the original filming. It smooths cooperation between the director and post production, because scenes can be grasped visually and a shared understanding can be established on set. It provides a context for the director’s and actors’ imagination and gives a better spatial understanding of the final scene without the need to build large parts of a set.

Photo by Sean Devine (CC/Wikipedia).

Time-dependent networking (TDN)

The design of the Internet has many integral parts that have a negative effect on the perceived latency for the end user. In our work on time-dependent networking, we seek to reduce the latencies for time-dependent applications without redesigning the existing Internet infrastructure.

Many interactive applications produce signalling traffic upon events generated by the users or other communicating systems. This makes the traffic patterns likely to differ from what we see in bulk traffic generated by moving large quantities of data. Time-dependent applications often generate packets with high inter-transmission times and small packet sizes, what we call “thin streams”. One of our main fields of focus is to study how such streams fare in the Internet, how competing traffic adversely affects the latency of such streams, and how we can reduce the experienced latency for such traffic.
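
As an illustration of what distinguishes a thin stream from bulk traffic, the sketch below classifies a flow from three properties: packets in flight, payload size, and inter-transmission time. The thresholds are assumptions chosen for the example rather than the project's exact definitions, although the idea is similar in spirit to the thin-stream test in the Linux kernel, which treats a TCP flow with fewer than four packets in flight as thin.

```python
# Illustrative classification of a "thin stream"; thresholds are assumptions.
def is_thin_stream(packets_in_flight, avg_payload_bytes, avg_gap_ms):
    few_in_flight = packets_in_flight < 4      # too few packets for fast retransmit
    small_packets = avg_payload_bytes < 200    # e.g. game events, sensor updates
    sparse_sending = avg_gap_ms > 50           # long inter-transmission times
    return few_in_flight and (small_packets or sparse_sending)

# A game's position updates: ~100-byte packets every 100 ms, 2 packets unacked.
print(is_thin_stream(packets_in_flight=2, avg_payload_bytes=100, avg_gap_ms=100))  # True
# A bulk download keeps the pipe full with large segments.
print(is_thin_stream(packets_in_flight=40, avg_payload_bytes=1448, avg_gap_ms=1))  # False
```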

For reliable protocols like TCP, the way retransmissions are handled affects the recovery latency for thin streams. We are working on reducing the latency experienced during recovery by designing TCP mechanisms that are better attuned to thin-stream traffic. We also work on applying redundancy as a way of reducing latency.
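
One way to apply redundancy, sketched below, is to bundle previously sent but still unacknowledged data into the next outgoing packet, so that a single loss can be repaired by the packet that follows it instead of waiting for a retransmission timeout. The sketch illustrates the idea only, not the actual TCP modification; the segment size and helper function are assumptions.

```python
# Illustrative redundancy-by-bundling: each new packet also carries the
# still-unacknowledged bytes of earlier packets, as long as they fit.
MAX_SEGMENT = 1448  # payload bytes per packet (typical MSS); an assumption

def bundle(unacked_chunks, new_chunk):
    """Build the payload for the next packet: old unacked data + new data."""
    payload = b""
    for chunk in unacked_chunks:                            # oldest first
        if len(payload) + len(chunk) > MAX_SEGMENT - len(new_chunk):
            break                                           # only bundle what fits
        payload += chunk
    return payload + new_chunk

# Two small unacked game events fit easily alongside the new one, so losing
# either earlier packet adds no extra recovery latency.
print(len(bundle([b"x" * 100, b"y" * 120], b"z" * 90)))  # 310 bytes, one packet
```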

Our ultimate goal is to make incremental changes to the current Internet architecture that reduce the experienced latency, aiming for near-immediate response for time-dependent applications. Currently, we are pursuing this topic through the RITE EU project (www.riteproject.eu) and the TimeIn project.