Category Archives: Open source

RATS at the Festival of Lights 2019

RATS is short for Real-time Adaptive Three-sixty Streaming, software that was mainly developed by Trevor Ballardt while he stayed at TU Darmstadt and worked for the MAKI SFB. RATS was used in a test case of the H2020 project 5Genesis, which is preparing to showcase 5G during the Festival of Lights 2020 in Berlin. The test was conducted at Humboldt University during the Festival of Lights 2019.

Fraunhofer Fokus, one of the local partners in Berlin, wrote a piece summarizing the test from the 5Genesis perspective. We contributed the video streaming and talk about it in this article.

FoL 2019 at the Humboldt University (© Magnus Klausen 2019)

The original idea of RATS was to use NVENC to convert an input stream from a 360 camera into a set of tiles in real time, encode the tiles at several qualities on the server, and stitch them into a set of tiled H.265 videos. These H.265 videos would offer a selection of qualities suited to the viewer's current orientation. This idea was published in a demo paper at ACM MMSys 2019, and the intended application in 5Genesis as a short paper at ACM MobiCom's S3 workshop. The code for RATS can be found on GitHub.

However, 5Genesis is also about dense user populations that access a live video feed, and such density can only be achieved if users can stream to their mobile phones without installing any additional software. The RATS idea would work perfectly for this if mobile phones’ browsers supported H.265. Unfortunately, Android phones do not.

So, instead of tiling in this sensible manner, we modified a clone of ffmpeg (a clone with minor modifications is required for RATS anyway). ffmpeg can be configured to use NVENC for encoding video streams in H.264, and it can also generate fragmented MP4 with suitable HLS and DASH manifest files. In the case of DASH, the MPD (manifest) files can take the form of templates, which removes the need for clients to download manifest updates even for live streams, whereas HLS clients still require such updates. Instead of merging tiles after compressing them separately, we used Gaussian filtering on tile-shaped regions of the video to reduce the coding complexity. An arbitrary number of these versions can be generated in parallel, using our new ffmpeg CUDA module for partial blurring.
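The partial blurring is conceptually simple. The sketch below is not our CUDA module; it is a Python/NumPy illustration (with a hypothetical 8x4 tile grid and sigma value) of the idea: blur every tile outside a region of interest so that the encoder spends fewer bits there, while the tiles the viewer is looking at stay sharp.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_tiles(frame, keep, tiles_x=8, tiles_y=4, sigma=6.0):
    """Blur every tile of an HxWx3 frame except the ones listed in `keep`
    (a set of (col, row) tile indices), mimicking a quality drop outside
    the region of interest. Illustrative sketch only."""
    h, w = frame.shape[:2]
    th, tw = h // tiles_y, w // tiles_x
    out = frame.astype(np.float32)
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            if (tx, ty) in keep:
                continue  # leave this tile at full quality
            ys, xs = ty * th, tx * tw
            region = out[ys:ys + th, xs:xs + tw]
            # blur only this tile, per colour channel (no blur across channels)
            out[ys:ys + th, xs:xs + tw] = gaussian_filter(
                region, sigma=(sigma, sigma, 0))
    return out.astype(frame.dtype)
```

Since each quality version differs only in which tiles are blurred, an arbitrary number of such versions can be produced from the same decoded input frame and encoded in parallel.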

The camera that we installed at the FoL 2019 was a surveillance camera with a fisheye lens (actually a panomorph lens, but close enough to fisheye to make our life easy), while we settled on VideoJS for panorama display in our cross-platform web pages, which were meant to show the FoL videos in arbitrary browsers. It was a bit irritating that the current version of VideoJS has lost fisheye projection support while it has gained both DASH and HLS support.

Consequently, we had to project our panorama from fisheye to equirectangular projection. We followed two approaches. In one, we added a reprojection module to ffmpeg that uses CUDA to make the conversion before streaming, combined with a configuration of VideoJS that allowed us to project only one half of a sphere, since a fisheye camera records only a single hemisphere. In the other, we extended VideoJS to support fisheye lenses directly. While the first piece of code may be more generally useful, we found that the single conversion of the second approach (which will be published in a master thesis next year) provides better visual quality.
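For readers unfamiliar with the reprojection, here is a minimal Python/NumPy sketch of the mapping involved, assuming an ideal equidistant fisheye with a 180° field of view and nearest-neighbour sampling. Our actual ffmpeg module does the equivalent in CUDA; this is only meant to show the geometry.

```python
import numpy as np

def fisheye_to_equirect(fisheye, out_w=1024, out_h=512, fov_deg=180.0):
    """Map an (ideal, equidistant) fisheye image covering fov_deg onto an
    equirectangular hemisphere. fisheye is an HxWx3 array; the optical axis
    is assumed to point at the image centre."""
    h, w = fisheye.shape[:2]
    cx, cy, radius = w / 2.0, h / 2.0, min(w, h) / 2.0
    max_theta = np.radians(fov_deg) / 2.0

    # Output grid covers one hemisphere: longitude and latitude in [-pi/2, pi/2]
    lon = np.linspace(-np.pi / 2, np.pi / 2, out_w)
    lat = np.linspace(-np.pi / 2, np.pi / 2, out_h)
    lon, lat = np.meshgrid(lon, lat)

    # Viewing direction for each output pixel (camera looks along +z)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle from the optical axis
    phi = np.arctan2(y, x)                    # angle around the axis

    # Equidistant fisheye model: image radius is proportional to theta
    r = radius * theta / max_theta
    u = np.clip(cx + r * np.cos(phi), 0, w - 1).astype(int)
    v = np.clip(cy + r * np.sin(phi), 0, h - 1).astype(int)
    return fisheye[v, u]  # nearest-neighbour sampling
```

A panomorph lens deviates from this ideal model, which is one reason the approach that handles the lens directly in the player gave us better visual quality.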

TADA Tool

 

The problem of overbuffering in today's Internet (termed bufferbloat) has recently drawn a great amount of attention from the research community. This has led to the development of various active queue management (AQM) schemes. The last years have seen a lot of effort to show the benefits of AQMs over simple tail-drop queuing and to encourage deployment. Yet it is still unknown to what extent AQMs are deployed in the Internet. We have developed an active measurement tool, called TADA (Tool for Automatic Detection of AQMs), that can detect whether the bottleneck router on a particular communication path uses AQM. Our detection technique is based on analyzing the patterns of queuing delays and packet losses. The tool is composed of a Sender process running on the sender machine and a Receiver process running on the receiver machine. It uses UDP for sending constant bit rate (probing) streams and TCP for a control channel between the endpoints.
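To give an idea of the probing side, here is a minimal Python sketch of sending such a constant-bit-rate UDP stream. The packet format (a sequence number plus a send timestamp) is hypothetical and the pacing is approximate; TADA's actual sender, receiver and TCP control channel are more elaborate.

```python
import socket
import struct
import time

def send_probe_stream(dst, rate_bps=1_000_000, pkt_size=500, duration=10.0):
    """Send a constant-bit-rate UDP probe stream to dst = (host, port).
    Each packet carries a sequence number and a send timestamp so the
    receiver can track losses and one-way delay variation (i.e. queuing
    delay at the bottleneck). Sketch only, not TADA's wire format."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = pkt_size * 8 / rate_bps          # seconds between packets
    payload_pad = b"\x00" * (pkt_size - 12)     # pad to pkt_size bytes
    seq, t_end = 0, time.time() + duration
    while time.time() < t_end:
        header = struct.pack("!Id", seq, time.time())  # 4-byte seq + 8-byte timestamp
        sock.sendto(header + payload_pad, dst)
        seq += 1
        time.sleep(interval)  # sleep-based pacing is only approximate

# Example: send_probe_stream(("192.0.2.10", 9000))
```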

The source code can be found at: https://bitbucket.org/mkargar/tada-tool_automatic_detection_aqm/src

Redundant Data Bundling in TCP

Redundant Data Bundling (RDB) is a mechanism for TCP that aims to reduce the per-packet latency of traffic produced by interactive applications. The master's thesis Taming Redundant Data Bundling: Balancing fairness and latency for redundant bundling in TCP [1] presents a new implementation in the Linux kernel, along with a detailed set of experiments that show the benefits of RDB. The thesis builds on the original work on RDB [2] and addresses some previously unresolved issues, making the new implementation a candidate for widespread deployment. The paper Latency and Fairness Trade-Off for Thin Streams using Redundant Data Bundling in TCP [3], presented at LCN 2015, describes the RDB mechanism in detail.

Paper Abstract

Time-dependent applications using TCP often send thin-stream traffic, characterised by small packets and high inter-transmission times. Retransmissions after packet loss can result in very high delays for such flows, as they often cannot trigger fast retransmit. Redundant Data Bundling is a mechanism that preempts the experience of loss for a flow by piggybacking unacknowledged segments with new data as long as the total packet size is lower than the flow maximum segment size. Although successful at reducing retransmission latency, this mechanism had design issues leaving it open for abuse, effectively making it unsuitable for general Internet deployment. In this paper, we have redesigned the RDB mechanism to make it safe for deployment. We improve the trigger for when to apply it and evaluate its fairness towards competing traffic. Extensive experimental results confirm that our proposed modifications allow for inter-flow fairness while maintaining the significant latency reductions of the original RDB mechanism.

Thin streams and latency reducing mechanisms

Applications like online games, remote control systems, high-frequency trading and sensor networks provide hugely increased utility when their per-packet latencies are as low as possible. They have in common that they mostly produce traffic with thin-stream characteristics, consisting of small packet payloads and high inter-transmission times (ITTs). Table 1 shows some examples of traffic properties for applications producing thin streams.

Application        | Payload size (B)    | Packet inter-arrival time (ms)                | Avg. BW req
                   | avg    min    max   | avg    med    min    max      1%     99%      | pps      bps
VNC (from client)  | 8      1      106   | 34     8      0      5451     0      517      | 29.412   17K
Skype (2 users)    | 236    14     1267  | 34     40     0      1671     4      80       | 29.412   69K
SSH text session   | 48     16     752   | 323    159    0      76610    32     3616     | 3.096    2825
Anarchy Online     | 98     8      1333  | 632    449    7      17032    83     4195     | 1.582    2168
World of Warcraft  | 26     6      1228  | 314    133    0      14855    0      3785     | 3.185    2046
Age of Conan       | 80     5      1460  | 86     57     0      1375     24     386      | 11.628   12K

Table 1: Examples of traffic properties for applications producing thin streams.

 

Some existing mechanisms in the Linux kernel aimed at reducing latencies for TCP are

  • Thin Linear Timeouts (LT)
  • Modified fast retransmit (mFR)
  • Early Retransmit (ER)
  • Tail Loss Probe (TLP)

These mechanisms are reactive, i.e. they modify how the TCP sender reacts to loss or possible loss by triggering retransmissions faster. The problem is that, even in the best case, they still require at least one RTT of extra delay for the loss signal to reach the sender host before the data is retransmitted. The exception is the case where the TLP packet itself retransmits the lost packet.

Read more about time-dependent networking (TDN) and earlier work on thin streams and interactive applications.

Redundant Data Bundling

In contrast to the reactive mechanisms mentioned above, RDB is a proactive mechanism that tries to prevent unnecessary delays when packet loss occurs. By bundling already sent data with packets containing new data, RDB can be considered a retransmission mechanism that retransmits segments even before any loss signals are present.

An important property of RDB is that it does not produce extra packets on the network, but instead utilizes the “free” space in small TCP packets produced by interactive applications.


Figure 1: Example of an Ethernet frame for a TCP packet with 100 bytes payload.

TCP requires that the payload in each packet is sequential; however, a receiver is required to accept any new data in a packet even if some of the data has already been received. RDB takes advantage of this by bundling already sent (but un-ACKed) data segments into packets with new data.
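A simplified way to express the bundling decision is sketched below (plain Python, not the kernel code): starting from the most recently sent un-ACKed segment, keep prepending segments to the new data as long as the packet stays within one MSS. The MSS value and segment representation are illustrative assumptions.

```python
MSS = 1448  # assumed payload budget for a 1500-byte MTU (illustrative)

def build_rdb_packet(unacked, new_seg, mss=MSS):
    """Sketch of RDB bundling.
    unacked: list of (seq, payload) segments already sent but not yet ACKed,
             oldest first and contiguous with new_seg.
    new_seg: (seq, payload) about to be sent.
    Returns the sequence number and payload of the outgoing packet."""
    bundle, total = [], len(new_seg[1])
    # Walk backwards from the most recently sent un-ACKed segment so the
    # bundled data stays contiguous with the new data.
    for seq, payload in reversed(unacked):
        if total + len(payload) > mss:
            break  # bundling must never grow the packet beyond one MSS
        bundle.insert(0, (seq, payload))
        total += len(payload)
    packet_seq = bundle[0][0] if bundle else new_seg[0]
    packet_payload = b"".join(p for _, p in bundle) + new_seg[1]
    return packet_seq, packet_payload
```

If any of the bundled bytes were lost earlier, the receiver recovers them from this packet; if not, it simply discards the duplicate bytes and delivers the new data.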

Figure 2 shows an example with four separate data segments (S1-S4) illustrating how RDB organizes the data segments in each packet.

 


Figure 2: Example showing how RDB bundles the data of previously sent packets in packets with new data.


Note about “Free” or unused space in Ethernet frames
For small TCP packets with less than 1 × MSS worth of payload, the Ethernet frame is not filled up to the maximum segment size. "Free" in this context refers to the additional space that can be used without increasing the number of Ethernet frames sent through the network.

RDB in action

Figure 3 shows a scenario where an application performs write calls to the TCP socket with 365 bytes per call. When two consecutive packets are lost, the lost segments are "recovered" by the third packet, which bundles the segments from the previous two packets together with its new data.

 


Figure 3: Packet timeline of a TCP thin stream using RDB.

 

Limiting RDB to thin streams

To ensure that only thin streams can use RDB, a mechanism called Dynamic Packets in Flight Limit (DPIFL) is used. By specifying a minimum allowed ITT, a maximum packets-in-flight limit is calculated dynamically based on the current RTT measurements. Figure 4 shows how the DPIFL is calculated for the minimum ITT limits 5, 10, 20 and 30 ms at different RTTs.
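Assuming the limit is simply the measured RTT divided by the configured minimum ITT, which is how the description above reads (the kernel implementation may round or clamp the value), the calculation is a one-liner:

```python
def dpifl(rtt_ms, min_itt_ms):
    """Dynamic packets-in-flight limit: how many packets a stream can have in
    flight while keeping its inter-transmission time at or above min_itt_ms.
    Sketch of the idea described above, not the kernel code."""
    return rtt_ms / min_itt_ms

# Example matching the experiments below: RTT = 150 ms, minimum ITT = 30 ms
print(dpifl(150, 30))  # -> 5.0 packets in flight
```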

 

Figure 4: Example of DPIFL for different minimum ITT values

For comparison, a Static Packets in Flight Limit (SPIFL) of 3 is included in figure 4 to show how the thin-stream test currently used in the Linux kernel does not adjust the limit based on RTT measurements.

Experiments and results

A wide range of tests has been performed to evaluate the effects of RDB with regard to latency reduction, as well as fairness towards competing network traffic. The experiments were performed in a testbed with Linux (Debian) hosts, as depicted in figure 5.

 


Figure 5: Testbed setup

Tests with uniform loss rate

A set of tests was performed where netem on BRIDGE2 was configured with a uniform loss rate. These tests illustrate the difference in latency between regular TCP and RDB with different bundling rates. Figure 6 shows the result from one of the test sets with a uniform loss rate of 10%. The plot shows the ACK latency for three separate tests, where each test started 20 TCP streams: one test with only regular TCP, and two tests with 10 regular TCP streams and 10 RDB streams, where a limit was imposed on when redundant data could be bundled. For RDB SPIFL 3, each RDB stream was allowed to bundle only when there were three or fewer packets in flight (PIFs). With an ITT of 30 ms and a network delay of 75 ms in each direction (an RTT of 150 ms), each stream has 5 PIFs as long as it is not congestion limited.

Figure 6: Experiment with uniform loss

 

Comparing the latency of the different stream types, we see a significant difference when RDB is enabled. The RDB streams that were allowed to bundle while PIFs <= 7 have a significantly better ACK latency: 90% of their TCP segments see no extra delay. For the regular TCP streams, almost 60% of the TCP segments have a higher ACK latency than the ideal 150 ms, even though only 10% of the packets are lost. This is due to head-of-line blocking, which delays not only the lost segment but every segment transmitted after it, until a retransmission arrives at the receiver side.

Latency tests with cross traffic over a shared bottleneck

In these experiments, a set of RDB-enabled thin streams and regular TCP thin streams is tested while competing against greedy TCP flows over a shared bottleneck. Each of the following plots shows the results from two test runs: one where 20 TCP streams (TCP Reference) compete against 5 greedy TCP flows, and another where 10 RDB-enabled thin streams and 10 TCP thin streams compete against 5 greedy TCP flows. The shared bottleneck is configured with a rate limit of 5 Mbit/s and a pfifo queue of one bandwidth-delay product (63 packets: 5 Mbit/s × 150 ms ≈ 94 KB, or roughly 63 full-size packets).

Figure 7: Experiment with no bundling limit


The RDB streams plotted in figure 7 are allowed to bundle any available previous segments as long as the payload does not exceed one MSS. Looking at the latencies, we see a significant difference between the RDB streams and the TCP streams. However, comparing the latencies to the TCP reference values, we see that the increased bandwidth due to the redundant data causes extra queuing delays, which may not be desirable.

Figure 8: Experiment with bundling limit


Figure 8 shows the results from a test with the same setup as in figure 7, except for a limitation on the redundancy level of the RDB streams: they are allowed to bundle only one previous segment with each packet. The results show that with this reduced redundancy level, the RDB streams still have a much lower latency than the competing TCP streams, while avoiding the extra queuing delay found in the previous tests.

 

Conclusion

The RDB mechanism uses a proactive approach to reduce latencies for time-dependent traffic over TCP. By continually bundling (retransmitting) data instead of waiting for a loss signal before triggering a retransmission, RDB avoids the extra delay of at least 1 RTT needed for a loss signal to reach the sender host. This results in significantly reduced latencies for thin-stream traffic that experiences sporadic packet loss.

Main benefits of RDB:

  • RDB is a backwards-compatible TCP mechanism that requires sender-side modifications only. This means that e.g. a Linux server can enable RDB on connections to unmodified TCP receivers. In this case, only the data sent from the server to the client will benefit from RDB, while the data on the reverse path will use regular TCP.
  • RDB does not produce extra packets on the network, and achieves the reduced latencies by using the packets that are already scheduled for transmission.
  • RDB reduces latencies for RDB-enabled streams without unreasonable negative effects on competing traffic.

References

[1] B. R. Opstad, “Taming Redundant Data Bundling: Balancing fairness and latency for redundant bundling in TCP,” Master’s thesis, University of Oslo, May 2015. (Download PDF)
[2] K. Evensen, A. Petlund, C. Griwodz, and P. Halvorsen, “Redundant bundling in TCP to reduce perceived latency for time-dependent thin streams,” IEEE Communications Letters, vol. 12, no. 4, pp. 324–326, April 2008.
[3] B. R. Opstad, J. Markussen, I. Ahmed, A. Petlund, C. Griwodz, and P. Halvorsen, “Latency and Fairness Trade-Off for Thin Streams using Redundant Data Bundling in TCP,” IEEE Conference on Local Computer Networks (LCN), Clearwater Beach, Florida, USA, Oct. 2015. (Download PDF)

OpenSea – A search based classification tool

OpenSea contains software for experimenting with image recognition and classification based on global image features. It can classify single images, but it can also extract frames from videos and classify them in real time. The software is easy to handle and can, for example, be used as a simple but strong baseline when evaluating classifiers.
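As a rough illustration of what classification based on global image features means (this is not OpenSea's own feature set or classifier, just a hypothetical minimal example), one can describe each image by a single global colour histogram and assign it to the class with the nearest mean feature vector:

```python
import numpy as np

def global_feature(image, bins=8):
    """A simple global image feature: a normalised joint RGB colour histogram.
    image is an HxWx3 array with values in [0, 255]. Illustrative only."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def nearest_class(feature, class_centroids):
    """Assign the class whose mean feature vector is closest in Euclidean distance.
    class_centroids: dict mapping class name -> mean feature vector."""
    distances = {c: np.linalg.norm(feature - m) for c, m in class_centroids.items()}
    return min(distances, key=distances.get)
```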

This video https://www.youtube.com/watch?v=gb2BqMuZ2h0 is a demonstration of how the software works and how it can be used in a medical scenario.

OpenSea can be accessed via https://bitbucket.org/mpg_projects/opensea

Kenneth Klette Jonassen’s proposed CDG patch featured in lwn.net

Master student Kenneth Klette Jonassen has spent his thesis exploring congestion controls for low latency and has ported the CAIA Delay-Gradient (CDG) algorithm from FreeBSD to Linux. His RFC posting of the CDG code to the netdev list got a lot of attention and was featured in the LWN kernel update last week. His thesis documents significant differences between the behaviour of CDG in Linux and FreeBSD, most likely due to differences in default behaviour (like Linux implementing PRR) and the large difference in timer granularity between the two systems. Kenneth also updated and improved the Linux kernel’s RTT measurement code for SACK blocks.

Kenneth’s work is research going out to the public for further use and development. We’ll try to follow up on that!

OpenVQ – an objective quality assessment tool

OpenVQ is a video quality assessment toolkit. It is the result of a two-person master thesis that we initiated to counter the general trend of excusing video quality estimation with PSNR, by providing a tool that comes closer to the standardized objective metrics of the VQEG.

It can be accessed here: https://bitbucket.org/mpg_code/openvq

OpenVQ provides anyone interested in video quality assessment with a toolkit that a) provides ready to use video quality metric implementations; and b) makes it easy to implement other video quality metrics.

Version 1 of OpenVQ contains the following metrics:

  • OPVQ – The Open Perceptual Video Quality metric
  • PSNR – Peak signal-to-noise-ratio (full reference)
  • SSIM – Structural similarity index (full reference)

While everybody knows PSNR and SSIM, we need to explain that OPVQ is an interpretation of ITU-T J.247 without temporal alignment (because we are aware of several patents covering temporal alignment). It provides good results for resolutions up to VGA. It was written without access to any implementation of PEVQ and without access to the datasets that were used in the VQEG competition that led to the standardization of J.247. It was also necessary to interpret the formal descriptions contained in J.247 loosely, because essential formulas were flawed. OPVQ does perform well when tested on the IRCCyN datasets, and we hope that independent parties will confirm its good performance.
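Of the bundled metrics, PSNR is the simplest and illustrates what a full-reference metric does: compare each distorted frame against the pristine reference. The following is a minimal Python/NumPy sketch of the standard formula, not OpenVQ's implementation:

```python
import numpy as np

def psnr(reference, distorted, max_value=255.0):
    """Peak signal-to-noise ratio between two frames of identical shape:
    PSNR = 10 * log10(MAX^2 / MSE), in dB."""
    ref = reference.astype(np.float64)
    dis = distorted.astype(np.float64)
    mse = np.mean((ref - dis) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)
```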

Bagadus pipeline is open source

The Bagadus system is a panorama capturing system that has been put to good use in capturing football events for immediate as well as off-line analysis, and for the enjoyment of its users. The software allows you to use a multi-camera array to produce cylindrical panoramas like the one shown in the picture, in real time.

At the conclusion of the project that funded the iAd center, which has inspired and partly funded this work, we are now making the source code of the capturing system for Bagadus available under the BSD 4-clause license.

You can find the code at: http://mlab.no/blog/Files/bagadus-31oct2014.tgz.

Image transparency overlay with GNU Octave using YCbCr

So, let’s say someone sends you a Matlab file displaying interesting image regions using the AlphaData figure property. You, a free software hippie, are using Debian 7.5 and Octave 3.6.2.

Trying to run the file with Octave, it spits out this message:

error: get: unknown image property AlphaData

You spend the next 15 mins Googling and find that there are some references to alphadata in Octave’s source code and documentation. However, after examining the figure struct, you conclude that your version is probably too old and that you have to be smart.

A nice workaround for this problem is to convert the two images from RGB to the YCbCr color space. If you are familiar with the YCbCr color space, you probably understand what you need to do next and can stop reading at this point.

For readers who don't know the YCbCr color space: the conversion separates the color components from the luma component, i.e., the black-and-white (grayscale) version of the image. By combining the color components from one image with the luma component from the other image, you get a nice transparent overlay effect.
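Here is a minimal sketch of the overlay idea in Python/NumPy (the post's workflow uses Octave). Note that this sketch uses the full-range JPEG-style BT.601 conversion rather than the studio-swing variant used by Octave/Matlab's rgb2ycbcr, so the numbers differ slightly even though the overlay principle is the same.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Full-range (JPEG-style) RGB -> YCbCr; img is a float HxWx3 array in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(img):
    """Inverse of the full-range conversion above."""
    y, cb, cr = img[..., 0], img[..., 1] - 128.0, img[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

def overlay(base_rgb, mask_rgb):
    """Keep the luma (structure) of the base image and take the chroma
    (colour) from the mask image that marks the interesting regions."""
    base, mask = rgb_to_ycbcr(base_rgb), rgb_to_ycbcr(mask_rgb)
    combined = np.concatenate([base[..., :1], mask[..., 1:]], axis=-1)
    return ycbcr_to_rgb(combined)
```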

Convert the YCbCr image back to RGB and et voilà, right? Nope, there's yet another problem. Octave 3.6.2 only has rgb2ycbcr() and not ycbcr2rgb(). Quick fix: grab some newer files (ycbcr2rgb.m, rgb2ycbcr.m, ycbcrfunc.m) from Octave's repository, prefix them so they don't shadow system functions, and include them in your project. Slow fix: read the YCbCr Wikipedia article and write your own YCbCr-to-RGB function.

You can find example source code for the quick fix at Bitbucket.