Category Archives: Thesis

Vamsidhar Reddy Gaddam defended his PhD

 


On Thursday 2 June, Vamsidhar Reddy Gaddam defended his PhD thesis “Next Generation Broadcasting System for Arena Sports: A Football Stadium Scenario”. Vamsi has worked on the Bagadus system which is used at the Alfheim and Ullevaal stadiums.

Read more info from the university and from Simula. The PhD thesis can be found here.

Vamsi is currently an engineer in the Media Processing Group at ARM focusing on mobile graphics.

 

Redundant Data Bundling in TCP

Redundant Data Bundling (RDB) is a mechanism for TCP that aims to reduce the per-packet latency for traffic produced by interactive applications. The master’s thesis Taming Redundant Data Bundling: Balancing fairness and latency for redundant bundling in TCP [1] presents a new implementation in the Linux kernel, along with a detailed set of experiments that show the benefits of RDB. The thesis builds on the original work on RDB [2] and addresses some unresolved issues, making the new implementation a candidate for widespread deployment. The paper Latency and Fairness Trade-Off for Thin Streams using Redundant Data Bundling in TCP [3], presented at LCN 2015, describes the RDB mechanism in detail.

Paper Abstract

Time-dependent applications using TCP often send thin-stream traffic, characterised by small packets and high inter-transmission times. Retransmissions after packet loss can result in very high delays for such flows, as they often cannot trigger fast retransmit. Redundant Data Bundling is a mechanism that preempts the experience of loss for a flow by piggybacking unacknowledged segments with new data, as long as the total packet size is lower than the flow’s maximum segment size. Although successful at reducing retransmission latency, this mechanism had design issues leaving it open for abuse, effectively making it unsuitable for general Internet deployment. In this paper, we have redesigned the RDB mechanism to make it safe for deployment. We improve the trigger for when to apply it and evaluate its fairness towards competing traffic. Extensive experimental results confirm that our proposed modifications allow for inter-flow fairness while maintaining the significant latency reductions of the original RDB mechanism.

Thin streams and latency reducing mechanisms

Applications like online games, remote control systems, high-frequency trading and sensor networks benefit greatly from keeping their per-packet latencies as low as possible. They have in common that they mostly produce traffic with thin-stream characteristics: small packet payloads and high inter-transmission times (ITTs). Table 1 shows some examples of traffic properties for applications producing thin streams.

Application        |  Payload size (B)  |      Packet inter-arrival time (ms)      |  Avg. BW req
                   |  avg   min    max  |   avg    med   min    max     1%    99%  |    pps    bps
VNC (from client)  |    8     1    106  |    34      8     0   5451      0    517  | 29.412    17K
Skype (2 users)    |  236    14   1267  |    34     40     0   1671      4     80  | 29.412    69K
SSH text session   |   48    16    752  |   323    159     0  76610     32   3616  |  3.096   2825
Anarchy Online     |   98     8   1333  |   632    449     7  17032     83   4195  |  1.582   2168
World of Warcraft  |   26     6   1228  |   314    133     0  14855      0   3785  |  3.185   2046
Age of Conan       |   80     5   1460  |    86     57     0   1375     24    386  | 11.628    12K

Table 1: Examples of traffic properties for applications producing thin streams.

 

Some existing mechanisms in the Linux kernel aimed at reducing latencies for TCP are:

  • Thin Linear Timeouts (LT)
  • Modified fast retransmit (mFR)
  • Early Retransmit (ER)
  • Tail Loss Probe (TLP)

These mechanisms are reactive, i.e. they modify how the TCP sender reacts to loss or suspected loss by triggering retransmissions faster. The problem is that, even in the best case, they still need at least one extra RTT for the loss signal to reach the sender host before the data is retransmitted. The exception is when the TLP packet itself retransmits the lost segment.
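On Linux, the first two of these can be enabled per socket, while Early Retransmit and TLP are system-wide settings. Below is a minimal sketch of enabling the per-socket options, assuming a kernel from the era of this work (TCP_THIN_DUPACK was removed from later kernels; the fallback values are the ones used in linux/tcp.h):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>

    #ifndef TCP_THIN_LINEAR_TIMEOUTS
    #define TCP_THIN_LINEAR_TIMEOUTS 16   /* value from linux/tcp.h */
    #endif
    #ifndef TCP_THIN_DUPACK
    #define TCP_THIN_DUPACK 17            /* value from linux/tcp.h */
    #endif

    /* Enable the per-socket thin-stream modifications. Early Retransmit and
     * Tail Loss Probe are controlled system-wide (net.ipv4.tcp_early_retrans
     * on kernels of this era) and need no per-socket call. */
    static int enable_thin_stream_opts(int fd)
    {
        int one = 1;

        /* Linear timeouts (LT): back off linearly instead of exponentially
         * for the first retransmission timeouts while the stream is thin. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_THIN_LINEAR_TIMEOUTS,
                       &one, sizeof(one)) < 0)
            return -1;

        /* Modified fast retransmit (mFR): retransmit after a single
         * duplicate ACK while the stream is thin. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_THIN_DUPACK,
                       &one, sizeof(one)) < 0)
            return -1;

        return 0;
    }

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0 || enable_thin_stream_opts(fd) < 0) {
            perror("thin-stream options");
            return 1;
        }
        /* connect() and write() as usual; the modified retransmission
         * behaviour only applies while few packets are in flight. */
        return 0;
    }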

Read more about time-dependent networking (TDN) and earlier work on thin streams and interactive applications.

Redundant Data Bundling

In contrast to the earlier mentioned reactive mechanisms, RDB is a proactive mechanism which tries to prevent unnecessary delays when packet loss occurs. By bundling already sent data with packets containing new data, RDB can be considered a retransmission mechanism that retransmits segments even before any loss signals are present.

An important property of RDB is that it does not produce extra packets on the network, but instead utilizes the “free” space in small TCP packets produced by interactive applications.


Figure 1: Example of an Ethernet frame for a TCP packet with 100 bytes payload.

TCP requires that the payload in each packet is sequential; however, a receiver must accept any new data in a packet even if some of the data has already been received. RDB takes advantage of this by bundling already sent (but un-ACKed) data segments into packets with new data.

Figure 2 shows an example with four separate data segments (S1-S4) illustrating how RDB organizes the data segments in each packet.

 


Figure 2: Example showing how RDB bundles the data of previously sent packets in packets with new data.


Note about “Free” or unused space in Ethernet frames
For small TCP packets with less than 1 * MSS worth of payload, the payload does not fill the maximum segment size (MSS), and the Ethernet frame is correspondingly smaller than the maximum. “Free” in this context refers to the additional space that may be used without increasing the number of Ethernet frames sent through the network.
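Below is a minimal userspace sketch of the bundling decision illustrated in Figure 2. For simplicity it works at byte granularity, whereas the actual implementation bundles whole, previously sent segments; the core test is the same, namely to bundle as much un-ACKed data as fits together with the new data within one MSS. The MSS constant and the 365-byte write size are taken from the examples in this post; everything else is illustrative.

    #include <stdint.h>
    #include <stdio.h>

    #define MSS 1448   /* typical Ethernet MSS with TCP timestamps */

    /* One outgoing RDB packet in this simplified model: the new segment plus
     * as many of the most recently sent un-ACKed bytes as fit in one MSS. */
    struct rdb_packet {
        uint32_t seq;   /* sequence number of the first payload byte */
        uint32_t len;   /* total payload length (bundled + new)      */
    };

    /* snd_una..snd_nxt is the un-ACKed range; new_len is the size of the
     * segment the application just wrote (assumed <= MSS for thin streams). */
    static struct rdb_packet rdb_build_packet(uint32_t snd_una, uint32_t snd_nxt,
                                              uint32_t new_len)
    {
        struct rdb_packet pkt;
        uint32_t unacked = snd_nxt - snd_una;   /* bytes sent but not ACKed  */
        uint32_t room    = MSS - new_len;       /* "free" space for bundling */
        uint32_t bundle  = unacked < room ? unacked : room;

        /* Bundle the newest un-ACKed bytes so the payload stays contiguous
         * in sequence-number space, with the new data appended at the end. */
        pkt.seq = snd_nxt - bundle;
        pkt.len = bundle + new_len;
        return pkt;
    }

    int main(void)
    {
        /* Two 365-byte segments already sent (none ACKed) and a third 365-byte
         * write: the outgoing packet carries all three (1095 bytes <= MSS). */
        struct rdb_packet p = rdb_build_packet(1000, 1000 + 2 * 365, 365);
        printf("packet seq %u, payload %u bytes\n", p.seq, p.len);
        return 0;
    }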

RDB in action

Figure 3 shows a scenario where an application performs write calls to the TCP socket with 365 bytes per call. When two consecutive packets are lost, the lost segments are “recovered” by the third packet, which bundles the segments from the two previous packets together with its own new data.

 


Figure 3: Packet timeline of a TCP thin stream using RDB.

 

Limiting RDB to thin streams

To ensure that only thin streams can use RDB, a mechanism called Dynamic Packets in Flight Limit (DPIFL) is used. By specifying a minimum allowed ITT, a maximum packets-in-flight limit is calculated dynamically based on the current RTT measurements. Figure 4 shows how the DPIFL is calculated for minimum ITT limits of 5, 10, 20 and 30 ms, at different RTTs.

 

Figure 4: Example of DPIFL for different minimum ITT values

For comparison, a Static Packets in Flight Limit (SPIFL) of 3 is included in Figure 4 to show how the thin-stream test currently used in the Linux kernel does not adjust the limit based on RTT measurements.
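The small sketch below contrasts the dynamic and static limits. The exact DPIFL formula is not spelled out in this post; the version here assumes the limit is simply the measured RTT divided by the minimum allowed ITT (consistent with the 150 ms / 30 ms = 5 PIFs example used later), so treat it as an approximation of the mechanism rather than the thesis’ exact code.

    #include <stdio.h>

    /* Dynamic packets-in-flight limit (DPIFL): a stream may bundle only while
     * its packets in flight stay below a limit derived from the measured RTT
     * and a configured minimum allowed ITT. The formula below is an assumed
     * simplification: a stream sending every itt_min ms over an rtt_ms path
     * keeps roughly rtt_ms / itt_min packets in flight. */
    static unsigned int dpifl(unsigned int rtt_ms, unsigned int itt_min_ms)
    {
        unsigned int limit = rtt_ms / itt_min_ms;
        return limit > 1 ? limit : 1;          /* never below one packet */
    }

    int main(void)
    {
        const unsigned int rtts[] = { 20, 50, 100, 150, 200 };
        const unsigned int itts[] = { 5, 10, 20, 30 };  /* limits from Figure 4 */

        for (unsigned int i = 0; i < sizeof(rtts) / sizeof(rtts[0]); i++) {
            for (unsigned int j = 0; j < sizeof(itts) / sizeof(itts[0]); j++)
                printf("RTT %3u ms, min ITT %2u ms -> DPIFL %2u | ",
                       rtts[i], itts[j], dpifl(rtts[i], itts[j]));
            printf("\n");
        }
        /* A static limit (SPIFL) of 3, like the kernel's current thin-stream
         * test, would allow bundling for the same number of PIFs at any RTT. */
        return 0;
    }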

Experiments and results

A wide range of tests has been performed to evaluate the effects of RDB with regard to reduced latencies, as well as fairness towards competing network traffic. The experiments were performed in a testbed with Linux (Debian) hosts, as depicted in Figure 5.

 


Figure 5: Testbed setup

Tests with uniform loss rate

A set of tests was performed where netem on BRIDGE2 was configured with a uniform loss rate. These tests illustrate the difference in latency between regular TCP and RDB with different bundling rates. Figure 6 shows the result from one of the test sets, with a uniform loss rate of 10%. The plot shows the ACK latency for three separate tests, where each test started 20 TCP streams: one test used only regular TCP streams, while the two other tests each combined 10 regular TCP streams with 10 RDB streams, using different limits on when redundant data could be bundled. For RDB SPIFL 3, an RDB stream was allowed to bundle only when it had three or fewer packets in flight (PIFs). With an ITT of 30 ms and a network delay of 75 ms in each direction (an RTT of 150 ms), each stream has 5 PIFs as long as it is not congestion limited.

Figure 6: Experiment with uniform loss

 

Comparing the latency of the different stream types, we see a significant difference when RDB is enabled. The RDB streams that were allowed to bundle while PIFs <= 7 achieve significantly better ACK latency: 90% of the TCP segments see no extra delay. For the regular TCP streams, almost 60% of the TCP segments have a higher ACK latency than the ideal 150 ms, even though only 10% of the packets are lost. This is due to head-of-line blocking, which delays not only the lost segment but also every segment transmitted after it, until a retransmission arrives at the receiver.

Latency tests with cross traffic over a shared bottleneck

In these experiments, a set of RDB-enabled thin streams and regular TCP thin streams is tested while competing against greedy TCP flows over a shared bottleneck. Each of the following plots shows the results from two test runs: one where 20 TCP streams (TCP Reference) compete against 5 greedy TCP flows, and another where 10 RDB-enabled thin streams and 10 TCP thin streams compete against 5 greedy TCP flows. The shared bottleneck is configured with a rate limit of 5 Mbit/s and a pfifo queue of one bandwidth-delay product (63 packets).
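For reference, assuming full-sized 1500-byte packets and the same 150 ms RTT as in the uniform-loss tests, one bandwidth-delay product is 5 Mbit/s x 0.15 s ≈ 94 KB, which corresponds to roughly 63 packets; that is where the 63-packet queue length comes from.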

Figure 7: Experiment with no bundling limit

The RDB streams plotted in figure 7 are allowed to bundle any available previous segments as long as the payload does not exceed one MSS. Looking at the latencies, we see a significant difference between the RDB streams and the TCP streams. However, comparing the latencies to the TCP reference values, we see that the increased bandwidth caused by the redundant data leads to extra queueing delay, which may not be desirable.

Figure 8: Experiment with bundling limit

Figure 8 shows the results from a test with the same setup as in figure 7, except that the redundancy level of the RDB streams is limited: they are allowed to bundle only one previous segment with each packet. The results show that, with the reduced redundancy level, the RDB streams still have a much lower latency than the competing TCP streams, while avoiding the extra queueing delay seen in the previous test.

 

Conclusion

The RDB mechanism uses a proactive approach to reduce latencies for time-dependent traffic over TCP. By continually bundling (retransmitting) data instead of waiting for a loss signal before triggering a retransmission, RDB avoids the extra delay of at least one RTT needed for the loss signal to reach the sender host. This results in significantly reduced latencies for thin-stream traffic that experiences sporadic packet loss.

Main benefits of RDB:

  • RDB is a backwards-compatible TCP mechanism that requires sender-side modifications only; a Linux server can, for example, enable RDB on connections to unmodified TCP receivers (see the sketch after this list). In this case, only the data sent from the server to the client benefits from RDB, while the data on the reverse path uses regular TCP.
  • RDB does not produce extra packets on the network, and achieves the reduced latencies by using the packets that are already scheduled for transmission.
  • RDB reduces latencies for RDB-enabled streams without unreasonable negative effects on competing traffic.
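Since RDB is sender-side only, enabling it would look like setting any other TCP socket option on the sender. The sketch below is purely illustrative: RDB is not part of mainline Linux, and the option name TCP_RDB and its value are assumptions standing in for whatever knob a given RDB patch exposes.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>

    /* Hypothetical option: RDB is not part of mainline Linux, so TCP_RDB and
     * its value below are placeholders for whatever an RDB patch exposes. */
    #ifndef TCP_RDB
    #define TCP_RDB 1001
    #endif

    static int enable_rdb(int fd)
    {
        int one = 1;

        /* Only the sender needs the patched kernel; the receiver just sees
         * ordinary TCP segments whose payloads happen to overlap. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_RDB, &one, sizeof(one)) < 0) {
            perror("TCP_RDB (requires an RDB-patched kernel)");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        return (fd < 0 || enable_rdb(fd) < 0) ? 1 : 0;
    }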

References

[1] B. R. Opstad, “Taming Redundant Data Bundling: Balancing fairness and latency for redundant bundling in TCP,” Master’s thesis, University of Oslo, May 2015. (Download PDF)
[2] K. Evensen, A. Petlund, C. Griwodz, and P. Halvorsen, “Redundant bundling in TCP to reduce perceived latency for time-dependent thin streams,” IEEE Communications Letters, vol. 12, no. 4, pp. 324–326, April 2008.
[3] B. R. Opstad, J. Markussen, I. Ahmed, A. Petlund, C. Griwodz, and P. Halvorsen, “Latency and Fairness Trade-Off for Thin Streams using Redundant Data Bundling in TCP,” in Proc. IEEE Conference on Local Computer Networks (LCN), Clearwater Beach, FL, USA, Oct. 2015. (Download PDF)

Kenneth Klette Jonassen’s proposed CDG patch featured in lwn.net

Master student Kenneth Klette Jonassen has spent his thesis exploring congestion control for low latency and has ported the CAIA Delay-Gradient (CDG) algorithm from FreeBSD to Linux. His RFC posting of the CDG code to the netdev list got a lot of attention and was featured in last week’s LWN kernel update. His thesis documents significant differences between the behaviour of CDG in Linux and FreeBSD, most likely due to differences in default behaviour (like Linux implementing PRR) and the large difference in timer granularity between the two systems. Kenneth also updated and improved the Linux kernel’s RTT measurement code for SACK blocks.
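CDG later made it into mainline Linux as the tcp_cdg module, so it can be selected per connection with the standard TCP_CONGESTION socket option. A minimal sketch, assuming a kernel where the cdg module is available and permitted:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <string.h>
    #include <stdio.h>

    /* Select the CDG congestion control for one socket. Requires a kernel
     * where tcp_cdg is available, loaded, and listed as an allowed module
     * (net.ipv4.tcp_allowed_congestion_control). */
    static int use_cdg(int fd)
    {
        const char name[] = "cdg";

        if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, name, strlen(name)) < 0) {
            perror("TCP_CONGESTION");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        return (fd < 0 || use_cdg(fd) < 0) ? 1 : 0;
    }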

Kenneth’s work is an example of research going out to the public for further use and development. We’ll try to follow up on that!

Håkon Kvale Stensland successfully defended his PhD

On 27 February 2015, Håkon Kvale Stensland defended his PhD thesis “Processing Multimedia Workloads on Heterogeneous Multicore Architectures“. The defense took place at Simula Research Laboratory, Fornebu.

Abstract:

Processor architectures have been evolving quickly since the introduction of the central processing unit. For a very long time, one of the important means of increasing performance was to increase the clock frequency. However, in the last decade, processor manufacturers have hit the so-called power wall, with high heat dissipation. To overcome this problem, processors were designed with reduced clock frequencies but with multiple cores and, later, heterogeneous processing elements. This shift introduced a new challenge for programmers: Legacy applications, written without parallelization in mind, gain no benefits from moving to multicore and heterogeneous architectures. Another challenge for the programmers is that heterogeneous architecture designs are very different with respect to caches, memory types, execution unit organization, and so forth and, in the worst case, a programmer must completely rewrite the application to obtain the best performance on the new architecture.

Multimedia workloads, such as video encoding, are often time sensitive and interactive. These workloads differ from traditional batch processing workloads with no real-time requirements. This work investigates how to use modern heterogeneous architectures efficiently to process multimedia workloads. To do so, we investigate both simple and complex workloads on multiple architectures to learn about the properties of these architectures. When programming multimedia workloads, it is very important to know how the algorithms perform on the target architecture. In addition, achieving high performance on heterogeneous architectures is not a trivial task, often requiring detailed knowledge about the architecture. We therefore evaluate several optimizations so we can learn how best to write programs for these architectures and avoid potential pitfalls. We later use the knowledge gained to propose a framework design and language called Parallel Processing Graph (P2G). The P2G framework is designed for multimedia workloads and supports heterogeneous architectures. To demonstrate the feasibility of the framework, we construct a proof-of-concept implementation. Two simple workloads show that we can express multimedia workloads in the system. We also demonstrate the scalability of the designed solution.