Category Archives: Technology

C2Tag, a robust and accurate fiducial marker system for image-based localization from challenging images

Our paper on C2Tags has been accepted for publication at CVPR 2016.

C2Tags are a new approach to visual marker tracking that has been designed for localization in challenging environments, such as the film sets in our H2020 project POPART.

Like markers from a few earlier papers, C2Tags consist of concentric rings whose positions in space are reconstructed before tracking occurs. The new contribution of C2Tags lies in the detection algorithm, which tolerates considerable partial occlusion and intense motion blur and still locates and subsequently identifies the marker.

Examples of C2Tag resilience

For POPART, where we use C2Tags to track film camera movement on sets where natural feature detectors fail or where faster processing is required, this ability to handle fast motion and occlusion is a major game changer.

Paper Abstract

Fiducials offer reliable detection and identification of images of known planar figures in a view. They are used in a wide range of applications, especially when a reliable reference is needed to, e.g., estimate the camera movement in cluttered or textureless environments. A fiducial designed for such applications must be robust to partial occlusions, varying distances and angles of view, and fast camera movements. In this paper, we present a new fiducial system whose markers consist of concentric circles: relying on their geometric properties, the proposed system allows the position of the image of the circles’ common center to be detected accurately. Moreover, the different thicknesses of its rings can be used to encode the information associated with the marker, allowing its unambiguous identification. We demonstrate that the proposed fiducial system can be detected in very challenging conditions, and the experimental results show that it outperforms other recent fiducial systems.
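
The ring thicknesses carry the marker’s identity. Purely to illustrate that idea, here is a toy sketch of encoding an ID in relative ring radii and decoding it from noisy measurements; it is not the detection or identification algorithm from the paper, and the radii, number of rings and quantisation levels are all invented:

    # Toy sketch only: this is NOT the C2Tag detection or identification
    # algorithm from the paper. It just illustrates how a marker ID could be
    # encoded in, and decoded from, the relative radii of concentric rings.
    # All radii, the number of rings and the quantisation levels are invented.

    def encode_id(marker_id, n_rings=3, levels=4):
        """Map an integer ID (< levels**n_rings) to ring radii, normalised to the outer radius."""
        radii, value = [], marker_id
        step = 0.6 / (n_rings * levels)
        for i in range(n_rings):
            digit, value = value % levels, value // levels
            base = 0.2 + 0.6 * i / n_rings        # nominal position of ring i
            radii.append(base + digit * step)     # shift it by its digit
        return radii

    def decode_id(measured_radii, n_rings=3, levels=4):
        """Quantise measured radii back to digits and rebuild the ID."""
        step = 0.6 / (n_rings * levels)
        value = 0
        for i, r in enumerate(sorted(measured_radii)):
            base = 0.2 + 0.6 * i / n_rings
            digit = min(max(round((r - base) / step), 0), levels - 1)
            value += digit * levels ** i
        return value

    radii = encode_id(42)
    noisy = [r + 0.004 for r in radii]            # simulated measurement noise
    print(decode_id(noisy))                       # -> 42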

Redundant Data Bundling in TCP

Redundant Data Bundling (RDB) is a mechanism for TCP that aims to reduce the per-packet latency of traffic produced by interactive applications. The master’s thesis Taming Redundant Data Bundling: Balancing fairness and latency for redundant bundling in TCP [1] presents a new implementation in the Linux kernel, along with a detailed set of experiments that show the benefits of RDB. The thesis builds on the original work on RDB [2] and addresses some previously unresolved issues, making the new implementation a candidate for widespread deployment. The paper Latency and Fairness Trade-Off for Thin Streams using Redundant Data Bundling in TCP [3], presented at LCN 2015, describes the RDB mechanism in detail.

Paper Abstract

Time-dependent applications using TCP often send thin-stream traffic, characterised by small packets and high inter-transmission times. Retransmissions after packet loss can result in very high delays for such flows, as they often cannot trigger fast retransmit. Redundant Data Bundling is a mechanism that preempts the experience of loss for a flow by piggybacking unacknowledged segments with new data as long as the total packet size is lower than the flow’s maximum segment size. Although successful at reducing retransmission latency, this mechanism had design issues leaving it open for abuse, effectively making it unsuitable for general Internet deployment. In this paper, we have redesigned the RDB mechanism to make it safe for deployment. We improve the trigger for when to apply it and evaluate its fairness towards competing traffic. Extensive experimental results confirm that our proposed modifications allow for inter-flow fairness while maintaining the significant latency reductions of the original RDB mechanism.

Thin streams and latency reducing mechanisms

Applications like online games, remote control systems, high-frequency trading and sensor networks benefit greatly from low per-packet latency. They have in common that they mostly produce traffic with thin-stream characteristics: small packet payloads and high inter-transmission times (ITTs). Table 1 shows examples of traffic properties for applications producing thin streams.

Table 1: Traffic properties for applications producing thin streams.

Application        |  Payload size (B)  |   Packet inter-arrival time (ms)     | Avg. BW requirement
                   |  avg   min    max  |  avg   med   min    max    1%    99% |   pps      bps
VNC (from client)  |    8     1    106  |   34     8     0   5451     0    517 |  29.412    17K
Skype (2 users)    |  236    14   1267  |   34    40     0   1671     4     80 |  29.412    69K
SSH text session   |   48    16    752  |  323   159     0  76610    32   3616 |   3.096    2825
Anarchy Online     |   98     8   1333  |  632   449     7  17032    83   4195 |   1.582    2168
World of Warcraft  |   26     6   1228  |  314   133     0  14855     0   3785 |   3.185    2046
Age of Conan       |   80     5   1460  |   86    57     0   1375    24    386 |  11.628     12K


Some existing mechanisms in the Linux kernel aimed at reducing latencies for TCP are:

  • Thin Linear Timeouts (LT)
  • Modified fast retransmit (mFR)
  • Early Retransmit (ER)
  • Tail Loss Probe (TLP)

These mechanisms are reactive, i.e. they modify how the TCP sender reacts to loss or possible loss by triggering retransmits faster. The problem is that, even in the best case, they still require at least an RTT of extra delay for the loss signal to reach the sender host before the data is retransmitted. The exception is when the TLP packet itself retransmits the lost packet.

Read more about time-dependent networking (TDN) and earlier work on thin streams and interactive applications.

Redundant Data Bundling

In contrast to the reactive mechanisms mentioned above, RDB is a proactive mechanism that tries to prevent unnecessary delays when packet loss occurs. By bundling already sent data with packets containing new data, RDB can be seen as a retransmission mechanism that retransmits segments before any loss signal is present.

An important property of RDB is that it does not produce extra packets on the network, but instead utilizes the “free” space in small TCP packets produced by interactive applications.


Figure 1: Example of an Ethernet frame for a TCP packet with 100 bytes payload.

TCP requires that the payload in each packet is sequential; however, a receiver must accept any new data in a packet even if some of the data has already been received. RDB takes advantage of this by bundling already sent (but un-ACKed) data segments into packets with new data.
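
A minimal sketch of this bundling rule (not the kernel implementation from the thesis): before each transmission, the sender prepends as many un-ACKed bytes as fit in front of the new data while keeping the payload within one MSS. The 1460-byte MSS and the helper below are assumptions for illustration:

    # Minimal sketch of the RDB bundling rule, not the kernel implementation.
    # Before sending new data, prepend as much already-sent-but-unacknowledged
    # data as fits, keeping the total payload within one MSS (1460 is assumed).

    MSS = 1460

    def build_rdb_packet(unacked, new_data, mss=MSS):
        """Return (payload, redundant_bytes) for the next packet to transmit."""
        free = mss - len(new_data)
        if free <= 0:
            return new_data, 0                    # no room to bundle anything
        bundled = unacked[-free:] if len(unacked) > free else unacked
        # The payload stays sequential: the tail of the un-ACKed stream ends
        # exactly where new_data starts.
        return bundled + new_data, len(bundled)

    # Example: three 365-byte application writes with nothing ACKed yet,
    # as in the Figure 3 scenario further down.
    unacked = b""
    for i in range(3):
        segment = bytes([i]) * 365
        payload, redundant = build_rdb_packet(unacked, segment)
        print(f"packet {i + 1}: {len(payload)} bytes payload, {redundant} redundant")
        unacked += segment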

Figure 2 shows an example with four separate data segments (S1-S4) illustrating how RDB organizes the data segments in each packet.



Figure 2: Example showing how RDB bundles the data of previously sent packets in packets with new data.


Note about “free” or unused space in Ethernet frames
For small TCP packets with less than one MSS worth of payload, the Ethernet frame will not reach the maximum frame size. “Free” in this context refers to the additional space that can be used without increasing the number of Ethernet frames sent through the network.
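
As a back-of-the-envelope illustration of the numbers behind Figure 1 (header sizes assumed to be the common minimums, with no TCP options):

    # Back-of-the-envelope numbers for Figure 1. Header sizes are the usual
    # minimums (14 B Ethernet, 20 B IP, 20 B TCP, no options) and the 1460 B
    # MSS assumes a 1500 B MTU; both are assumptions, not values from the post.
    ETH_HDR, IP_HDR, TCP_HDR, MSS = 14, 20, 20, 1460

    payload = 100                                   # small thin-stream write
    frame = ETH_HDR + IP_HDR + TCP_HDR + payload    # 154 bytes on the wire
    full_frame = ETH_HDR + IP_HDR + TCP_HDR + MSS   # 1514 bytes for a full MSS
    print(f"frame: {frame} B, full frame: {full_frame} B, unused: {MSS - payload} B")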

RDB in action

Figure 3 shows a scenario where an application performs write calls to the TCP socket with 365 bytes per call. When two consecutive packets are lost, the lost segments are “recovered” by the third packet, which bundles the segments that were sent in the previous two packets.



Figure 3: Packet timeline of a TCP thin stream using RDB.


Limiting RDB to thin streams

To ensure that only thin streams can use RDB, a mechanism called Dynamic Packets in Flight Limit (DPIFL) is used. By specifying a minimum allowed ITT, a maximum packets-in-flight limit is calculated dynamically based on the current RTT measurements. Figure 4 shows how the DPIFL is calculated for minimum ITT limits of 5, 10, 20 and 30 ms at different RTTs.
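
The thesis defines the exact formula; as a plausible sketch, assuming the limit is simply the number of packets a stream sending at most once per minimum ITT can have outstanding during one RTT:

    # Sketch of a dynamic packets-in-flight limit (DPIFL). The thesis defines
    # the exact formula; here we assume the limit is simply how many packets a
    # stream that sends at most once per min_itt can have in flight in one RTT.
    import math

    def dpifl(rtt_ms, min_itt_ms):
        return math.ceil(rtt_ms / min_itt_ms)

    SPIFL = 3        # static limit, like the kernel's current thin-stream test

    for rtt in (50, 100, 150, 200):
        limits = {itt: dpifl(rtt, itt) for itt in (5, 10, 20, 30)}
        print(f"RTT {rtt:3d} ms: DPIFL {limits}, SPIFL {SPIFL}")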


Figure 4: Example of DPIFL for different minimum ITT values

For comparison, a Static Packets in Flight Limit (SPIFL) of 3 is included in figure 4 to show that the thin-stream test currently used in the Linux kernel does not adjust the limit based on RTT measurements.

Experiments and results

A wide range of tests has been performed to evaluate the effects of RDB with regard to latency reduction, as well as its fairness towards competing network traffic. The experiments were performed in a testbed with Linux (Debian) hosts, as depicted in figure 5.



Figure 5: Testbed setup

Tests with uniform loss rate

A set of tests was performed where netem on BRIDGE2 was configured with a uniform loss rate. These tests illustrate the difference in latency between regular TCP and RDB with different bundling rates. Figure 6 shows the result from one of the test sets, with a uniform loss rate of 10%. The plot shows the ACK latency for three separate tests, where each test started 20 TCP streams: one test with only regular TCP streams, and two tests with 10 regular TCP streams and 10 RDB streams, where a limit was imposed on when redundant data could be bundled. For RDB SPIFL 3, each RDB stream was allowed to bundle only when there were three or fewer packets in flight (PIFs). With an ITT of 30 ms and a network delay of 75 ms in each direction (an RTT of 150 ms), each stream has 5 PIFs as long as it is not congestion limited.
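
A quick sanity check of these packets-in-flight numbers (the ITT, RTT and SPIFL values are taken from the description above; the conclusion about when bundling is permitted is our reading of the setup):

    # Sanity check of the packets-in-flight (PIF) numbers quoted above.
    # 30 ms ITT and 150 ms RTT give 5 packets in flight in steady state, so an
    # SPIFL of 3 only permits bundling when the flow has fewer outstanding
    # packets than usual (our reading of the setup, e.g. right after losses).
    itt_ms, rtt_ms = 30, 150
    pifs = rtt_ms // itt_ms
    print(f"steady-state PIFs: {pifs}")
    for spifl in (3, 7):
        allowed = "allowed" if pifs <= spifl else "not allowed"
        print(f"SPIFL {spifl}: bundling {allowed} in steady state")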

Figure 6: Experiment with uniform loss

Comparing the latency of the different stream types, we see a significant difference when RDB is enabled. The RDB streams that were allowed to bundle while PIFs <= 7 have significantly better ACK latency: 90% of the TCP segments see no extra delay. For the regular TCP streams, almost 60% of the TCP segments have a higher ACK latency than the ideal 150 ms, even though only 10% of the packets are lost. This is due to head-of-line blocking, which delays not only the lost segment but every segment transmitted after it, until a retransmission arrives at the receiver side.
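
A rough back-of-the-envelope estimate, ours rather than the paper’s, of why so many segments are delayed at a 10% loss rate:

    # Back-of-the-envelope estimate (ours, not from the paper) of why roughly
    # 60% of segments are delayed at a 10% loss rate: each lost segment blocks
    # itself plus the segments sent during the >= 1 RTT it takes to recover.
    loss_rate = 0.10
    rtt_ms, itt_ms = 150, 30

    blocked_per_loss = 1 + rtt_ms // itt_ms     # lost segment + those behind it
    print(f"~{loss_rate * blocked_per_loss:.0%} of segments see extra delay")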

Latency tests with cross traffic over a shared bottleneck

In these experiments, a set of RDB-enabled thin streams and regular TCP thin streams are tested while competing against greedy TCP flows over a shared bottleneck. Each of the following plots shows the results from two test runs: one where 20 TCP streams (TCP Reference) compete against 5 greedy TCP flows, and another where 10 RDB-enabled thin streams and 10 TCP thin streams compete against 5 greedy TCP flows. The shared bottleneck is configured with a rate limit of 5 Mbit/s and a pfifo queue of one bandwidth-delay product (63 packets).
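
A quick check of that queue length, assuming the same 150 ms RTT as in the uniform-loss experiments and 1500-byte packets:

    # One bandwidth-delay product at the bottleneck, assuming the same 150 ms
    # RTT as in the uniform-loss tests and 1500-byte packets (both assumptions).
    rate_bps, rtt_s, pkt_bytes = 5_000_000, 0.150, 1500
    bdp_bytes = rate_bps * rtt_s / 8
    print(f"BDP: {bdp_bytes:.0f} B = {bdp_bytes / pkt_bytes:.1f} packets")   # ~62.5 -> 63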

Figure 7: Experiment with no bundling limit

The RDB streams plotted in figure 7 are allowed to bundle any available previous segments as long as the payload does not exceed one MSS. Looking at the latencies, we see a significant difference between the RDB streams and the TCP streams. However, comparing the latencies to the TCP reference values, we see that the increased bandwidth due to the redundant data causes extra queuing delays, which may not be desirable.

Figure 8: Experiment with bundling limit

Figure 8 shows the results from a test with the same setup as in figure 7, except that the redundancy level of the RDB streams is limited: they are allowed to bundle only one previous segment with each packet. The results show that, with the reduced redundancy level, the RDB streams still have much lower latency than the competing TCP streams, while avoiding the extra queuing delay seen in the previous test.


Conclusion

The RDB mechanism uses a proactive approach to reduce latencies for time-dependent traffic over TCP. By continually bundling (retransmitting) data instead of waiting for a loss signal before triggering a retransmission, RDB avoids the extra delay of at least one RTT that a loss signal needs to reach the sender host. This results in significantly reduced latencies for thin-stream traffic that experiences sporadic packet loss.

Main benefits of RDB:

  • RDB is a backwards-compatible TCP mechanism that requires sender-side modifications only. This means that, e.g., a Linux server can enable RDB on connections to unmodified TCP receivers. In this case, only the data sent from the server to the client benefits from RDB, while the data on the reverse path uses regular TCP.
  • RDB does not produce extra packets on the network; it achieves the reduced latencies by using packets that are already scheduled for transmission.
  • RDB reduces latencies for RDB-enabled streams without unreasonable negative effects on competing traffic.

References

[1] B. R. Opstad, “Taming Redundant Data Bundling: Balancing Fairness and Latency for Redundant Bundling in TCP,” Master’s thesis, University of Oslo, May 2015. (Download PDF)
[2] K. Evensen, A. Petlund, C. Griwodz, and P. Halvorsen, “Redundant Bundling in TCP to Reduce Perceived Latency for Time-Dependent Thin Streams,” IEEE Communications Letters, vol. 12, no. 4, pp. 324–326, April 2008.
[3] B. R. Opstad, J. Markussen, I. Ahmed, A. Petlund, C. Griwodz, and P. Halvorsen, “Latency and Fairness Trade-Off for Thin Streams using Redundant Data Bundling in TCP,” in Proc. IEEE Conference on Local Computer Networks (LCN), Clearwater Beach, Florida, USA, Oct. 2015. (Download PDF)

Kenneth Klette Jonassen’s proposed CDG patch featured in lwn.net

Master student Kenneth Klette Jonassen has spent his thesis exploring congestion controls for low latency and has ported the CAIA Delay-Gradient (CDG) algorithm from FreeBSD to Linux. His RFC posting of the CDG code to the netdev list got a lot of attention and was featured in the LWN kernel update last week. His thesis documents significant differences between the behaviour of CDG in Linux and FreeBSD, most likely due to differences in default behaviour (like Linux implementing PRR) and the large difference in timer granularity between the two systems. Kenneth also updated and improved the Linux kernel’s RTT measurement code for SACK blocks.

Kenneth’s work is research going out to the public for further use and development. We’ll try to follow up on that!

New Collaboration with AHO

We have recently started a collaboration with Kjetil Nordby from the Ocean Industries Concept Lab at the Oslo School of Architecture and Design (AHO).

One of their projects is the Ulstein Bridge Vision. The visualisation shows how multimodal interaction may be applied on future offshore service vessels. The concepts are developed as part of the Ulstein Bridge Concept research project.
Videos about their work:

http://vimeo.com/48453519

http://vimeo.com/48441245

The MPG Demos @ ACM MMSys 2015

This year our group was able to get several demos accepted at the ACM MMSys Conference 2015 in Portland. The demos are:

Scaling Virtual Camera Services to a Large Number of Users. In this demo we show how a PTZ camera system can be used without consuming much bandwidth. To reduce bandwidth usage, we adaptively lower the quality in regions where the data is not needed. Video: http://goo.gl/tQwDT1

Energy Efficient Video Encoding Using the Tegra K1 Mobile Processor. This demonstration shows how hardware and software configuration impacts the running power usage of a live video encoder. The encoder, Codec 63, is architecturally similar to H.264 and Google’s VP8 and runs on a Tegra K1 mobile processor. Participants can offload video processing to a GPU, change CPU and GPU operating frequency, migrate between CPU clusters and turn off CPU cores. The effects of these settings in terms of achieved frame rate, power usage and energy per encoded frame are displayed live. How energy-efficiently can you encode yourself? Video: http://goo.gl/Ngt58L

How much delay is there really in current games? This demonstration uses a typical gaming setup wired to an oscilloscope to show how long the total local delay is. Participants can also bring their own computers and games so that they can measure the delays in their games or other software.

Expert Driven Semi-Supervised Elucidation Tool for Medical Endoscopic Videos. In this demo we present a semi-supervised annotation tool for medical experts. The tool is meant to help collect medical data for machine learning and computer vision approaches. To this end, we combine lightweight, time-efficient manual annotations with object-tracking algorithms. Video: http://goo.gl/364OCA


Multimedia and Healthcare

In the last month, our group has published several papers about research that can help improve the way doctors use multimedia technology in their work. The most recently accepted publication is about an elucidation tool for endoscopic videos, which will be presented at the ACM MMSys Conference 2015 in Portland. An overview of our work in this area can be found at: http://scholar.google.com/citations?user=Vd_ApDoAAAAJ&hl=en


New hardware from Nvidia in the lab!

Nvidia has kindly sent us a couple of Jetson TK1 development boards!

The Jetson boards use the new Tegra K1 SoC from Nvidia with what Nvidia calls a 4-Plus-1 quad-core ARM Cortex-A15. This means that you have four high-performance ARM cores and one low-power companion core. The really exciting news with the Tegra K1 is the GPU: the chip has a 192-core full Kepler SMX, and the board supports CUDA, OpenCL and OpenGL, the same APIs as a standard Nvidia desktop GPU. For more information about the Tegra K1, check this web page at Nvidia.

Jetson TK1 development kit from Nvidia


The boards come with Nvidia’s Linux for Tegra (L4T) distribution which is based on Ubuntu. More specifications about the board can be found here and here.

For access to the boards, contact Håkon.
