Abstract: HTTPS encrypted traffic can leak information about the underlying content through statistical properties of traffic flows such as packet lengths and timing, opening the door to traffic fingerprinting attacks. Recently proposed traffic fingerprinting attacks leverage Convolutional Neural Networks (CNNs) and record very high accuracies, undermining state-of-the-art mitigation techniques. In this paper, we methodically dissect such CNNs with the objectives of building more accurate and scalable traffic classifiers and of understanding their inner workings in order to develop effective mitigation techniques. Through experiments with three datasets, we show that website fingerprinting CNNs focus mainly on the initial parts of traces rather than on longer windows of continuous uploads or downloads. Next, we show that traffic fingerprinting CNNs exhibit transfer-learning capabilities, allowing identification of new websites with less data. Finally, we show that traffic fingerprinting CNNs outperform RNNs because of their resilience to random shifts in the data caused by varying network conditions.
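As a rough illustration of the kind of classifier dissected in this work, the sketch below (PyTorch) shows a minimal 1D CNN over fixed-length sequences of packet sizes; the trace length, layer sizes and preprocessing are hypothetical and not the architecture studied in the paper.

```python
# Minimal 1D CNN sketch for trace classification (assumed input: one channel of
# signed packet sizes per trace; shapes are illustrative only).
import torch
import torch.nn as nn

class FingerprintCNN(nn.Module):
    def __init__(self, trace_len=5000, n_sites=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=8, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        with torch.no_grad():  # infer the flattened feature size once
            flat = self.features(torch.zeros(1, 1, trace_len)).numel()
        self.classifier = nn.Linear(flat, n_sites)

    def forward(self, x):                 # x: (batch, 1, trace_len)
        return self.classifier(self.features(x).flatten(1))

model = FingerprintCNN()
logits = model(torch.randn(8, 1, 5000))   # 8 dummy traces -> (8, 100) logits
```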
Abstract: While end-to-end encryption brings security and privacy to end-users, it makes legacy solutions such as Deep Packet Inspection ineffective. Despite recent work on machine learning-based encrypted traffic classification, these new techniques would require, if they were to be deployed in real enterprise-scale networks, enhanced flow sampling due to the sheer volume of traffic traversing such networks. In this paper, we propose a holistic architecture that copes with encryption and multi-Gbps line rates by sampling and sketching flow statistics, allowing network operators both to accurately estimate the flow size distribution and to identify the nature of VPN-obfuscated traffic. With over 6000 video traffic traces, we show that it is possible to achieve 99% accuracy for service provider classification even with sampled, possibly inaccurate, data. We also deploy our solution on an operational enterprise-scale network, leveraging kernel bypassing to demonstrate its capability to efficiently sample live traffic for analytics.
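A count-min sketch is one standard way to keep approximate per-flow statistics at line rate; the toy version below is only a hypothetical stand-in for the sketching stage described above, not the paper's exact data structure or parameters.

```python
# Minimal count-min sketch over sampled packets (width/depth are illustrative).
import random

class CountMinSketch:
    def __init__(self, width=2048, depth=4, seed=1):
        rnd = random.Random(seed)
        self.width, self.depth = width, depth
        self.salts = [rnd.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, flow_key):
        for row, salt in enumerate(self.salts):
            yield row, hash((salt, flow_key)) % self.width

    def add(self, flow_key, n_bytes):
        for row, col in self._cells(flow_key):
            self.table[row][col] += n_bytes

    def estimate(self, flow_key):
        # The minimum over rows upper-bounds collisions' inflation.
        return min(self.table[row][col] for row, col in self._cells(flow_key))

sketch = CountMinSketch()
sketch.add(("10.0.0.1", "8.8.8.8", 443), 1500)   # one sampled packet
print(sketch.estimate(("10.0.0.1", "8.8.8.8", 443)))
```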
Abstract: The vulnerabilities of traditional blockchains have been demonstrated on multiple occasions. Various companies are now moving towards Proof-of-Authority (PoA) blockchains with more conventional Byzantine fault tolerance, where a known set of n permissioned sealers, among which no more than t are Byzantine, seal blocks that include user transactions. Despite their wide adoption, these protocols have not been proved correct.
In this paper, we present the Cloning Attack against the two most widely deployed PoA implementations of Ethereum, namely Aura and Clique. The Cloning Attack consists of one sealer cloning its public-private key pair into two distinct Ethereum instances that communicate with distinct groups of sealers. To identify their vulnerabilities, we first specify the corresponding algorithms. We then deploy one testnet for each protocol and demonstrate the success of the attack with only one Byzantine sealer. Finally, we propose counter-measures that prevent an adversary from double spending and derive the number of sealers needed to decide a block, as a function of n and t, for both Aura and Clique to be safe.
Abstract: For network analysts, understanding how traffic flows through a network is crucial to network management and forensics such as network monitoring, vulnerability assessment and defence.
In order to understand how traffic flows through a network, network analysts typically access multiple, disparate data sources and mentally fuse this information.
Providing some form of automated support is crucial for network management. However, information about the quality of the network data sources is essential in order to build analysts' trust in automated tools.
This paper presents SydNet, a novel Linked Data quality assessment framework which allows analysts to define quality dimensions and metrics which provide an accurate reflection of the quality of the data sources.
The SydNet architecture also provides a number of novel fusion heuristics which can be used to fuse data from various network data sources.
We demonstrate the utility of the SydNet architecture using CAIDA longitudinal topological data from the last 24 months and show that our approach is able to detect dataset quality anomalies that warrant further investigation.
Counterfeit apps impersonate existing popular apps in an attempt to misguide users. Many counterfeits can be identified once installed; however, even a tech-savvy user may struggle to detect them before installation. In this paper, we propose a novel approach of combining content embeddings and style embeddings generated from pre-trained convolutional neural networks to detect counterfeit apps. We present an analysis of approximately 1.2 million apps from Google Play Store and identify a set of potential counterfeits for the top-10,000 apps. Under conservative assumptions, we were able to find 2,040 potential counterfeits that contain malware in a set of 49,608 apps that showed high similarity to one of the top-10,000 popular apps in Google Play Store. We also find 1,565 potential counterfeits requesting at least five more dangerous permissions than the original app and 1,407 potential counterfeits embedding at least five extra third-party advertisement libraries.
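The core idea of combining two kinds of embeddings and flagging near-duplicates can be sketched as follows; the vector sizes, similarity measure and threshold below are assumptions for illustration, not the paper's exact pipeline.

```python
# Hypothetical sketch: normalise and concatenate content and style vectors
# (assumed to come from a pre-trained CNN), then flag highly similar icons.
import numpy as np

def combined_embedding(content_vec, style_vec):
    # L2-normalise each part so neither dominates, then concatenate.
    c = content_vec / np.linalg.norm(content_vec)
    s = style_vec / np.linalg.norm(style_vec)
    return np.concatenate([c, s])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

original  = combined_embedding(np.random.rand(512), np.random.rand(256))
candidate = combined_embedding(np.random.rand(512), np.random.rand(256))
if cosine(original, candidate) > 0.9:       # hypothetical similarity threshold
    print("flag as potential counterfeit")
```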
Blockchain has become one of the most attractive technologies for applications, with a wide range of deployments in areas such as production, economics, and banking. Under the hood, blockchain technology is a type of distributed database that supports untrusted parties. In this paper we focus on Hyperledger Fabric, the first blockchain in the market tailored for a private environment, allowing businesses to create a permissioned network. Hyperledger Fabric implements a PBFT consensus in order to maintain a non-forking blockchain at the application level.
We deployed this framework over a wide area network between France and Germany in order to evaluate its performance when potentially large network delays are observed. Overall, we found that when the network delay increases significantly (i.e., up to 3.5 seconds at the network layer between two clouds), the blocks added to our blockchain exhibit an offset of up to 134 seconds after the 100th block from one cloud to another.
Thus, by delaying block propagation, we demonstrate that Hyperledger Fabric does not provide sufficient consistency guarantees to be deployed in critical environments. Our work is the first to evidence the negative impact of network delays on a PBFT-based blockchain.
Abstract: The proliferation of smart devices has led to an exponential growth in digital media consumption, especially mobile video for content marketing. The vast majority of the associated Internet traffic is now end-to-end encrypted, and while encryption provides better user privacy and security, it has made network surveillance an impossible task. The result is an unchecked environment for exploiters and attackers to distribute content such as fake, radical and propaganda videos.
Recent advances in machine learning techniques have shown great promise in characterising encrypted traffic captured at the end points. However, video fingerprinting from passively listening to encrypted traffic, especially wireless traffic, has been reported as a challenging task due to the difficulty of distinguishing retransmissions and multiple flows on the same link. We show the potential of fingerprinting videos by passively sniffing WiFi frames in the air, even without connecting to the WiFi network. We have developed Multi-Layer Perceptron (MLP) and Recurrent Neural Network (RNN) models that are able to identify streamed YouTube videos from a closed set by sniffing WiFi traffic encrypted at both the Media Access Control (MAC) and Network layers. We compare these models to the state-of-the-art wired traffic classifier based on Convolutional Neural Networks (CNNs), and show that our models obtain similar results while requiring significantly less computational power and time (approximately a threefold reduction).
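For intuition, a lightweight MLP over per-window traffic features might be set up roughly as below; the feature layout (byte counts per time slot), set size and hyper-parameters are illustrative assumptions, not the models evaluated in the paper.

```python
# Minimal MLP sketch for closed-set video identification from traffic features.
from sklearn.neural_network import MLPClassifier
import numpy as np

n_videos, n_slots = 20, 180          # hypothetical closed set and feature length
X = np.random.rand(400, n_slots)     # placeholder for sniffed-traffic features
y = np.random.randint(0, n_videos, 400)

clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=200)
clf.fit(X, y)
print(clf.predict(X[:5]))            # predicted video IDs for five traces
```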
Abstract: Recently, several works conjectured that mainstream blockchains are vulnerable to various network attacks. All these attacks amount to showing that the assumptions of these blockchains can be violated in theory or, at best, under simulation. Unfortunately, previous results typically omit both the nature of the network on which the blockchain code runs and whether the blockchains are private, consortium or public.
In this paper, we study the public Ethereum blockchain as well as consortium and private blockchains and quantify the feasibility of man-in-the-middle and double spending attacks against them. To this end, we list important properties of the Ethereum public blockchain topology, we deploy VMs with a constrained CPU quantum to mimic the top-10 mining pools of Ethereum, and we develop full-fledged attacks that first partition the network through BGP hijacking or ARP spoofing before issuing a Balance Attack to steal coins. Our results demonstrate that attacking Ethereum is remarkably devastating in a consortium or private context, as the adversary can multiply her digital assets by 200,000× in 10 hours through BGP hijacking, whereas it would be almost impossible in a public context.
Abstract: Covert channel communication allows two entities (usually referred to as Alice and Bob) to communicate secretly, whether or not a third party (aka the Warden) is able to monitor the traffic. Recently, MPTCP has been introduced to augment endpoint communication while leveraging the multi-homing capabilities of end-devices. This multi-network capability would logically augment TCP/IP features for covert channelling. In this paper, we introduce and discuss different possible storage covert channels in MPTCP and how TCP/IP techniques could be extended to MPTCP. Through the introduction of a new method for estimating covert channel capacity in MPTCP, we show that MPTCP storage covert channels increase the capacity available with TCP but fail to significantly enhance its undetectability. As a result, we expect MPTCP covert channels to attract the attention of hackers and the research community very soon.
A large body of research work has led to the conjecture that highly efficient I/O processing at user level necessarily violates protection. In this paper, we debunk this myth by introducing DLibOS, a new paradigm that consists of distributing a library OS on specialized cores to achieve both performance and protection at user level. Its main novelty consists of leveraging the network-on-chip to allow hardware message passing, rather than context switches, for communication between different address spaces.
To demonstrate the feasibility of our approach, we implement a driver and a network stack at user level on a Tilera many-core machine. We define a novel asynchronous socket interface and partition the memory such that reception, transmission and the application each update isolated partitions. The main drawback is perhaps the incompatibility with the BSD interface; however, our high-performance results of 4.2 and 3.1 million requests per second, obtained on a webserver and the Memcached application respectively, confirm the relevance of our design decisions. Finally, we compare DLibOS against a non-protected user-level network stack and show that protection comes at a negligible cost.
Abstract: Multi-cloud promises to substantially improve fault-tolerance by tolerating disasters affecting one provider. Unfortunately, multi-cloud solutions are still immature and none of them is fully fledged. Their main impediment is the lack of network services: to date, it remains impossible for a customer to set up and control a multi-cloud network, which drastically limits its possibilities. Moreover, manually inter-connecting multiple clouds from various providers is challenging: each cloud provider may offer dissimilar services and incompatible APIs.
In this paper, we present the first reconfigurable inter-cloud network, called Stratosphere. Stratosphere combines recent achievements in the context of container deployment and software defined networking (SDN) to build an SDN-based IP overlay of software containers across providers. Stratosphere aims at dynamically re-routing traffic based on service guarantees, congestion, or failures. We evaluate Stratosphere by reconfiguring the network between the major cloud providers, namely Amazon EC2, Microsoft Azure, and Google Cloud. The comparison against the Docker Swarm baseline indicates that this unique reconfiguration feature presents an overhead of only 1% when not used but can improve bandwidth significantly when used.
Abstract: Leveraging multi-path transmission in an energy-efficient manner is of great importance for mobile devices in heterogeneous wireless networks. Recently, Multi-path TCP (MPTCP) has been introduced as a potential solution that could leverage this path diversity, but making it energy efficient depends not only on the end-user's observed interface capacity but also on the other competitors' decisions. We discuss the paradox of energy saving in MPTCP for mobile devices. We then propose a new algorithm to enhance MPTCP energy efficiency in a resource-shared wireless network context by exploiting a newly introduced Q-learning framework. Based on large-scale simulations, we demonstrate that our proposed algorithm can save up to 36% energy compared to vanilla MPTCP.
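To make the learning loop concrete, the toy tabular Q-learning update below chooses which interfaces a device uses; the states, actions and the reward trading throughput against energy are hypothetical and not the paper's actual model.

```python
# Toy tabular Q-learning for interface selection (illustrative only).
import random
from collections import defaultdict

ACTIONS = ["wifi_only", "lte_only", "both"]
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = defaultdict(float)                       # (state, action) -> value

def choose(state):
    if random.random() < eps:                # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One illustrative step: reward = throughput gained minus energy spent.
state, action = "congested", choose("congested")
reward = 5.0 - 2.0
update(state, action, reward, "uncongested")
```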
Dynamic Adaptive Streaming over HTTP (DASH) is one of the most popular ways to stream videos at present. In this work, we propose a DASH player energy-aware plugin (eDASH) for mobile devices which helps reduce the battery consumption of the device. The eDASH player utilises a novel bitrate and video brightness adaptation algorithm to determine the next chunk to download. This algorithm utilises an energy-aware QoE model which factors in the power consumption of the device in conjunction with existing bitrate adaptation logic to determine the next chunk. We also propose a new DASH architecture which could be easily integrated with the existing one. Macro-benchmarking of the energy consumption of a mobile device while streaming and playing back video is conducted to obtain energy profiles of various video qualities. This energy data is then used along with real-world network traces to drive simulations that evaluate the energy savings achievable with eDASH. We observe that up to 45% energy savings can be achieved with minimal reduction in QoE. We also find that up to 80% data transfer savings can be achieved with an eDASH client.
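A minimal sketch of an energy-aware chunk-selection rule is shown below: pick the (bitrate, brightness) pair that maximises a QoE term minus a weighted energy term. The utility shape, energy table and weight are illustrative assumptions, not eDASH's actual model.

```python
# Hypothetical energy-aware chunk selection for an adaptive player.
import math

ENERGY_J = {                     # per-chunk energy for (bitrate, brightness), assumed
    (1_000_000, 1.0): 12.0, (1_000_000, 0.7): 9.5,
    (4_000_000, 1.0): 18.0, (4_000_000, 0.7): 14.0,
}

def utility(bitrate, brightness, energy_weight=0.2):
    qoe = math.log(bitrate) + 0.5 * brightness   # crude stand-in for a QoE model
    return qoe - energy_weight * ENERGY_J[(bitrate, brightness)]

def next_chunk(feasible_bitrates):
    options = [(b, br) for (b, br) in ENERGY_J if b in feasible_bitrates]
    return max(options, key=lambda o: utility(*o))

print(next_chunk({1_000_000, 4_000_000}))    # e.g. throughput allows both bitrates
```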
Abstract: Over the past few years, a number of black-hat marketplaces have emerged that facilitate access to reputation manipulation services, including the sale of fake Facebook likes, fraudulent search engine optimization (SEO), and bogus Amazon reviews. In order to deploy effective technical and legal countermeasures, it is important to understand how these black-hat marketplaces operate: what kind of services are offered? who is selling? who is buying? what are they buying? who is more successful? why are they successful? To this end, this paper presents a detailed micro-economic analysis of a popular online black-hat marketplace, namely, SEOClerks.com. As the website provides non-anonymized transaction information, we set out to analyze the selling and buying behavior of individual users, propose a strategy to identify key users, and study their tactics as compared to other (non-key) users. We find that key users: (1) are mostly located in Asian countries, (2) are focused more on selling black-hat SEO services, (3) tend to list more lower-priced services, and (4) sometimes buy services from other sellers and then sell them at higher prices. Finally, we discuss the implications of our findings with respect to designing robust countermeasures as well as devising effective economic and legal intervention strategies against marketplace operators and key users.
Abstract: Multipath forwarding consists of using multiple paths simultaneously to transport data over the network. While most such techniques require endpoint modifications, we investigate how multipath forwarding can be done inside the network, transparently to endpoint hosts. With such a network-centric approach, packet reordering becomes a critical issue as it may cause severe performance degradation. We present a Software Defined Network architecture which automatically sets up multipath forwarding, including solutions for reordering and performance improvement, both at the sending side, through multipath scheduling algorithms, and at the receiving side, by re-sequencing out-of-order packets in a dedicated in-network buffer. We implemented a prototype with commonly available technology and evaluated it in both emulated and real networks. Our results show consistent throughput improvements, thanks to the use of aggregated path capacity. We give comparisons to Multipath TCP, where we show our approach can achieve similar performance while offering the advantage of endpoint transparency.
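The re-sequencing idea can be illustrated with a simple buffer that holds out-of-order packets until the gap is filled and then releases them in order; this is a generic sketch, not the in-network prototype described above.

```python
# Simple re-sequencing buffer sketch (sequence numbers assumed consecutive from 0).
import heapq

class ResequencingBuffer:
    def __init__(self):
        self.expected = 0          # next sequence number to release
        self.heap = []             # out-of-order packets, ordered by seq

    def push(self, seq, packet):
        heapq.heappush(self.heap, (seq, packet))
        released = []
        while self.heap and self.heap[0][0] == self.expected:
            released.append(heapq.heappop(self.heap)[1])
            self.expected += 1
        return released            # packets that can now be forwarded in order

buf = ResequencingBuffer()
print(buf.push(1, "p1"))   # []               (packet 0 still missing)
print(buf.push(0, "p0"))   # ['p0', 'p1']     (gap filled, both released)
```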
Abstract: SDN efficiency is driven by the ability of controllers to process small packets based on a global view of the network. The goal of such controllers is thus to treat new flows coming from hundreds of switches in a timely fashion. In this paper, we show that this ideal remains out of reach through the most extensive evaluation of SDN controllers to date. We evaluated five state-of-the-art SDN controllers and discovered that the most efficient one spends a fifth of its time in packet serialization. More dramatically, we show that this limitation is inherent to the object-oriented design principle of these controllers. They all treat each single packet as an individual object, a limitation that induces an unaffordable per-packet overhead. To rule out the hardware as the cause of our results, we ported these controllers to a network-efficient architecture, Tilera, and observed even worse performance. We thus argue for an in-depth rethinking of the design of SDN controllers into lower-level software that leverages both operating system optimizations and modern hardware features.
Abstract: Many engineering students at third-level institutions across the world will not have the advantage of using real-world experimentation equipment, as the infrastructure and resources required for this activity are too expensive. This paper explains how the FORGE (Forging Online Education through FIRE) FP7 project transforms Future Internet Research and Experimentation (FIRE) testbed facilities into educational resources for the eLearning community. This is achieved by providing a framework for remote experimentation that supports easy access to and control of testbed infrastructure for students and educators. Moreover, we identify a list of recommendations to support the development of eLearning courses that access these facilities and highlight some of the challenges encountered by FORGE.
Abstract: Remote labs and online experimentation offer a rich opportunity to learners by allowing them to control real equipment at a distance in order to conduct scientific investigations. Remote labs and online experimentation build on top of numerous emerging technologies for supporting remote experiments and promoting the immersion of the learner in online environments that recreate the real experience. This paper presents a methodology for the design, delivery and evaluation of learning resources for remote experimentation. This methodology has been developed in the context of the European project FORGE, which promotes online learning using Future Internet Research and Experimentation (FIRE) facilities. FORGE is a step towards turning FIRE into a pan-European educational platform for the Future Internet. This will benefit learners and educators by giving them access to world-class facilities in order to carry out experiments on, e.g., new Internet protocols. In turn, this supports constructivist and self-regulated learning approaches through the use of interactive learning resources, such as eBooks.
Abstract: The controller placement problem (CPP) is one of the key challenges in improving the performance of software-defined networks. Given the locations of switches, CPP consists of choosing the controller locations that minimize the latency between switches and controllers. In its current form, however, CPP assumes fixed traffic, and no existing solution adapts the placement to the load. In this paper, we introduce the dynamic controller placement problem, which consists of (i) determining the locations of controller modules to bound communication latencies, and (ii) determining the number of controllers per module to support the load. We propose LiDy, a solution that combines a controller placement algorithm with a dynamic flow management algorithm. We evaluate the latency and the controller utilization of LiDy on sparse and dense regions. Our results show that, in all settings, LiDy achieves a higher utilization than the most recent controller placement solution.
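For intuition, the placement step can be approximated with a greedy k-center-style heuristic that repeatedly adds the candidate location reducing the worst switch-to-controller latency the most; this generic sketch is not LiDy's actual algorithm.

```python
# Greedy placement heuristic over a latency matrix (illustrative only).
def place_controllers(latency, k):
    """latency[s][c]: latency from switch s to candidate location c."""
    switches = range(len(latency))
    candidates = set(range(len(latency[0])))
    chosen = set()
    for _ in range(k):
        # Pick the candidate minimising the worst-case switch latency so far.
        best = min(candidates - chosen, key=lambda c: max(
            min(latency[s][p] for p in chosen | {c}) for s in switches))
        chosen.add(best)
    return chosen

lat = [[1, 9, 7], [8, 2, 6], [9, 8, 1]]      # 3 switches, 3 candidate sites
print(place_controllers(lat, 2))             # {0, 2} for this toy matrix
```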
Abstract: Cloud services are becoming centralized at several geo-replicated datacentres. These services replicate data within a single datacentre to tolerate isolated failures. Unfortunately, the effects of a disaster cannot be avoided, as existing approaches migrate a copy of data to backup datacentres only after data have been stored at a primary datacentre. Upon disaster, all data not yet migrated can be lost.
In this paper, we propose and implement SDN-KVS, a disaster-tolerant key-value store, which provides strong disaster resilience by replicating data before storing. To this end, SDN-KVS features a novel communication primitive, SDN-cast, that leverages Software Defined Network (SDN) in two ways: it offers an SDN-multicast primitive to replicate critical update request flows and an SDN-anycast primitive to redirect request flows to the closest available datacentre. Our performance evaluation indicates that SDN-KVS ensures no data loss and that traffic gets redirected across long distance key-value store replicas within 30 s after a datacentre outage.
Abstract: Facebook pages offer an easy way to reach out to a very large audience, as they can easily be promoted using Facebook's advertising platform. Recently, the number of likes of a Facebook page has become a measure of its popularity and profitability, and an underground market of services boosting page likes, aka like farms, has emerged. Some reports have suggested that like farms use a network of profiles that also like other pages to elude fraud protection algorithms; however, to the best of our knowledge, there has been no systematic analysis of Facebook pages' promotion methods.
This paper presents a comparative measurement study of page likes garnered via Facebook ads and by a few like farms. We deploy a set of honeypot pages, promote them using both methods, and analyze garnered likes based on likers’ demographic, temporal, and social characteristics. We highlight a few interesting findings, including that some farms seem to be operated by bots and do not really try to hide the nature of their operations, while others follow a stealthier approach, mimicking regular users’ behavior.
Abstract: This paper studies the feasibility and benefits of greening Web servers by using ultra-low-power micro-computing boards to serve Web content. Our study focuses on the tradeoff between power and performance in such systems. Our premise is that low-power computing platforms can provide adequate performance for low-volume Websites run by small businesses or groups, while delivering significantly more requests per Watt. We use the popular Raspberry Pi platform as an example low-power computing platform and experimentally evaluate our hypothesis for static and dynamic Web content served using this platform. Our results show that this platform can provide response times comparable to more capable server-class machines for rates up to 200 requests per second (rps); however, the scalability of the system drops to 20 rps when serving more compute-intensive dynamic content. Next, we study the feasibility of using clusters of low-power systems to serve requests for larger Websites. We find that, by utilising low-power multi-server clusters, we can achieve 17x to 23x more requests per Watt than typical tower server systems. Using simulations driven by parameters obtained from our real-world experiments, we also study dynamic multi-server policies that consider the tradeoff between power savings and the overhead cost of turning servers on and off.
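The requests-per-Watt metric itself is simple arithmetic; the snippet below illustrates it with assumed wattages and server throughput chosen only so the ratio falls in the reported 17x-23x range, not with the paper's measured figures.

```python
# Back-of-the-envelope requests-per-Watt comparison (all figures hypothetical).
def requests_per_watt(rps, watts):
    return rps / watts

pi_cluster = requests_per_watt(10 * 200, 10 * 4.0)   # 10 Pis, ~4 W each (assumed)
tower      = requests_per_watt(700, 280.0)           # assumed tower server figures
print(pi_cluster, tower, pi_cluster / tower)          # 50.0 2.5 20.0
```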
Abstract: This paper presents the Forging Online Education through FIRE (FORGE) initiative, which aims to transform the Future Internet Research and Experimentation (FIRE) testbed facilities, already vital for European research, into a learning resource for higher education. From an educational perspective, this project aims at promoting the notion of Self-Regulated Learning (SRL) through the use of a federation of high-performance testbeds and at building unique learning paths based on the integration of a rich linked-data ontology. Through FORGE, traditional online courses will be complemented with interactive laboratory courses. It will also allow educators to efficiently create, use and re-use FIRE-based learning experiences through our tools and techniques. And, most importantly, FORGE will enable equity of access to the latest ICT systems and tools independently of location and at low cost, strengthening the culture of online experimentation tools and remote facilities.
Abstract: A new tool and web portal are presented for deployment of High Performance Computing applications on distributed heterogeneous computing platforms. This tool relies on the decentralized environment P2PDC and the OMF and OML multithreaded control, instrumentation and measurement libraries. Deployment on PlanetLab of a numerical simulation application is studied. A first series of computational results is displayed and analyzed.
Abstract: One-click file hosting systems (1-CFHS) have become a prominent means to exchange files across the Internet. Studies have previously identified that a lot of the hosted content infringes on its owner's copyright, and some of the most well-known 1-CFHSs have been taken offline as a result. In this paper, we present a pilot study of how links to, and copies of, such content are exchanged via online forums. We have crawled and parsed pages from four of the most prominent sites over a period of a few months in order to extract URLs to these items. These URLs have then been periodically tested until they became unavailable in order to derive the lifespan of these copies on various 1-CFHSs. We find that URLs are mostly posted once, presumably by their creators, and that unauthorised content on 1-CFHSs has an availability expectancy of about 40 days before being taken down. We propose an initial simple life-and-death model for such content in the form of a Markov chain. We also show that the 1-CFHS market is still unstable, with most of the past leader services having disappeared from the current charts.
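In the spirit of the life-and-death model mentioned above, a two-state Markov chain in which a live copy is taken down each day with a fixed probability gives a geometric lifetime; choosing the daily takedown probability as 1/40 reproduces the observed ~40-day availability expectancy. The probability value is an illustration, not a fitted parameter from the paper.

```python
# Toy two-state (alive/dead) Markov model of content lifetime.
import random

p_takedown = 1 / 40              # expected lifetime = 1/p = 40 days (illustrative)

def simulate_lifetime():
    days = 1                              # the day the copy was posted
    while random.random() > p_takedown:   # survives another day
        days += 1
    return days

samples = [simulate_lifetime() for _ in range(10_000)]
print(sum(samples) / len(samples))        # close to 40 on average
```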
Abstract: In this paper, we introduce the Moana network infrastructure. It draws on well-adopted practices from the database and software engineering communities to provide a robust and expressive information-sharing service using hypergraph-based network indirection. Our proposal is twofold. First, we argue for the need for additional layers of indirection used in modern information systems to bring the network layer abstraction closer to the developer’s world, allowing for expressiveness and flexibility in the creation of future services. Second, we present a modular and extensible design of the network fabric to support incremental architectural evolution and innovation, as well as its initial evaluation.
Abstract: This paper discusses the advantages of using real experiments in networking lectures as opposed to simulation and tcpdump labs. Indeed, we claim that with the inclusion of networking in numerous curricula, the way these courses are illustrated and assessed needs to evolve to better take advantage of on-going research without limiting the top-of-the-class students. In particular, we identified five key challenges that need to be addressed to improve networking education and bring it closer to reality. To this end, we present the Internet Remote Emulation Experiment Laboratory (IREEL), an e-learning platform designed and developed over the last 4 years. This platform allows students to configure real network and application characteristics in order to illustrate key concepts of the lecture. In this context, we enable many improvements for labs and assignments in networking courses. IREEL has previously been used in introductory networking courses and received very good ratings from students for the understanding of both general and specific concepts of the lecture.
Abstract: We present the LabWiki, an executable paper platform primarily designed for, but not limited to, networking experiment-based research. The LabWiki leverages current state-of-the-art tools for the orchestration of experiments in the networking community and proposes a new approach to executing and reproducing experiments. We demonstrate the usability of the LabWiki through an example at the boundary between networking and high-performance computing research.
Abstract: This article presents a comprehensive summary and recommendations regarding the use of IREEL, an e-learning platform designed for network studies in CSE courses, based on our hands-on experience in a large hybrid undergraduate/postgraduate course at UNSW. We found that the tool was well received by the students for understanding key concepts, especially when compared to legacy tools used in labs. Furthermore, we show that our tool was able to handle a very large number of experiments in a relatively short amount of time.
Abstract: Whilst dealing with topics that are more and more influenced by physical properties of the underlying media, the networking community still lacks a culture of rigorous result verification. Indeed, as opposed to most science and engineering fields, there are very few benchmarks against which to test protocols. Furthermore, in most publications the authors do not give the community access to the raw results or the details of the performed experimental procedures. Therefore it is impossible to accurately reproduce their experiments. We propose to solve this problem by extending the state-of-the-art experiment tool OMF with a public portal. This portal, while providing the experimenter with access to experimental resources, also provides the community with a system for comprehensive experiment description and result verification. The collection of both the measurement set and the experiment's description is done in a manner transparent to the experimenter, who can decide to publish them via the portal once the research is mature enough.
Abstract: Networking researchers using testbeds containing mobile nodes face the problem of measurement collection from partially disconnected nodes. We solve this problem efficiently by adding a proxy server to the Orbit Measurement Library (OML) to transparently buffer measurements on disconnected nodes, and we give results showing our solution in action. We then add a flexible filtering and feedback mechanism on the server that enables a tailored hierarchy of measurement collection servers throughout the network, live context-based steering of experiment behaviour, and live context-based control of the measurement collection process itself.
Abstract: This paper presents an e-learning platform that improves the current state of the art by successfully integrating four features. Firstly, it provides a web interface incorporating lecture notes, lab instructions and results. This remote interface also allows the teacher to easily implement new experiments using a high-level description language. Secondly, the proposed architecture provides a low deployment cost without limiting the experimental scope. Thirdly, the new platform can take advantage of many existing and emerging testbeds. Finally, we introduce a new framework for teaching and learning network concepts. Thus a student using this new tool during an introductory course will follow a less difficult path towards more advanced studies on currently widely deployed testbeds.
Abstract: This paper deals with high performance Peer-to-Peer computing applications. We concentrate on the solution of large scale numerical simulation problems via distributed iterative methods. We present the current version of an environment that allows direct communication between peers. This environment is based on a self-adaptive communication protocol. The protocol configures itself automatically and dynamically as a function of application requirements, such as the computation scheme, and elements of context, such as topology, by choosing the most appropriate communication mode between peers. A first series of computational experiments is presented and analyzed for the obstacle problem.
Abstract: Data and service delivery have historically been based on a "network centric" model, with datacentres being the focal sources. The amount of energy consumed by these datacentres has become an emerging issue for the companies operating them. Thus, many contributions have proposed solutions to improve the energy efficiency of current datacentre architectures and deployments. A recently proposed approach argues for removing the datacentres from the delivery architecture. Their functionalities would instead be distributed at the edge of the network, directly within operator-managed home devices, such as Home Gateways or Set-Top-Boxes (STB). This paper presents a study of the overall energy consumption required by such a community of STBs in order to provide the same services as datacentres. This paper also investigates a possible distributed algorithm to further reduce this overall energy consumption. This algorithm would be deployed over a managed peer-to-peer network of STBs. It would make optimized decisions and instruct unused STBs to switch off to save energy without altering the general Service Level Agreement. We demonstrate the potential benefit of such an algorithm through off-line scheduling. Finally, we propose a service-delivery model that allows us to integrate service availability into the energy optimization problem. The combination of these two models is the first step in the development of our energy optimisation distributed algorithm.
Abstract: Networking testbeds are playing an increasingly important role in the development of new communication technologies. Testbeds are traditionally built for a particular project or to study a specific technology. An alternative approach is to federate existing testbeds to a) cater for experimenter needs which cannot be fulfilled by a single testbed, and b) provide a wider variety of environmental settings at different scales. These heterogeneous settings allow the study of new approaches in environments similar to what one finds in the real world.
This paper presents OMF, a control, measurement, and management framework for testbeds. It describes through some examples the versatility of OMF's current architecture and gives directions for the federation of testbeds through OMF. In addition, this paper introduces a comprehensive experiment description language that allows an experimenter to describe resource requirements and their configurations, as well as experiment orchestration. Researchers are thus able to reproduce their experiments on the same testbed or in a different environment with few changes. Along with the efficient support for large scale experiments, the use of testbeds and support for repeatable experiments will allow the networking field to build a culture of cross verification and therefore strengthen its scientific approach.
Abstract: The TFRC protocol was not designed to provide reliability. Indeed, TFRC was born from the need for a congestion-controlled, real-time transport protocol to carry multimedia traffic. Historically, following the anarchic deployment of congestion control mechanisms implemented on top of UDP, the IETF decided to standardize such a protocol in order to provide multimedia application developers with a framework for their applications. In this paper, we propose to design a reliable rate-based transport protocol based on TFRC. This design is motivated by the search for an alternative to TCP, whose oscillating behaviour is known to be counterproductive over certain networks such as VANETs. However, we also found interesting results, partly inherited from the smooth behaviour of TFRC, in the context of wired networks. In particular, we show that TFRC can achieve shorter data transfers compared to TCP over a complex and realistic topology. We first detail and fully benchmark our protocol in order to verify that the resulting prototype inherits the good properties of TFRC in terms of TCP-friendliness. As a second contribution, we also provide an ns-2 implementation for testing purposes to the networking community. Following these preliminary tests, we conduct a set of non-exhaustive experiments to illustrate some interesting behaviours of this protocol in the context of wired networks.
Abstract: We propose modifications to the TCP-Friendly Rate Control (TFRC) congestion control mechanism from the Datagram Congestion Control Protocol (DCCP), intended for use with real-time traffic, which are aimed at improving its performance over long-delay (primarily satellite) links. Firstly, we propose an algorithm to optimise the number of feedback messages per round trip time (RTT), based on the observed link delay, rather than using the current standard of at least one per RTT. We analyse the improvements achievable with the proposed modification in different phases of congestion control and present results from simulations with a modified ns-2 DCCP as well as live experiments using the modified DCCP Linux kernel implementation. We demonstrate that the changes result in improved slow-start performance and reduced data loss compared to standard DCCP, while the introduced overhead remains acceptable.
Abstract: Pervasive communications are increasingly sent over mobile devices and personal digital assistants. This trend was observed during the last football world cup, where cellular phone service providers measured a significant increase in multimedia traffic. To better carry multimedia traffic, the IETF standardized a new TCP-Friendly Rate Control (TFRC) protocol. However, the current receiver-based TFRC design is not well suited to resource-limited end systems. We propose a scheme to shift resource allocation and computation to the sender. This sender-based approach led us to develop a new algorithm for loss notification and loss rate computation. We demonstrate the gain obtained in terms of memory requirements and CPU processing compared to the current design. Moreover, this shift solves security issues raised by classical TFRC implementations. We have implemented this new sender-based TFRC, named TFRC$_{light}$, and conducted measurements under real-world conditions.
Abstract: The Datagram Congestion Control Protocol (DCCP) has been proposed as a transport protocol which supports real-time traffic. In this paper, we focus on the use of DCCP/CCID3 (Congestion Control ID 3) over a DiffServ/AF class. This class of service is used to build services that provide only a minimum throughput guarantee without any delay or jitter restrictions. This minimum throughput guarantee is called the target rate. In this context, the throughput obtained by DCCP/CCID3 mainly depends on the RTT and the loss probability. As a result, the application does not always get the negotiated target rate. To cope with this problem, we propose to evaluate a simple adaptation of the CCID3 congestion control mechanism, allowing the application to reach its target rate whatever the RTT value of the application's flow. As this adaptation can be seen as an extension of DCCP with CCID3 congestion control, we call it gDCCP, for guaranteed DCCP. Results from simulations are presented to illustrate the improvements of the proposed modification in various situations. Finally, we investigate the deployment of this proposal in terms of security.
Abstract: This paper deals with the improvement of transport protocol behaviour over the DiffServ Assured Forwarding (AF) class. The Assured Service (AS) provides a minimum throughput guarantee that classical congestion control mechanisms, such as window-based control in TCP or equation-based control in TCP-Friendly Rate Control (TFRC), are not able to use efficiently. In response, this paper proposes a performance analysis of a QoS-aware congestion control mechanism, named gTFRC, which improves the delivery of continuous streams. The gTFRC (guaranteed TFRC) mechanism has been integrated into an Enhanced Transport Protocol (ETP) that allows protocol mechanisms to be dynamically managed and controlled. After comparing an ns-2 simulation and our implementation of the basic TFRC mechanism, we show that the ETP/gTFRC extension is able to reach a minimum throughput guarantee whatever the flow's RTT, target rate (TR) and network provisioning conditions.
Abstract: This study addresses end-to-end congestion control support over the DiffServ Assured Forwarding (AF) class. The resulting Assured Service (AS) provides a minimum level of throughput guarantee. In this context, this paper describes a new end-to-end mechanism for continuous transfer based on TCP-Friendly Rate Control (TFRC), originally proposed in [11]. The proposed approach modifies TFRC to take into account the negotiated QoS. This mechanism, named gTFRC, is able to reach the minimum throughput guarantee whatever the flow's RTT and target rate. Simulation measurements show the efficiency of this mechanism in both over-provisioned and exactly-provisioned networks. In addition, we show that the gTFRC mechanism can be used in the same DiffServ/AF class alongside TCP or TFRC flows.
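One simplified way to read the guarantee mechanism is that the sender follows the standard TFRC throughput equation but never drops below the negotiated target rate. The sketch below encodes this reading; it is a schematic illustration, not the full gTFRC specification.

```python
# Simplified sketch: standard TFRC rate, floored at the negotiated target rate.
import math

def tfrc_rate(s, rtt, p, t_rto=None):
    """Standard TFRC throughput equation (bytes/s); s = packet size, p = loss rate."""
    t_rto = t_rto if t_rto is not None else 4 * rtt
    denom = (rtt * math.sqrt(2 * p / 3)
             + t_rto * 3 * math.sqrt(3 * p / 8) * p * (1 + 32 * p * p))
    return s / denom

def guaranteed_rate(s, rtt, p, target_rate):
    return max(tfrc_rate(s, rtt, p), target_rate)

# A long-RTT flow whose TFRC rate falls below its 1 Mbps (125 kB/s) guarantee:
print(guaranteed_rate(s=1460, rtt=0.3, p=0.01, target_rate=125_000))
```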
Abstract: The emergence of the Internet and new kinds of architecture, like peer-to-peer (P2P) networks, provides great hope for distributed computation. However, combining the world of systems and the world of networking cannot be done by simply merging the existing solutions of each side. For example, it is quite obvious that one cannot use synchronized algorithms for global computing over a wide area network. We propose a non-exhaustive view of the problems one may meet when building a P2P architecture for global computing systems that use asynchronous iterative algorithms. We also propose generic solutions for particular problems linked to both the computing and networking sides. These problems involve the initialization of the computation (and its dual, the termination), task transparency over the P2P network, and routing in such networks. Finally, a first computational experiment is presented for an asynchronous auction algorithm applied to the solution of the shortest path problem.
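To illustrate the asynchronous-iteration idea in general terms, the toy below lets each "peer" repeatedly update its own component using whatever (possibly stale) values it currently sees from the others, with no synchronisation barrier; threads stand in for peers and the linear system is arbitrary, so this is not the environment or algorithm of the paper.

```python
# Toy asynchronous Jacobi-style iteration (diagonally dominant system converges).
import threading, time, random

x = [0.0, 0.0, 0.0]                      # shared state, one component per peer
A = [[0.5, 0.2, 0.1], [0.1, 0.5, 0.2], [0.2, 0.1, 0.5]]
b = [1.0, 1.0, 1.0]

def peer(i, iterations=200):
    for _ in range(iterations):
        # Update component i using possibly outdated values of the other x[j].
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(3) if j != i)) / A[i][i]
        time.sleep(random.uniform(0, 0.001))   # peers progress at different speeds

threads = [threading.Thread(target=peer, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(x)    # converges close to the solution of A x = b despite asynchrony
```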