
Computers, IEEE Transactions on

A Branch-and-Bound Algorithm for Solving the Multiprocessor Scheduling Problem with Improved Lower Bounding Techniques

Abstract
In branch-and-bound (B&B) schemes for solving a minimization problem, a better lower bound can prune many meaningless branches that do not lead to an optimum solution. In this paper, we propose several techniques to refine the lower bound on the makespan in the multiprocessor scheduling problem (MSP). The key idea of our proposed method is to combine an efficient quadratic-time algorithm for calculating Fernández's bound, known as the best lower bounding technique in the literature, with two improvements based on the notions of binary search and recursion. The proposed method was implemented as part of a B&B algorithm for solving MSP and was evaluated experimentally. The results indicate that the proposed method clearly improves the performance of the underlying B&B scheme. In particular, we found that it improves solutions generated by conventional heuristic schemes for more than 20 percent of randomly generated instances, and that for more than 80 percent of instances it provides a certificate of optimality for the resulting solutions, even when the execution time of the B&B scheme is limited to one minute.
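
The pruning idea at the heart of such a B&B scheme can be illustrated with a minimal sketch. The bound below is only the trivial max(current load, average remaining load, next task length) bound, not Fernández's bound or the paper's binary-search and recursion refinements; all names and the instance are illustrative.

```python
# Minimal branch-and-bound sketch for multiprocessor scheduling (P||Cmax).

def branch_and_bound(tasks, m):
    """Assign task times to m identical processors, minimizing makespan."""
    tasks = sorted(tasks, reverse=True)          # longest-first tightens bounds
    best = [sum(tasks)]                          # incumbent makespan (upper bound)

    def lower_bound(loads, i):
        remaining = sum(tasks[i:])
        # No schedule can beat the largest current load, perfect balance,
        # or the duration of the next task to be placed.
        return max(max(loads), (sum(loads) + remaining) / m, tasks[i])

    def branch(loads, i):
        if i == len(tasks):
            best[0] = min(best[0], max(loads))
            return
        if lower_bound(loads, i) >= best[0]:     # prune: cannot beat incumbent
            return
        for p in range(m):
            loads[p] += tasks[i]
            branch(loads, i + 1)
            loads[p] -= tasks[i]

    branch([0] * m, 0)
    return best[0]

print(branch_and_bound([4, 7, 2, 9, 5, 3], m=3))  # optimal makespan: 11
```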

Design and Evaluation of a Proxy Cache for Peer-to-Peer Traffic

Abstract
Peer-to-peer (P2P) systems generate a major fraction of the current Internet traffic, and they significantly increase the load on ISP networks and the cost of running and connecting customer networks (e.g., universities and companies) to the Internet. To mitigate these negative impacts, many previous works in the literature have proposed caching of P2P traffic, but very few (if any) have considered designing a caching system to actually do it. This paper demonstrates that caching P2P traffic is more complex than caching other Internet traffic, and it needs several new algorithms and storage systems. Then, the paper presents the design and evaluation of a complete, running, proxy cache for P2P traffic, called pCache. pCache transparently intercepts and serves traffic from different P2P systems. A new storage system is proposed and implemented in pCache. This storage system is optimized for storing P2P traffic, and it is shown to outperform other storage systems. In addition, a new algorithm to infer the information required to store and serve P2P traffic by the cache is proposed. Furthermore, extensive experiments to evaluate all aspects of pCache using actual implementation and real P2P traffic are presented.

Computational Biology and Bioinformatics


Robust Feature Selection for Microarray Data Based on Multicriterion Fusion

Abstract
Feature selection often aims to select a compact feature subset to build a pattern classifier with reduced complexity, so as to achieve improved classification performance. From the perspective of pattern analysis, producing a stable or robust solution is also a desired property of a feature selection algorithm. However, the issue of robustness is often overlooked in feature selection. In this study, we analyze the robustness issue in feature selection for high-dimensional, small-sample gene-expression data, and propose to improve the robustness of feature selection algorithms by using multiple feature selection evaluation criteria. Based on this idea, a multicriterion fusion-based recursive feature elimination (MCF-RFE) algorithm is developed with the goal of improving both the classification performance and the stability of feature selection results. Experimental studies on five gene-expression data sets show that the MCF-RFE algorithm outperforms the commonly used benchmark feature selection algorithm SVM-RFE.
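
A minimal sketch of the fusion idea follows. The paper fuses multiple evaluation criteria inside an SVM-RFE loop; this sketch substitutes two cheap stand-in criteria (a Fisher-type score and absolute correlation) and fuses them by rank averaging, so it illustrates the recursive elimination structure only, not the exact MCF-RFE method.

```python
import numpy as np

def fisher_score(X, y):
    X0, X1 = X[y == 0], X[y == 1]
    return (X0.mean(0) - X1.mean(0)) ** 2 / (X0.var(0) + X1.var(0) + 1e-12)

def corr_score(X, y):
    yc = y - y.mean()
    Xc = X - X.mean(0)
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)

def mcf_rfe(X, y, n_keep):
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        Xa = X[:, active]
        # Rank features under each criterion (higher score -> higher rank).
        ranks = [np.argsort(np.argsort(s))
                 for s in (fisher_score(Xa, y), corr_score(Xa, y))]
        fused = np.mean(ranks, axis=0)           # rank-level fusion
        worst = int(np.argmin(fused))            # eliminate weakest feature
        del active[worst]
    return active

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20)); y = (X[:, 3] + X[:, 7] > 0).astype(int)
print(mcf_rfe(X, y, n_keep=2))                   # features 3 and 7 should survive
```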

Image-Based Surface Matching Algorithm Oriented to Structural Biology

Abstract
Emerging technologies for structure matching based on surface descriptions have demonstrated their effectiveness in many research fields. In particular, they can be successfully applied to in silico studies of structural biology. Protein activities, in fact, are related to the external characteristics of these macromolecules, and the ability to match surfaces can be important for inferring information about their possible functions and interactions. In this work, we present a surface-matching algorithm, based on encoding the outer morphology of proteins in images of local description, which allows us to establish point-to-point correlations among macromolecular surfaces using image-processing functions. Unlike methods that rely on biological analysis of atomic structures or on computationally expensive energetic studies, this algorithm can successfully be used for macromolecular recognition by employing local surface features. Results demonstrate that the proposed algorithm can be employed both to identify surface similarities in the context of macromolecular functional analysis and to screen possible protein interactions to predict pairing capability.

An Improved Heuristic Algorithm for Finding Motif Signals in DNA Sequences

Abstract
The planted (l, d)-motif search problem is a mathematical abstraction of the DNA functional site discovery task. In this paper, we propose a heuristic algorithm that can find planted (l, d)-signals in a given set of DNA sequences. Evaluations on simulated data sets demonstrate that the proposed algorithm outperforms current widely used motif finding algorithms. We also report the results of experiments on real biological data sets.
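
For reference, the problem itself is easy to state in code. The exhaustive baseline below declares an l-mer a motif if every sequence contains a window within Hamming distance d of it; enumerating all 4^l candidates is exactly the cost that heuristics such as the paper's aim to avoid. The toy sequences are ours.

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def occurs(candidate, seq, d):
    # True if some l-window of seq is within Hamming distance d of candidate.
    l = len(candidate)
    return any(hamming(candidate, seq[i:i + l]) <= d
               for i in range(len(seq) - l + 1))

def find_motifs(seqs, l, d):
    # Brute force over all 4^l candidate l-mers.
    return [''.join(c) for c in product('ACGT', repeat=l)
            if all(occurs(''.join(c), s, d) for s in seqs)]

seqs = ["ACGTACGTGA", "TTACGAACGT", "GGACGTTTTT"]
print(find_motifs(seqs, l=5, d=1))
```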

Computer Vision, IET


Iris matching using multi-dimensional artificial neural network

Abstract
Iris recognition is one of the most widely used biometric techniques for personal identification. Identification is achieved in this work by exploiting the fact that iris patterns are statistically unique and suitable for biometric measurements. In this study, a novel method for recognizing these iris patterns using a multidimensional artificial neural network is considered. The proposed technique has the distinct advantage of using the entire resized iris as an input at once, and it exhibits excellent pattern recognition properties because the iris texture is unique for every person. The system is trained and tested using two publicly available databases (CASIA and UBIRIS). The proposed approach shows significant promise and potential for improvement compared with other conventional matching techniques with regard to time and efficiency of results.

Real-time tracking using A* heuristic search and template updating

Abstract
Many vision problems require fast and accurate tracking of objects in dynamic scenes. In this study, we propose an A* search algorithm through the space of transformations for computing fast 2D target motion. Two features are combined to compute motion efficiently: (i) the Kullback-Leibler measure as a heuristic to guide the search process and (ii) the incorporation of target dynamics into the search process for computing the most promising search alternatives. The match-quality value computed by the A* search, together with the most common views of the target object, is used to verify template updates. A template is updated only when the target object has evolved to a transformed shape dissimilar to the current one. The study includes experimental evaluations with video streams demonstrating the effectiveness and efficiency of the approach for real-time vision-based tasks with rigid and deformable objects.
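
The search loop can be sketched compactly. The code below runs a simplified best-first search over 2D translations ordered by plain SSD dissimilarity; the paper's full method uses a Kullback-Leibler measure and target dynamics to order the queue, which are not reproduced here, and all sizes and the synthetic frame are illustrative.

```python
import heapq
import numpy as np

def track(frame, template, start, max_expansions=2000):
    th, tw = template.shape
    def cost(y, x):
        win = frame[y:y + th, x:x + tw]
        return float(((win - template) ** 2).sum())   # SSD dissimilarity

    open_heap = [(cost(*start), start)]
    seen = {start}
    best = open_heap[0]
    for _ in range(max_expansions):
        if not open_heap:
            break
        c, (y, x) = heapq.heappop(open_heap)          # cheapest frontier node
        if c < best[0]:
            best = (c, (y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (ny, nx) not in seen and 0 <= ny <= frame.shape[0] - th \
                    and 0 <= nx <= frame.shape[1] - tw:
                seen.add((ny, nx))
                heapq.heappush(open_heap, (cost(ny, nx), (ny, nx)))
    return best  # (dissimilarity, (row, col)) of best match found

rng = np.random.default_rng(1)
frame = rng.normal(size=(64, 64)); template = frame[20:28, 30:38].copy()
print(track(frame, template, start=(18, 28)))  # typically converges to (20, 30)
```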

Integral image compression based on optical characteristic

Abstract
The large amount of image data in a captured three-dimensional integral image must be presented at adequate resolution. It is therefore necessary to develop compression algorithms that take advantage of the characteristics of the recorded integral image. In this study, the authors propose a new compression method that is adapted to integral imaging. Owing to the optical characteristics of integral imaging, most of the information in each elemental image overlaps with that of its adjacent elemental images. Thus, the method achieves compression by taking one sample from the elemental image sequence for every m elemental images. Experimental results illustrating the proposed technique show that it improves the compression ratio of integral imaging.
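
A toy version of this sampling idea, under an assumed data layout, keeps every m-th elemental image and fills each gap on decompression with its nearest kept neighbour, relying on the overlap between adjacent elemental images; a real codec would additionally compress the kept subsequence.

```python
import numpy as np

def compress(elementals, m):
    return elementals[::m]                       # keep one of every m images

def decompress(kept, m, n_total):
    # Fill each gap with the nearest kept elemental image.
    return [kept[min(round(i / m), len(kept) - 1)] for i in range(n_total)]

elementals = [np.full((8, 8), i, dtype=np.uint8) for i in range(12)]
kept = compress(elementals, m=3)                 # 12 images -> 4 images
restored = decompress(kept, m=3, n_total=12)
print(len(kept), [int(im[0, 0]) for im in restored])
```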

Image Processing, IEEE Transactions


From Tiger to Panda: Animal Head Detection

Abstract
Robust object detection has many important applications in real-world online photo processing. For example, both Google image search and MSN live image search have integrated human face detectors to retrieve face or portrait photos. Inspired by the success of this face filtering approach, in this paper we focus on another popular online photo category, animals, which is one of the top five categories in the MSN live image search query log. As a first attempt, we focus on the problem of animal head detection for a set of relatively large land animals that are popular on the Internet, such as cat, tiger, panda, fox, and cheetah. First, we propose a new set of gradient-oriented features, Haar of Oriented Gradients (HOOG), to effectively capture the shape and texture of animal heads. Then, we propose two detection algorithms, namely brute-force detection and deformable detection, to exploit the shape and texture features simultaneously. Experimental results on 14,379 well-labeled animal images validate the superiority of the proposed approach. Additionally, we apply the animal head detector to improve image search results through text-based online photo search result filtering.

A Variational Model for Histogram Transfer of Color Images

Abstract
In this paper, we propose a variational formulation for histogram transfer of two or more color images. We study an energy functional composed of three terms: one tends to bring the cumulative histograms of the transformed images closer together, and the other two tend to maintain the colors and geometry of the original images. By minimizing this energy, we obtain an algorithm that balances equalization against the conservation of features of the original images. As a result, the images evolve while approaching an intermediate histogram between them. This intermediate histogram does not need to be specified in advance; it emerges naturally from the model. Finally, we provide experiments showing that the proposed method compares well with the state of the art.

Nonlocal Mumford-Shah Regularizers for Color Image Restoration

Abstract
We propose a class of restoration algorithms for color images, based upon the Mumford-Shah (MS) model and nonlocal image information. The Ambrosio-Tortorelli and Shah elliptic approximations are defined on a small local neighborhood, which suffices to denoise smooth regions with sharp boundaries. However, texture is nonlocal in nature and requires semilocal/nonlocal information for efficient image denoising and restoration. Inspired by recent works (the nonlocal means of Buades, Coll, and Morel, and the nonlocal total variation of Gilboa and Osher), we extend the local Ambrosio-Tortorelli and Shah approximations of the MS functional to novel nonlocal formulations, for better restoration of fine structures and texture. We present several applications of the proposed nonlocal MS regularizers in image processing, such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, color image super-resolution, and color filter array demosaicing. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. We also prove several characterizations of minimizers based upon dual-norm formulations.

A Majorize-Minimize Strategy for Subspace Optimization Applied to Image Restoration

Abstract
This paper proposes accelerated subspace optimization methods in the context of image restoration. Subspace optimization methods belong to the class of iterative descent algorithms for unconstrained optimization. At each iteration of such methods, a stepsize vector allowing the best combination of several search directions is computed through a multidimensional search. It is usually obtained by an inner iterative second-order method ruled by a stopping criterion that guarantees the convergence of the outer algorithm. As an alternative, we propose an original multidimensional search strategy based on the majorize-minimize principle. It leads to a closed-form stepsize formula that ensures the convergence of the subspace algorithm regardless of the number of inner iterations. The practical efficiency of the proposed scheme is illustrated in the context of edge-preserving image restoration.

A Variational Model for Segmentation of Overlapping Objects With Additive Intensity Value

Abstract
We propose a variant of the Mumford-Shah model for the segmentation of a pair of overlapping objects with additive intensity value. Unlike standard segmentation models, it not only determines distinct objects in the image, but also recovers the possibly multiple memberships of the pixels. To accomplish this, some a priori knowledge about the smoothness of the object boundary is integrated into the model. Additivity is imposed through a soft constraint, which allows the user to control the degree of additivity and is more robust than a hard constraint. We also show analytically that the additivity parameter can be chosen to achieve certain stability conditions. To solve the optimization problem involving geometric quantities efficiently, we apply a multiphase level set method. Segmentation results on synthetic and real images validate the good performance of our model, and demonstrate its applicability to images with multiple channels and multiple objects.

Robust Principal Component Analysis Based on Maximum Correntropy Criterion

Abstract
Principal component analysis (PCA) minimizes the mean square error (MSE) and is therefore sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on the maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method exhibits the following benefits: 1) it is robust to outliers through the mechanism of MCC, which can be more theoretically solid than a heuristic rule based on MSE; 2) it requires no zero-mean assumption on the data and can estimate the data mean during optimization; and 3) its optimal solution consists of the principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on the L1 norm when outliers occur.
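
The half-quadratic mechanics can be sketched in a few lines. In the sketch below, samples are re-weighted by a Gaussian kernel of their centering error and PCA is run on the weighted covariance; weighting by distance to the robust mean is a simplification of the paper's objective, and sigma, the loop length, and the test data are illustrative choices.

```python
import numpy as np

def mcc_pca(X, k, sigma=1.0, iters=20):
    n, d = X.shape
    w = np.ones(n)
    for _ in range(iters):
        mu = (w[:, None] * X).sum(0) / w.sum()   # weighted (robust) mean
        R = X - mu
        err = (R ** 2).sum(1)
        w = np.exp(-err / (2 * sigma ** 2))      # half-quadratic auxiliary weights
        C = (w[:, None] * R).T @ R / w.sum()     # weighted covariance
    vals, vecs = np.linalg.eigh(C)
    return mu, vecs[:, ::-1][:, :k]              # top-k principal directions

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.1])
X[:5] += 50                                      # gross outliers
mu, W = mcc_pca(X, k=1, sigma=3.0)
print(np.round(W[:, 0], 2))                      # ~[+-1, 0, 0], outliers ignored
```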

Image Segmentation Using Fuzzy Region Competition and Spatial/Frequency Information

Abstract
This paper presents a multiphase fuzzy region competition model that takes into account spatial and frequency information for image segmentation. In the proposed energy functional, each region is represented by a fuzzy membership function and a data fidelity term that measures the conformity of spatial and frequency data within each region to (generalized) Gaussian densities whose parameters are determined jointly with the segmentation process. Compared with the classical region competition model, our approach gives soft segmentation results via the fuzzy membership functions; moreover, the use of frequency data provides additional region information that can improve the overall segmentation result. To efficiently minimize the energy functional, we adopt an alternating minimization procedure and make use of Chambolle's fast duality projection algorithm. We apply the proposed method to synthetic and natural textures as well as real-world natural images. Experimental results show that our method has very promising segmentation performance compared with current state-of-the-art approaches.

Image Processing, IET


H.264 video watermarking with secret image sharing

Abstract
The transfer of digital video streams over networks is attracting growing study. However, widespread Internet use increases the need for copyright protection and security. Consequently, to prevent video streams that belong to rightful owners from being intentionally or unknowingly used by others, information protection is indispensable. The authors propose a novel video watermarking method that is specifically designed for H.264 video. A low-energy signal is relatively resistant to low-pass filtering attacks; conversely, a high-energy signal in the host is relatively resistant to high-frequency noise attacks. In view of these facts, the proposed embedding algorithm operates on both high-energy and low-energy blocks: the blocks in the host image frame are divided into two groups by estimating the block energy. Existing singular value decomposition methods are employed to compute the watermark information. To enhance security, the proposed system also employs torus automorphisms to encrypt the watermark. To achieve better robustness, the encrypted watermark is embedded, using secret image sharing, into different I-frames in the video stream.
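
The torus automorphism used for encryption is, in its simplest and most common form, the Arnold cat map, which moves pixel (x, y) of an N x N image to (x + y, x + 2y) mod N; it is invertible, so iterating the inverse map restores the watermark, and the iteration count can act as a key. The sketch below assumes this standard map rather than the paper's exact parameterization.

```python
import numpy as np

def cat_map(img, rounds=1):
    n = img.shape[0]
    out = img
    for _ in range(rounds):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]  # forward map
        out = scrambled
    return out

def inverse_cat_map(img, rounds=1):
    n = img.shape[0]
    out = img
    for _ in range(rounds):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        restored = np.empty_like(out)
        restored[x, y] = out[(x + y) % n, (x + 2 * y) % n]   # undo one round
        out = restored
    return out

wm = np.arange(64).reshape(8, 8)
assert np.array_equal(inverse_cat_map(cat_map(wm, 5), 5), wm)
print("scrambled and recovered an 8x8 watermark")
```

Because the map's matrix has determinant 1, it permutes pixels without loss, which is what makes it suitable for reversible watermark scrambling.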

Rotation, scaling, and translation resilient watermarking for images

Abstract
Traditional watermarking schemes are sensitive to geometric distortions, in which synchronisation for recovering embedded information is a challenging task because of the disorder caused by rotation, scaling or translation (RST). Existing RST-resistant watermarking methods still have limitations with respect to robustness, capacity or fidelity. In this study, the authors address several major problems in RST-invariant watermarking. The first is how to take advantage of the high RST resilience of scale-invariant feature transform (SIFT) features, which perform well in RST-resistant pattern recognition. Since many keypoint-based watermarking methods do not discuss cropping attacks, the second issue discussed in this study is how to resist cropping using a human visual system (HVS) model, which also helps to reduce computational complexity. The third issue is the investigation of an HVS-based watermarking strategy that extracts only feature points in the human attentive area. Lastly, a variable-length watermark synchronisation algorithm using dynamic programming is proposed. Experimental results show that the proposed algorithms are practical and outperform many existing works in terms of watermark capacity, watermark transparency, and resistance to RST attacks.

Neural Networks, IEEE Transactions


Improvements on Twin Support Vector Machines

Abstract
For classification problems, the generalized eigenvalue proximal support vector machine (GEPSVM) and the twin support vector machine (TWSVM) are regarded as milestones in the development of powerful SVMs, as they use nonparallel hyperplane classifiers. In this brief, we propose an improved version, named twin bounded support vector machines (TBSVM), based on TWSVM. The significant advantage of our TBSVM over TWSVM is that the structural risk minimization principle is implemented by introducing a regularization term. This embodies the essence of statistical learning theory, so this modification can improve classification performance. In addition, the successive overrelaxation technique is used to solve the optimization problems and speed up the training procedure. Experimental results show the effectiveness of our method in both computation time and classification accuracy, further confirming the above conclusion.
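
For orientation, the first of TWSVM's pair of problems is shown below with the kind of Tikhonov-style term TBSVM adds; the notation follows the common TWSVM literature (A holds one class's points, B the other's, e_i are all-ones vectors), and the constants may differ from the paper's exact formulation.

```latex
\begin{aligned}
\min_{w_1,\,b_1,\,\xi}\quad
  & \underbrace{\tfrac{c_3}{2}\left(\lVert w_1\rVert^2 + b_1^2\right)}_{\text{regularization added in TBSVM}}
    \;+\; \tfrac{1}{2}\,\lVert A w_1 + e_1 b_1\rVert^2 \;+\; c_1\, e_2^{\top}\xi \\
\text{s.t.}\quad
  & -(B w_1 + e_2 b_1) + \xi \;\ge\; e_2, \qquad \xi \;\ge\; 0 .
\end{aligned}
```

The added quadratic term bounds the norm of the classifier, which is what supports the structural risk minimization interpretation; the second hyperplane solves the symmetric problem with the roles of A and B exchanged.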

Observability of Boolean Control Networks With State Time Delays

Abstract
This brief deals with the observability of Boolean control networks with time delays in states. First, using the semi-tensor product of matrices and the matrix expression of logic, Boolean control networks with state delays are converted into discrete-time delayed dynamics. Then, the observability of the Boolean control networks under two kinds of inputs is investigated by giving necessary and sufficient conditions. Finally, examples are given to illustrate the efficiency of the obtained results.
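
The semi-tensor product (STP) that underlies this conversion is a standard construction: for A of size m x n and B of size p x q, with t = lcm(n, p), the STP is (A ⊗ I_{t/n})(B ⊗ I_{t/p}), which reduces to the ordinary matrix product when n = p. A minimal sketch follows, with logical NOT as a structure matrix acting on Boolean vectors; the conversion of delayed dynamics is the paper's contribution and is not reproduced here.

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Semi-tensor product of two matrices."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# Logical NOT as a structure matrix acting on Boolean vectors,
# with TRUE = [1,0]^T and FALSE = [0,1]^T:
NOT = np.array([[0, 1], [1, 0]])
TRUE = np.array([[1], [0]])
print(stp(NOT, TRUE).ravel())   # -> [0, 1], i.e. FALSE
```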

Feature Selection Using Probabilistic Prediction of Support Vector Regression

Abstract
This paper presents a new wrapper-based feature selection method for support vector regression (SVR) using its probabilistic predictions. The method computes the importance of a feature by aggregating, over the feature space, the difference between the conditional density functions of the SVR prediction with and without the feature. As the exact computation of this importance measure is expensive, two approximations are proposed. The effectiveness of the measure under these approximations, in comparison with several other existing feature selection methods for SVR, is evaluated on both artificial and real-world problems. The results of the experiments show that the proposed method generally performs better than, or at least as well as, the existing methods, with a notable advantage when the dataset is sparse.

Networking, IEEE/ACM Transactions


Energy-Efficient Protocol for Cooperative Networks

Abstract
In cooperative networks, transmitting and receiving nodes recruit neighboring nodes to assist in communication. We model a cooperative transmission link in wireless networks as a transmitter cluster and a receiver cluster. We then propose a cooperative communication protocol for the establishment of these clusters and for cooperative transmission of data. We derive the upper bound of the capacity of the protocol, and we analyze the end-to-end robustness of the protocol to data-packet loss, along with the tradeoff between energy consumption and error rate. The analysis results are used to compare the energy savings and the end-to-end robustness of our protocol with those of two non-cooperative schemes, as well as with another cooperative protocol from the literature. The comparison shows that, when nodes are positioned on a grid, our protocol reduces the probability of packet delivery failure by two orders of magnitude for the parameter values considered. Energy savings of up to 80% can be achieved for a grid topology, while for random node placement our cooperative protocol can save up to 40% in energy consumption relative to the other protocols. The reduction in error rate and the energy savings translate into an increased lifetime of cooperative sensor networks.

Parametric Methods for Anomaly Detection in Aggregate Traffic

Abstract
This paper develops parametric methods to detect network anomalies using only aggregate traffic statistics, in contrast to other works requiring flow separation, even when the anomaly is a small fraction of the total traffic. By adopting simple statistical models for anomalous and background traffic in the time domain, one can estimate model parameters in real time, thus obviating the need for a long training phase or manual parameter tuning. The proposed bivariate parametric detection mechanism (bPDM) uses a sequential probability ratio test, allowing control over the false positive rate while examining the tradeoff between detection time and the strength of an anomaly. Additionally, it uses both traffic-rate and packet-size statistics, yielding a bivariate model that eliminates most false positives. The method is analyzed using the bit-rate signal-to-noise ratio (SNR) metric, which is shown to be an effective metric for anomaly detection. The performance of the bPDM is evaluated in three ways. First, synthetically generated traffic provides a controlled comparison of detection time as a function of the anomalous level of traffic. Second, the approach is shown to detect controlled artificial attacks on the University of Southern California (USC), Los Angeles, campus network in varying real traffic mixes. Third, the proposed algorithm achieves rapid detection of real denial-of-service attacks, as determined by the replay of previously captured network traces. The method developed in this paper is able to detect all attacks in these scenarios in a few seconds or less.
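
The decision core, a sequential probability ratio test, is small enough to sketch. The version below tests two Gaussian mean hypotheses with Wald's classical thresholds; the paper's bivariate rate/packet-size model is not reproduced, and the traffic numbers are synthetic stand-ins.

```python
import math
import random

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)      # accept H1 (anomaly)
    lower = math.log(beta / (1 - alpha))      # accept H0 (normal)
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Log-likelihood ratio increment for equal-variance Gaussians.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "anomaly", n
        if llr <= lower:
            return "normal", n
    return "undecided", n

random.seed(0)
normal_traffic = (random.gauss(100, 10) for _ in range(1000))
attack_traffic = (random.gauss(130, 10) for _ in range(1000))
print(sprt(normal_traffic, 100, 130, 10))   # ('normal', small n)
print(sprt(attack_traffic, 100, 130, 10))   # ('anomaly', small n)
```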

Peering Equilibrium Multipath Routing: A Game Theory Framework for Internet Peering Settlements

Abstract
It is generally acknowledged that interdomain peering links nowadays represent the main bottleneck of the Internet, particularly because of the lack of coordination between providers, which use independent and selfish routing policies. We are interested in identifying possible light coordination strategies that would allow carriers to better control their peering links while preserving their independence and respective interests. We propose a robust multipath routing coordination framework for peering carriers, which relies on the multiple-exit discriminator (MED) attribute of the Border Gateway Protocol (BGP) as its signaling medium. Our scheme relies on game-theoretic modeling, with a non-cooperative potential game considering both routing and congestion costs. Peering equilibrium multipath (PEMP) coordination policies can be implemented by selecting Pareto-superior Nash equilibria at each carrier. We compare different PEMP policies to BGP multipath schemes by emulating a realistic peering scenario. Our results show that the routing cost can be decreased by roughly 10% with PEMP. We also show that the stability of routes can be significantly improved and that congestion can be practically avoided on the peering links. Finally, we discuss practical implementation aspects and extend the model to multiple players, highlighting the possible incentives of the resulting extended peering framework.

Impact of File Arrivals and Departures on Buffer Sizing in Core Routers

Abstract
Traditionally, it has been assumed that the efficiency requirements of TCP dictate that the buffer size at a router must be on the order of the bandwidth-delay product (C × RTT). Recently, this assumption was questioned in a number of papers, and the rule was shown to be conservative for certain traffic models. In particular, by appealing to statistical multiplexing, it was shown that on a router with N long-lived connections, buffers of size O((C × RTT)/N), or even O(1), are sufficient. In this paper, we reexamine the buffer-size requirements of core routers when flows arrive and depart. Our conclusion is as follows: if the core-to-access-speed ratio is large, then O(1) buffers are sufficient at the core routers; otherwise, larger buffer sizes do improve the flow-level performance of the users. From a modeling point of view, our analysis offers two new insights. First, it may not be appropriate to derive buffer-sizing rules by studying a network with a fixed number of users. In fact, depending upon the core-to-access-speed ratio, the buffer size itself may affect the number of flows in the system, so these two parameters (buffer size and number of flows in the system) should not be treated as independent quantities. Second, in the regime where the core-to-access-speed ratio is large, we note that O(1) buffer sizes are sufficient for good performance and that no loss of utilization results, as previously believed.
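
To make the scale of these rules concrete, the back-of-the-envelope computation below compares the classic C × RTT rule with the O((C × RTT)/N) rule for an illustrative 10 Gb/s link; the numbers are ours, not from the paper.

```python
C = 10e9          # link capacity, bits/s
RTT = 0.250       # round-trip time, seconds
N = 10_000        # concurrent long-lived flows

bdp_bytes = C * RTT / 8
print(f"classic C x RTT rule : {bdp_bytes / 1e6:8.1f} MB")
print(f"O(C x RTT / N) rule  : {bdp_bytes / N / 1e3:8.1f} KB")
# Classic rule: ~312.5 MB of buffering; divided by N: ~31 KB. This gap is
# what makes O(1)-size buffers plausible when access links, not the core,
# are the bottleneck.
```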

Network, IEEE

Dynamic measurement-aware routing in practice

Abstract
Traffic monitoring is a critical network operation for the purposes of traffic accounting, debugging and troubleshooting, forensics, and traffic engineering. Existing techniques for traffic monitoring, however, tend to be suboptimal due to poor choices of monitor location or to constantly evolving monitoring objectives and traffic characteristics. One way to counteract these limitations is to use routing as a degree of freedom to enhance monitoring efficacy, which we refer to as measurement-aware routing: traffic sub-populations can be routed or rerouted on the fly to optimally leverage existing monitoring infrastructure. Implementing dynamic measurement-aware routing (DMR) in practice is riddled with challenges. Three major ones are how to dynamically assess the importance of traffic flows; how to aggregate flows (and hence take a common action for them) in order to conserve routing table entries; and how to achieve traffic routing/rerouting in a manner that is least disruptive to normal network performance while maximizing measurement utility. This article takes a closer look at these challenges and discusses how they manifest in different types of networks. Through an OpenFlow prototype, we show how DMR can be applied in enterprise networks. Using global iceberg detection and capture as a driving application, we demonstrate how our solutions successfully route suspected iceberg flows to a DPI box for further processing, while preserving balanced load distribution in the overall network.

Measurement and diagnosis of address misconfigured P2P traffic

Abstract
Through a measurement study, we discover an interesting phenomenon, P2P address misconfiguration, in which a large number of peers send P2P file downloading requests to a "random" target on the Internet. By measuring three large datasets spanning four years and five different /8 networks, we find that address-misconfigured P2P traffic on average contributes 38.9 percent of Internet background radiation, increasing by more than 100 percent every year. To detect and diagnose such unwanted traffic, we design P2PScope, a measurement tool. After analyzing about 2 Tbytes of data and tracking millions of peers, we found that, in all the P2P systems, address misconfiguration is caused by resource mapping contamination: the sources returned for a given file ID through P2P indexing are not valid. Different P2P systems have different reasons for such contamination. For eMule, we find that the root cause is mainly a network byte-order problem in the eMule Source Exchange protocol. For BitTorrent, one reason is that anti-P2P companies actively inject bogus peers into the P2P system. Another is that the KTorrent implementation has a byte-order problem.
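
A byte-order slip of the kind described is easy to demonstrate: an IPv4 address packed in little-endian order instead of the big-endian (network) order it should use points at an unrelated host, which is exactly how requests end up at "random" targets. The address below is an arbitrary example, not one from the study.

```python
import socket
import struct

addr = "130.89.148.77"
packed_ok = socket.inet_aton(addr)                   # network byte order
value = struct.unpack("!I", packed_ok)[0]            # the intended integer
packed_bad = struct.pack("<I", value)                # little-endian slip

print(socket.inet_ntoa(packed_ok))    # 130.89.148.77  (intended peer)
print(socket.inet_ntoa(packed_bad))   # 77.148.89.130  (traffic goes here)
```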

Packet traffic: a good data source for wireless sensor network modeling and anomaly detection

Abstract
The wireless sensor network (WSN) has emerged as a promising technology. In WSNs, sensor nodes are deployed in a distributed fashion to collect information of interest from the environment. Because of the mission of WSNs, most node-wide as well as network-wide activities manifest themselves in packet traffic. As a result, packet traffic is a good data source for modeling sensor node as well as sensor network behavior. In this article, the methodology of modeling node and network behavior profiles using packet traffic is exemplified. In addition, node as well as network anomalies are shown to be detectable by monitoring the evolution of node/network behavior profiles.

Experiences of Internet traffic monitoring with tstat

Abstract
Since the early days of the Internet, network traffic monitoring has always played a strategic role in understanding and characterizing users' activities. In this article, we present our experience in engineering and deploying Tstat, an open source passive monitoring tool that has been developed over the past 10 years. Started as a scalable tool to continuously monitor packets that flow on a link, Tstat has evolved into a complex application that gives network researchers and operators the ability to derive extended and complex measurements thanks to advanced traffic classifiers. After discussing Tstat's capabilities and internal design, we present some examples of measurements collected by deploying Tstat at the edge of several ISP networks in past years. While other works report a continuous decline of P2P traffic, with streaming and file hosting services rapidly increasing in popularity, the results presented in this article picture a different scenario. First, the decline of P2P has stopped, and in the last months of 2010 there was a countertendency toward increased P2P traffic over UDP, so the common belief that UDP traffic is negligible no longer holds. Furthermore, streaming and file hosting applications have either stabilized or are experiencing decreasing traffic shares. We then discuss the scalability issues software-based tools have to cope with when deployed in real networks, showing the importance of properly identifying bottlenecks.

Network traffic monitoring, analysis and anomaly detection [Guest Editorial]

Abstract
Modern computer networks are increasingly pervasive, complex, and ever-evolving due to factors like enormous growth in the number of network users, continuous appearance of network applications, increasing amount of data transferred, and diversity of user behaviors. Understanding and measuring such a network is a difficult yet vital task for network management and diagnosis. Network traffic monitoring, analysis, and anomaly detection provide useful tools for understanding network behavior and determining network performance and reliability so as to effectively and promptly troubleshoot and resolve various issues in practice.

Network and Service Management, IEEE Transactions


Scheduling Grid Tasks in Face of Uncertain Communication Demands

Abstract
Grid scheduling is essential to Quality of Service provisioning as well as to efficient management of grid resources. Grid scheduling usually considers the state of the grid resources as well as application demands. However, such demands are generally unknown for highly demanding applications, since these often generate data that will be transferred during their execution. Without appropriate assessment of these demands, scheduling decisions can lead to poor performance. Thus, it is of paramount importance to consider uncertainties in the formulation of a grid scheduling problem. This paper introduces the IPDT-FUZZY scheduler, which considers the demands of grid applications with such uncertainties. The scheduler uses fuzzy optimization, and both computational and communication demands are expressed as fuzzy numbers. Its performance was evaluated, and it was shown to be attractive when communication requirements are uncertain. Its efficacy is compared, via simulation, to that of a deterministic counterpart scheduler, and the results reinforce its adequacy for dealing with inaccurate estimates of communication demands.

Dirichlet-Based Trust Management for Effective Collaborative Intrusion Detection Networks

Abstract
The accuracy of detecting intrusions within a Collaborative Intrusion Detection Network (CIDN) depends on the efficiency of collaboration between peer Intrusion Detection Systems (IDSes) as well as on the security of the CIDN itself. In this paper, we propose Dirichlet-based trust management to measure the level of trust among IDSes according to their mutual experience. An acquaintance management algorithm is also proposed to allow each IDS to manage its acquaintances according to their trustworthiness. Our approach achieves strong scalability properties and is robust against common insider threats, resulting in an effective CIDN. We evaluate our approach on a simulated CIDN, demonstrating its improved robustness, efficiency and scalability for collaborative intrusion detection in comparison with other existing models.
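
The Dirichlet machinery can be sketched in a few lines: feedback about a peer IDS falls into ordered satisfaction levels, the posterior over levels is a Dirichlet distribution, and trust is the posterior-mean satisfaction. The level weights, uniform prior, and feedback counts below are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

levels = np.array([0.0, 0.5, 1.0])     # unsatisfied / neutral / satisfied
prior = np.array([1.0, 1.0, 1.0])      # uniform Dirichlet prior pseudo-counts

def trust(counts, prior=prior):
    alpha = prior + counts               # Dirichlet posterior parameters
    probs = alpha / alpha.sum()          # posterior mean of each level
    return float(levels @ probs)         # expected satisfaction in [0, 1]

honest = np.array([2, 5, 40])            # mostly satisfying feedback
deceptive = np.array([30, 10, 7])
print(round(trust(honest), 3), round(trust(deceptive), 3))  # ~0.88 vs ~0.27
```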

Improving Application Placement for Cluster-Based Web Applications

Abstract
Dynamic application placement for clustered web applications heavily influences system performance and the quality of user experience. Existing approaches claim to maximize throughput, keep resource utilization balanced across servers, and minimize the start/stop cost of application instances. However, they fail to minimize the worst case of server utilization, so their load balancing performance is not optimal. Moreover, some applications need to communicate with each other; we call these dependent applications, and their network cost should also be taken into consideration. In this paper, we investigate how to minimize the worst-case resource utilization of servers, aiming at improving load balancing among clustered servers. Our contribution is two-fold. First, we propose and define a new optimization objective: limiting the worst case of each individual server's utilization, formulated as a min-max problem. A novel framework based on binary search is proposed to find an optimal load balancing solution. Second, we define system cost as the weighted combination of both placement-change and inter-application communication cost. By maximizing the number of instances of dependent applications that reside in the same set of servers, the basic load-shifting and placement-change procedures are enhanced to minimize whole-system cost. Extensive experiments demonstrate that: 1) the proposed framework achieves a good allocation for clustered web applications, i.e., requests are evenly allocated among servers while throughput is still maximized; 2) the total system cost remains at a low level; and 3) our algorithm can approximate an optimal solution in polynomial time and is promising for practical use in real deployments.
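
The binary-search framework can be sketched as follows: search for the smallest per-server utilization cap U under which a feasible assignment exists. Feasibility below is checked with first-fit decreasing, a stand-in for the paper's placement routine, and the demands and capacities are toy numbers.

```python
def feasible(demands, capacities, U):
    residual = [c * U for c in capacities]        # capped capacity per server
    for d in sorted(demands, reverse=True):       # first-fit decreasing
        for i, r in enumerate(residual):
            if d <= r:
                residual[i] -= d
                break
        else:
            return False                          # some demand does not fit
    return True

def min_max_utilization(demands, capacities, eps=1e-3):
    lo, hi = 0.0, 1.0
    while hi - lo > eps:                          # binary search on the cap U
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if feasible(demands, capacities, mid) else (mid, hi)
    return hi

demands = [30, 25, 20, 15, 10]
capacities = [60, 60, 60]
print(round(min_max_utilization(demands, capacities), 3))
# ~0.584: the best achievable peak load is 35 on a capacity-60 server.
```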

Monitoring the Impact of P2P Users on a Broadband Operator's Network over Time

Abstract
Since their emergence, peer-to-peer (P2P) applications have generated a considerable fraction of the overall transferred bandwidth in broadband networks. Residential broadband service has meanwhile moved from one geared towards technology enthusiasts and early adopters to a commodity for a large fraction of households. Thus, the question of whether P2P is still the dominant application in terms of bandwidth usage becomes highly relevant for broadband operators. In this work we present an adaptation of a previously published method for classifying broadband users into a P2P and a non-P2P group based on the number of communication partners ("peers") they have within a dedicated timeframe. Based on this classification, we derive their impact on network characteristics such as the number of active users and their aggregate bandwidth. Privacy is assured by anonymizing the data and by not taking packet payloads into account. We apply our method to real operational data collected in 2007 and 2010 from a major German DSL provider's access link, which transported all traffic each user generated and received. In 2010 the fraction of P2P users clearly decreased compared to previous years. Nevertheless, we find that P2P users are still large contributors to the total amount of traffic, especially in the upstream direction. However, in 2010 the impact of P2P on the bandwidth peaks in the busy hours clearly decreased, while other applications had a growing impact, leading to increased bandwidth usage per subscriber in the peak hours. Further analysis also reveals that P2P users' traffic still does not exhibit strong locality. We compare our findings to those available in the literature and propose areas for future work on network monitoring, P2P applications, and network design.

Efficient Network Modification to Improve QoS Stability at Failure

Abstract
When a link or node fails, flows are detoured around the failed portion, so the hop count of flows and the link load can change dramatically as a result of the failure. As real-time traffic such as video or voice increases on the Internet, ISPs are required to provide stable quality as well as connectivity at failures. For ISPs, how to effectively improve the stability of these qualities at failures with minimum investment cost is an important issue, and they need to select a limited number of locations at which to add link facilities. In this paper, efficient design algorithms to select the locations for adding link facilities are proposed, and their effectiveness is evaluated using the actual backbone networks of 36 commercial ISPs.

Spectral Models for Bitrate Measurement from Packet Sampled Traffic

Abstract
In network measurement systems, packet sampling techniques are usually adopted to reduce the overall amount of data to collect and process. Being based on a subset of packets, they introduce estimation errors that have to be properly counteracted by fine-tuning the sampling strategy and by sophisticated inversion methods. This problem has been deeply investigated in the literature, with particular attention to the statistical properties of packet sampling and to the recovery of the original network measurements. Herein, we propose a novel approach to predicting the energy of the sampling error in the real-time estimation of traffic bitrate, based on spectral analysis in the frequency domain. We start by demonstrating that the error introduced by packet sampling can be modeled as an aliasing effect in the frequency domain. Then, we derive closed-form expressions for the signal-to-noise ratio (SNR) to predict the distortion of traffic bitrate estimates over time. The accuracy of the proposed SNR metric is validated by means of real packet traces. Furthermore, a comparison with an analogous SNR expression derived using classic stochastic tools is presented, showing that the frequency-domain approach yields higher accuracy when traffic-rate measurements are carried out at fine time granularity.
