Problem of Channel Utilization and Merging Flows in Buffered Optical Burst Switching Networks

In this paper, the authors investigate two problems of buffered optical burst switching using methods of operational research. The first problem arises at an edge node and is related to the medium access delay. The second arises at an intermediate node and is related to the buffering delay. A correction coefficient K of transmission speed is obtained from the first analysis; it is used to provide a full-featured link of nominal data rate. Simulations of the second problem reveal interesting results. It is not viable to perform routing and wavelength assignment based on end-to-end delay, i.e. link length or number of hops, as is commonly done in other frameworks (OCS, Ethernet, IP, etc.) nowadays. Other parameters, such as the buffering probability, must be taken into consideration as well. Based on the buffering probability, an estimate of the number of optical/electrical converters can be made. This paper concentrates on important traffic constraints of buffered optical burst switching, which allows the authors to prepare optimization algorithms for regenerator placement in CAROBS networks using methods of operational research.


Introduction
While Optical burst switching (OBS) networks have been studied for more than 15 years now, there is still some controversy about their viability. Some authors study OBS networks as an alternative to Optical circuit switching (OCS) networks [8], others investigate OBS networks as the best fit for some types of traffic (e.g., bursty traffic) or networks (e.g., access networks), see, e.g., [14]. OBS is very close to its deployment: some testbeds are operated and papers have been published. Currently, authors focus on contention resolution, which is the crucial problem of OBS and can occur even under low load. Authors have suggested various types of time slot mechanisms [13], deflection routing, and metrics based on priorities. This issue was also investigated by Coutelen et al. and led to the CAROBS framework. Therein, the authors consider burst concatenation. With recourse to wavelength conversion and signal regeneration, all burst contentions can be resolved, offering a loss-free OBS framework. CAROBS uses electrical buffering for optical signal regeneration, hence a burst's end-to-end delay can increase significantly when a node is under high load.
In order to reduce the load, it must be distributed among all nodes in the network with proper routing. To distribute the load among the nodes in a network, the behaviour of a single node must be evaluated in the first place. There are two main obstacles to node performance in buffered OBS: buffering at an intermediate node and medium access delay at an edge node. Both must be verified under different levels of node offered load.
To the authors' best knowledge, there have been no papers on this topic dealing with delays in buffered OBS networks. This paper tackles this problem under various link data rates (1, 10, 40 and 100 Gb·s−1).

Problem Formulation
In OBS, an optical burst is used in order to transmit data. Such a burst can contain a number of payload frames, which can be Ethernet, IP, etc. [12]. Usually a burst's length is around 10 Mb, which is approximately 1 ms for a 10 Gb·s−1 system, and shorter for systems using more powerful modulation formats. A second very important parameter is the Optical cross-connect (OXC) switching speed in the time domain of OBS. The OXC switching speed highly depends on technology. Using faster technology and longer bursts, the link efficiency increases, but so does the impact of contending bursts on the buffering delay. Therefore a reasonable tradeoff must be found. As was mentioned, the efficiency of an OBS network highly depends on the OXC switching speed. The reason is the mandatory space between two consecutive bursts [3]. Such a space must be greater than or equal to the switching time of the OXC. Unless this constraint is respected, a piece of a burst might be switched to the same direction as the previously switched burst, or might be lost altogether [3].
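To illustrate the tradeoff, the following is a minimal sketch (Python) of link efficiency under the mandatory guard space. The burst size, data rate and switching times are example values from the text; `link_efficiency` is our illustrative helper, not part of CAROBS:

```python
def link_efficiency(burst_bits, data_rate_bps, guard_time_s):
    """Fraction of time the link carries payload when every burst must be
    followed by a guard space at least as long as the OXC switching time."""
    burst_time_s = burst_bits / data_rate_bps
    return burst_time_s / (burst_time_s + guard_time_s)

# 10 Mb burst on a 10 Gb/s link (~1 ms on the wire)
eff_mems = link_efficiency(10e6, 10e9, 10e-6)  # us-class MEMS OXC, ~0.99
eff_soa = link_efficiency(10e6, 10e9, 10e-9)   # ns-class SOA OXC, ~1.0
```

Shorter bursts or slower switching lower the efficiency, which is exactly the tradeoff discussed above.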
This mandatory space limits the maximum throughput (utilization) of a link. Using shorter bursts, the delimiting space is used more often for the same offered load, so the maximal link throughput decreases significantly. In the following text we use two very similar terms, "offered load" and "load". The term "offered load" is used for the first problem, which is focused on the edge node, i.e. a traffic of data packets offering a certain level of load to a node. This term has its roots in traditional telecommunications. The term "load" is mostly used in the analysis of the second problem. It means the load offered to the egress port of a node: for example, node S1 in Fig. 2 may be offered a load equal to the nominal bandwidth of its egress link, e.g. 10 Gb·s−1. Then the waiting time in access buffers at the edge node can grow endlessly. This problem is depicted in Fig. 1(a). Basically it means that the link's bandwidth must be higher than the link's data rate.
The terms link bandwidth and link data rate are very similar. In systems derived from the RM-OSI model, we speak about different speeds at different layers. Since OBS is located at the first layer of RM-OSI, in the following text we use the term link data rate (upper layer) to denote the nominal, usable link bandwidth at a certain wavelength, and the term link bandwidth (lower layer) to denote the physical speed of a channel (a combination of wavelengths and links). We will not use the term modulation speed because we want to generalize the problem [5].
The second problem, which at this stage of research is more or less an observation, takes place at the link data rate level. Needless to say, there are some versions of OBS frameworks using timeslots where this problem does not exist. Currently, when routing and wavelength assignment (RWA) is performed, all links are fully loaded with respect to Eq. (2). When merging of flows occurs, the constraint α < µ must be satisfied, otherwise the burst waiting time in buffers will eventually become endless. Here α stands for the node's total offered load and µ represents the node's intensity of service, i.e. how much traffic a node can transmit. The main problem comes from link usage maximization as a result of other optimizations. The maximal link usage is bounded by the link capacity, Eq. (2). When merging at a node, e.g. M, a number of flows offering loads α_ℓ,

α = Σ_{ℓ ∈ L_n} α_ℓ,    (1)

the only thing that can happen is α ≥ µ, which eventually leads to the behaviour depicted in Fig. 1(b). We consider that all links connected to a node n have the same capacity. L_n is the set of links terminating at node n. The capacity constraint reads

Σ_{i=1}^{N_ℓ} φ_i ≤ C_ℓ,    (2)

where ℓ stands for a link in the network, N_ℓ is the number of flows that are supported by link ℓ, φ_i denotes the capacity required by flow i, and C_ℓ represents the maximal capacity of link ℓ.
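The α < µ stability condition and the capacity constraint of Eq. (2) can be sketched as simple feasibility checks (Python); the function names and units are our illustrative choices:

```python
def merge_is_stable(flow_loads_bps, service_rate_bps):
    """alpha < mu: the total load offered by merging flows must stay
    strictly below the node's intensity of service, otherwise waiting
    in the contention resolution buffers eventually becomes endless."""
    return sum(flow_loads_bps) < service_rate_bps

def link_fits(required_capacities_bps, link_capacity_bps):
    """Eq. (2): the capacities phi_i required by the flows routed over
    a link must not exceed its maximal capacity C."""
    return sum(required_capacities_bps) <= link_capacity_bps

# three 4 Gb/s flows merging onto one 10 Gb/s egress port overload it
overloaded = not merge_is_stable([4e9, 4e9, 4e9], 10e9)  # True
```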
In order to avoid this situation, a proper evaluation of a node's offered load must be carried out in the first place. The second aspect of the buffering problem is the number of optical detectors (O/E), which are expensive, thus their number should be minimal. In other words, minimizing the number of O/E converters reduces CAPEX and OPEX and increases the reliability of a buffered OBS network.
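As a purely hypothetical dimensioning sketch: if the number of simultaneously buffered bursts were roughly Poisson distributed (a modelling assumption of ours, not a result of this paper), the minimal number of O/E units for a target overflow probability could be estimated as:

```python
from math import exp, factorial

def oe_units_needed(mean_concurrent, overflow_target=1e-3):
    """Smallest n such that P(more than n bursts are buffered at once)
    does not exceed the target, assuming the number of simultaneously
    buffered bursts is Poisson distributed (a modelling assumption)."""
    n, p_at_most_n = 0, 0.0
    while True:
        p_at_most_n += exp(-mean_concurrent) * mean_concurrent**n / factorial(n)
        if 1.0 - p_at_most_n <= overflow_target:
            return n
        n += 1

units = oe_units_needed(1.0)  # -> 5 for a mean of one concurrent burst
```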

Simulations
At this stage of the research, the aforementioned problems were tackled through simulations. The reason is that OBS switches implementing buffering abilities do not exist nowadays. Simulations were performed using CAROBS models implemented in OMNeT++ [2], [3], [4]. Simulations were performed on the same basic network, which is depicted in Fig. 2. The two problems were not simulated at the same time but in consecutive sets of simulations. In the first place, the analysis of the first problem was carried out. The results proved the claim from Section 2 that a link of nominal bandwidth does not support a flow of the same data rate. The maximal flow data rate with stationary waiting in buffers was found. This leads to the correction coefficient K, Tab. 1. The correction coefficient was then implemented into the simulation models. In the second step, simulations of buffering at an intermediate node were carried out.
The traffic was generated such that payload packets of constant size (100 kb) making up a flow were supplied to edge nodes S_n according to a Poisson distribution in order to generate bursts, such that a constant flow of the nominal required bandwidth was generated.

Fig. 1: When channel speed estimation is not optimal, data might persist in electrical buffers.
In the simulations we assumed that node M has unlimited electrical storage capacity. The duration of each simulation was set to 60 s after a warm-up period. Only one wavelength was used.
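The traffic source described above can be sketched as follows (Python); the rate and duration are example values, and this is a generic Poisson source, not the actual OMNeT++ model:

```python
import random

def poisson_arrivals(rate_pkts_per_s, duration_s, seed=1):
    """Arrival times of constant-size payload packets with exponential
    inter-arrival times, i.e. a Poisson arrival process."""
    random.seed(seed)
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate_pkts_per_s)
        if t > duration_s:
            return times
        times.append(t)

# a flow of 100 kb packets offering about 1 Gb/s: 10 000 packets per second
arrivals = poisson_arrivals(1e9 / 100e3, duration_s=1.0)
```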

Access Delay
Link access delay is a value representing the average waiting time of a burst before it is sent onto the optical network. This value must be as small as possible. When it is not stationary, see Fig. 1, the system is overloaded and cannot be used. This happens when the link's offered load is slightly higher than the maximal link intensity of service, i.e. the link capacity C_ℓ. Our approach was to gradually increase the offered load and evaluate the egress link utilization, as is depicted in Fig. 3. The results of the simulations were normalized in order to be comparable in one graph. One can read that the egress link gets saturated before the offered load reaches the value 1 erl, see Fig. 3(a). This is a sequel of the problem visualized in Fig. 1(a). The original problem depicted in Fig. 1(a) is here extended in Fig. 3(b) with respect to the offered load. If the evaluation interval, 60 s, were longer, the values of link access delay would be higher for the non-stationary simulations. Along with that visualization, a regression analysis was carried out to find the stationary simulations. The results of the regression analysis are not depicted, for better readability of the graphs. These results highly correlate with the trends in Fig. 3(b). When the value of the link access delay increases with an increase of the offered load, the slope of the link access delay is not zero, i.e. it is not stationary anymore, i.e. the egress link is already saturated. In order to keep the nominal data rate of the link, the link's bandwidth must be increased.

Fig. 2: Basic network topology used for simulations.
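The stationarity check via the regression slope can be sketched minimally (the exact test of [6] is out of scope here; the tolerance below is an illustrative choice of ours):

```python
def slope(xs, ys):
    """Least-squares slope of access delay samples over simulation time."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def is_stationary(times_s, delays_s, tol=1e-6):
    """A slope indistinguishable from zero means the access delay does
    not grow over time, i.e. the egress link is not yet saturated."""
    return abs(slope(times_s, delays_s)) < tol
```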
In order to find the threshold where the link access delay stops being stationary, we formulate a null hypothesis H0 and an alternate hypothesis Ha. The null hypothesis H0 is valid when N consecutive simulations meet the requirement of being stationary. Stationarity is verified by further testing based on linear regression analysis, which is out of the scope of this article [6]. H0 is the criterion of a heuristic analysis. Every time Ha is valid, a new simulation is performed in order to refine the estimate. The value of the offered load of the new simulation is the average of the offered load of the last H0-compliant simulation and that of the non-compliant simulation. Doing so iteratively, the correction coefficient K is found. We performed this analysis for OBS networks with various nominal link data rates. The results of the analysis are captured in Tab. 1.
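The iterative halving described above amounts to a bisection; `is_stationary_at` is a hypothetical oracle standing in for one simulation run plus the stationarity test:

```python
def find_saturation_threshold(is_stationary_at, lo=0.5, hi=1.5, iters=20):
    """Bisect between the last stationary offered load (H0 holds) and the
    first non-stationary one (Ha holds); each new simulation point is the
    average of the two, as in the heuristic described above."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if is_stationary_at(mid):
            lo = mid  # H0 compliant: the threshold lies above mid
        else:
            hi = mid  # Ha: non-stationary, the threshold lies below mid
    return lo  # the correction coefficient K is derived from this value

# toy oracle: pretend the link saturates at an offered load of 0.93 erl
threshold = find_saturation_threshold(lambda load: load < 0.93)
```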
Applying these coefficients, one can be sure that further evaluations will not be affected by premature saturation of the link caused by limited bandwidth.

Buffering Delay
The evaluation of buffering delay relies on a precise evaluation of the link utilization of each ingress link. Therefore the previous analysis is necessary in order to achieve meaningful results. All results from this analysis are gathered at node M, see Fig. 2. Simulations were performed such that the number of merging flows was changed as well as their data rate. The load was changed similarly to the previous study. Various patterns of flow data rates were also used, in order to obtain valid results. For better readability of the graphs, the confidence intervals are omitted. The egress port load was calculated using Eq. (1).
The offered load of each source was iteratively increased so that the egress link load varied from 0.5 to 1.05 erl. Based on this evaluation, the values of buffering delay and buffering probability were captured; the results are depicted in Fig. 4 and Fig. 5.
The average buffering time of a contending burst depends on the load, see Fig. 4. Particular details of the buffering delay, Fig. 4(a), are provided in order to increase its readability. The average waiting time corresponds to 50 km of fiber delay line (FDL), which is not negligible. When a buffered burst is sent back onto the optical network, the burst is regenerated as if it were a new burst. This approach is very valuable in wide area networks, where the optical signal can be impaired. On the other hand, looking at Fig. 5, one can read that the probability of buffering is very high: when the load is higher than 0.6 erl, there is quite a high probability that even a non-contending burst is buffered. The reason is that a contending burst is buffered and scheduled to be withdrawn later, but that later moment might overlap with a newly arriving burst. Then the newly arriving burst must be buffered even if it is not contending with any other incoming burst. In the worst case, two bursts are contending while the egress link is blocked by the withdrawn burst, so both bursts must be buffered. This means two O/E conversions must be carried out at the same time; in other words, two O/E units must be installed at the node. This increases its price, and eventually the price of the whole buffered OBS network.
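A toy single-wavelength model (not the CAROBS scheduler itself; all parameters below are illustrative) reproduces this effect: a burst arriving while the egress port is still busy, possibly with a previously withdrawn burst, must itself be buffered:

```python
import random

def buffering_probability(load_erl, burst_time_s=1e-3, n_bursts=20000, seed=7):
    """Fraction of bursts that find the single egress wavelength busy and
    therefore need an O/E conversion; buffered bursts are withdrawn in
    FIFO order as soon as the port frees, which can block later bursts."""
    random.seed(seed)
    t, free_at, buffered = 0.0, 0.0, 0
    for _ in range(n_bursts):
        t += random.expovariate(load_erl / burst_time_s)  # Poisson arrivals
        if t < free_at:          # port blocked, possibly by a withdrawn burst
            buffered += 1
            free_at += burst_time_s  # served after the queued backlog
        else:
            free_at = t + burst_time_s
    return buffered / n_bursts
```

In this toy model the buffering probability grows roughly linearly with the load, matching the trend reported for Fig. 5.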
Additionally, this study supports the current trend of deploying faster systems over slower ones, see Fig. 4. There are almost negligible improvements in buffering probability, but on the other hand there is a significant difference in buffering delay. Therefore it is vital to use higher data rate links for buffered OBS networks.

Conclusion
The OBS framework has been proved to be reliable for future access or metropolitan networks. Some real implementations have also been presented and are close to being commercially deployed by Internet service providers [1]. Still, there is a dark side of OBS networks. There are problems on the physical layer when the optical signal can be impaired. Generally this problem arises in geographically extensive installations. In order to avoid optical impairments, optical amplifiers [11], regenerators, etc. must be installed. The drawback of this approach is the increased CAPEX and OPEX of the installation. Also, there are no traffic models that could be used for regenerator placement [10] optimizations as is usually carried out in OCS. There have been some studies on the regenerator placement problem [9], but their authors focused on a non-buffering OBS framework.
This article tried to tackle this lack of models by observations at the edge and intermediate nodes. We introduce a new correction coefficient which allows us to define the minimal required bandwidth margin so as not to saturate the link's bandwidth before it is necessary. Along with that, the model optimizations improve the scheduling of the egress port when a buffered burst is put back onto the optical network. This coefficient K is of importance because it defines the necessary link bandwidth even for real networks, not only for simulations. On top of that, the flow behaviour we observed in the analysis of buffering delay and its probability is of high importance for solving optimization problems. We have now obtained a new constraint representing the bursty character of buffered OBS. This constraint allows us to use mature approaches known from OCS optimizations in OBS. This constraint also allows us to estimate the minimal number of O/E blocks that are needed.
Even though the results seem optimistic, a lot of work remains to be done. This analysis was carried out for a system using just one wavelength. On the contrary, using more wavelengths might relax the number of O/E blocks. Optical impairments, which are the original motivation of our other analysis, are also not yet considered. The next logical steps are to carry out a multiple-wavelength system analysis [7] and obtain the buffering probabilities, then to construct an optimization algorithm in order to minimize the number of O/E blocks in the network, and eventually to consider optical impairments.

Software Engineering (CSE) Department at Concordia University. Her research focuses on mathematical modelling and algorithm design for large-scale optimization problems arising in communication networks, transportation networks and artificial intelligence. Recent studies include the design of the most efficient algorithms for p-cycle based protection schemes, under static and dynamic traffic, and their generalization to the so-called p-structures, which encompass all previously proposed pre-cross-connected pre-configured protection schemes. Other recent studies deal with dimensioning, provisioning and scheduling algorithms in optical grids or clouds, in broadband wireless networks and in passive optical networks. In artificial intelligence, contributions include the development of efficient optimization algorithms for probabilistic logic and for automated mechanical design in social networks. In transportation, her recent contributions include new algorithms for freight train scheduling and locomotive assignment. B. Jaumard has published over 300 papers in international journals in Operations Research and in Telecommunications.
Leos BOHAC received the M.S. and Ph.D. degrees in electrical engineering from the Czech Technical University, Prague, in 1992 and 2001, respectively. Since 1992, he has been teaching optical communication systems and data networks at the Czech Technical University, Prague. His research interest is the application of high-speed optical transmission systems in data networks. He has also participated in the optical research project of CESNET, the academic data network provider, to help implement a long-haul high-speed optical research network. Currently, he is involved in, and leads some of, the projects on optimal protocol design, routing, high-speed optical modulations and industrial network design.

(a) Suboptimal link bandwidth results in endless waiting in an access buffer at an edge node. (b) If the sum of offered loads is equal to the egress port's intensity of service, endless waiting occurs in contention resolution buffers at an intermediate node.

(a) The graphical representation of link utilization. The goal of link optimization is the reciprocity of offered load and egress link utilization, i.e. an offered load of 1 erl causes a link utilization of 1; otherwise the link properties must be optimized. (b) The link access delay should be stationary and increase polynomially with an increase of the offered load.

Fig. 3: Graphical representation of egress link utilization and link access delay at network ingress node.

Tab. 1: Values of the bandwidth correction coefficient K.

Fig. 4: Evaluation of buffering delay and buffering probability at merging node M. Figs. 4(b) and 4(c) are details of 4(a). Its proportionality to the load is interesting; it is caused by the non-dropping behaviour, i.e. everything is buffered, ergo the waiting time increases.

Fig. 5: Evaluation of buffering delay and buffering probability at merging node M. This figure is bound together with Fig. 4. It represents the buffering probability, which increases almost linearly with the load.
Note: There are OXCs based on MEMS that have switching speeds in the order of µs; SOA-based OXCs have switching speeds in the order of ns. SOA-based OXCs are very often used nowadays.