Transient Analysis of a K-node Tandem Queueing Model with Load-Dependent Service Rates

This paper deals with the development and analysis of a K-node series and parallel queueing model with load-dependent service rates. Customers arrive at the initial queue and wait in line for service. After completing service at the first service station, they join one of the (K-1) parallel queues connected in series to the first queue. After receiving service at that station, they leave the system. The service rate at each service station is assumed to depend on the number of customers in the queue connected to it. The arrivals and service completions in each queue are assumed to follow Poisson processes. Using difference-differential equations, the joint probability generating function of the number of customers in each queue is derived. System performance measures such as the average number of customers in each queue, the throughput of each service station, the probability of idleness of each server, and the waiting time of customers in each queue are derived explicitly. The sensitivity of the model with respect to its parameters is analysed through a numerical illustration. It is observed that the state-dependent service rates have a significant influence on the performance measures. This model includes earlier models as particular cases for specific values of the parameters. The model is useful in analysing communication networks, transportation systems, production processes and cargo handling.


Introduction
A queue is a waiting line of units demanding service at a service facility. Erlang [ ] pioneered the mathematical modelling of queueing systems. Thereafter, several models have been developed and analysed in order to evaluate the performance of various systems for control and monitoring. Queueing models form a prerequisite for the design and development of systems arising in communication networks, ATM scheduling, transportation systems, production processes, etc. In several practical situations, after getting service at the first queue, a customer may join one of several queues connected to it for further service. For example, in communication networks, after getting service at the first transmitter, data/voice packets are routed to one of several buffers connected in parallel for forward transmission. The same scenario arises in production processes such as glass manufacturing, where the raw material is converted into liquid glass and then transferred to several parallel production lines for making different types of glassware. Queueing models of this type may be called 2-node series and K-parallel queueing systems, referred to as forked queueing models.
Little work has been reported on 2-node series and K-parallel queueing models, which are useful for analysing several systems closer to reality. Hence, in this article we develop and analyse a forked queueing model in which 2 nodes are in series and K queues are in parallel. The arrival and service processes are assumed to follow Poisson processes. It is further assumed that the service rate of each service station depends on the number of customers in the queue connected to it. Using the difference-differential equations, the joint probability generating function of the number of customers in each queue is derived. The performance of the model is analysed by deriving explicit expressions for system characteristics such as the average number of customers in each queue, the probability of idleness of each service station, the throughput of the nodes, the average waiting time of customers in each queue, and the utilisation of each server. The sensitivity analysis of the model is carried out with a numerical illustration.

Queueing Model with Load Dependent Service Rates
In this section we consider a queueing model with K buffers and K servers connected as a forked network, the capacity of the buffers being infinite. We assume that after getting service at the first server, a customer joins one of the servers that are parallel and connected to the first server in tandem; that is, after being served at the first server the customer may join the second buffer with probability p2, the third buffer with probability p3, ..., or the K-th buffer with probability pK. The number of customers arriving at the first buffer follows a Poisson process with arrival rate λ. Similarly, the numbers of customers served at the servers follow Poisson processes with parameters μ1, μ2, ..., μK respectively. It is also assumed that the service rate at each server is linearly dependent on the content of the buffer connected to it. The queue discipline is first come, first served (FCFS). The schematic diagram representing the queueing model is shown in Fig. 1.
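The dynamics described above can be sketched as a small event-driven (Gillespie-style) simulation of the underlying Markov chain. This is an illustrative sketch, not the paper's method: the function name and parameters are ours, and the linear load dependence is modelled by letting node i with n_i customers complete services at rate n_i·μ_i.

```python
import random

def simulate_forked_network(lam, mu, p, t_end, seed=0):
    """Simulate the forked network state (n_1, ..., n_K) up to time t_end.

    lam : Poisson arrival rate at the first buffer (lambda)
    mu  : [mu_1, ..., mu_K]; node i with n_i customers serves at rate n_i * mu_i
          (linear load dependence, as assumed in the model)
    p   : [p_2, ..., p_K] routing probabilities after service at node 1
    Returns the buffer contents at time t_end, starting from an empty system.
    """
    rng = random.Random(seed)
    K = len(mu)
    n = [0] * K          # customers in each buffer
    t = 0.0
    while True:
        # competing exponential clocks: one arrival stream + K service streams
        total = lam + sum(n[i] * mu[i] for i in range(K))
        t += rng.expovariate(total)      # time to the next event
        if t > t_end:
            return tuple(n)
        u = rng.uniform(0.0, total)
        if u < lam:                      # external arrival at buffer 1
            n[0] += 1
            continue
        u -= lam
        for i in range(K):               # service completion at node i
            if u < n[i] * mu[i]:
                n[i] -= 1
                if i == 0:               # route to one of buffers 2..K
                    j = rng.choices(range(1, K), weights=p)[0]
                    n[j] += 1
                break
            u -= n[i] * mu[i]
```

Averaging many independent runs approximates the transient distribution studied analytically in the following sections.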

Figure 1: K-Server Queueing Network
Let P_{n1, n2, ..., nK}(t) be the probability that there are n1 customers in the first buffer, n2 customers in the second buffer, ..., and nK customers in the K-th buffer at time t.
The difference-differential equations governing the system are given in equations (1)–(8). Let P(z1, z2, ..., zK; t) denote the probability generating function of P_{n1, n2, ..., nK}(t). Multiplying equations (1)–(8) by z1^{n1} z2^{n2} ... zK^{nK} and summing over n1, n2, ..., nK from 0 to ∞, we get the joint probability generating function of the number of customers in the first, second, ..., K-th buffers at any time t, as given in equation (9).
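Although equations (1)–(8) are not reproduced here, under the linear load-dependence assumption the expected buffer contents L_i(t) satisfy a closed system of linear ODEs: dL1/dt = λ − μ1·L1 and dLi/dt = p_i·μ1·L1 − μ_i·Li for i ≥ 2 (a standard property of such networks; the function below and its names are illustrative, not the paper's notation).

```python
def transient_means(lam, mu, p, t, dt=1e-4):
    """Forward-Euler integration of the mean ODEs from an empty system:
        dL1/dt = lam - mu1*L1
        dLi/dt = p_i*mu1*L1 - mu_i*Li   for i >= 2
    Returns [L_1(t), ..., L_K(t)]."""
    K = len(mu)
    L = [0.0] * K
    for _ in range(int(t / dt)):
        dL = [lam - mu[0] * L[0]]                      # first buffer
        for i in range(1, K):                          # parallel buffers
            dL.append(p[i - 1] * mu[0] * L[0] - mu[i] * L[i])
        L = [L[i] + dt * dL[i] for i in range(K)]
    return L
```

For the first buffer the ODE has the closed-form solution L1(t) = (λ/μ1)(1 − e^{−μ1 t}), which the numerical integration reproduces.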

Characteristics of the Model
Putting z1 = z2 = ... = zK = 0 in (9), we get P_{0,0,...,0}(t), which gives the probability that the system is empty at any time t.
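For networks of nodes with linearly load-dependent service rates started empty, the transient joint distribution is known to be product-form with Poisson marginals; under that assumption the system-emptiness probability reduces to the exponential of minus the sum of the transient means. A minimal sketch (the assumption and function name are ours):

```python
import math

def prob_system_empty(means):
    """P(all buffers empty at time t), assuming product-form Poisson
    marginals whose means are the transient mean buffer contents L_i(t)."""
    return math.exp(-sum(means))
```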

Performance Analysis of First Buffer
Putting z2 = z3 = ... = zK = 1 in (9), we get the probability generating function of the first-buffer size distribution, given in (11).

The mean number of customers in the first buffer, L1(t), and the variance of the number of customers in the first buffer are obtained from (11). Putting z1 = 0 in (11), we get the probability that the first buffer is empty; the utilization of the first server and the throughput Thp1(t) of the first server follow. Similarly, putting zi = 0 in (17), we get the probability that the i-th buffer is empty, from which the utilization of the i-th server and the average waiting time of customers in the i-th buffer (the average delay at the i-th server) are obtained.
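The measures listed above can be computed from the transient means under illustrative conventions: Poisson marginals give the emptiness probabilities, throughput is taken as μ_i·U_i(t), and waiting time as L_i(t)/Thp_i(t) (Little's formula). These conventions and the function name are assumptions made for this sketch; the paper's exact expressions are in its equations.

```python
import math

def performance_measures(means, mu):
    """Per-buffer measures from transient means L_i(t):
      empty : P(buffer i empty) = exp(-L_i)   (Poisson-marginal assumption)
      util  : U_i = 1 - P(empty)
      thp   : Thp_i = mu_i * U_i              (illustrative convention)
      wait  : W_i = L_i / Thp_i               (Little's formula)
    """
    out = []
    for L, m in zip(means, mu):
        p0 = math.exp(-L)
        u = 1.0 - p0
        thp = m * u
        w = L / thp if thp > 0 else 0.0
        out.append({"mean": L, "empty": p0, "util": u, "thp": thp, "wait": w})
    return out
```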

Numerical Illustration
The transient behaviour of the model is studied by computing the performance measures with the following set of values for the system parameters.
Each parameter is varied one at a time, keeping all others fixed; the mean number of customers in each buffer is calculated along with the mean number of customers L(t) in the entire system, and the results are recorded in Table 1. The corresponding probability of emptiness of each server and the utilization of the servers are calculated for each value of the parameters and tabulated in Table 2. The throughputs of the four servers, along with the mean waiting times of customers in the four buffers, are calculated and tabulated in Table 3.
From Table 1, we observe that as time t increases from 0.1 to 0.5 there is an increase in the mean number of customers in each buffer; the same phenomenon is observed for the mean number of customers in the entire system. If the service rate μ1 is increased, keeping the other parameters unchanged, the mean content L1(t) of the buffer at the first server decreases; correspondingly, the load L(t) on the entire system also decreases. Thus an improvement in the performance of one server improves the performance of the entire system. Similarly, when μ2 is increased L2(t) decreases, when μ3 is increased L3(t) decreases, and so on. On the same lines, when the probability p2 (p3) that a customer from the first server joins the second (third) server increases, the content L2(t) of the buffer at the second server (L3(t) at the third server) increases correspondingly.
Table 2 indicates that with respect to time the probability of emptiness shows a sudden decrease initially (at t = 0.1) and decreases gradually thereafter (for t = 0.2, 0.3, 0.4, 0.5). Similarly, with an increase in the mean arrival rate, the probability of emptiness at each server decreases while the utilization of the servers increases. This clearly indicates that the system is performing according to the requirement. As the service rates increase, the system tends towards rest; thus the probability of emptiness increases while the utilization of the servers decreases, as expected. The probability of emptiness of a particular server decreases as the probability of customers joining that server increases, while its utilization increases. Thus as p2 increases from 0.1 to 0.5, the system emptiness increases from 0.1480 to 0.3479. This has an impact on the fourth server, increasing the probability of emptiness at the fourth server from 0.8233 to 0.9200 and decreasing its utilization from 0.1767 to 0.0800. Similarly, with an increase of p3 from 0.1 to 0.5, the probability of emptiness of the third server decreases and its utilization increases, whereas the emptiness of the fourth server increases and its utilization decreases.
From Table 3 it is observed that the throughputs and the mean waiting times at each of the four servers increase with time. Similarly, an increase in the arrival rate λ leads to an increase in the throughputs as well as the mean waiting times. Further, an increase in the service rate at the second, third or fourth server leads to an increase in the throughputs and waiting times, except at the first server, whereas an increase in μ1 leads to an increase in Thp1(t) and a decrease in W1(t).
As the probability p2 of joining the second server (S2) increases from 0.1 to 0.5, the throughput Thp2(t) increases correspondingly from 0.2037 to 0.9610; this in turn increases the waiting time W2(t) from 0.1450 to 0.1537. As this forces p4 down from 0.7 to 0.3, the throughput Thp4(t) decreases from 1.5907 to 0.7198, while the mean waiting time W4(t) decreases from 0.1223 to 0.1158. The data therefore supports the theoretical expectations. Similar changes can be observed for the probability of joining the third server (S3) after being served at the first server (S1).

Sensitivity Analysis
In this section we consider the sensitivity analysis of the model with the parameter values t = 0.1, λ = 15, μ1 = 12, μ2 = 14, μ3 = 11, μ4 = 13, p2 = 0.3 and p3 = 0.2. The effect of variations of 5%, 10% and 15% in the parameters on the performance measures was computed; the results are given in Table 4.
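This kind of sensitivity computation can be sketched by perturbing one parameter at a time and recording the percentage change in a measure. Here we illustrate with the closed-form transient mean of the first buffer, L1(t) = (λ/μ1)(1 − e^{−μ1 t}), which holds under the linear load-dependence assumption; the function names and the choice of measure are ours.

```python
import math

def l1_mean(lam, mu1, t):
    """Transient mean content of the first buffer under linearly
    load-dependent service (illustrative closed form)."""
    return (lam / mu1) * (1.0 - math.exp(-mu1 * t))

def sensitivity_table(lam=15.0, mu1=12.0, t=0.1,
                      deltas=(-0.15, -0.10, -0.05, 0.05, 0.10, 0.15)):
    """Vary lambda by +/-5%, 10%, 15% (one at a time) and report
    (delta, perturbed L1, percentage change from the base value)."""
    base = l1_mean(lam, mu1, t)
    rows = []
    for d in deltas:
        val = l1_mean(lam * (1.0 + d), mu1, t)
        rows.append((d, val, 100.0 * (val - base) / base))
    return rows
```

Because L1(t) is linear in λ, a 10% change in λ produces exactly a 10% change in L1(t); perturbing μ1 instead (not shown) produces a nonlinear response.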

Steady State Analysis
In this section we study the steady-state behaviour of the queueing model by computing the mean queue length, the emptiness of each server, the utilization of each server and the average waiting time at each server. The joint probability generating function of the number of customers in the first, ..., K-th buffers at any time t is given in (9). Letting t → ∞, we get the steady-state joint probability generating function. Putting z1 = z2 = ... = zK = 0, we get the probability that the system is empty in steady state.
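Letting t → ∞ in the mean ODEs of the transient analysis and setting the derivatives to zero gives the steady-state means L1 = λ/μ1 and Li = p_i·λ/μ_i for i ≥ 2. A minimal sketch under those balance equations (illustrative names):

```python
def steady_state_means(lam, mu, p):
    """Steady-state mean buffer contents from the mean-ODE balance:
        0 = lam - mu1*L1          ->  L1 = lam/mu1
        0 = p_i*mu1*L1 - mu_i*Li  ->  Li = p_i*lam/mu_i   (i >= 2)
    """
    L = [lam / mu[0]]
    for i in range(1, len(mu)):
        L.append(p[i - 1] * lam / mu[i])
    return L
```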

Performance analysis of First buffer
Putting z2 = z3 = ... = zK = 1 in (10), we get the probability generating function of the first-buffer size distribution in steady state.

Comparative Study
A comparative study between the transient and steady-state behaviour of the developed model is carried out for t = 0.1, 1 and 3. The differences and the percentages of variation in all performance measures are computed and given in Table 5.
From Table 5 it is observed that there is a highly significant difference between the transient and steady-state behaviour of the model. At t = 0.1 the variation in the measures is highly significant, as can be observed in the last column. At t = 1 the percentage of variation is reduced and some of the measures differ only slightly.
It is also observed that as t increases the difference between the transient and steady-state behaviour becomes negligible, and at t = 3 there is practically no difference between them. This indicates that the system attains equilibrium after about t = 3 time units.
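The shrinking transient-to-steady-state gap can be illustrated for the first buffer alone: under the closed form L1(t) = (λ/μ1)(1 − e^{−μ1 t}), the percentage variation from the steady-state value λ/μ1 is 100·e^{−μ1 t}, which decays rapidly with t, consistent with the trend in Table 5. Parameters and function names here are illustrative.

```python
import math

def pct_variation_first_buffer(lam, mu1, times):
    """Percentage by which the transient mean first-buffer content falls
    short of its steady-state value lam/mu1 at each time in `times`.
    Since L1(t) = (lam/mu1)*(1 - exp(-mu1*t)), the variation is
    100*exp(-mu1*t), independent of lam."""
    return [100.0 * math.exp(-mu1 * t) for t in times]
```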

Conclusion
In this paper we developed and analysed a queueing model in which customers arrive at the first queue and, after getting service at the first server, join one of the (K-1) parallel queues with a certain probability. The service rates are assumed to depend on the content of the buffers. Explicit expressions are derived for system characteristics such as the average number of customers in each queue, the probability of idleness of each service station, the throughput of the nodes, the average waiting time of customers in each queue and the utilisation of each server. The sensitivity analysis of the model revealed that the arrival rates and the load-dependent service-rate parameters have a significant influence on the performance measures. The proposed model is useful for scheduling in communication networks such as LANs, MANs and WANs. The optimal operating policies of the model with suitable cost considerations will be considered elsewhere.