Computer Systems Engineering/Poisson Processes


Poisson Process

  • Named for Siméon-Denis Poisson (1781–1840)
  • Describes a “completely random process”

Example: the decay of radioactive material

  • Generally a good model when events are caused by many independent sources (often 4 or 5 independent sources are enough to give a good approximation)

Properties

Average time between events: $1/\lambda$

Average number of events in an interval of length $T$: $\lambda T$

Memorylessness (very important):

$$P(X > s + t \mid X > s) = P(X > t)$$

No event has occurred in time $s$; the probability that the next event occurs within a further time $t$ is

$$P(X \le s + t \mid X > s) = 1 - e^{-\lambda t}$$

Remember: non-overlapping time intervals are independent!

One time interval is like any other, which leads to a paradox: if we pick a point at random, it is more likely to fall in a large interval than in a small one (large intervals occupy more of the line).

For Poisson arrivals, if we pick a point in time at random, the interevent interval containing it will be (on average) twice as large as the average interval.

Given the number of events in an interval, the events are uniformly distributed within it.

Properties of the Poisson Process

Average Interevent Time

For the Poisson process the time between events has the exponential distribution, with density function

$$f(x) = \lambda e^{-\lambda x}, \qquad x \ge 0.$$

The expected time between events is found by integrating by parts:

$$E[X] = \int_0^\infty x\, \lambda e^{-\lambda x}\, dx = \frac{1}{\lambda}.$$

The variance of the interevent time can be calculated similarly. It is

$$\operatorname{Var}(X) = E[X^2] - (E[X])^2 = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}.$$

Hence its standard deviation is

$$\sigma_X = \frac{1}{\lambda}$$

and its coefficient of variation is

$$C_X = \frac{\sigma_X}{E[X]} = 1.$$
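
As a quick check of these properties, here is a minimal simulation sketch (not part of the original text; the rate and sample size are arbitrary choices) that estimates the mean, standard deviation, and coefficient of variation of exponential interevent times:

```python
# Minimal sketch: estimate interevent-time statistics of a Poisson
# process by drawing exponential samples. lam and n are arbitrary.
import random

lam = 2.0                    # event rate (events per unit time)
n = 100_000                  # number of simulated interevent times
samples = [random.expovariate(lam) for _ in range(n)]

mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n
std = var ** 0.5

print(f"mean ~ {mean:.4f}   (theory: {1/lam:.4f})")
print(f"std  ~ {std:.4f}   (theory: {1/lam:.4f})")
print(f"CV   ~ {std/mean:.4f}   (theory: 1.0)")
```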

The probability of exactly one event in $h$ is $P_1(h) = \lambda h\, e^{-\lambda h}$. Again expanding in a Taylor series for small $h$ gives

$$P_1(h) = \lambda h \left(1 - \lambda h + \frac{(\lambda h)^2}{2!} - \cdots \right) = \lambda h + o(h).$$

The probability of more than 1 event in $h$ is

$$P(N(h) > 1) = 1 - P_0(h) - P_1(h) = 1 - e^{-\lambda h} - \lambda h\, e^{-\lambda h} = o(h).$$
This establishes the equivalence of properties (ii), (iii) and (iv) for the Poisson process.
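
To see these small-$h$ behaviors numerically, here is a brief sketch (not from the original text; the rate $\lambda = 2.0$ is an arbitrary choice) that evaluates the exact formulas for shrinking $h$:

```python
# Sketch: as h -> 0, P(one event in h)/h -> lambda, while
# P(more than one event in h)/h -> 0, i.e. P(N(h) > 1) = o(h).
import math

lam = 2.0
for h in (0.1, 0.01, 0.001):
    p0 = math.exp(-lam * h)              # P(no events in h)
    p1 = lam * h * math.exp(-lam * h)    # P(exactly one event in h)
    p_more = 1.0 - p0 - p1               # P(more than one event in h)
    print(f"h={h:6}: P1/h = {p1/h:.5f}, P(>1)/h = {p_more/h:.5f}")
```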

Exponential Implies Poisson

That the exponential interevent time distribution implies a Poisson distributed number of events in an interval can be shown directly in almost the same way used to show that (iv) implied (ii).

Consider an interval of length T. Again, the probability of no events in T is the probability that the first interevent time exceeds T:

$$P_0(T) = P(X > T) = e^{-\lambda T}.$$

The probability of exactly one event in T is the total probability of one event at y and no events in the interval (T − y), for all possible values of y:

$$P_1(T) = \int_0^T \lambda e^{-\lambda y}\, P_0(T - y)\, dy = \int_0^T \lambda e^{-\lambda y}\, e^{-\lambda (T - y)}\, dy = \lambda T\, e^{-\lambda T}.$$

The probability of exactly two events in T is the total probability of one event at y and one event in the interval (T − y), for all possible values of y:

$$P_2(T) = \int_0^T \lambda e^{-\lambda y}\, P_1(T - y)\, dy = \frac{(\lambda T)^2}{2!}\, e^{-\lambda T},$$

and in general

$$P_k(T) = \frac{(\lambda T)^k}{k!}\, e^{-\lambda T}.$$
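
This claim can also be checked by simulation. The sketch below (not from the original text; $\lambda$, T, and the run count are arbitrary choices) builds events from exponential gaps and compares the distribution of the count in an interval with the Poisson formula:

```python
# Sketch: counts of events with exponential gaps should follow the
# Poisson distribution P_k(T) = (lam*T)^k e^(-lam*T) / k!.
import math
import random

lam, T, runs = 1.5, 4.0, 50_000

def count_events(lam, T):
    """Number of events in (0, T] when gaps are exponential(lam)."""
    t, k = random.expovariate(lam), 0
    while t <= T:
        k += 1
        t += random.expovariate(lam)
    return k

counts = [count_events(lam, T) for _ in range(runs)]
for k in range(10):
    empirical = counts.count(k) / runs
    theory = (lam * T) ** k * math.exp(-lam * T) / math.factorial(k)
    print(f"P(N(T)={k}): simulated {empirical:.4f}, Poisson {theory:.4f}")
```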

Average Number of Events in an Interval

The expected number of events which occur during an interval of length T is

$$E[N(T)] = \sum_{k=0}^{\infty} k\, \frac{(\lambda T)^k}{k!}\, e^{-\lambda T} = \lambda T \sum_{k=1}^{\infty} \frac{(\lambda T)^{k-1}}{(k-1)!}\, e^{-\lambda T} = \lambda T.$$

The results are not unexpected, but they confirm that the parameter $\lambda$ for the Poisson process (or equivalently the exponential distribution) is the average event rate.

Memoryless Property

Supposing a time s has passed since the last event, what is the probability that the time until the next event is greater than t? This is the same as asking the probability that the time between events, X, is greater than s + t given that it is greater than s.

By the definition of conditional probability,

$$P(X > s + t \mid X > s) = \frac{P(X > s + t)}{P(X > s)}.$$

Hence substituting $P(X > x) = e^{-\lambda x}$ gives

$$P(X > s + t \mid X > s) = \frac{e^{-\lambda (s + t)}}{e^{-\lambda s}} = e^{-\lambda t} = P(X > t).$$

This is the well-known memoryless property of the Poisson process. The distribution of the time until the next event is the same regardless of how much time has passed since the last event, and the average time until the next event is the same as the average interevent time. This property is also a direct consequence of the complete randomness of the Poisson process; what happens in the interval t is independent of what has happened in the interval s.
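
A short empirical check of memorylessness (a sketch, not from the original text; $\lambda$, s, and the sample size are arbitrary choices): among exponential samples that exceed s, the excess over s should have the same mean, $1/\lambda$, as a fresh sample:

```python
# Sketch: the residual time beyond s has the same mean as the full
# interevent time -- the memoryless property.
import random

lam, s, n = 1.0, 2.0, 500_000
samples = [random.expovariate(lam) for _ in range(n)]
excess = [x - s for x in samples if x > s]
print(f"mean excess beyond s={s}: {sum(excess)/len(excess):.3f} "
      f"(theory {1/lam:.3f})")
```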

The memoryless property of the Poisson process leads to an interesting paradox. Suppose we have a Poisson process and pick an arbitrary time $t'$. The average time until the next event is $1/\lambda$ and, looking backwards, the average time since the last event is also $1/\lambda$. Thus the average time between these two events must be $2/\lambda$. But the average interevent time is $1/\lambda$. The paradox can be resolved by noting that the time $t'$ is more likely to fall in a large interevent interval than in a small one. In fact, the expected size of the interevent interval containing $t'$ is twice the average.

To illustrate the consequences of this paradox, suppose that the times between requests for a disk I/O operation in a computer system are exponentially distributed. We measure the time between requests by reading a clock and resetting it to zero after each read or write. If we attempt to sample this process by picking some arbitrary time $t'$ and then reading the clock at the next I/O operation, the average value of our samples will be twice the average time between I/O operations. To sample this process correctly we would need to pick an arbitrary time and then measure the time between the next two I/O operations.
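
This sampling bias is easy to reproduce. The following sketch (not from the original text; the rate, horizon, and sample count are arbitrary choices) lays Poisson events on a long time line, picks random inspection times $t'$, and measures the interevent interval each one falls in:

```python
# Sketch of the sampling paradox: the interval containing a randomly
# chosen time t' averages 2/lambda, twice the average interevent time.
import bisect
import random

lam, horizon = 1.0, 100_000.0

# Event times, starting from an event at time 0 and extending just
# past the horizon so every inspection time falls inside an interval.
events, t = [0.0], 0.0
while t <= horizon:
    t += random.expovariate(lam)
    events.append(t)

total, n_samples = 0.0, 100_000
for _ in range(n_samples):
    u = random.uniform(0.0, horizon)
    i = bisect.bisect_right(events, u)      # events[i-1] <= u < events[i]
    total += events[i] - events[i - 1]

print(f"average interevent interval   : {1/lam:.3f}")
print(f"average interval containing t': {total/n_samples:.3f} "
      f"(about 2/lambda)")
```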

Events Uniformly Distributed in Interval

If we have an interval of length t and one event occurred during t, what is the probability that the event occurred during a subinterval of length s?

This is the same as asking the probability that $X \le s$ (where X is the time from the start of the interval until the event) given that exactly one event occurred during t, i.e. $P(X \le s \mid N(t) = 1)$. We note that if $X \le s$ then one event occurred during s and no events occurred during the interval (t − s). Hence

$$P(X \le s \mid N(t) = 1) = \frac{P(\text{one event in } s,\ \text{no events in } t - s)}{P(N(t) = 1)}.$$

Now, since what happens in two non-overlapping intervals is independent, $P(\text{one event in } s,\ \text{no events in } t - s) = P_1(s)\, P_0(t - s)$. Substituting the Poisson distribution gives

$$P(X \le s \mid N(t) = 1) = \frac{\lambda s\, e^{-\lambda s} \cdot e^{-\lambda (t - s)}}{\lambda t\, e^{-\lambda t}} = \frac{s}{t}.$$

The probability that the event occurred during the subinterval s is proportional to the size of the subinterval. That is, the event is uniformly distributed in the interval t. This has the distribution and density functions

$$F(x) = \frac{x}{t}, \qquad f(x) = \frac{1}{t}, \qquad 0 \le x \le t,$$

where t is the length of the interval.
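
A quick simulation sketch of this result (not from the original text; $\lambda$, t, and the number of runs are arbitrary choices): generate events in (0, t), keep only the runs with exactly one event, and look at where that event fell:

```python
# Sketch: given exactly one event in (0, t), its position is uniform.
import random

lam, t, runs = 0.8, 5.0, 200_000
positions = []
for _ in range(runs):
    times, s = [], random.expovariate(lam)
    while s < t:
        times.append(s)
        s += random.expovariate(lam)
    if len(times) == 1:        # condition on exactly one event in (0, t)
        positions.append(times[0])

# Fraction landing in each fifth of the interval (theory: 0.2 each).
for i in range(5):
    lo, hi = i * t / 5, (i + 1) * t / 5
    frac = sum(lo <= x < hi for x in positions) / len(positions)
    print(f"[{lo:.1f}, {hi:.1f}): {frac:.3f}")
```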

Superposition and Decomposition

Suppose we have two independent Poisson processes, which we designate as process 1 and process 2, and let:

$N_1(T)$ = the number of events in process 1 during an interval of length T
$N_2(T)$ = the number of events in process 2 during the same interval
$\lambda_1$ = the average event rate for process 1
$\lambda_2$ = the average event rate for process 2

The superposition of these two processes is the process consisting of the events of both process 1 and process 2.

What is the probability distribution for the number of events in the superimposed process, $N(T) = N_1(T) + N_2(T)$?

If r is the number of events during T in the first process, the superimposed process can have k events only if process 2 has k − r events (conditioning on k events in total). The processes are independent. Hence for each value of r, r = 0, 1, ..., k,

$$P(N(T) = k \mid N_1(T) = r) = P(N_2(T) = k - r).$$

Thus the total probability of k events in the superimposed process is the sum of these probabilities, each multiplied by the probability that there were r events in process 1. This is the law of total probability:

$$P(N(T) = k) = \sum_{r=0}^{k} P(N_1(T) = r)\, P(N_2(T) = k - r).$$

Substituting in the Poisson distribution gives

$$P(N(T) = k) = \sum_{r=0}^{k} \frac{(\lambda_1 T)^r e^{-\lambda_1 T}}{r!} \cdot \frac{(\lambda_2 T)^{k-r} e^{-\lambda_2 T}}{(k - r)!} = \frac{e^{-(\lambda_1 + \lambda_2) T}}{k!} \sum_{r=0}^{k} \binom{k}{r} (\lambda_1 T)^r (\lambda_2 T)^{k-r}.$$

The last sum is simply the binomial expansion of

$$(\lambda_1 T + \lambda_2 T)^k.$$

Hence

$$P(N(T) = k) = \frac{[(\lambda_1 + \lambda_2) T]^k}{k!}\, e^{-(\lambda_1 + \lambda_2) T}.$$
But this is simply the Poisson distribution with parameter $\lambda_1 + \lambda_2$. Furthermore, since process 1 and process 2 are independent and events during non-overlapping intervals in each process are independent, events in non-overlapping intervals of the superimposed process are independent. Hence it is also a Poisson process.
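
The result can be confirmed by simulation. Here is a sketch (not from the original text; the two rates, T, and the run count are arbitrary choices) that merges two independent streams and compares the merged count distribution with Poisson($\lambda_1 + \lambda_2$):

```python
# Sketch: counts of the superposition of two independent Poisson
# streams follow a Poisson distribution with parameter (l1 + l2) * T.
import math
import random

l1, l2, T, runs = 0.7, 1.3, 3.0, 50_000

def n_events(lam, T):
    """Number of events in (0, T] for a Poisson process of rate lam."""
    t, k = random.expovariate(lam), 0
    while t <= T:
        k += 1
        t += random.expovariate(lam)
    return k

counts = [n_events(l1, T) + n_events(l2, T) for _ in range(runs)]
lam = l1 + l2
for k in range(12):
    emp = counts.count(k) / runs
    thy = (lam * T) ** k * math.exp(-lam * T) / math.factorial(k)
    print(f"k={k:2d}: simulated {emp:.4f}, Poisson {thy:.4f}")
```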

The converse is also true. If a Poisson process is split (decomposed) into two by selecting events for the first process with probability p and events for the second process with probability q = 1 − p, then the two processes are independent Poisson processes with parameters $p\lambda$ and $q\lambda$ respectively.
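
A companion sketch for decomposition (not from the original text; $\lambda$, p, and the horizon are arbitrary choices): thin a single Poisson stream with probability p and check that the substream rates come out to $p\lambda$ and $q\lambda$:

```python
# Sketch: splitting a Poisson stream with probability p yields
# substreams with rates p*lambda and (1-p)*lambda.
import random

lam, p, horizon = 2.0, 0.3, 100_000.0
n1 = n2 = 0
t = random.expovariate(lam)
while t < horizon:
    if random.random() < p:      # route each event independently
        n1 += 1
    else:
        n2 += 1
    t += random.expovariate(lam)

print(f"substream 1 rate: {n1/horizon:.3f} (theory {p*lam:.3f})")
print(f"substream 2 rate: {n2/horizon:.3f} (theory {(1-p)*lam:.3f})")
```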

The superposition and decomposition properties are unique to the Poisson process and arise from its complete randomness. If two non-Poisson processes having average event rates $\lambda_1$ and $\lambda_2$ are superimposed, the resulting process will have an average event rate $\lambda_1 + \lambda_2$, but it will not be Poisson; nor will it, in general, be a renewal process, in which the interevent times are independent and identically distributed.

Combining Non-Poisson Processes

However, if a large number $n$ of independent non-Poisson processes (with event rates $\lambda_i$) are superimposed, the resulting process will be Poisson with parameter $\lambda = \sum_i \lambda_i$ [2].

Actually, this is precisely true only in the limit as $n \to \infty$. Again, it results from the randomness of the Poisson process. For fixed $\lambda$, as the number of superimposed processes is increased, the rate $\lambda_i$ of each process must decrease. Thus events from the same non-Poisson process become more and more separated in the superimposed stream, making the process more and more random.
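
To illustrate, here is a sketch (not from the original text; the uniform gap distribution and the process counts are arbitrary choices) that superimposes n renewal processes with distinctly non-exponential interevent times and measures the coefficient of variation of the merged gaps, which should approach the exponential value of 1 as n grows. (CV → 1 is only an indicative check, not a full test of Poisson-ness.)

```python
# Sketch: merging many independent non-Poisson (uniform-gap) renewal
# processes drives the merged interevent-time CV toward 1 (exponential).
import random

def merged_cv(n_procs, events_per_proc=2000):
    times = []
    for _ in range(n_procs):
        t = 0.0
        for _ in range(events_per_proc):
            t += random.uniform(0.5, 1.5)   # mean 1, CV ~ 0.29
            times.append(t)
    times.sort()
    gaps = [b - a for a, b in zip(times, times[1:])]
    m = sum(gaps) / len(gaps)
    v = sum((g - m) ** 2 for g in gaps) / len(gaps)
    return (v ** 0.5) / m

for n in (1, 2, 5, 20, 100):
    print(f"n = {n:3d}: CV of merged gaps ~ {merged_cv(n):.3f}")
```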

Sampling

The converse is also true. If events in a non-Poisson process (with average rate $\lambda$) are sampled infrequently (i.e. the probability p of an event being sampled is small), the resulting sampled process, having the event rate

$$\lambda_s = p\lambda,$$

will be Poisson. Again, this is precisely true only in the limit as $p \to 0$ and $\lambda \to \infty$ such that $\lambda_s = p\lambda$ remains constant.
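
A final sketch (not from the original text; the deterministic base process and the values of p are arbitrary choices): sample a perfectly regular event stream, the least random process possible, with probability p, and watch the sampled gaps approach CV = 1 as p shrinks:

```python
# Sketch: rarely sampled events from a deterministic (period-1) stream
# have geometric gaps, whose CV = sqrt(1 - p) -> 1 as p -> 0.
import random

def sampled_cv(p, n_events=1_000_000):
    gaps, last = [], 0
    for k in range(1, n_events + 1):     # events at times 1, 2, 3, ...
        if random.random() < p:
            gaps.append(k - last)
            last = k
    m = sum(gaps) / len(gaps)
    v = sum((g - m) ** 2 for g in gaps) / len(gaps)
    return (v ** 0.5) / m

for p in (0.5, 0.1, 0.01):
    print(f"p = {p}: CV of sampled gaps ~ {sampled_cv(p):.3f}")
```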

These limiting results help explain the usefulness of the Poisson process in analyzing computer systems. For example, if jobs are submitted from a large number of terminals, the composite job arrival process is the superposition of the submittal processes at the individual terminals, and these tend to be independent. Thus if the number of terminals is large, the composite process will be approximately Poisson. In practice only a few terminals are needed to make the Poisson distribution a good approximation to the actual job arrival process.

As another example [1], consider the process making page transfer requests to a disk in a virtual memory system. This is probably not very random, but if the number of disk sectors is large and requests are directed to the sectors with equal probability, the requesting process seen by each sector can be closely approximated by a Poisson process.

References

1. Gelenbe, E. and R.R. Muntz, "Probabilistic Models of Computer Systems - Part I (Exact Results)," Acta Informatica, Vol. 7, 1976, pp. 35–66.

2. Karlin, Samuel and Howard M. Taylor, A First Course in Stochastic Processes, Academic Press, 1975.

3. Kobayashi, H., Modeling and Analysis, Addison-Wesley, 1978.