
Introduction:

Amortized analysis is a technique for bounding the total time required to perform a sequence of data-structure operations. When one set of inputs affects the cost of the next operations in the sequence, we turn to amortized analysis.

A single operation in the sequence may cost much more than the average cost over the whole sequence. Amortized analysis is not applicable everywhere; it applies whenever one operation in a problem affects the cost of later operations in that problem. To bound the worst-case running time of a sequence, we could take the worst-case cost of any single data-structure operation, p; if the sequence contains m operations, then mp is an upper bound for the total running time. This is technically correct, but it may be surprisingly pessimistic and give a misleading picture of the actual cost. From the above paragraph we can summarize the main ideas of our topic: Amortized analysis applies when one operation on a data structure affects the cost of later operations. Amortized analysis itself gives an upper bound on the total cost, which works out to an average performance guarantee for each operation in the worst case. Amortized analysis is concerned with the whole cost of a sequence of operations; it does not deal with the cost of a single operation in that sequence. Suppose that the amortized cost of insertion into a splay tree with m items is O(log m). This does not mean that when I insert ’54’ into this tree, the cost will be O(log m); in fact, inserting ’54’ might require O(m) operations. It is only appropriate to say that when I insert m items into a tree, the average time for each operation will be O(log m).

History:

Amortization in finance means paying off a debt, such as a loan or a mortgage, through smaller payments made over a period of time.

Aggregate analysis, the method now known as amortized analysis, was introduced by Robert Tarjan. As he wrote in his 1985 paper, surprisingly tight upper and lower bounds can be achieved for many varieties of algorithms with this technique, and it has since been used by many researchers. Tarjan showed that it yields “self-adjusting” data structures that are efficient provided their amortized complexity is low. Amortization plays a vital role in the analysis of many other standard algorithms and data structures, including

·         maximum flow (in optimization theory, maximum flow problems involve finding a feasible flow through a single-source, single-sink flow network that has maximum value)

·         Fibonacci heaps (in computer science, a Fibonacci heap is a data structure for priority queue operations, consisting of a collection of heap-ordered trees. It has a better amortized running time than many other priority queue data structures, including the binary heap and the binomial heap. Michael L. Fredman and Robert E. Tarjan developed Fibonacci heaps in 1984 and published them in a scientific journal in 1987. They named Fibonacci heaps after the Fibonacci numbers, which are used in their running time analysis.)

·         Dynamic arrays (in computer science, a dynamic array, also known as a growable array, resizable array, dynamic table, mutable array, or array list, is a random-access, variable-size list data structure that allows elements to be added or removed. It is supplied with standard libraries in many modern mainstream programming languages. Dynamic arrays overcome a limitation of static arrays, which have a fixed capacity that must be specified at allocation.) A short sketch of this amortized behavior follows below.
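
To make the amortized behavior of dynamic arrays concrete, here is a minimal Python sketch (added for illustration; the class name GrowableArray and its copies counter are assumptions for this example, not part of the original text). A single append may cost O(n) when the buffer doubles, but the total copying work over n appends stays below 2n, so each append is amortized O(1):

    class GrowableArray:
        """Doubling dynamic array; append is amortized O(1)."""

        def __init__(self):
            self._capacity = 1
            self._size = 0
            self._buffer = [None] * self._capacity
            self.copies = 0  # element copies caused by resizing

        def append(self, item):
            if self._size == self._capacity:
                # Expensive step: double the capacity and copy every element.
                self._capacity *= 2
                new_buffer = [None] * self._capacity
                for i in range(self._size):
                    new_buffer[i] = self._buffer[i]
                    self.copies += 1
                self._buffer = new_buffer
            self._buffer[self._size] = item
            self._size += 1

    arr = GrowableArray()
    for x in range(1000):
        arr.append(x)
    print(arr.copies)  # 1023 copies for n = 1000 appends, below 2n = 2000

Doubling the capacity, rather than growing it by a fixed amount, is what keeps the total copying work geometric and hence linear overall.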

Comparison to other analysis techniques:

Worst-case analysis can give overly pessimistic bounds for sequences of operations, because it does not account for the interactions between operations on the same data structure. Amortized analysis can give a tighter bound than the naive worst case by taking such interactions into account. The bound given by amortized analysis is a worst-case bound on the average time per operation; a specific operation in a sequence may cost more than this bound, but the total cost over all the operations in any valid sequence always stays within the bound. Amortized analysis resembles average-case analysis, but it concerns the average cost over a sequence of operations and involves no probabilistic assumptions: the guarantee holds in the worst case.

Methods of Amortized analysis:

We are going to discuss three methods:

·         Aggregate method

·         Potential method

·         Accounting method

Aggregate Method:

In the aggregate method, the total running time for a sequence of operations is analyzed.

Ø  In aggregate analysis, we show that for every m, a sequence of m operations takes worst-case time T(m) in total.

Ø  In the worst case, the average cost, or amortized cost, per operation is therefore T(m)/m. Note that this amortized cost applies to each operation, even when there are several types of operations in the sequence.

The other two methods, the accounting method and the potential method, assign amortized costs to individual operations instead of averaging over the whole sequence.

Stack operations:

In our first example we consider a stack with two operations, PUSH and POP; let us assume each costs O(1).

PUSH(S, x) pushes object x onto stack S.

POP(S) pops the top object off stack S and returns the popped object. Make sure that you do not call POP on an empty stack.

Ø  What will happen if you call the POP operation on an empty stack?

               Surely, it will give us an error.

Since each of these stack operations takes O(1) time, let us take the cost of each to be 1. The total cost of a sequence of n PUSH and POP operations is therefore n, and the actual running time for n operations is Θ(n).

Now let us add one more stack operation, MULTIPOP(S, k), where S is a stack and k is the number of objects to remove. MULTIPOP(S, k) performs a simple task: it removes the top k objects of stack S, emptying the whole stack if it contains fewer than k objects. Of course, we always assume that k is positive; a call with a non-positive k does not change the stack and leaves it exactly as it was. In the following pseudocode, the operation STACK-EMPTY returns TRUE if there are no objects currently on the stack, and FALSE otherwise.
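
MULTIPOP(S, k)
1   while not STACK-EMPTY(S) and k > 0
2       POP(S)
3       k = k - 1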

The top 4 objects are popped when we call MULTIPOP(S, 4). The next operation, MULTIPOP(S, 8), empties the stack, since fewer than 8 objects remain.

According to you, what is the running time of MULTIPOP(S, k)? The actual time is linear in the number of POP operations executed, so we can analyze MULTIPOP in terms of abstract costs of 1 for each PUSH and POP.

 

The while loop runs until the stack is empty or k reaches zero; even when we call MULTIPOP with a number larger than the stack size, each iteration of the while loop calls POP once, as you can see in line 2. Thus, the total cost of MULTIPOP is min(s, k), where s is the size of the stack. Recalling the idea of amortized analysis, the sequence as a whole takes linear time even though one operation in the sequence is more expensive than the others.

Let us consider a sequence of n PUSH, POP, and MULTIPOP operations on an initially empty stack. The worst-case cost of a MULTIPOP operation in the sequence is O(n), since the stack size is at most n. The worst-case time of any stack operation is therefore O(n), and hence a sequence of n operations costs O(n²), since we may have O(n) MULTIPOP operations costing O(n) each. Although this analysis is correct, the O(n²) result, obtained by considering the worst-case cost of each operation individually, is not a tight bound.

Using aggregate analysis, we can obtain a better upper bound that considers the entire sequence of n operations. In fact, although a single MULTIPOP operation can be expensive, any sequence of n PUSH, POP, and MULTIPOP operations on an initially empty stack costs at most O(n). We can pop each object from the stack at most once for each time we have pushed it onto the stack. Therefore, the number of times that POP can be called on a nonempty stack, including calls within MULTIPOP, is at most the number of PUSH operations, which is at most n. For any value of n, any sequence of n PUSH, POP, and MULTIPOP operations takes a total of O(n) time. The average cost of an operation is O(n)/n = O(1).

In aggregate analysis, we assign the amortized cost of each operation to be this average cost. In this example, therefore, all three stack operations have an amortized cost of O(1). We emphasize again that although we have just shown that the average cost, and hence the amortized running time, of a stack operation is O(1), we actually showed a worst-case bound of O(n) on a sequence of n operations; dividing this total cost by n yields the average cost per operation, or the amortized cost.
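
To see this bound concretely, here is a minimal Python sketch (added for illustration; the helper run_sequence and its random mix of operations are assumptions, not part of the original analysis). It counts every individual POP performed, including those inside MULTIPOP, and checks that the total never exceeds the number of PUSH operations:

    import random

    def run_sequence(n):
        """Perform n random stack operations on an initially empty stack
        and return (pushes, total_pops) for comparison."""
        stack, pushes, total_pops = [], 0, 0
        for _ in range(n):
            op = random.choice(["PUSH", "POP", "MULTIPOP"])
            if op == "PUSH":
                stack.append(object())
                pushes += 1
            elif op == "POP" and stack:
                stack.pop()
                total_pops += 1
            elif op == "MULTIPOP":
                k = random.randint(1, n)   # k may exceed the stack size
                while stack and k > 0:     # mirrors lines 1-3 of MULTIPOP
                    stack.pop()
                    total_pops += 1
                    k -= 1
        return pushes, total_pops

    pushes, pops = run_sequence(10000)
    assert pops <= pushes  # each object is popped at most once per push
    print(pushes, pops)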

Incrementing a binary counter:

As another example of aggregate analysis, consider a binary counter stored in an array A of bits that counts upward from 0. The worst case for a single INCREMENT(A) occurs when all i bits are flipped, so INCREMENT(A) has a worst-case running time of O(i). In a sequence of n increment operations, however, few increments cause that many bits to flip. Indeed, bit 0 flips with every increment, bit 1 flips with every 2nd increment, bit 2 flips with every 4th increment, and so on.

The total number of bit flips in n increment operations is therefore

              n + n/2 + n/4 + … + n/2^i < 2n,

so the amortized cost per increment is O(n)/n = O(1).

In the potential method, by contrast, each operation i has an amortized cost equal to its actual cost ci plus the change in potential ΔΦi. If ΔΦi > 0, then the amortized cost exceeds ci: operation i stores work in the data structure for later use. If ΔΦi < 0, then the amortized cost is less than ci: the operation pays for its extra work with work stored earlier.
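
As a check on this arithmetic, here is a minimal Python sketch (added for illustration; returning the flip count from increment is an assumption made for this example) of the increment operation on a bit-array counter:

    def increment(A):
        """Add 1 to the bit-array counter A (least significant bit first)
        and return the number of bits flipped by this call."""
        flips, i = 0, 0
        while i < len(A) and A[i] == 1:
            A[i] = 0           # carry: flip a trailing 1 to 0
            flips += 1
            i += 1
        if i < len(A):
            A[i] = 1           # flip the first 0 to 1
            flips += 1
        return flips

    n = 1000
    A = [0] * 16               # 16 bits is plenty for n = 1000
    total = sum(increment(A) for _ in range(n))
    print(total)               # 1994, which is below 2n = 2000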
