Introduction:

Amortized analysis is a technique for bounding the total time required to perform a sequence of data-structure operations. We turn to amortized analysis when one operation in the sequence affects the cost of the operations that follow it. A single operation in the sequence may cost much more than the average, yet the total cost of the sequence remains small. Amortized analysis is not applicable everywhere; it applies when the operations in a sequence interact with one another. To bound the worst-case running time of such a sequence naively, let p be the worst-case cost of any single operation; if the sequence contains m operations, then mp is an upper bound for the total running time. This is technically correct, but it can be surprisingly loose and give a misleading picture of the true cost. From the paragraph above we can summarize the main ideas of our topic:

· Amortized analysis applies when one operation in a sequence affects the cost of later operations.

· Amortized analysis yields an upper bound: a worst-case guarantee on the average performance of each operation.

· Amortized analysis is concerned with the whole cost of a sequence of operations. It does not deal with the cost of a single operation in that sequence.

For example, suppose the amortized cost of insertion into a splay tree with m items is O(log m). It does not follow that when I insert '54' into this tree, the cost will be O(log m); in fact, inserting '54' might require O(m) operations. It is only appropriate to say that when I insert m items into a tree, the average time for each operation will be O(log m).

History:

Amortization in finance means paying off a debt, such as a loan or a mortgage, by smaller payments made over a period of time.

Aggregate analysis, the method now known more broadly as amortized analysis, was introduced by Robert Tarjan. As he wrote in his 1985 paper, the technique yields surprisingly tight upper and lower bounds for many varieties of algorithms, and it has since been used by many researchers. Tarjan also showed that with this technique we can obtain "self-adjusting" data structures that are efficient, provided their amortized complexity is low.

Amortization plays a vital role in the analysis of many other standard algorithms and data structures, including:

· Maximum flow: in optimization theory, maximum-flow problems involve finding a feasible flow of maximum value through a single-source, single-sink flow network.

· Fibonacci heaps: in computer science, a Fibonacci heap is a data structure for priority-queue operations, consisting of a collection of heap-ordered trees. It has a better amortized running time than many other priority-queue data structures, including the binary heap and the binomial heap. Michael L. Fredman and Robert E. Tarjan developed Fibonacci heaps in 1984 and published them in a scientific journal in 1987. They named Fibonacci heaps after the Fibonacci numbers, which are used in their running-time analysis.

· Dynamic arrays: in computer science, a dynamic array (also known as a growable array, resizable array, dynamic table, mutable array, or array list) is a random-access, variable-size list data structure that allows elements to be added or removed. It is supplied with the standard libraries of many modern mainstream programming languages. Dynamic arrays overcome a limitation of static arrays, which have a fixed capacity that must be specified at allocation.

Comparison to other analysis techniques:

Worst-case analysis can give overly pessimistic bounds for sequences of operations, because it ignores the interactions between operations on the same data structure. Amortized analysis can produce a bound much closer to the practical worst case by taking such interactions into account. The bound given by amortized analysis is a worst-case bound on the average time per operation: a specific operation in a sequence may cost more than this bound, but the total cost over all the operations in any valid sequence will always stay within it. Amortized analysis resembles average-case analysis in that it concerns an average cost, but the average is taken over a worst-case sequence of operations, with no probabilistic assumptions about the input.

Methods of Amortized analysis:

We are going to discuss three methods:

· Aggregate method

· Potential method

· Accounting method

Aggregate Method:

In the aggregate method, the overall running time for a sequence of operations is analyzed.

· In aggregate analysis, we show that for every m, a sequence of m operations takes worst-case time T(m) in total.

· In the worst case, the average cost, or amortized cost, per operation is therefore T(m)/m. Note that this amortized cost applies to each operation, even when there are several types of operations in the sequence.

The other two methods are the accounting method and the potential method.

Stack operations:

In our first example we consider a stack with two operations, PUSH and POP, each of which we assume costs O(1).

PUSH(S, x) pushes object x onto stack S.

POP(S) pops the top object off stack S and returns the popped object. Make sure that you do not call POP on an empty stack.

What will happen if you call POP on an empty stack? Surely, it will give us an error (a stack underflow).

Since every stack operation above takes O(1) time, the total cost of a sequence of n PUSH and POP operations is n, and the actual running time for n operations is therefore Θ(n).

Now let us add one more stack operation, MULTIPOP(S, k), where S is a stack and k is a number of objects. MULTIPOP(S, k) performs a simple task: it removes the top k objects of stack S, emptying the entire stack if it contains fewer than k objects. We always assume that k is positive; a non-positive k is never passed, and if it were, the MULTIPOP operation would leave the stack unchanged. In the following pseudocode, the operation STACK-EMPTY returns TRUE if there are no objects currently on the stack, and FALSE otherwise.
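The pseudocode itself is not reproduced here, so the following is a minimal Python sketch of the three operations (a plain list S plays the role of the stack; the names mirror the text's STACK-EMPTY, PUSH, POP, and MULTIPOP):

```python
def stack_empty(S):
    """STACK-EMPTY: return True if there are no objects on the stack."""
    return len(S) == 0

def push(S, x):
    """PUSH(S, x): push object x onto stack S.  Actual cost: 1."""
    S.append(x)

def pop(S):
    """POP(S): pop the top object off stack S and return it.  Actual cost: 1.
    Calling this on an empty stack raises an error, as noted above."""
    return S.pop()

def multipop(S, k):
    """MULTIPOP(S, k): pop the top k objects, or all of them if fewer than k
    remain.  Actual cost: min(len(S), k) POP operations."""
    while not stack_empty(S) and k > 0:
        pop(S)
        k = k - 1
```

For example, after ten PUSH operations, MULTIPOP(S, 4) leaves six objects on the stack, and a subsequent MULTIPOP(S, 8) empties it.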

The top 4 objects are popped when we call MULTIPOP(S, 4). The next operation, MULTIPOP(S, 8), empties the stack, since fewer than 8 objects remain.

What is the running time of MULTIPOP(S, k)? The actual running time is linear in the number of POP operations executed, so it suffices to charge a cost of 1 to each PUSH and POP operation when analyzing MULTIPOP.

The while loop runs until the stack is empty or k reaches zero; even when we call MULTIPOP with a large k, each iteration of the while loop performs exactly one POP operation. Thus, the total cost of MULTIPOP is Θ(min(s, k)), where s is the stack size. Recalling the goal of amortized analysis, we will see that the sequence as a whole takes linear time even though one operation in the sequence is more expensive than the others.

Let us consider a sequence of n PUSH, POP, and MULTIPOP operations on an initially empty stack. The worst-case cost of a MULTIPOP operation in the sequence is O(n), since the stack size is at most n. The worst-case time of any stack operation is therefore O(n), and hence a sequence of n operations costs O(n²), since we may have O(n) MULTIPOP operations costing O(n) each. Although this analysis is correct, the O(n²) result, obtained by taking the worst-case cost of each operation individually, is not a tight bound. Using aggregate analysis, we can obtain a better upper bound that considers the entire sequence of n operations. In fact, although a single MULTIPOP operation can be expensive, any sequence of n PUSH, POP, and MULTIPOP operations on an initially empty stack costs at most O(n). We can pop each object from the stack at most once for each time we have pushed it onto the stack. Therefore, the number of times POP can be called on a nonempty stack, including calls within MULTIPOP, is at most the number of PUSH operations, which is at most n. For any value of n, any sequence of n PUSH, POP, and MULTIPOP operations takes a total of O(n) time. The average cost of an operation is O(n)/n = O(1). In aggregate analysis, we assign the amortized cost of each operation to be the average cost. In this example, therefore, all three stack operations have an amortized cost of O(1). We emphasize again that although we have just shown that the average cost, and hence the running time, of a stack operation is O(1), no probabilistic reasoning was involved: we actually showed a worst-case bound of O(n) on a sequence of n operations. Dividing this total cost by n yields the average cost per operation, that is, the amortized cost.

Incrementing a binary counter:

The worst-case running time of a single INCREMENT occurs when all i bits are flipped, so INCREMENT(A) has running time O(i). In a sequence of n INCREMENT operations, however, few increments cause that many bits to flip. Indeed, bit 0 flips with every increment, bit 1 flips with every 2nd increment, bit 2 flips with every 4th increment, and so on. The total number of bit flips in n INCREMENT operations is therefore

n + n/2 + n/4 + ... + n/2^i < n(1/(1 - 1/2)) = 2n.

So the total cost of the sequence is O(n), and the amortized cost per operation is O(n)/n = O(1). The cost of each INCREMENT operation is linear in the number of bits flipped; as with the stack example, the naive per-operation analysis produces a bound that is correct but not tight.
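As a quick sanity check on the 2n bound, the following Python sketch (an illustration of mine, not part of the original analysis) performs n INCREMENT operations and counts every bit flip:

```python
def increment(A):
    """INCREMENT(A): add 1 to the counter stored in A (bit 0 is the
    least-significant bit).  Returns the number of bits flipped."""
    flips = 0
    i = 0
    while i < len(A) and A[i] == 1:   # reset the trailing 1's to 0
        A[i] = 0
        flips += 1
        i += 1
    if i < len(A):                    # set the first 0 bit (if any) to 1
        A[i] = 1
        flips += 1
    return flips

n = 1000
A = [0] * 16                          # 16 bits is plenty for counting to 1000
total_flips = sum(increment(A) for _ in range(n))
print(total_flips)                    # strictly less than 2n = 2000
```

Bit i flips ⌊n/2^i⌋ times, so for n = 1000 the count is 1000 + 500 + 250 + ... = 1994, comfortably below 2n.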
The Accounting Method:
The accounting method is a method of amortized analysis based on bookkeeping. Note, however, that this does not guarantee such an analysis will be immediately obvious; often, choosing the right parameters for the accounting method requires as much knowledge of the problem and of the complexity bounds one is trying to prove as the other two methods do. In the accounting method, different operations are assigned different charges, with some operations charged more or less than they actually cost. We call the amount we charge an operation its amortized cost.

When the amortized cost of an operation exceeds its actual cost, we assign the difference to specific objects in the data structure as credit. Credit can help pay for later operations whose actual cost is greater than their amortized cost.

The credit must never become negative, because if we let the credit go negative, we cannot guarantee that the total amortized cost is an upper bound on the total actual cost for some sequences. We therefore want a guarantee that, for any sequence of operations, the total amortized cost is always an upper bound on the total actual cost of that sequence.

Thus, we can view the amortized cost of an operation as split between its actual cost and credit that is either deposited or used up. Different operations may have different amortized costs. This method differs from aggregate analysis, in which all operations have the same amortized cost.
Table resizing: in data structures such as the dynamic array, heap, stack, and hash table, we use the idea of resizing the underlying array when the current array is either full or has reached a chosen threshold. During resizing we allocate an array twice as large and move the data over to the new array. So, if the array is full, the cost of insertion is linear; if the array is not full, insertion takes constant time. To derive an amortized cost, we start with an array of size 1 and perform n insertions.
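To make the resizing cost concrete, here is a small sketch (my illustration, assuming a cost model of 1 per element written, including elements copied during a resize) that totals the cost of n insertions into a doubling array:

```python
def append_cost(n):
    """Total cost of n insertions into a doubling array that starts at size 1,
    charging 1 per element written (new elements and copied elements alike)."""
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:       # array is full: allocate double, copy over
            cost += size           # copying the existing elements is linear
            capacity *= 2
        cost += 1                  # writing the new element is constant
        size += 1
    return cost

# The total cost stays below 3n, so the amortized cost per insertion is O(1).
for n in (1, 10, 1000):
    print(n, append_cost(n))
```

The copy costs form the geometric series 1 + 2 + 4 + ... < n, so the total never exceeds about 3n.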
We must choose the amortized costs of operations carefully. If we want to show that the average cost per operation is small in the worst case by analyzing with amortized costs, we must guarantee that, for any sequence of operations, the total amortized cost is an upper bound on the total actual cost. Moreover, as in aggregate analysis, this relationship must hold for all sequences of operations. If we denote the actual cost of the i-th operation by ci and the amortized cost of the i-th operation by ĉi, we require

Σ ĉi ≥ Σ ci

for all sequences of operations. For some individual operations, the actual cost may be greater than the amortized cost.
Stack operations

To illustrate the accounting method of amortized analysis, let us return to the stack example. Recall that the actual costs of the operations were

PUSH: 1,
POP: 1,
MULTIPOP: min(k, s),

where k is the argument supplied to MULTIPOP and s is the stack size when it is called. Let us assign the following amortized costs:

PUSH: 2,
POP: 0,
MULTIPOP: 0.

Note that the amortized cost of MULTIPOP is a constant (0), whereas its actual cost is variable. Here, all three amortized costs are constant. In general, the amortized costs of the operations under consideration may differ from each other, and they may even differ asymptotically.
We shall now show that we can pay for any sequence of stack operations by charging the amortized costs. Suppose we use a dollar bill to represent each unit of cost. We start with an empty stack, and we draw an analogy between the stack data structure and a stack of plates in a cafeteria. When we push a plate onto the stack, we use 1 dollar to pay the actual cost of the push and are left with a credit of 1 dollar (out of the 2 dollars charged), which we leave on top of the plate. At any point in time, every plate on the stack has a dollar of credit on it. The dollar stored on the plate serves as prepayment for the cost of popping it from the stack. When we execute a POP operation, we charge the operation nothing and pay its actual cost using the credit stored in the stack: to pop a plate, we take the dollar of credit off the plate and use it to pay the actual cost of the operation. Thus, by charging the PUSH operation a little bit more, we can charge the POP operation nothing.

Moreover, we can also charge MULTIPOP operations nothing. To pop the first plate, we take the dollar of credit off the plate and use it to pay the actual cost of a POP operation. To pop a second plate, we again have a dollar of credit on the plate to pay for the POP operation, and so on. Thus, we have always charged enough up front to pay for MULTIPOP operations. In other words, since each plate on the stack has 1 dollar of credit on it, and the stack always has a nonnegative number of plates, we have ensured that the amount of credit is always nonnegative. Thus, for any sequence of n PUSH, POP, and MULTIPOP operations, the total amortized cost is an upper bound on the total actual cost. Since the total amortized cost is O(n), so is the total actual cost.
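The credit argument can also be simulated directly. In this sketch (mine, not the text's), each operation is charged its amortized cost (PUSH 2, POP 0, MULTIPOP 0), actual costs are paid out of the accumulated credit, and we check that the credit never goes negative:

```python
def run_sequence(ops):
    """Simulate the accounting method for a sequence of stack operations.
    Amortized charges: PUSH 2, POP 0, MULTIPOP 0.  Returns leftover credit."""
    S, credit = [], 0
    for op, arg in ops:
        if op == "push":
            credit += 2 - 1                  # amortized 2, actual 1
            S.append(arg)
        elif op == "pop" and S:
            credit += 0 - 1                  # amortized 0, actual 1
            S.pop()
        elif op == "multipop":
            actual = min(len(S), arg)
            credit += 0 - actual             # amortized 0, actual min(k, s)
            del S[len(S) - actual:]
        assert credit >= 0, "credit must never go negative"
    return credit

leftover = run_sequence([("push", 1), ("push", 2), ("push", 3),
                         ("pop", None), ("multipop", 5)])
print(leftover)     # 0: every dollar of credit was spent popping
```

The invariant holds because the credit always equals the number of items currently on the stack, one dollar per plate.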
Incrementing a binary counter

As another illustration of the accounting method, we analyze the INCREMENT operation on a binary counter that starts at zero. As we observed earlier, the running time of this operation is proportional to the number of bits flipped, which we shall use as our cost for this example. Let us once again use a dollar bill to represent each unit of cost (the flipping of a bit in this example).

For the amortized analysis, let us charge an amortized cost of 2 dollars to set a bit to 1. When a bit is set, we use 1 dollar (out of the 2 dollars charged) to pay for the actual setting of the bit, and we place the other dollar on the bit as credit to be used later when we flip the bit back to 0. At any point in time, every 1 in the counter has a dollar of credit on it, and thus we can charge nothing to reset a bit to 0; we just pay for the reset with the dollar bill on the bit.

Now we can determine the amortized cost of INCREMENT. The cost of resetting the bits within the while loop is paid for by the dollars on the bits that are reset. The INCREMENT procedure sets at most one bit, and therefore the amortized cost of an INCREMENT operation is at most 2 dollars. The number of 1's in the counter never becomes negative, and thus the amount of credit stays nonnegative at all times. Thus, for n INCREMENT operations, the total amortized cost is O(n), which bounds the total actual cost.
Potential Method:
This method of amortized analysis represents the prepaid work as "potential energy," or just "potential," which can be released to pay for future operations. We associate the potential with the data structure as a whole instead of with specific objects within the data structure.
Working:

· View the bank account as the potential energy (as in physics) of the dynamic set.

· Start with an initial data structure D0.

· Operation i transforms Di–1 into Di.

· The cost of operation i is ci.

· Define a potential function Φ : {Di} → R such that Φ(D0) = 0 and Φ(Di) ≥ 0 for all i.

· The amortized cost ĉi with respect to Φ is defined to be ĉi = ci + Φ(Di) − Φ(Di–1).
The potential method is like the accounting method, but we think of the credit as potential stored with the entire data structure: the accounting method stores credit with specific objects, while the potential method stores potential in the data structure as a whole. Potential can be released to pay for future operations. It is the most flexible of the amortized analysis methods.
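The definition can be checked mechanically. The sketch below (my illustration) evaluates ĉi = ci + Φ(Di) − Φ(Di–1) for the stack operations with Φ = number of items on the stack, and for the binary counter with Φ = number of 1's:

```python
def amortized(actual, phi_before, phi_after):
    """c-hat_i = c_i + Phi(D_i) - Phi(D_{i-1})."""
    return actual + phi_after - phi_before

# Stack with Phi = number of items; j = items on the stack after a PUSH
# (equivalently, before a POP or MULTIPOP).  k_prime = min(|S|, k).
j, k_prime = 7, 5
push_hat = amortized(1, j - 1, j)                  # 1 + j - (j - 1) = 2
pop_hat = amortized(1, j, j - 1)                   # 1 + (j - 1) - j = 0
multipop_hat = amortized(k_prime, j, j - k_prime)  # k' + (j - k') - j = 0

# Binary counter with Phi = number of 1's: the i-th INCREMENT resets t bits,
# so its actual cost is t + 1 and the count of 1's goes from b to b - t + 1.
b, t = 9, 4
inc_hat = amortized(t + 1, b, b - t + 1)           # (t + 1) + (1 - t) = 2

print(push_hat, pop_hat, multipop_hat, inc_hat)    # 2 0 0 2
```

Changing j, k_prime, b, or t does not change the results, which is exactly the point: the amortized costs are constants even though the actual costs vary.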
ĉi = ci + Φ(Di) − Φ(Di–1)

If ΔΦi > 0, then ĉi > ci: operation i stores work in the data structure for later use. If ΔΦi < 0, then ĉi < ci: the data structure delivers up stored work to help pay for operation i.

The total amortized cost of n operations is

Σ ĉi = Σ (ci + Φ(Di) − Φ(Di–1)) = Σ ci + Φ(Dn) − Φ(D0) ≥ Σ ci,

since the middle sum telescopes, Φ(Dn) ≥ 0, and Φ(D0) = 0.

Stack Example:

Define Φ(Di) = the number of items in the stack after the i-th operation. Thus Φ(D0) = 0. Plugging in for the operations, with j items on the stack after a PUSH (and before a POP or MULTIPOP):

Push: ĉi = ci + Φ(Di) − Φ(Di–1) = 1 + j − (j − 1) = 2
Pop: ĉi = ci + Φ(Di) − Φ(Di–1) = 1 + (j − 1) − j = 0
Multipop: ĉi = ci + Φ(Di) − Φ(Di–1) = k′ + (j − k′) − j = 0, where k′ = min(|S|, k)

Binary Counter:

Define the potential of the counter after the i-th operation by Φ(Di) = bi, the number of 1's in the counter after the i-th operation. Note that Φ(D0) = 0 and Φ(Di) ≥ 0 for all i.

Assume the i-th INCREMENT resets ti bits. Its actual cost is ci = ti + 1, and the number of 1's after the i-th operation is bi = bi–1 − ti + 1. The amortized cost of the i-th INCREMENT is

ĉi = ci + Φ(Di) − Φ(Di–1) = (ti + 1) + (1 − ti) = 2.

Therefore, n INCREMENTs cost Θ(n) in the worst case.

Example 1: A sequence of n operations is performed on a data structure in which the k-th operation costs k if k is an exact power of 2, and 1 otherwise. Use aggregate analysis to determine the amortized cost per operation.

Solution: In a sequence of n operations there are ⌊log2 n⌋ + 1 exact powers of 2, namely 1, 2, 4, ..., 2^⌊log2 n⌋. Their total cost is a geometric sum:

Σ (for i = 0 to ⌊log2 n⌋) 2^i = 2^(⌊log2 n⌋ + 1) − 1 ≤ 2^(log2 n + 1) = 2n.

The remaining operations are cheap, each having a cost of 1, and there are fewer than n of them. The total cost of all operations is thus T(n) ≤ 2n + n = 3n = O(n), which means O(1) amortized cost per operation.

Example 2: We want not only to increment a counter but also to reset it to zero (i.e., make all bits in it 0).
Counting the time to examine or modify a bit as Θ(1), show how to implement a counter as an array A of bits so that any sequence of n INCREMENT and RESET operations takes time O(n) on an initially zero counter.

Solution: We introduce a new field A.max to hold the index of the high-order 1 in A. Initially A.max is −1, since bits are indexed from 0 and there are initially no 1's in A. The value of A.max is updated as appropriate when the counter is incremented or reset, and we use this value to limit how much of A must be examined in order to reset it. By controlling the cost of RESET in this way, we can limit it to an amount that can be covered by credit from earlier INCREMENTs. As with the counter above, we assume that it costs $1 to flip a bit; in addition, we assume it costs $1 to update A.max. Setting and resetting of bits by INCREMENT works exactly as for the original counter: $1 pays to set one bit to 1; $1 is placed on the bit that is set to 1 as credit; and the credit on each 1 bit pays to reset that bit during incrementing. In addition, we use $1 to pay to update max, and if max increases, we place an additional $1 of credit on the new high-order 1. (If max doesn't increase, we can just waste that $1; it won't be needed.) Since RESET manipulates bits at positions only up to A.max, and since each bit up to there must have become the high-order 1 at some time before the high-order 1 got up to A.max, every bit seen by RESET has $1 of credit on it. So the zeroing of bits of A by RESET can be completely paid for by the credit stored on the bits; we just need $1 to pay for resetting max. Thus charging $4 for each INCREMENT and $1 for each RESET is sufficient, so any sequence of n INCREMENT and RESET operations takes O(n) time.
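The scheme above can be sketched in Python. This is an illustrative implementation (names such as Counter and nbits are mine, not from the text), tracking the index of the high-order 1 in a max field so that RESET only touches bits up to that index:

```python
class Counter:
    """Binary counter with INCREMENT and RESET; any sequence of n such
    operations takes O(n) time, i.e. amortized O(1) per operation."""
    def __init__(self, nbits):
        self.A = [0] * nbits
        self.max = -1                  # index of the high-order 1; -1 if none

    def increment(self):
        i = 0
        while i < len(self.A) and self.A[i] == 1:
            self.A[i] = 0              # paid for by the credit on this bit
            i += 1
        if i < len(self.A):
            self.A[i] = 1
            if i > self.max:
                self.max = i           # a new high-order 1 appeared
        else:
            self.max = -1              # overflow wrapped the counter to zero

    def reset(self):
        for i in range(self.max + 1):  # only bits up to max can be 1
            self.A[i] = 0
        self.max = -1

    def value(self):
        return sum(bit << i for i, bit in enumerate(self.A))
```

Note that RESET examines only max + 1 bits rather than the whole array, which is exactly what lets the credit from earlier INCREMENTs prepay for it.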