DATA STRUCTURES IN C.
Data structures through C lang.
- a data structure is a way to store and organize data in a computer so that it can be used efficiently
- data structures have two aspects: 1) the abstract data type (ADT) -- its mathematical and logical model, and 2) its implementation
- the list ADT implemented over an array is not efficient for inserting and removing data: the time required is proportional to n, i.e. Θ(n), where n is the number of elements in the array
- that is, if we use a plain array for storing the data, then inserting new data into the array forces all the following elements to be shifted
- in a linked list the data is kept in nodes: node = value + addr. of the next node
----------------------------------------------------------------------------
# Algorithms & Data Structures:
Q. What is data structure?
- it is a way to store data into the memory (main memory) in an organized manner
so that we can perform operations on it efficiently.
- operations like addition/insertion, deletion, searching, sorting etc....
- basically there are two types of data structure:
1. linear data structure/basic data structure:
- array
- structure
- union
- linked list
- stack
- queue
2. non-linear data structure/advanced data structure:
- tree
- graph
- heap
(associative data structure)
- hash table/hash map
-----------------------------------------------------------------------------
- array -> an array is a collection/list of logically related similar type elements
which gets stored into the memory at contiguous memory locations.
- structure -> it is a collection/list of logically related similar and dissimilar
type of elements (which gets stored into the memory collectively).
typedef struct emp
{
    int empid;
    char emp_name[32];
    float salary;
} emp_t;/* a typedef needs an alias name, e.g. emp_t */
-----------------------------------------------------------------------------------
# algorithm --> an algorithm is a finite set of instructions written in an
informal language.
- it is an informal description of a solution for a given problem.
- an algorithm is nothing but a solution to a given problem.
- sorting -- it is a process to arrange data elements in a collection/list either
in an ascending order or in a descending order.
- the collection/list may be an array or a linked list.
- there are different sorting algorithms:
1. selection sort
2. bubble sort
3. insertion sort
4. quick sort
5. merge sort
6. heap sort
7. radix sort
8. shell sort
- one problem may have many solutions, e.g. for sorting the elements in a collection/list
multiple sorting algorithms are available/exist.
- when we have multiple solutions for a given problem then there is a need to decide
on an efficient solution/algorithm out of them.
- to decide the efficiency of an algorithm/to decide which algorithm is best suited to
which situation, we need to analyze it.
- "analysis of an algorithm" is the work of determining how much "computer time" and
"memory" it takes to run to completion.
- analysis of an algorithm is also called "performance analysis".
- there are two measures of an analysis of an algorithm:
1. time (computer time)
2. space (memory)
- there are two major components of an analysis of an algorithm:
1. time complexity
2. space complexity
1. time complexity -- time complexity of an algorithm is the total amount of computer
time it needs to run to completion.
2. space complexity -- space complexity of an algorithm is the total amount of memory
it needs to run to completion.
- basically there are two types of algorithms/there are two approaches by which
we can write an algorithm:
1. iterative approach -- (by using loops)
2. recursive approach -- (by using recursion)
-------------------------------------------------------------------------------
# space complexity -- space complexity of an algorithm is the total amount of memory
it needs to run to completion.
* space required for a program = code space (instructions residing in memory) +
data space (space required for simple variables, constants and instance variables)
+ stack space (space required for local variables, formal parameters and return addr).
* space complexity has two components:
1. fixed component -- code space + data space (space required for simple variables and
constants).
2. variable component -- space required for instance variables + stack space.
//iterative algorithm:
Algorithm Sum(A, n)//A-array name, n-> no. of ele's in an array
{
sum = 0;
for ( index = 1 ; index <= n ; index++)
sum += A[index];
return sum;
}
S = C + Sp
Sp = 3 units (for A, sum & index) + 2 units (for the constants 1 & 0)
   + n units (for the n array elements)
Sp >= (n + 5)
=> S >= Sp => S >= (n + 5)
-------------------------------------------------------------------
Algorithm RSum(A,n,index)
{
if( index < 0 )
return 0;
return ( A[index] + RSum(A, n, index-1));
}
S = C + Sp
Sp = 3(n+1) -- each recursive call stores 3 values (A, n & index) and the
recursion depth is (n+1)
S >= 3(n+1)
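- both algorithms can be written in C as follows (a minimal sketch; note that C
arrays are 0-based, unlike the pseudocode above):
#include <stdio.h>
/* iterative sum -- O(1) extra space (only sum and index) */
int sum(const int A[], int n)
{
    int total = 0;
    for (int index = 0; index < n; index++)
        total += A[index];
    return total;
}
/* recursive sum -- O(n) extra stack space (one frame per element) */
int rsum(const int A[], int index)
{
    if (index < 0)
        return 0;
    return A[index] + rsum(A, index - 1);
}
int main(void)
{
    int A[] = { 10, 20, 30, 40, 50 };
    printf("%d %d\n", sum(A, 5), rsum(A, 4));/* prints: 150 150 */
    return 0;
}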
--------------------------------------------------------------------------
/*
void fun(int n)
{
if( n == 0 )
return;
printf("n = %d", n);
fun(n--);//post-decrement passes the original n -- infinite recursion!
}
*/
/*
void fun(int n)
{
if( n == 0 )
return;
printf("n = %d", n);
fun(--n);//tail-recursive: the recursive call is the last instruction
}
*/
/*
void fun(int n)
{
if( n == 0 )
return;
fun(--n);//non tail-recursive: printf is still pending after the call
printf("n = %d", n);
}
*/
int main(void)
{
fun(5);
return 0;
}
- tail recursive function -- a recursive function in which the recursive call is the
last executable instruction.
- non-tail recursive function -- a recursive function in which the recursive call is
not the last executable instruction.
- tail recursive functions are more efficient than non-tail recursive functions
(the compiler can reuse the stack frame).
* recursive algorithms take extra (stack) space
* recursive algorithms are simple to implement
---------------------------------------------------------------------------------------
# time complexity:
* time complexity = compilation time (fixed component) + execution time (variable
component); the variable component depends on instance characteristics.
- the execution time of a program depends not only on the input size; it also
depends on the machine configuration and external factors in the environment in
which the program is running.
- hence it is not a good choice/practice to calculate time complexity on the basis of
compilation and execution time.
- "asymptotic analysis" -- it is a mathematical way/tool to calculate time complexity
and space complexity of an algorithm without implementing it in any programming
language.
- in asymptotic analysis there are three notations that can be used to represent time
complexities:
1. Big Oh (O) --> asymptotic upper bound
2. Big Omega (Ω) --> asymptotic lower bound
3. Big Theta (Θ) --> asymptotic tight bound
1. Big Oh (O) --> asymptotic upper bound:
- if an algorithm takes maximum time to complete its execution for any given input
sequence, then such time complexity is called "worst case time complexity";
worst case time complexity is denoted by using the asymptotic upper bound.
- the running time of an algorithm cannot be more than its asymptotic upper bound.
2. Big Omega (Ω) --> asymptotic lower bound:
- if an algorithm takes minimum time to complete its execution for any given
input sequence, then such time complexity is called "best case time
complexity".
- best case time complexity is denoted by using the asymptotic lower bound.
- the running time of an algorithm cannot be less than its asymptotic lower bound.
3. Big Theta (Θ) --> asymptotic tight bound:
- if an algorithm takes neither minimum nor maximum, i.e. it takes a moderate
amount of time to complete its execution, then such time complexity is called
"average case time complexity".
- average case time complexity is denoted by using the asymptotic tight bound.
+ rules/assumptions to calculate the time and space complexities of an algorithm:
1. if the running time of an algorithm/function is O(n/2) or O(n*2), i.e. if the
running time has any multiplicative or divisive constant, it can be neglected:
O(n/2) or O(2*n) => O(n).
2. additive or subtractive constants in the running time can be neglected as well,
i.e. O(n+2) or O(n-2) => O(n)
3. if the running time of an algorithm is a polynomial then only the leading term
is considered,
i.e. O(n^3+n+2) => O(n^3)
4. if a function/algorithm/program contains a loop, and the loop executes a constant
no. of times, then its time complexity is considered O(1).
e.g.
for( int i = 0 ; i < c ; i++ )//whereas c is any constant
{
//statement/statements
}
5. if a function/an algorithm does not contain a call to any non-constant function,
recursion or a non-constant loop --
in other words, if a function/an algorithm contains only instructions which take a
fixed/constant amount of time -- then the time complexity of such an algo/function
can be considered O(1).
e.g.
void swap(void)
{
int n1 = 10, n2 = 20;
int temp = n1;
n1 = n2;
n2 = temp;
}
6. if a function/an algorithm/program contains a loop, and the loop executes n
no. of times with the loop counter variable getting incremented/decremented by 1,
then its time complexity is considered O(n).
e.g.
for( int i = 0 ; i < n ; i++ )
{
//statement/statements
}
OR
for( int i = n ; i > 0 ; i-- )
{
//statement/statements
}
7. if a function/an algorithm/program contains a loop in which the loop counter
variable is getting multiplied/divided by a constant value, then the loop executes
about log n times and we get the time complexity in terms of O(log n).
e.g.
for( int i = 1 ; i < n ; i = i * c )//i must start at 1, else i stays 0 forever
{
//statement/statements
}
OR
for( int i = n ; i > 0 ; i = i / c )
{
//statement/statements
}
8. if a function/an algorithm contains nested loops, then the time required for the
execution of the statement/s inside the innermost loop dominates the time complexity
of that algo.
for( int i = 1 ; i <= n ; i++ ) //--> (n+1) times
{
for( int j = 1 ; j <= n ; j++ )//--> n(n+1) times
{
//statement/statements//--> n(n) --> n^2 => O(n^2)
}
}
9. if a function/an algo contains two consecutive loops, the time complexity
of that algo can be calculated as follows:
for( int i = 1 ; i <= m ; i++ )
{
//statement/s
}
for( int i = 1 ; i <= n ; i++ )
{
//statement/s
}
=> O(m) + O(n) => O(m+n)
---------------------------------------------------------------------------------------
- selection sort -- it maintains two lists (logically); initially one list contains
all the ele's to be sorted and the other list is empty.
- in every iteration the algo finds the smallest ele and adds it to the other list
which was initially empty, so after max (n-1) no. of iterations all the elements get
shifted/added into the other list in sorted order.
- in every iteration one position gets selected and the ele at the selected position
gets compared with the ele's at all the remaining positions; if the selected position
ele is greater than the ele at any other position it gets swapped with it, and so on.....
- best case time complexity = big omega(n^2)
- worst case time complexity = big oh(n^2)
- average case time complexity = big theta(n^2)
* advantages:
- it is simple to implement
- it is inplace
* disadvantages:
- it is not an adaptive sorting algo
- it is not at all efficient for larger size arrays
* stability -- not stable (long-distance swaps can change the relative order of equal ele's)
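- a minimal C sketch of the variant described above (identifiers are illustrative;
the ele at the selected position gets swapped whenever a smaller ele is found):
/* always performs Theta(n^2) comparisons, sorted input or not */
void selection_sort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++)        /* select a position */
        for (int j = i + 1; j < n; j++)    /* compare with remaining positions */
            if (arr[i] > arr[j])
            {
                int temp = arr[i];         /* swap */
                arr[i] = arr[j];
                arr[j] = temp;
            }
}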
# features of sorting algos:
- "inplace" -- if a sorting algo does not take extra space for sorting the ele's in a
collection/list (it may be an array or a linked list), then such an algo
is referred to as an inplace sorting algorithm.
- "adaptive" -- if a sorting algo works more efficiently for an already sorted input
sequence then such an algo is referred to as an adaptive sorting algo.
- "stable" -- if the relative order of two ele's having the same magnitude remains
the same after sorting then such a sorting algo is referred to as "stable",
otherwise it is unstable/not stable.
# Bubble Sort:
- bubble sort maintains two lists (logically); initially one list contains all the
ele's to be sorted and the other list is empty. In every iteration the algo moves
the largest ele from one list into the other list which was initially empty, so in
max (n-1) no. of iterations all the ele's get added into the other list in sorted
order (in every pass adjacent ele's are compared and swapped if out of order, so
the largest ele "bubbles up" to the end).
- advantages:
- it is simple to implement
- it is inplace
- stability -- stable (equal ele's are never swapped past each other)
- disadvantages:
- not adaptive in nature, but can be implemented as adaptive.
- it is not efficient for larger input size arrays
- best case time complexity = big omega(n) (for the adaptive implementation)
- worst case time complexity = big oh(n^2)
- average case time complexity = big theta(n^2)
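- a minimal C sketch with the adaptive improvement mentioned above (a pass with
no swap means the list is already sorted, which gives the O(n) best case):
#include <stdbool.h>
void bubble_sort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        bool swapped = false;
        for (int j = 0; j < n - 1 - i; j++)/* largest ele bubbles to the end */
            if (arr[j] > arr[j + 1])
            {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                swapped = true;
            }
        if (!swapped)
            break;/* no swap in this pass => already sorted */
    }
}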
---------------------------------------------------------------------------------------
# insertion sort:
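- a minimal C sketch of the standard algorithm (each ele gets inserted at its
correct position within the already-sorted left part; insertion sort is
adaptive, stable and inplace):
void insertion_sort(int arr[], int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key)/* shift greater ele's one place right */
        {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;/* insert at the vacated position */
    }
}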
# quick sort:
- it selects a pivot ele (here arr[left]), partitions the list so that smaller
ele's come before the pivot and greater ones after it, and then sorts both
partitions recursively (a sketch with an assumed SWAP macro):
#define SWAP(a, b) do { int t = (a); (a) = (b); (b) = t; } while(0)
void quick_sort(int *arr, int left, int right)
{
    if( left >= right )//termination condition: 0 or 1 ele left
        return;
    int i = left;
    int j = right;
    while( i < j )
    {
        while( i <= right && arr[i] <= arr[left] )
            i++;//find an ele greater than the pivot from the left
        while( arr[j] > arr[left] )
            j--;//find an ele <= pivot from the right
        if( i < j )
            SWAP(arr[i], arr[j]);
    }
    SWAP(arr[left], arr[j]);//place the pivot at its final position
    quick_sort(arr,left,j-1);
    quick_sort(arr,j+1,right);
}
------------------------------------------------------------------------------
# array:
- searching algorithms
- sorting algorithms
# limitations of an array data structure:
- an array is static -- i.e. the size of an array cannot grow or shrink during runtime
- addition and deletion of elements in an array are not efficient,
as they take O(n) time, and these operations are also not convenient.
- to overcome the limitations of an array the "linked list" data structure can be used.
# linked list -- it is a collection/list of logically related similar type of elements
in which each element has two parts:
- data part -- actual data of any primitive or derived data type
- pointer part -- addr of the next element in that list (next part).
# singly linear linked list -- it is a linked list in which:
- the head pointer always contains the addr of the first element/node if the list is not empty
- each node contains data and the addr of its next node in that list
- the last node points to NULL, i.e. the next part of the last node is NULL.
class node
{
private:
int data;
node *next;
};
- accept a key from the user
- search_node -- it should return the addr of the node as well as the addr of its
prev node if the key is found
- search_and_delete(int key) -- search the key and delete the matching node
(see the sketch below)
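- a C sketch of these two operations (identifiers are illustrative; assumes a
global head pointer):
#include <stdlib.h>
typedef struct node
{
    int data;
    struct node *next;
} node_t;
static node_t *head = NULL;/* addr of first node; NULL => list empty */
/* returns addr of matching node; *prev gets addr of its prev node
   (NULL when the match is the first node) */
node_t *search_node(int key, node_t **prev)
{
    *prev = NULL;
    for (node_t *trav = head; trav != NULL; *prev = trav, trav = trav->next)
        if (trav->data == key)
            return trav;
    return NULL;
}
void search_and_delete(int key)
{
    node_t *prev;
    node_t *target = search_node(key, &prev);
    if (target == NULL)
        return;/* key not found */
    if (prev == NULL)
        head = target->next;/* deleting the first node */
    else
        prev->next = target->next;/* bypass the target node */
    free(target);
}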
--------------------------------------------------------------------------------------
# singly circular linked list -- it is a linked list in which:
- the head pointer always contains the addr of the first element/node if the list is not empty
- each node contains data and the addr of its next node in that list
- the last node points to the first node, i.e. the next part of the last node
contains the addr of the first node.
# disadvantages/limitations of a singly linked list (linear or circular):
1. we can traverse a singly linked list in the forward direction only
2. from any node we cannot access its prev node
3. not all addition and deletion operations are efficient (e.g. delete_at_last
needs an O(n) traversal)
# to overcome the limitations of a singly linked list the doubly linked list is designed.
# doubly linked list: it is a collection/list of logically related similar type
of elements in which:
- the head always contains the addr of the first node if the list is not empty
- each node has three parts:
1. data part -- actual data of any primitive or derived data type
2. prev part ( pointer ) -- contains the addr of its prev node
3. next part ( pointer ) -- contains the addr of its next node
# doubly linear linked list: it is a doubly linked list in which:
- the prev part of the first node points to NULL, and the next part of the last
node points to NULL.
# doubly circular linked list: it is a doubly linked list in which:
- the prev part of the first node points to the last node, and the next part of
the last node points to the first node.
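- a minimal C node type for the doubly linked lists above (the name dnode is illustrative):
typedef struct dnode
{
    int data;          /* actual data */
    struct dnode *prev;/* addr of prev node (NULL or last node) */
    struct dnode *next;/* addr of next node (NULL or first node) */
} dnode_t;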
# difference between array and linked list:
- an array is "static", i.e. the size of an array cannot grow or shrink during runtime,
whereas a linked list is "dynamic", i.e. we can grow or shrink the size of a linked list
during runtime (we can add as well as delete elements in a linked list during runtime).
- array elements can be accessed by using the "random access" method, which is faster
than the "sequential access" method used for accessing linked list elements.
- array elements get stored into the main memory at contiguous memory locations,
whereas linked list elements get stored into the memory at random locations and
the links between them need to be maintained.
- storing array elements takes less space in comparison with the space required to
store linked list elements: in an array the link between ele's is maintained
implicitly by the contiguous layout, whereas the programmer needs to maintain the
links between linked list ele's, and maintaining a link takes extra space (the
pointer part of each node).
- addition and deletion operations in an array take O(n) time, which is not
efficient, and these operations are also not convenient, whereas addition and
deletion at a known position in a linked list take O(1) time, which is efficient
and convenient as well.
- array elements (local arrays) get stored in the "stack section", whereas linked
list nodes get allocated from the "heap section" of main memory.
# applications of linked list:
- linked lists are used mainly to implement kernel data structures like the
job queue, ready queue, inode list, kernel lists etc....
- linked lists can be used to implement OS algorithms for cpu scheduling,
page replacement algo's etc...
- linked lists can be used to implement basic data structures like stack and queue
dynamically, as well as advanced data structures like tree, graph, hash table etc....
- to implement advanced data structure algorithms
- linked lists can be used in all applications where the size of the collection/list
of elements is not known in advance.
------------------------------------------------------------------------------------
# stack -- it is a (basic data structure) collection/list of logically related
similar type of elements into which an element can be added as well as deleted only
from one end, referred to as the "top" end.
- in this list the element which was inserted last can only be deleted first, so this
list works in a "last in first out" manner; hence it is also called a "LIFO" list.
- we can perform three basic operations on a stack:
1. push --> insert/add an element into the stack at the top position.
2. pop --> remove/delete the element from the stack which is at the top position.
3. peek --> get the value of the topmost ele (the ele which is at the top end).
- stack can be implemented by two ways:
1. static stack -- array
2. dynamic stack -- linked list
1. static stack: array --
# push -- insert/add ele into the stack at top position
step1 -- check stack is not full
step2 -- increment value of top by 1
step3 -- push/insert ele into the stack at top pos
# pop -- remove/delete the ele from the stack which is at the top position
step1 -- check stack is not empty
step2 -- decrement the value of top by 1
# peek -- get the value of the ele which is at the top position
step1 -- check stack is not empty
step2 -- get the value of the ele which is at the top position (without incrementing
or decrementing the value of top).
# stack full --> top == SIZE-1
# stack empty --> top == -1
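- a C sketch of the array-based (static) stack following the steps above (SIZE
and function names are illustrative):
#include <stdbool.h>
#define SIZE 64
static int stack[SIZE];
static int top = -1;/* top == -1 => stack empty */
bool push(int ele)
{
    if (top == SIZE - 1)/* stack full */
        return false;
    stack[++top] = ele;/* increment top, then insert at top pos */
    return true;
}
bool pop(void)
{
    if (top == -1)/* stack empty */
        return false;
    top--;/* just decrement top */
    return true;
}
bool peek(int *ele)
{
    if (top == -1)/* stack empty */
        return false;
    *ele = stack[top];/* read the topmost ele; top is unchanged */
    return true;
}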
# dynamic stack: linked list
- stack empty --> head == NULL
# push -- add_at_last()
# pop -- delete_at_last()
OR
# push -- add_at_first()
# pop -- delete_at_first() (preferred: O(1) at the first node of a singly linked list)
# peek -- get the data of the node at the top end (without deleting it)
# applications of stack:
- the OS maintains a stack to control the flow of execution of programs (the call stack).
- in recursion, internally a stack is maintained
- a stack can be used to implement advanced data structure algo's like "DFS: Depth First
Search" traversal in trees and graphs.
- a stack can be used in all applications where, in a collection/list of elements,
there is a need to add and delete elements in a last in first out manner.
- "undo" and "redo" functionalities of applications internally use stacks.
- a stack can be used to implement algorithms for the conversion of an infix expression
into its equivalent prefix and postfix expressions, and for expression evaluation as well.
# algorithm to convert a given infix expression into its equivalent prefix and postfix:
- expression -- a combination of operands and operators in a sequence
- there are three types of expressions as per the position of the operators and operands:
1. infix expression: "a+b"
2. prefix expression: "+ab"
3. postfix expression: "ab+"
- infix expression = "a+b*c/d*e+f*g+h".
- postfix expression = "abc*d/e*+fg*+h+".
- prefix expression = "+++a*/*bcde*fgh"
- infix expression -- "(a+b)*(c-d)*e/f+g*h"
- postfix expression -- "ab+cd-*e*f/gh*+"
# algo to convert an infix expression into its equivalent postfix expression:
step1 -- start scanning the infix string from left to right till the end of the string
step2 -- if the cur ele is an operand
then
append it to the postfix string
step3 -- if the cur ele is an operator
then
while ( stack is not empty && priority of the topmost ele >= priority of the cur ele )
pop an ele from the stack and append it to the postfix string
push the cur ele onto the stack
step4 -- repeat steps 2 & 3 till the end of the infix string
step5 -- pop all the remaining ele's from the stack and append them one by one
to the postfix string till the stack becomes empty
# algo to convert an infix expression into its equivalent prefix expression:
step1 -- start scanning the infix string from right to left till the beginning of the string
step2 -- if the cur ele is an operand
then
append it to the output string
step3 -- if the cur ele is an operator
then
while ( stack is not empty && priority of the topmost ele > priority of the cur ele )
pop an ele from the stack and append it to the output string
push the cur ele onto the stack
step4 -- repeat steps 2 & 3 till the beginning of the infix string
step5 -- pop all the remaining ele's from the stack and append them one by one
to the output string till the stack becomes empty
step6 -- reverse the output string to get the prefix expression
---------------------------------------------------------------------------------------
int postfix_evaluation(const char *post)
{
    stack s;
    int res;
    for( int i = 0 ; post[i] != '\0' ; i++ )
    {
        if( is_operand(post[i]) )
        {
            s.push_element(post[i] - '0');//assuming single-digit operands
        }
        else//if the cur ele is an operator
        {
            int op2 = s.peek_element(); s.pop_element();//right operand pops first
            int op1 = s.peek_element(); s.pop_element();
            res = calculator(op1, op2, post[i]);
            s.push_element(res);
        }
    }//end of for loop
    res = s.peek_element(); s.pop_element();//final result remains on the stack
    return res;
}
-----------------------------------------------------------------------------------------
# C++:
STL: Standard Template Library
- STL is a collection of C++ container template classes, i.e. the most frequently used
programming data structures, and functions which perform operations on the contents of
the containers.
- STL contains: (STL has four components)
1. container classes -- the most frequently used programming data structures
2. algorithms -- functions that can be used to perform operations on the contents
of the containers.
3. iterators -- an iterator is an object of a class that can be used to traverse
containers ( "smart pointer" -- an object of an iterator class can be used like a
pointer for the traversal operation)
4. functors/function objects -- in STL there are classes in which the function call
operator is overloaded -- an object of such a class is called a functor/function
object.
---------------------------------------------------------------------------------
# queue -- it is a collection/list of logically related similar type of elements
in which we can add/insert an ele from one end, referred to as the "rear" end, and we
can delete/remove an ele from the other end, referred to as the "front" end.
- the element which was inserted first can be deleted first, so this list works in a
first in first out manner; hence it is also called a "FIFO" list.
- there are different types of queue:
1. linear queue
2. circular queue
3. priority queue -- it is a queue in which ele's can be added from the rear end in
any order, and the element having the highest priority can only be deleted first.
- a priority queue can be implemented in two ways:
1. linked list
2. it can be implemented efficiently using a "binary heap"
4. double ended queue (deque):
- it is a queue in which ele's can be inserted as well as deleted from both
ends.
- it can be implemented by using a doubly circular linked list
- there are two types of deque:
1. input restricted deque -- an ele can be added from only one end but can
be deleted from both ends.
2. output restricted deque -- an ele can be deleted from only one end but can
be added from both ends.
# linear queue:
1. enqueue -- insert/push/add element into the queue from rear end
2. dequeue -- delete/remove/pop element from the queue which is at front end
1. enqueue -- insert/push/add element into the queue from rear end:
- check queue is not full
- increment value of rear by 1
- push ele into the queue at rear position
- if ( front == -1 )
front = 0
2. dequeue -- delete/remove/pop the element from the queue which is at the front end
- check queue is not empty
- increment the value of front by 1
queue_full --> rear == SIZE-1
queue_empty --> rear == -1 || front > rear
------------------------------------------------------------------------------------
# circular queue:
1. enqueue -- insert/push/add element into the queue from rear end
2. dequeue -- delete/remove/pop element from the queue which is at front end
1. enqueue -- insert/push/add element into the queue from rear end:
- check queue is not full
- increment value of rear by 1 [ rear = (rear + 1) % SIZE ]
- push ele into the queue at rear position
- if ( front == -1 )
front = 0
2. dequeue -- delete/remove/pop element from the queue which is at front end
- check queue is not empty
- if( front == rear ) -- we are deleting last ele
front = rear = -1;
else
increment the value of front by 1 [ front = (front+1) % SIZE ]
queue_full --> front == (rear+1) % SIZE
queue_empty --> rear == -1 (i.e. front == rear == -1 after the last ele is deleted)
- evolution of the rear increment (the trace below assumes SIZE = 5):
rear++ ==> rear = rear + 1 ==> rear = ( rear + 1 ) % SIZE
for rear = 0 => (rear+1) % SIZE => (0+1)%5 => 1%5 => 1
for rear = 1 => (rear+1) % SIZE => (1+1)%5 => 2%5 => 2
for rear = 2 => (rear+1) % SIZE => (2+1)%5 => 3%5 => 3
for rear = 3 => (rear+1) % SIZE => (3+1)%5 => 4%5 => 4
for rear = 4 => (rear+1) % SIZE => (4+1)%5 => 5%5 => 0
front == (rear + 1 ) % SIZE
for front = 1, rear = 0
front == (rear + 1 ) % SIZE
1 == (0+1)%5 => 1 == 1%5 => 1
1 == 1
for front = 2, rear = 1
front == (rear + 1 ) % SIZE
2 == (1+1)%5 => 2 == 2%5 => 2
2 == 2
.
.
.
for front = 0, rear = 4
front == (rear + 1 ) % SIZE
0 == (4+1)%5 => 0 == 5%5 => 0
0 == 0
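- a C sketch of the circular queue following the steps above (SIZE = 5 as in the
trace; names are illustrative):
#include <stdbool.h>
#define SIZE 5
static int queue[SIZE];
static int front = -1, rear = -1;/* both -1 => queue empty */
bool enqueue(int ele)
{
    if (front == (rear + 1) % SIZE)/* queue full */
        return false;
    rear = (rear + 1) % SIZE;/* wraps around after SIZE-1 */
    queue[rear] = ele;
    if (front == -1)/* first ele in the queue */
        front = 0;
    return true;
}
bool dequeue(int *ele)
{
    if (rear == -1)/* queue empty */
        return false;
    *ele = queue[front];
    if (front == rear)/* we are deleting the last ele */
        front = rear = -1;
    else
        front = (front + 1) % SIZE;
    return true;
}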
---------------------------------------------------------------------
# queue can be implemented by using a linked list -- dynamically
enqueue --> add_at_last()
dequeue --> delete_at_first()
OR
enqueue --> add_at_first()
dequeue --> delete_at_last()
# applications of queue:
- a queue can be used to implement advanced data structure algo's like BFS traversal
in trees and graphs.
- queues can be used to implement kernel data structures
- queues can be used to implement OS algorithms like the FIFO page replacement algo,
FCFS cpu sched algo, FCFS disk sched algo etc....
- a queue can be used in all applications where the requirement is that a
collection/list of elements can be added and deleted in a "first in first out" manner,
etc....
-------------------------------------------------------------------------------------
# tree -- it is an advanced data structure, which is a collection of a finite number of
logically related similar type of elements in which:
- there is one specially designated element called the "root" element, and
- all the remaining elements are connected to the root element in a hierarchical
manner following a parent-child relationship.
- in a tree data structure an element is also called a "node"
- in the diagram -- A is the root node
- B, C & D are child nodes of A, whereas A is the parent node of B, C & D
- a parent node is also called the "father", and a child node is also called a "son"
- "siblings" (brothers): child nodes of the same parent are called siblings,
e.g. as B, C & D are the child nodes of A, they are referred to as siblings
- "degree of a node" -- the no. of child nodes of a node is called its degree,
e.g. node A having 3 child nodes -- degree of A = 3
as node B has two child nodes --> degree of B = 2
as node E does not have any child node --> degree of E = 0
- "degree of a tree" -- max degree of any node in the given tree
- "leaf node" -- node having degree 0 is called as leaf node
OR if any node dont has any child node then such a node is called as leaf node
or it is also called as "terminal node"
- "non-leaf node" -- node having non-zero degree is called as non-leaf node
OR if any node child node then such a node is called as non-leaf node
or it is also called as "non-terminal node".
- ascestors -- all the nodes from the root node which are in the path till that node
are called its ancestors.
- descedents -- all the nodes which are accessible from that node are called as
its descedents.
- level of a tree:
- if we consider the root node to be at level 0 then the child nodes of the root
node are at the next level, i.e. at level 1,
i.e. => level of any node = level of its parent + 1
- the max level in a given tree is called the level of the tree
- depth of a tree = level of the tree.
----------------------------------------------------------------------------
binary tree -- it is a tree in which each node can have max two child nodes,
i.e. any node in a binary tree can have either 0, 1 or 2 child nodes.
- a binary tree is a collection of a finite no. of elements which has three
subsets:
1. root element
2. left subtree
3. right subtree
- whereas the left subtree and right subtree may be empty.
- "binary search tree" -- it is a binary tree in which the left child is always smaller
than its parent and the right child is always greater than or equal to its parent.
# input for binary search tree:
50 20 90 85 10 45 30 100 15 75 95 120 5 50
- implement binary search tree:
- each node has three parts:
1. data part -- actual data of any primitive or derived data type
2. left pointer -- contains addr of left child node
3. right pointer -- contains addr of right child node
preorder : 50 20 10 5 15 45 30 90 85 75 50 100 95 120
inorder : 5 10 15 20 30 45 50 50 75 85 90 95 100 120
postorder: 5 15 10 30 45 20 50 75 85 95 120 100 90 50
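- a C sketch of BST insertion following the rule above (left: smaller, right:
greater or equal; names are illustrative); an inorder traversal of the result
prints the ele's in sorted order:
#include <stdio.h>
#include <stdlib.h>
typedef struct tnode
{
    int data;
    struct tnode *left; /* addr of left child  */
    struct tnode *right;/* addr of right child */
} tnode_t;
tnode_t *insert(tnode_t *root, int data)
{
    if (root == NULL)/* empty spot found: create the node here */
    {
        tnode_t *n = malloc(sizeof(*n));
        n->data = data;
        n->left = n->right = NULL;
        return n;
    }
    if (data < root->data)
        root->left = insert(root->left, data);  /* smaller => left subtree */
    else
        root->right = insert(root->right, data);/* greater or equal => right */
    return root;
}
void inorder(const tnode_t *root)/* visits ele's in ascending order */
{
    if (root == NULL)
        return;
    inorder(root->left);
    printf("%d ", root->data);
    inorder(root->right);
}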
----------------------------------------------------------------------------------
// non-recursive postorder traversal using a stack and a visited flag per node:
while( trav != NULL || !s.empty() )
{
while( trav != NULL )
{
s.push(trav);
trav = trav->left;
}
if( !s.empty() )
{
trav = s.top(); s.pop();
if( trav->right != NULL && trav->right->visited == false )
{
s.push(trav);
trav = trav->right;
}
else
{
cout << setw(4) << trav->data;
trav->visited = true;
trav = NULL;
}
}
}
-------------------------------------------------------------------------------------
# balanced binary search tree
- if the balance factor of a node is -1, 0 or +1 then we say the node is balanced
- if all the nodes in a given binary search tree are balanced then such a bst is
called a balanced binary search tree.
- balance factor of a node = height of its left subtree - height of its right subtree
- height of a node = max( height of left subtree, height of right subtree ) + 1
- height of a tree = max height of any node in the given binary search tree.
- AVL tree -- self-balancing binary search tree,
named after its inventors "Adelson-Velsky" and "Landis".
-------------------------------------------------------------------------------
# graph: it is a collection of a finite no. of logically related similar type of
elements which contains two sets:
- a set of vertices, and
- a set of ordered or unordered pairs of vertices called edges (also called
arcs), which may carry a cost/value/distance/weight.
------------------------------------------------------------------------------
# linear search vs binary search:
- linear search is also called "sequential search".
- in linear search the scanning of the array elements starts from the first ele, and
the value of the key element gets compared with each array ele sequentially.
- after every iteration the search space gets reduced by 1: in the first iteration the
search space is n, in the second iteration the search space is (n-1), in the third it
is (n-2) and so on...., whereas in binary search in every iteration the mid position
gets calculated and the key gets compared with the mid pos ele; if it is found then
control returns back to the calling function with success, and if not, the search
space gets reduced by half for the next iteration.
- in binary search after every iteration the search space gets reduced by half,
so the base condition occurs earlier in binary search, and hence it is more efficient
than linear search (note: binary search requires a sorted array).
- the comparison of the efficiency of any two algorithms can be decided on the basis
of their worst case time complexities.
- generally the magnitudes of the average case and worst case time complexities of an
algorithm are the same.
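- a C sketch of both searches for comparison (names are illustrative; binary
search assumes a sorted array):
/* linear search: O(n) -- the search space shrinks by 1 per iteration */
int linear_search(const int arr[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;/* key not found */
}
/* binary search: O(log n) -- the search space halves per iteration */
int binary_search(const int arr[], int n, int key)
{
    int left = 0, right = n - 1;
    while (left <= right)
    {
        int mid = left + (right - left) / 2;/* avoids int overflow */
        if (arr[mid] == key)
            return mid;
        if (arr[mid] < key)
            left = mid + 1;/* discard the left half */
        else
            right = mid - 1;/* discard the right half */
    }
    return -1;
}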