EOS (Embedded Operating System)
Pipes
A pipe is used to transfer data between two processes in FIFO manner.
- One process can transfer data without knowing what the other process is doing at the other end.
- Pipes use the file system for data storage.
- There are two types:
- named
- unnamed
- Syntax for the pipe system call:
- pipe(fdptr)
- fdptr is a pointer to an integer array that will contain two file descriptors:
- one for reading and the other for writing
- The kernel assigns the inode for a pipe from the designated pipe device's file system using the algorithm "ialloc".
- unnamed pipe
- who | wc ---> internally the shell creates an unnamed pipe
- can be used only between related processes
- created with the pipe() syscall (see the pipe sketch after this list)
- named pipe
- special file (p) ---> mkfifo
- mkfifo command ---> mkfifo syscall
- can be used between unrelated processes
- { if (fork() > 0) sleep(1000); } creates a zombie process: the child exits while the parent sleeps without calling wait().
- The number of times "hello" is printed equals the number of processes created: Total Number of Processes = 2^n, where n is the number of fork() system calls (see the fork sketch after this list).
- The total number of child processes created is 2^n - 1.
- After fork(), the child and parent run independently, so the value of i can differ between child and parent.
- The exec() family of system calls does not return on success (the process image is replaced); it returns only on failure.
- Priority-based scheduling: the smaller the number, the higher the priority.
- Processes with the same priority are executed on a first come, first served basis.
- The larger the burst time, the lower the priority.
- If a logically runnable process can be temporarily suspended, it is called preemptive scheduling.
- A scheduler which selects processes from secondary storage is called a medium-term scheduler.
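A minimal sketch of an unnamed pipe between related processes (the message "hello" and the buffer size are arbitrary choices): the parent writes into fd[1] and the child reads from fd[0].

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                      /* fd[0]: read end, fd[1]: write end */
    char buf[32];

    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {              /* child: reader */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
        exit(0);
    }
    close(fd[0]);                   /* parent: writer */
    write(fd[1], "hello", 5);
    close(fd[1]);
    wait(NULL);                     /* reap the child (avoids a zombie) */
    return 0;
}
```

And a tiny sketch of the fork-counting rule above: with n = 2 fork() calls, 2^2 = 4 processes each print "hello".

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    fork();
    fork();
    printf("hello\n");   /* executed by 4 processes: 2^2 */
    return 0;
}
```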
char *fgets(char *s, int size, FILE *stream);
write(1, buf, cnt);
Together these mean: read from a file into a buffer, then write the buffer to the console (stdout, file descriptor 1).
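A minimal sketch combining the two calls ("input.txt" is a hypothetical file name): fgets() fills the buffer from the file and write() sends it to stdout.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char buf[256];
    FILE *fp = fopen("input.txt", "r");
    if (fp == NULL) { perror("fopen"); return 1; }
    while (fgets(buf, sizeof(buf), fp) != NULL)
        write(1, buf, strlen(buf));   /* fd 1 is stdout */
    fclose(fp);
    return 0;
}
```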
FILE PERMISSIONS IN LINUX
4 ---> Read (binary 100)
2 ---> Write (binary 010)
1 ---> Execute (binary 001)
For read and write: 4 + 2 = 6 ---> 110
For read, write and execute: 4 + 2 + 1 = 7 ---> 111
WHAT IS 777?
The first digit is for the owner,
the second digit is for the group,
the third digit is for others,
and 777 means all three have all permissions.
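A small sketch of applying such a mode from C ("notes.txt" is a hypothetical file; the shell equivalent is chmod 644 notes.txt): 0644 gives the owner read+write and everyone else read-only.

```c
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    /* 0644 = owner rw- (6), group r-- (4), others r-- (4) */
    if (chmod("notes.txt", 0644) == -1) { perror("chmod"); return 1; }
    return 0;
}
```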
A file in the file system is basically a link to an inode; only when all links to an inode are deleted is the inode itself deleted (or made overwritable).
A hard link just creates another file name pointing to the same underlying inode.
A symbolic link is a link to another name in the file system.
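A small sketch of creating both kinds of link from C (the file names are hypothetical; the shell equivalents are ln and ln -s).

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* hard link: a second name for the same inode */
    if (link("original.txt", "hard.txt") == -1) perror("link");
    /* symbolic link: a link to the *name* "original.txt" */
    if (symlink("original.txt", "soft.txt") == -1) perror("symlink");
    return 0;
}
```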
- fork() returns 0 in the child process and the PID of the child in the parent process.
- CPU Scheduling Algorithm
- FCFS
- SJF
- Preemptive SJF
- Priority ---> each process is allocated a priority number; the lower the number, the higher the priority. Because higher-priority processes keep the CPU, lower-priority processes may not get sufficient time to execute; this is called Starvation.
- Starvation is handled by periodically increasing the priority of starved processes so they are scheduled sooner; this is called Aging.
- RR ---> Round Robin (preemptive)
- The Banker's algorithm is used for deadlock avoidance.
- Maximum size for a message text: 8192 bytes (on Linux, this limit can be read and modified via /proc/sys/kernel/msgmax). Default maximum size in bytes of a message queue: 16384 bytes (on Linux, this limit can be read and modified via /proc/sys/kernel/msgmnb).
- The default maximum value of a PID on Linux is 32768 (see /proc/sys/kernel/pid_max).
- The process ID is not inherited by the child; each process gets its own PID.
- Linux uses one-to-one threading (each user thread maps to one kernel thread).
- Test-and-Set is the hardware solution for the critical section problem.
- Peterson's algorithm is a concurrent programming algorithm for mutual exclusion that allows two or more processes to share a single-use resource without conflict, using only shared memory for communication.
We use two data structures to implement an LRU (Least Recently Used) cache:
- A queue, implemented using a doubly linked list. The maximum size of the queue equals the total number of available frames (the cache size). The most recently used pages are near the front end, the least recently used pages near the rear end.
- A hash with the page number as key and the address of the corresponding queue node as value.
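A minimal sketch of that design (illustrative only: an array indexed by page number stands in for the hash, assuming page numbers stay below MAX_PAGES).

```c
#include <stdio.h>
#include <stdlib.h>

#define MAX_PAGES 1024            /* hypothetical bound on page numbers */

typedef struct Node {
    int page;
    struct Node *prev, *next;
} Node;

typedef struct {
    int capacity, count;
    Node *head, *tail;            /* head = most recent, tail = LRU */
    Node *map[MAX_PAGES];         /* page number -> list node ("hash") */
} LRUCache;

static void unlink_node(LRUCache *c, Node *n) {
    if (n->prev) n->prev->next = n->next; else c->head = n->next;
    if (n->next) n->next->prev = n->prev; else c->tail = n->prev;
}

static void push_front(LRUCache *c, Node *n) {
    n->prev = NULL;
    n->next = c->head;
    if (c->head) c->head->prev = n;
    c->head = n;
    if (c->tail == NULL) c->tail = n;
}

/* Reference a page: a hit moves it to the front; a miss inserts it,
   evicting the tail (least recently used) when the cache is full. */
static void reference(LRUCache *c, int page) {
    Node *n = c->map[page];
    if (n != NULL) {                      /* hit */
        unlink_node(c, n);
        push_front(c, n);
        return;
    }
    if (c->count == c->capacity) {        /* evict least recently used */
        Node *lru = c->tail;
        unlink_node(c, lru);
        c->map[lru->page] = NULL;
        free(lru);
        c->count--;
    }
    n = malloc(sizeof(Node));
    n->page = page;
    c->map[page] = n;
    push_front(c, n);
    c->count++;
}

int main(void) {
    LRUCache c = { .capacity = 3 };
    int refs[] = {1, 2, 3, 1, 4, 5};      /* 4 evicts 2, then 5 evicts 3 */
    for (int i = 0; i < 6; i++) reference(&c, refs[i]);
    for (Node *n = c.head; n; n = n->next)
        printf("%d ", n->page);           /* prints: 5 4 1 */
    printf("\n");
    return 0;
}
```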
Replacement algorithms can be local or global.
When a process incurs a page fault, a local page replacement algorithm selects for replacement some page that belongs to that same process (or a group of processes sharing a memory partition). A global replacement algorithm is free to select any page in memory.
Local page replacement assumes some form of memory partitioning that determines how many pages are to be assigned to a given process or a group of processes. The most popular forms of partitioning are fixed partitioning and balanced set algorithms based on the working set model. The advantage of local page replacement is its scalability: each process can handle its page faults independently, leading to more consistent performance for that process. However, global page replacement is more efficient on an overall system basis.
Copy-on-write (sometimes referred to as "COW") is an optimization strategy used in computer programming. The fundamental idea is that if multiple callers ask for resources which are initially indistinguishable, you can give them pointers to the same resource. This function can be maintained until a caller tries to modify its "copy" of the resource, at which point a true private copy is created to prevent the changes becoming visible to everyone else. All of this happens transparently to the callers. The primary advantage is that if a caller never makes any modifications, no private copy need ever be created.
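A small sketch of the observable effect with fork(), which relies on copy-on-write on Linux: the child's write triggers a private copy of the page, so the parent still sees the original value.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int x = 42;                 /* shared (copy-on-write) after fork() */

int main(void) {
    if (fork() == 0) {      /* child */
        x = 100;            /* write forces a private copy of the page */
        printf("child:  x = %d\n", x);   /* 100 */
        return 0;
    }
    wait(NULL);
    printf("parent: x = %d\n", x);       /* still 42 */
    return 0;
}
```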
Shared memory
- The shared memory region is placed in the heap section of RAM, and it has its own shared memory table.
- It is the fastest IPC mechanism.
- The command to inspect shared memory segments is ipcs -m.
The problem with pipes, FIFOs and message queues is that for two processes to exchange information, the information has to go through the kernel:
- The server reads from the input file.
- The server writes this data in a message using either a pipe, FIFO or message queue.
- The client reads the data from the IPC channel, again requiring the data to be copied from the kernel's IPC buffer to the client's buffer.
- Finally the data is copied from the client's buffer to its destination (e.g., an output file).
- There is no waiting queue in shared memory.
System calls for shared memory
ftok(): is used to generate a unique key.
shmget(): int shmget(key_t key, size_t size, int shmflg); upon successful completion, shmget() returns an identifier for the shared memory segment.
shmat(): Before you can use a shared memory segment, you have to attach yourself
to it using shmat(). void *shmat(int shmid, const void *shmaddr, int shmflg);
shmid is the shared memory ID. shmaddr specifies a specific address to use, but we should set
it to zero (NULL) and the OS will automatically choose the address.
shmdt(): When you're done with the shared memory segment, your program should
detach itself from it using shmdt(). int shmdt(const void *shmaddr);
shmctl(): When you detach from shared memory, it is not destroyed; shmctl() is used
to destroy it: shmctl(shmid, IPC_RMID, NULL);
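A minimal end-to-end sketch of these calls (the path "/tmp", project id 'A', and the 4096-byte size are arbitrary choices).

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void) {
    key_t key = ftok("/tmp", 'A');                 /* unique key */
    int shmid = shmget(key, 4096, IPC_CREAT | 0666);
    if (shmid == -1) { perror("shmget"); return 1; }

    char *data = shmat(shmid, NULL, 0);            /* attach; OS picks the address */
    strcpy(data, "hello via shared memory");
    printf("%s\n", data);

    shmdt(data);                                   /* detach */
    shmctl(shmid, IPC_RMID, NULL);                 /* destroy the segment */
    return 0;
}
```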
To avoid race conditions, we use semaphores.
Semaphore(s) --->
There are two types of semaphore: binary semaphores and counting semaphores.
P ---> proberen (test / decrement)
V ---> verhogen (increment)
The P operation is also called wait, sleep, or down.
The V operation is also called signal, wakeup, or up.
Both operations P(s) and V(s) are atomic, and a binary semaphore is initialized to 1.
- In Unix (System V), a semaphore is an array of counters.
- Suppose a resource has a capacity of 3. If process P1 wants to access the resource, it is granted permission and the capacity is decremented by 1, becoming 2. If another process arrives, it is also allocated the resource, and similarly for P3. If a fourth process P4 then arrives, it must wait; when one of P1, P2, or P3 calls the signal (V) operation, P4 gets permission to access the resource. A runnable version of this scenario is sketched below.
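A minimal sketch of the capacity-3 scenario (an assumption on my part: it uses POSIX semaphores, sem_wait/sem_post, rather than the System V semaphore array the notes mention, and threads stand in for the processes P1..P4). Compile with -pthread.

```c
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t slots;                       /* counts the free capacity of the resource */

void *worker(void *arg) {
    long id = (long)arg;
    sem_wait(&slots);              /* P: blocks if all 3 slots are taken */
    printf("process P%ld acquired a slot\n", id);
    sleep(1);                      /* use the resource */
    printf("process P%ld released its slot\n", id);
    sem_post(&slots);              /* V: wakes one waiter, if any */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&slots, 0, 3);        /* capacity 3: the 4th caller must wait for a V */
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)(i + 1));
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}
```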
Socket Programming
- Socket programming is a way of connecting two sockets (nodes) on a network so they can communicate with each other.
- One socket listens on a particular port at an IP address, while the other node reaches out to it to form the connection.
- The server forms the listener socket while the client reaches out to the server, as sketched below.
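A minimal server-side sketch of that flow (error handling omitted; port 8080 is an arbitrary choice, and a client would connect() to it and then exchange data with read()/write()).

```c
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int server_fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;   /* listen on all interfaces */
    addr.sin_port = htons(8080);

    bind(server_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(server_fd, 5);                /* backlog of 5 pending connections */

    int client_fd = accept(server_fd, NULL, NULL);
    write(client_fd, "hello\n", 6);
    close(client_fd);
    close(server_fd);
    return 0;
}
```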
MULTIPROGRAMMING
In a multiprogramming system there are one or more programs loaded in main memory which are ready to execute. Only one program at a time is able to get the CPU for executing its instructions (i.e., there is at most one process running on the system) while all the others are waiting their turn.
The main idea of multiprogramming is to maximize the use of CPU time. Indeed, suppose the currently running process is performing an I/O task (which, by definition, does not need the CPU to be accomplished). Then, the OS may interrupt that process and give the control to one of the other in-main-memory programs that are ready to execute (i.e. process context switching). In this way, no CPU time is wasted by the system waiting for the I/O task to be completed, and a running process keeps executing until either it voluntarily releases the CPU or when it blocks for an I/O operation. Therefore, the ultimate goal of multiprogramming is to keep the CPU busy as long as there are processes ready to execute.
Note that in order for such a system to function properly, the OS must be able to load multiple programs into separate areas of the main memory and provide the required protection to avoid the chance of one process being modified by another one. Another problem that needs to be addressed when having multiple programs in memory is fragmentation, as programs enter or leave the main memory. A further issue is that large programs may not fit at once in memory, which can be solved by using paging and virtual memory.
Finally, note that if there are N ready processes and all of them are highly CPU-bound (i.e., they mostly execute CPU tasks and few or no I/O operations), in the very worst case one program might wait for all the other N-1 to complete before executing.
MULTITASKING
Multitasking has the same meaning as multiprogramming but in a more general sense, as it refers to having multiple (programs, processes, tasks, threads) running at the same time. This term is used in modern operating systems when multiple tasks share a common processing resource (e.g., CPU and memory). At any time the CPU is executing one task only, while other tasks wait their turn. The illusion of parallelism is achieved when the CPU is reassigned to another task (i.e. process or thread context switching).
There are subtle differences between multitasking and multiprogramming. A task in a multitasking operating system is not a whole application program; it can also refer to a "thread of execution" when one process is divided into sub-tasks. Each smaller task does not hijack the CPU until it finishes, as in the older multiprogramming, but rather gets a fair share of the CPU time, called a quantum.
Just to make it easy to remember, both multiprogramming and multitasking operating systems are (CPU) time sharing systems. However, while in multiprogramming (older OSs) one program as a whole keeps running until it blocks, in multitasking (modern OSs) time sharing is best manifested because each running process takes only a fair quantum of the CPU time.
MULTIPROCESSING
Multiprocessing means executing multiple processes (programs) at the same time. It mainly refers to the hardware:
multiple cores on a single die, multiple dies in a single package, or multiple packages in one system.
USER SPACE AND KERNEL SPACE
Memory gets divided into two distinct areas:
- The user space, which is the set of locations where normal user processes run (i.e. everything other than the kernel). The role of the kernel is to keep the applications running in this space from messing with each other, and with the machine.
- The kernel space, which is the location where the code of the kernel is stored and executes.
Processes running under the user space have access only to a limited part of memory, whereas the kernel has access to all of the memory. Processes running in user space also don't have access to the kernel space. User space processes can only access a small part of the kernel via an interface exposed by the kernel - the system calls. If a process performs a system call, a software interrupt is sent to the kernel, which then dispatches the appropriate interrupt handler and continues its work after the handler has finished.
Kernel space code has the property of running in "kernel mode", which (on your typical desktop x86 computer) is the code that executes under ring 0. Typically in the x86 architecture there are 4 rings of protection: Ring 0 (kernel mode), Ring 1 (may be used by virtual machine hypervisors or drivers), Ring 2 (may be used by drivers), and Ring 3, which is what typical applications run under. It is the least privileged ring, and applications running on it have access to only a subset of the processor's instructions. Ring 0 (kernel space) is the most privileged ring and has access to all of the machine's instructions. For example, a "plain" application (like a browser) cannot use the x86 assembly instruction lgdt to load the global descriptor table, or hlt to halt the processor.
SYSTEM CALLS
https://www.geeksforgeeks.org/operating-system-introduction-system-call/
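As a small illustration (not from the linked article): a user-space program can enter the kernel either through a libc wrapper or directly via the generic syscall(2) interface.

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    /* libc wrapper around the write system call */
    write(1, "via libc wrapper\n", 17);
    /* the same system call invoked directly by number */
    syscall(SYS_write, 1, "via raw syscall\n", 16);
    return 0;
}
```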
FORK() VS VFORK()
BASIS FOR COMPARISON | FORK() | VFORK() |
---|---|---|
Basic | Child process and parent process have separate address spaces. | Child process and parent process share the same address space. |
Execution | Parent and child processes execute simultaneously. | Parent process remains suspended till the child process completes its execution. |
Modification | If the child process alters any page in the address space, it is invisible to the parent process as the address spaces are separate. | If the child process alters any page in the address space, it is visible to the parent process as they share the same address space. |
Copy-on-write | fork() uses copy-on-write, where the parent and child share the same pages until one of them modifies a shared page. | vfork() does not use copy-on-write. |
SEMAPHORE VS MUTEX
BASIS FOR COMPARISON | SEMAPHORE | MUTEX |
---|---|---|
Basic | Semaphore is a signalling mechanism. | Mutex is a locking mechanism. |
Existence | Semaphore is an integer variable. | Mutex is an object. |
Function | Semaphores allow multiple program threads to access a finite set of resource instances. | A mutex allows multiple program threads to access a single resource, but not simultaneously. |
Ownership | Semaphore value can be changed by any process acquiring or releasing the resource. | Mutex object lock is released only by the process that has acquired the lock on it. |
Categorize | Semaphore can be categorized into counting semaphore and binary semaphore. | Mutex is not categorized further. |
Operation | Semaphore value is modified using the wait() and signal() operations. | Mutex object is locked or unlocked by the process requesting or releasing the resource. |
Resources Occupied | If all resources are being used, the process requesting a resource performs the wait() operation and blocks itself till the semaphore count becomes greater than zero. | If a mutex object is already locked, the process requesting the resource waits and is queued by the system till the lock is released. |
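To illustrate the locking rows of the table, a minimal sketch with a pthread mutex (the counter and loop counts are arbitrary): only the thread that took the lock releases it, and the two threads' updates never interleave. Compile with -pthread.

```c
#include <stdio.h>
#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long counter = 0;

void *increment(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter the critical section */
        counter++;
        pthread_mutex_unlock(&lock);  /* the owner releases its own lock */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 */
    return 0;
}
```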
PROCESS VS THREAD
BASIS FOR COMPARISON | PROCESS | THREAD |
---|---|---|
Basic | Program in execution. | Lightweight process, or a part of one. |
Memory sharing | Processes are completely isolated and do not share memory. | Threads share memory with each other. |
Resource consumption | More | Less |
Efficiency | Less efficient in the context of communication. | Enhances efficiency in the context of communication. |
Time required for creation | More | Less |
Context switching time | Takes more time. | Consumes less time. |
Uncertain termination | Results in loss of the process. | A thread can be reclaimed. |
Time required for termination | More | Less |
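A small sketch of the memory-sharing row: threads of one process see the same globals, where two forked processes would each get their own copy. Compile with -pthread.

```c
#include <stdio.h>
#include <pthread.h>

int shared = 0;                    /* one copy, visible to every thread */

void *child_thread(void *arg) {
    shared = 99;                   /* the main thread will observe this */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, child_thread, NULL);
    pthread_join(t, NULL);
    printf("shared = %d\n", shared);   /* 99: the memory is shared */
    return 0;
}
```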
CPU SCHEDULING ALGORITHMS
please refer to the website below
SYSTEM CALL
sched_yield()
Inside the kernel, yield() ensures the task is in the RUNNING state and then calls sched_yield(); user-space applications call sched_yield() directly. The sched_yield() system call provides a mechanism for a process to explicitly yield the processor to other waiting processes. To do so, it removes the currently running process from the active array and puts it into the expired array. This has the effect of not only preempting the process and putting it at the end of its priority list, but also putting it on the expired list, guaranteeing it will not run for a while.
schedule() is the function the kernel uses to invoke the process scheduler, deciding which process to run and then running it. schedule() is generic with respect to scheduler classes: its job is to find the highest-priority scheduler class with a runnable process and ask it what to run next.
BELADY'S ANOMALY
In computer storage, Bélády's anomaly is the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns. This phenomenon is commonly experienced when using the first-in first-out page replacement algorithm.
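A small simulation sketch (the reference string 1,2,3,4,1,2,5,1,2,3,4,5 is the classic textbook example): FIFO replacement incurs 9 faults with 3 frames but 10 with 4 frames, exhibiting the anomaly.

```c
#include <stdio.h>

/* Count page faults for FIFO replacement with `frames` frames (<= 16). */
static int fifo_faults(const int *ref, int n, int frames) {
    int mem[16];                     /* resident pages */
    int used = 0, next = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (mem[j] == ref[i]) { hit = 1; break; }
        if (!hit) {
            faults++;
            if (used < frames) mem[used++] = ref[i];      /* free frame */
            else { mem[next] = ref[i]; next = (next + 1) % frames; } /* evict oldest */
        }
    }
    return faults;
}

int main(void) {
    int ref[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof ref / sizeof ref[0];
    printf("3 frames: %d faults\n", fifo_faults(ref, n, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(ref, n, 4));  /* 10 */
    return 0;
}
```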
SEMAPHORES
The best example of a semaphore: suppose a desk has space for 3 students (the students are processes),
so initially the desk count is 3. If one student occupies a seat, the count is decremented by 1 to 2. If another student arrives, the count becomes 1, and when one more arrives, the count becomes 0 and the desk is full, with no space for a new student. If yet another student arrives anyway, the count is decremented again to -1: whenever the count is negative, new students (possibly more than one process) have to wait until the count becomes positive again. That is how a semaphore works; here the count is the available space and the students are the processes.
device driver
- request_region() is used to allocate an I/O region to the driver.
- Barriers in device drivers are used to tell the compiler to maintain the order of operations before and after the barrier.
- To pass a single value from user space to kernel space with the IOCTL method, we use get_user().
- To pass a single value from kernel space to user space, we use put_user().
- Sleeping inside the scheduled task is allowed with a "work queue", because it runs in process context.
- To find the IRQ number we use the probing method, which works only for non-shared interrupts.
- bind() ---> binds the port number + IP address to a server-side socket.
- The fork() system call creates a new child with a duplicate copy of all the memory segments of the parent.
- SIGSTOP ===> signal number 19
- accessing shared variables ===> critical section
- request_mem_region() is used to exclusively lock the memory-space port addresses of a device controller.