In a single-tasking system, managing memory is trivial: all of it is given to the one running process. In a multi-tasking system it is much harder, because memory must be partitioned among processes. Memory should be managed in such a way that there is a steady supply of ready processes for the processor. Memory management uses both hardware and software components.
Relocation / Addressing…
A program consists of a sequence of instructions, and some of these instructions refer to other memory locations
- Function call, variable reference.
If a process requested memory using physical (absolute) addresses, the program would have to be loaded into that exact memory location before executing. Because of this, programs use logical (relative) addresses instead of absolute addresses. Addresses generated by the processor are translated by the MMU. This scheme allows process images to be relocated into any area of main memory as required.
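As a minimal sketch of this idea, assume the MMU holds a per-process base and limit register (the function and register names here are illustrative, not from any real hardware):

```python
# Hypothetical base-register relocation: the MMU adds a per-process base
# address to every logical address the CPU issues, so the process image
# can be loaded anywhere in main memory.

def translate(logical_addr, base, limit):
    """Translate a logical (relative) address to a physical address."""
    if logical_addr >= limit:          # protection check: stay inside the image
        raise MemoryError("address outside process image")
    return base + logical_addr

# A process loaded at physical address 0x4000 with a 16 KiB image:
physical = translate(0x0123, base=0x4000, limit=0x4000)
assert physical == 0x4123
```

Note how the same logical address 0x0123 would map elsewhere if the process were relocated: only the base register needs to change.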
Protection / Sharing
A memory management scheme should provide protection between processes, yet be flexible enough to allow sharing of memory between processes when required. A process should not be allowed to read or write the memory locations of another process without permission.
Placement / Replacement
A memory management scheme usually keeps track of free memory slots. The algorithm/policy used to allocate a free memory slot to a newly admitted process is known as the placement scheme. The algorithm/policy used to replace (swap out) an existing process in order to make room for a higher-priority process is known as the replacement scheme.
Memory management techniques
- Fixed partitioning
- Dynamic partitioning
- Buddy system
Fixed partitioning
In this simple memory management scheme, memory is divided into fixed equal-size or unequal-size partitions at system generation time.
- Having equal-size partitions poses two major problems:
- A program may be too big to fit in a partition.
- Main memory utilization is extremely inefficient due to internal fragmentation (memory allocated to a process but left unused inside its partition).
Unequal-size partitions can lessen both of these issues.
Dynamic partitioning
Here the OS allocates a chunk of memory of exactly the required size to each new process, and must keep track of the free memory. Dynamic partitioning suffers from external fragmentation: as time goes on, the memory external to all partitions (free memory) becomes increasingly fragmented. A costly compaction process must be performed from time to time, because situations arise where no contiguous slot is large enough even though there is enough free memory in total.
Placement algorithms (these have a huge impact on the level of external fragmentation):
Best-fit: Select the smallest free memory slot that is large enough.
Leaves many tiny, unusable fragments behind, so external fragmentation is high – the worst performer.
First-fit: Select the first free memory slot (scanning from the front) that is large enough.
Tends to litter the front of the free memory list with small fragments, but the scan is short and simple – the best performer.
Next-fit: Start from the previous allocation point and select the next free memory slot that is large enough.
Tends to litter the end of the free memory list, breaking up the large free block usually found there – slightly behind first-fit.
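The three policies above can be sketched over a simple free list of (start, size) holes; all names here are illustrative, not from any real allocator:

```python
# Illustrative placement policies over a free list of (start, size) holes.

def best_fit(holes, size):
    """Smallest hole that is large enough."""
    candidates = [h for h in holes if h[1] >= size]
    return min(candidates, key=lambda h: h[1], default=None)

def first_fit(holes, size):
    """First hole, scanning from the front, that is large enough."""
    return next((h for h in holes if h[1] >= size), None)

def next_fit(holes, size, last_index):
    """First large-enough hole at or after the previous allocation point."""
    n = len(holes)
    for i in range(n):
        h = holes[(last_index + i) % n]
        if h[1] >= size:
            return h
    return None

holes = [(0, 100), (200, 30), (300, 60)]
assert best_fit(holes, 25) == (200, 30)    # tightest fit wins
assert first_fit(holes, 25) == (0, 100)    # front of the list wins
assert next_fit(holes, 25, 2) == (300, 60) # resumes from index 2
```

A real allocator would also split the chosen hole and coalesce adjacent free holes on deallocation; this sketch only shows the selection step.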
Buddy system
Memory is allocated in powers of 2.
2^L = smallest allowed block size, 2^U = largest allowed block size
- When a process of size S needs to be allocated:
- If 2^(U-1) < S ≤ 2^U, the entire memory block is allocated
- Else the block is split into two buddies of size 2^(U-1)
- If 2^(U-2) < S ≤ 2^(U-1), one of those two buddies is allocated
- The process continues until a suitable memory block is found
If a memory block and its buddy are both free, they are coalesced (merged) back into one larger block. The buddy system is a reasonable compromise between the fixed and dynamic partitioning schemes.
- It reduces internal fragmentation – the smallest power-of-2 block that can hold the request is allocated.
- It reduces external fragmentation – free memory tends to line up at the end of main memory.
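The splitting rule above amounts to rounding the request up to the smallest allowed power of 2; a minimal sketch (illustrative function name, not a production allocator):

```python
# Buddy-system block sizing: blocks are powers of two between 2**L and
# 2**U, and a request of size S gets the smallest such block that holds it.

def buddy_block_size(S, L, U):
    """Return the power-of-2 block size the buddy system would allocate.

    Requests larger than 2**U cannot be satisfied in a real buddy system;
    this sketch simply caps at 2**U.
    """
    k = L
    while (1 << k) < S and k < U:
        k += 1                 # equivalent to splitting one level less deep
    return 1 << k

assert buddy_block_size(3, L=2, U=6) == 4    # 2^1 < 3 <= 2^2
assert buddy_block_size(17, L=2, U=6) == 32  # 2^4 < 17 <= 2^5
assert buddy_block_size(1, L=2, U=6) == 4    # never smaller than 2^L
```

The gap between S and the returned block size is exactly the internal fragmentation the scheme accepts in exchange for cheap splitting and coalescing.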
In all three schemes discussed above, relocation can be achieved by using a relative addressing scheme (a form of logical addressing in which addresses are expressed with respect to a known reference point, such as the start of the process image).
Paging
- Here memory is divided into relatively small, equal-size chunks called frames. Process images are also divided into pages of the same size. Paging keeps internal fragmentation minimal (only the last page of a process can be partially unused), and external fragmentation is not possible.
- With paging, the memory allocated to a process may be non-contiguous, since the page table holds a reference to each of its pages. There is a page table for each process.
- A logical memory address consists of two parts: the page number and the offset within that page.
- By setting page size to a power of 2, address translation can be simplified:
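With a page size of 2^n bytes, the page number is just the high-order bits of the address and the offset is the low n bits, so the split needs only a shift and a mask. A sketch, assuming a 4 KiB page size and a hypothetical per-process page table:

```python
# Splitting a logical address when the page size is a power of 2.

PAGE_SIZE = 4096        # 2**12, a common page size (assumption)
OFFSET_BITS = 12

def split_address(logical_addr):
    page = logical_addr >> OFFSET_BITS        # high-order bits
    offset = logical_addr & (PAGE_SIZE - 1)   # low-order 12 bits
    return page, offset

def translate(logical_addr, page_table):
    """Look up the frame and recombine it with the unchanged offset."""
    page, offset = split_address(logical_addr)
    frame = page_table[page]                  # per-process page -> frame map
    return (frame << OFFSET_BITS) | offset

page_table = {0: 5, 1: 9}                     # illustrative mapping
assert split_address(0x1234) == (1, 0x234)
assert translate(0x1234, page_table) == 0x9234
```

Because the offset bits pass through translation untouched, no arithmetic beyond the table lookup is needed, which is why power-of-2 page sizes are universal in practice.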
Segmentation
- The process image is divided into unequal-size chunks known as segments. Each segment is loaded into memory independently of the other segments (similar to dynamic partitioning, except that only part of the process image need be in memory at a time). While paging is transparent to the programmer, segmentation is not: the programmer can create and treat segments as desired (e.g. library code can be a shared segment between two processes).
- Here, unlike with pages, not every offset is valid in every segment, because segments differ in size.
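Because segment sizes vary, translation must check the offset against each segment's own limit before adding the base; a minimal sketch with an illustrative segment table:

```python
# Segment-table translation: each entry holds a base and a limit, and the
# offset must be validated against that segment's limit (unlike paging,
# where every offset within a page is valid).

segment_table = [
    {"base": 0x1000, "limit": 0x0400},   # segment 0 (e.g. code), 1 KiB
    {"base": 0x8000, "limit": 0x0100},   # segment 1 (e.g. stack), 256 B
]

def translate(seg, offset):
    entry = segment_table[seg]
    if offset >= entry["limit"]:          # offset may be invalid here
        raise MemoryError("offset beyond segment limit")
    return entry["base"] + offset

assert translate(0, 0x3FF) == 0x13FF
assert translate(1, 0x00F) == 0x800F
```

An offset of 0x200 is valid in segment 0 but raises an error in segment 1, which is exactly the point made above: the same offset is not meaningful in every segment.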