The operating system can also allocate memory dynamically and out of sequence; this scheme is known as noncontiguous memory allocation. When a process no longer requires memory, it is released from memory, leaving a memory hole; that hole can then be filled by another process that requires memory. We can reduce the problem of external fragmentation by using dynamic storage allocation and by rearranging processes so that free memory is placed together, which requires that the base registers and address spaces be updated accordingly.
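As a concrete illustration of the dynamic storage allocation mentioned above, here is a minimal sketch (not taken from the text) of a free list of memory holes managed with a first-fit policy. The class and method names are hypothetical; the point is how holes appear on release, get reused, and how external fragmentation shows up.

```python
# Illustrative sketch: first-fit dynamic storage allocation over a free
# list of memory holes. Names (FreeList, allocate, free) are hypothetical.
class FreeList:
    def __init__(self, total):
        # Each hole is (start, size); initially all memory is one big hole.
        self.holes = [(0, total)]

    def allocate(self, size):
        """First fit: take the first hole large enough; return its start."""
        for i, (start, hole_size) in enumerate(self.holes):
            if hole_size >= size:
                if hole_size == size:
                    del self.holes[i]
                else:
                    self.holes[i] = (start + size, hole_size - size)
                return start
        return None  # no single hole fits (external fragmentation)

    def free(self, start, size):
        """Return a block, coalescing adjacent holes."""
        self.holes.append((start, size))
        self.holes.sort()
        merged = [self.holes[0]]
        for s, sz in self.holes[1:]:
            last_start, last_size = merged[-1]
            if last_start + last_size == s:     # adjacent: merge into one hole
                merged[-1] = (last_start, last_size + sz)
            else:
                merged.append((s, sz))
        self.holes = merged

mem = FreeList(100)
a = mem.allocate(30)     # -> 0
b = mem.allocate(40)     # -> 30
mem.free(a, 30)          # leaves a memory hole at the front
c = mem.allocate(20)     # -> 0; the hole is filled by a new request
d = mem.allocate(40)     # -> None: 40 bytes are free in total, but split
                         #    across holes -- external fragmentation
```

Compaction would merge those scattered holes into one contiguous region, at the cost of relocating live processes.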
These are interfaces, usually kept in a tabular structure, that access some subsystem within the kernel, such as disk operations. Essentially, calls are made within programs and a checked copy of the request is passed through the system call; the request therefore has very little distance to travel. The disadvantages of the monolithic kernel are the converse of its advantages: modifying and testing monolithic systems takes longer than for their microkernel counterparts.
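The "tabular structure" of such an interface can be sketched roughly as follows. This is an illustrative toy, not real kernel code: the call numbers, handler names, and checks are all invented, but the shape (a table mapping call numbers to handlers, with the request checked before dispatch) mirrors the description above.

```python
# Toy sketch of a system-call table: numbers map to in-kernel handlers,
# and each request is validated ("checked") before the handler runs.
def sys_read(fd, count):
    if not isinstance(fd, int) or fd < 0:
        raise ValueError("EBADF: bad file descriptor")
    return b"x" * count          # stand-in for real disk I/O

def sys_write(fd, data):
    if not isinstance(data, bytes):
        raise ValueError("EFAULT: bad buffer")
    return len(data)

SYSCALL_TABLE = {0: sys_read, 1: sys_write}   # the tabular structure

def syscall(number, *args):
    handler = SYSCALL_TABLE.get(number)
    if handler is None:
        raise ValueError("ENOSYS: unknown system call")
    return handler(*args)        # dispatch happens inside the kernel

print(syscall(1, 3, b"hello"))   # -> 5 (bytes "written")
```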
The Benefits of Secondary Storage Secondary storage, sometimes called auxiliary storage, is storage separate from the computer itself, where you can store software and data on a semi-permanent basis. Secondary storage is necessary because memory, or primary storage, can be used only temporarily. If you are sharing your computer, you must yield memory to someone else after your program runs; if you are not sharing it, your programs and data will disappear from memory when you turn off the computer. However, you will probably want to keep the data you have used or the information you have derived from processing; that is why secondary storage is needed. Furthermore, memory is limited in size, whereas secondary storage media can store as much data as needed.
4. Why Are SQL Injection Attacks So Successful? Injection attacks are successful for a couple of reasons, the most widespread of which is that many newer developers simply do not think about the issue. They may develop a system that accepts data from untrusted users, fail to properly validate that data, and then use it to dynamically construct an SQL query against the database backing the system. For example, imagine a simple application that takes a username and password as inputs.
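A minimal sketch of that username/password scenario, using an in-memory SQLite database (the table and credentials are invented for illustration). The unsafe version pastes untrusted input straight into the SQL text; the safe version uses a parameterized query, so the driver treats inputs as data, never as SQL.

```python
# SQL injection demo: string concatenation vs. a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(user, pw):
    # VULNERABLE: untrusted input becomes part of the SQL statement itself.
    q = f"SELECT * FROM users WHERE username = '{user}' AND password = '{pw}'"
    return conn.execute(q).fetchone() is not None

def login_safe(user, pw):
    # Parameterized: placeholders keep data out of the query structure.
    q = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(q, (user, pw)).fetchone() is not None

evil = "' OR '1'='1"
print(login_unsafe("alice", evil))  # True  -- the OR clause bypasses the check
print(login_safe("alice", evil))    # False -- the injection string fails
```

In the unsafe version the query becomes `... AND password = '' OR '1'='1'`, which is true for every row, so the login check is bypassed without knowing any password.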
One of the differences between non-hierarchical and hierarchical methods is that, with hierarchical methods, there is no need to specify the number of clusters in advance. A second significant difference is that the gathering and separating of objects is done along the way, meaning that what has been done in previous steps cannot be undone, whereas a partitioning method aims to find the best assignment of objects to k clusters throughout the whole process. One might claim they are similar, but they can lead to entirely different results. The results of hierarchical methods are graphs with a certain gradualism: the only difference between k and k+1 clusters is that one of the clusters splits up (shown in Figure 8 – Example of dendrogram).
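The two properties described above (merges cannot be undone, and the k-cluster result differs from k+1 by exactly one merge) can be seen in a small pure-Python sketch of single-linkage agglomerative clustering on 1-D points. The function name and data are invented for illustration.

```python
# Illustrative single-linkage agglomerative clustering on 1-D points.
# Each step merges the two closest clusters; a merge is never undone,
# so cutting at k vs. k+1 clusters differs by exactly one merge/split.
def agglomerate(points, k):
    clusters = [[p] for p in points]          # start: every point alone
    while len(clusters) > k:
        best = None                           # (distance, i, j) of closest pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)        # irreversible merge
    return [sorted(c) for c in clusters]

pts = [1, 2, 8, 9, 25]
print(agglomerate(pts, 3))  # -> [[1, 2], [8, 9], [25]]
print(agglomerate(pts, 2))  # -> [[1, 2, 8, 9], [25]]
```

Reading the two outputs bottom-up reproduces one level of the dendrogram: going from 2 clusters to 3 splits [1, 2, 8, 9] into [1, 2] and [8, 9], while [25] is untouched.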
Memory management Main memory is fundamental to the running of most computer systems, as the CPU can only load instructions from main memory for a program to run. Main memory, also referred to as Random Access Memory (RAM), acts as a source of data for the CPU and other devices. Since main memory is volatile and cannot hold data permanently, programs must be loaded into it to run and re-loaded when needed again. In relation to memory management, the operating system keeps records of which parts of memory are in use and which programs use them, decides which programs to move in and out of memory, and accordingly assigns and frees memory space. The operating system is also responsible for mapping logical addresses to physical addresses when assigning memory space to programs.
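The logical-to-physical mapping mentioned above can be sketched with the classic base-and-limit scheme (a simplification; real systems typically use paging). The numbers here are invented for illustration.

```python
# Illustrative base/limit address translation: a program's logical address
# is checked against the limit register, then offset by the base register.
def translate(logical_addr, base, limit):
    """Map a logical address to a physical one, with bounds protection."""
    if not 0 <= logical_addr < limit:
        raise MemoryError("addressing error: logical address out of range")
    return base + logical_addr

# A process loaded at physical address 3000 with a 500-byte address space:
print(translate(120, base=3000, limit=500))  # -> 3120
```

The same check is what lets the OS protect one process's memory from another: any logical address at or beyond the limit is trapped before it reaches physical memory.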
File System Manipulation: • When we work on the computer we have to manipulate data and files, and the files in particular are of interest. File system manipulation includes reading and writing files; creating, deleting, and searching files and directories; listing file information; and permission management. • The OS makes it easier for users to perform these tasks: the services a program needs to access and manipulate files are provided on top of secondary storage, which is managed by the OS. • Through file system manipulation, the OS reads, writes, creates, and deletes files.
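The services listed above map directly onto OS calls. The following sketch walks through them using Python's standard library (file and directory names are invented):

```python
# File-system manipulation via OS services: create, write, read, list,
# manage permissions, and delete.
import os, stat, tempfile

d = tempfile.mkdtemp()                       # create a directory
path = os.path.join(d, "notes.txt")

with open(path, "w") as f:                   # create + write a file
    f.write("hello")

with open(path) as f:                        # read it back
    assert f.read() == "hello"

print(os.listdir(d))                         # list file information

os.chmod(path, stat.S_IRUSR)                 # permission management: owner read-only
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # restore write access

os.remove(path)                              # delete the file
os.rmdir(d)                                  # delete the directory
```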
Question No.1 a) Describe the difference between De-normalization and normalization with the help of examples. Answer: Normalization and De-normalization are two processes used to optimize the performance of a database. Normalization is the process of creating a database that is structurally consistent, with no or minimal redundancy; however, a fully normalized database sometimes does not provide maximum processing efficiency. So when performance matters most, one may prefer a de-normalized database. Following are the major differences between the two processes.
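As a small worked example (the customer/order schema is hypothetical), the normalized design below stores each customer's details once and references them by key, while the denormalized variant repeats them in every order row to avoid a join:

```python
# Normalization vs. de-normalization on a toy customer/order schema.
import sqlite3

db = sqlite3.connect(":memory:")

# Normalized: customer details stored once, referenced by a key.
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
db.execute("""CREATE TABLE orders (id INTEGER PRIMARY KEY,
              customer_id INTEGER REFERENCES customers(id), item TEXT)""")
db.execute("INSERT INTO customers VALUES (1, 'Ada', 'London')")
db.executemany("INSERT INTO orders VALUES (?, 1, ?)", [(1, 'pen'), (2, 'ink')])

# Reading requires a join -- consistent, but more work per query:
rows = db.execute("""SELECT c.name, c.city, o.item
                     FROM orders o JOIN customers c ON c.id = o.customer_id""").fetchall()
print(rows)  # -> [('Ada', 'London', 'pen'), ('Ada', 'London', 'ink')]

# De-normalized: one flat table; reads are cheap, but 'Ada'/'London' is
# duplicated, so an update must touch every copy to stay consistent.
db.execute("CREATE TABLE orders_flat (id INTEGER PRIMARY KEY, name TEXT, city TEXT, item TEXT)")
db.executemany("INSERT INTO orders_flat VALUES (?, 'Ada', 'London', ?)",
               [(1, 'pen'), (2, 'ink')])
```

This is the trade-off in miniature: normalization removes redundancy at the cost of joins; de-normalization buys read performance at the cost of redundant, update-sensitive data.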
Not all the stored information is required by the processor at a given point in time, so it is beneficial to use less expensive storage devices to hold information that is not currently used by the CPU. The memory unit that interacts directly with the CPU is known as main memory. Devices used for backup storage are termed auxiliary memory; magnetic disks and tapes are the most common auxiliary memory devices.
DE-DUPLICATION STORAGE SYSTEM There are a few deduplication storage systems in use for different storage purposes. Some of them are mentioned below: 1. Duplicate Data Elimination (DDE): Duplicate Data Elimination supports a combination of content hashing, copy-on-write, and lazy updates to identify and merge indistinguishable data blocks in a SAN (storage area network) system. The core difference between the DDE storage system and other storage systems is that it analyzes and deduplicates the matching hash values of the data blocks at the source side itself, before the actual transmission of the data. This kind of system always works in the background. In deduplication technology,
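The content-hashing idea behind such systems can be sketched as follows (a toy model, not DDE's actual implementation): blocks are hashed, and a block whose hash has already been stored is never written again; a "recipe" of hashes is enough to rebuild the data.

```python
# Toy content-hash deduplication: identical blocks are stored only once.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}             # hash -> block contents (stored once)

    def write(self, data, block_size=4):
        """Split data into fixed-size blocks; store only unseen blocks.
        Returns a recipe of hashes from which the data can be rebuilt."""
        recipe = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            h = hashlib.sha256(block).hexdigest()
            if h not in self.blocks:       # identical block: skip the copy
                self.blocks[h] = block
            recipe.append(h)
        return recipe

    def read(self, recipe):
        return b"".join(self.blocks[h] for h in recipe)

store = DedupStore()
r1 = store.write(b"AAAABBBBAAAA")          # the block "AAAA" appears twice
print(len(store.blocks))                   # -> 2 unique blocks, not 3
assert store.read(r1) == b"AAAABBBBAAAA"   # data is fully reconstructible
```

In a source-side system like DDE, the hash check happens before transmission, so the duplicate block is not even sent across the network.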