UNIX
UNIX is a multitasking, multi-user computer operating system originally developed in 1969
by a group of AT&T employees at Bell Labs.
Unix operating systems are widely used in servers, workstations, and mobile devices. Unix
was designed to be portable, multi-tasking and multi-user in a time-sharing configuration.
Unix systems are characterized by various concepts:
the use of plain text for storing data;
a hierarchical file system;
treating devices and certain types of inter-process communication (IPC) as files;
the use of a large number of software tools, small programs that can be strung
together through a command line interpreter using pipes.
These concepts are collectively known as the Unix philosophy.
Unix vs Windows: Two Major Classes of Operating Systems
On the server front, Unix has been closing in on Microsoft's market share.
On the client front, Microsoft is currently dominating the operating system market with over
90% market share.
Advantages of Unix over Windows
Flexibility - Unix is more flexible and can be installed on many different types of
machines, including mainframe computers, supercomputers and microcomputers.
Stability - Unix is more stable and does not go down as often as Windows does, and
therefore requires less administration and maintenance.
Software design - Unix also inspires novel approaches to software design, such as
solving problems by interconnecting simpler tools instead of creating large
monolithic application programs.
Multi-user - more than one user can use the machine at a time, supported via
terminals (serial or network connection).
Portability - only the kernel (<10%) is written in assembler; tools for program
development and a wide range of support tools (debuggers, compilers) are
available.
LINUX
1. Linux - The Operating System
Linux is a Unix-like computer Operating System (or OS) that uses the Linux kernel. Linux
started out as a personal computer system used by individuals, and is now used mostly as a
server operating system. Linux is a prime example of open-source development, which means
that the source code is available freely for anyone to use.
Linus Torvalds, who was then a student at the University of Helsinki in Finland, developed
Linux in 1991. He released it for free on the Internet. Due to the far reach of the Free
Software Foundation (FSF) and the GNU Project, Linux's popularity increased rapidly, with
utilities developed and released for free online.
It provides the basic computer services needed for someone to do things with a computer. It is
the middle layer between the computer hardware and the software applications you run.
About GNU
GNU stands for 'GNU's Not Unix.' It was a project conceived by Richard Stallman in 1983 in
response to the increasing tendency of software companies to copyright their software under
terms that prohibited sharing. GNU's purpose is to develop a wholly free system.
The kernel combined with GNU's free software is properly called "GNU/Linux."
About GPL
Both the kernel and the software are freely available under licencing that is sometimes called
"copyleft" (as opposed to copyright). Where traditional copyright was meant to restrict usage
and ownership of a copyrighted item to as few people as possible, inhibiting development and
growth, GNU/Linux is different. It is released under terms designed to ensure that as many
people as possible are allowed to receive, use, share, and modify the software. That licence is
called the GPL (GNU General Public Licence).
What is the Kernel?
The kernel is a program that constitutes the central core of a computer operating system. It
has complete control over everything that occurs in the system.
The kernel is the first part of the operating system to load into memory during booting (i.e.,
system startup), and it remains there for the entire duration of the computer session because
its services are required continuously. Thus it is important for it to be as small as possible
while still providing all the essential services needed by the other parts of the operating
system and by the various application programs.
Because of its critical nature, the kernel code is usually loaded into a protected area of
memory, which prevents it from being overwritten. The kernel performs its tasks, such as
executing processes and handling interrupts, in kernel space, whereas everything a user
normally does runs in user space.
Crash of the Kernel
When a computer crashes, it actually means the kernel has crashed. If only a single program
has crashed but the rest of the system remains in operation, then the kernel itself has not
crashed. A crash is the situation in which a program, either a user application or a part of the
operating system, stops performing its expected function(s) and responding to other parts of
the system. The program might appear to the user to freeze. If such a program is critical to the
operation of the kernel, the entire computer could stall or shut down.
The kernel provides basic services for all other parts of the operating system, typically
including memory management, process management, file management and I/O
(input/output) management (i.e., accessing the peripheral devices).
Device drivers
The vast majority of the source code in the Linux kernel exists in device drivers that make a
particular hardware device usable. The Linux source tree provides a drivers subdirectory that
is further divided by the various devices that are supported. You can find the device driver
sources in ./linux/drivers.
Architecture-dependent code
While much of Linux is independent of the architecture on which it runs, there are elements
that must consider the architecture for normal operation and for efficiency. The ./linux/arch
subdirectory defines the architecture-dependent portion of the kernel source contained in a
number of subdirectories that are specific to the architecture (collectively forming the BSP).
For a typical desktop, the i386 directory is used.
Categories of Kernels
Kernels can be classified into four broad categories: monolithic kernels, microkernels, hybrid
kernels and exokernels. Each has its own advocates and detractors.
Microkernel
A microkernel manages only the essentials: CPU, memory, and IPC (inter-process
communication). Microkernels have the advantage of portability: for example, the kernel
itself does not have to change when you change your video card, because such drivers run
outside it. Microkernels also have a very small footprint, for both memory and install space,
and they tend to be more secure because most services run in user mode, which does not
have the high permissions of supervisor mode. Examples: Minix, QNX.
Pros
Portability
Security
Cons
Slower, because services communicate through message passing rather than direct
function calls
Monolithic Kernel
Monolithic kernels are the opposite of microkernels because they encompass not only the
CPU, memory, and IPC, but they also include things like device drivers, file system
management, and system server calls. Monolithic kernels tend to be better at accessing
hardware and multitasking. Ex: Linux
Pros
Processes react faster because there isn't a queue for processor time
Cons
A large footprint, and a bug in one component (such as a device driver) can bring
down the whole kernel
Hybrid kernels
Hybrid kernels are similar to microkernels, except that they include additional code in kernel
space so that such code can run more swiftly than it would were it in user space. These
kernels represent a compromise that was implemented by some developers. Hybrid kernels
should not be confused with monolithic kernels that can load modules after booting (such as
Linux).
Most modern operating systems use hybrid kernels, including Microsoft Windows NT, 2000
and XP.
Exokernels
Exokernels differ from the other types of kernels in that their functionality is limited to the
protection and multiplexing of the raw hardware, and they provide no hardware abstractions
on top of which applications can be constructed. This separation of hardware protection from
hardware management enables application developers to determine how to make the most
efficient use of the available hardware for each specific program.
Exokernels are in themselves extremely small. A major advantage of exokernel-based
systems is that they can incorporate multiple library operating systems, each exporting a
different API (application programming interface), such as one for Linux and one for
Microsoft Windows, thus making it possible to simultaneously run both Linux and Windows
applications.
1.2-Linux History
Linux traces its ancestry back to a mainframe operating system known as Multics
(Multiplexed Information and Computing Service). Begun in 1965, Multics was one of the
first multi-user computer systems and remained in use for decades. Two Bell Labs software
engineers, Ken Thompson and Dennis Ritchie, worked on Multics. They implemented a
rudimentary operating system they named Unics, as a pun on Multics. Somehow, the spelling
of the name became Unix.
Their operating system was novel in its portability. In order to create a portable
operating system, Ritchie and Thompson first created a programming language, called C.
Writing Unix in C made it possible to easily adapt Unix to run on different computers.
As word of their work spread and interest grew, Ritchie and Thompson made copies of Unix
freely available to programmers around the world. These programmers revised and improved
Unix, sending word of their changes back to Ritchie and Thompson, who incorporated the
best such changes in their version of Unix.
Linux is the first truly free Unix-like operating system. The underlying GNU Project was
launched in 1983 by Richard Stallman originally to develop a Unix-compatible operating
system called GNU, intended to be entirely free software. Many programs and utilities were
contributed by developers around the world, and by 1991 most of the components of the
system were ready. Still missing was the kernel.
Linus Torvalds invented Linux itself. In 1991, Torvalds was a student at the University of
Helsinki in Finland where he had been using Minix, a non-free Unix-like system, and began
writing his own kernel. He started by developing device drivers and hard-drive access, and by
September had a basic design that he called Version 0.01. This kernel, which is called Linux,
was afterwards combined with the GNU system to produce a complete free operating system.
On October 5th, 1991, Torvalds sent a posting to the comp.os.minix newsgroup announcing
the release of Version 0.02, a basic version that still needed Minix to operate, but which
attracted considerable interest nevertheless. The kernel was then rapidly improved by
Torvalds and a growing number of volunteers communicating over the Internet, and by
December 19th a functional, stand-alone Unix-like Linux system was released as Version
0.11.
On January 5, 1992, Linux Version 0.12 was released, an improved, stable kernel. The next
release was called Version 0.95, to reflect the fact that it was becoming a full-featured system.
After that Linux became an underground phenomenon, with a growing group of distributed
programmers that continue to debug, develop, and enhance the source code baseline to this
day.
Torvalds released Version 0.11 under a freeware license of his own devising, but then
released Version 0.12 under the well established GNU General Public License. More and
more free software was created for Linux over the next several years.
Linux continued to be improved through the 1990s, and started to be used in large-scale
applications like web hosting, networking, and database serving, proving ready for production
use. Version 2.2, a major update to the Linux kernel, was officially released in January
1999. By the year 2000, most computer companies supported Linux in one way or another,
recognizing a common standard that could finally reunify the fractured world of the Unix
Wars. The next major release was V2.4 in January 2001, providing (among other
improvements) compatibility with the upcoming generations of Intel's 64-bit Itanium
computer processors.
Although Torvalds continued to function as the Linux kernel release manager, he avoided
working at any of the many companies involved with Linux so as not to show favoritism
to any particular organization. Instead he went to work for a company called Transmeta and
helped develop mobile computing solutions, and later made his home at the Open Source
Development Labs (OSDL), which merged into The Linux Foundation.
1.3 Linux Features: Following are the key features of the Linux operating system:
Multiuser: several users on the same machine at the same time (and no two-user
licenses!).
Protected: it has memory protection between processes, so that one program can't
bring the whole system down.
Demand-loaded executables: Linux only reads from disk those parts of a program that
are actually used.
Shared copy-on-write pages among executables. This means that multiple processes
can use the same memory to run in. When one tries to write to that memory, that page
(a 4 KB piece of memory) is copied somewhere else. Copy-on-write has two benefits:
increasing speed and decreasing memory use.
All source code is available, including the whole kernel and all drivers, the
development tools and all user programs; also, all of it is freely distributable. Plenty
of commercial programs are being provided for Linux without source, but everything
that has been free, including the entire base operating system, is still free.
Multiple virtual consoles: several independent login sessions through the console
are allowed; you switch by pressing a hot-key combination. These are dynamically
allocated; you can use up to 64.
Supports several common file systems, including Minix, Xenix, and all the
common System V file systems, and has an advanced file system of its own, which
offers file systems of up to 4 TB, and names up to 255 characters long.
Slackware - a free and open source Linux-based operating system, and one of the
earliest distributions built on top of the Linux kernel. Slackware aims for design
stability and simplicity, and to be the most "Unix-like" Linux distribution,
using plain text files for configuration and making as few modifications as
possible to software packages.
SuSE - SUSE is the original provider of the enterprise Linux distribution and the
most interoperable platform for mission-critical computing. It is the only Linux
recommended by Microsoft, and it is supported on more hardware and software than
any other enterprise Linux distribution.
TurboLinux - the Turbolinux distribution was created as a rebranded Red Hat
distribution.
Many networking protocols: the base protocols available in the latest development kernels
include TCP, IPv4, IPv6, AX.25, X.25, IPX, DDP (AppleTalk), Netrom, and others. Stable
network protocols included in the stable kernels currently include TCP, IPv4, IPX, DDP, and
AX.25.
The hardware level contains device drivers and machine specific components.
The kernel level is a mix of machine-dependent and machine-independent software.
The user level is a collection of applications, like shells, editors and utilities.
The hardware and device drivers present a stable set of definitions to support many
types of kernels.
The kernel interfaces with the hardware and device drivers and presents a stable set of
interfaces to support standard UNIX application programs.
The application programs enable the users to accomplish meaningful work with the
hardware.
system call interface, which implements the basic functions such as read and write.
the kernel code, which can be more accurately defined as the architecture-independent
kernel code. This code is common to all of the processor architectures supported by
Linux.
the architecture-dependent code, which forms what is more commonly called a BSP
(Board Support Package). This code serves as the processor and platform-specific
code for the given architecture.
Processes
A process is a program that is running. It is an address space with one or more threads
executing within that address space, and the required system resources for those threads.
Each instance of a running program constitutes a process.
A process is a dynamic entity, constantly changing as the machine code instructions are
executed by the processor.
In short, a process is an executing program encompassing all of the current activity in the
microprocessor. Linux is a multiprocessing operating system.
Each process is a separate task with its own rights and responsibilities. If one process crashes
it will not cause another process in the system to crash.
Each individual process runs in its own virtual address space and is not capable of interacting
with another process except through secure, kernel-managed mechanisms.
Needs of a process
During the lifetime of a process it will use many system resources. It will use the CPUs in the
system to run its instructions and the system's physical memory to hold it and its data. It will
open and use files within the filesystems and may directly or indirectly use the physical
devices in the system. Linux must keep track of the process and its system resources to fairly
manage it and the other processes in the system.
Linux Processes
So that Linux can manage the processes in the system, each process is represented by a
task_struct data structure. The task vector is an array of pointers to every task_struct data
structure in the system.
This means that the maximum number of processes in the system is limited by the size of the
task vector; by default it has 512 entries. As processes are created, a new task_struct is
allocated from system memory and added into the task vector. To make it easy to find, the
current, running, process is pointed to by the current pointer.
Linux also supports real time processes.
Process State
As a process executes it changes state according to its circumstances. Linux processes
have the following states:
Running
The process is either running (it is the current process in the system) or it is ready to
run (it is waiting to be assigned to one of the system's CPUs).
Waiting
The process is waiting for an event or for a resource. Linux differentiates between two
types of waiting process: interruptible and uninterruptible.
Interruptible waiting processes can be interrupted by signals whereas uninterruptible
waiting processes are waiting directly on hardware conditions and cannot be
interrupted under any circumstances.
Stopped
The process has been stopped, usually by receiving a signal. A process that is being
debugged can be in a stopped state.
Zombie
This is a halted process which, because its parent has not yet collected its exit status,
still has a task_struct data structure in the task vector. It is what it sounds like, a dead
process.
The Life Cycle of Processes
The state a process is in changes many times during its "life." These changes can occur, for
example, when the process makes a system call. A commonly used model shows processes
operating in one of six separate states, which you can find in sched.h:
1. executing in user mode
2. executing in kernel mode
3. ready to run
4. sleeping
5. newly created, not ready to run, and not sleeping
6. issued exit system call (zombie)
A newly created process enters the system in state 5. If the process is simply a copy of
the original process (a fork but no exec), it then begins to run in the state that the
original process was in (1 or 2).
When a process is running, an interrupt may be generated (more often than not, this is
the system clock) and the currently running process is pre-empted, moving it back to
state 3: it is still ready to run and in main memory.
When the process makes a system call while in user mode (1), it moves into state 2
where it begins to run in kernel mode.
Assume at this point that the system call made was to read a file on the hard disk.
Because the read is not carried out immediately, the process goes to sleep, waiting on
the event that the system has read the disk and the data is ready. It is now in state 4.
When the data is ready, the process is awakened. This does not mean it runs
immediately, but rather it is once again ready to run in main memory (3).
If a process that was asleep is awakened (perhaps when the data is ready), it moves
from state 4 (sleeping) to state 3 (ready to run). This can be in either user mode (1) or
kernel mode (2).
New processes are created by cloning old processes, or rather by cloning the current process.
A new task is created by a system call (fork or clone) and the cloning happens within the
kernel in kernel mode. At the end of the system call there is a new process waiting to run
once the scheduler chooses it. A new task_struct data structure is allocated from the system's
physical memory with one or more physical pages for the cloned process's stacks (user and
kernel).
When cloning processes Linux allows the two processes to share resources rather than have
two separate copies. This applies to the process's files, signal handlers and virtual memory.
Cloning a process's virtual memory
A new set of vm_area_struct data structures must be generated together with their owning
mm_struct data structure and the cloned process's page tables. None of the process's virtual
memory is copied at this point.
Instead Linux uses a technique called "copy on write", which means that virtual memory will
only be copied when one of the two processes tries to write to it.
1.7 - Ext2 and Ext3 File Systems
LINUX supports a large number of file systems with the help of a unified interface to the
LINUX kernel called the Virtual File System (VFS).
The Virtual File System supplies the applications with the system calls for file management
to maintain internal structures and passes tasks on to the appropriate actual file system.
Another important job of the VFS is performing standard actions.
Random access is made possible by block-oriented devices, which are divided into a
specific number of equal-sized blocks.
When using these, LINUX also has at its disposal the buffer cache. Using the
functions of the buffer cache, it is possible to access any of the sequentially numbered
blocks in a given device.
In LINUX, the information required for file management is kept strictly apart from
the data and collected in a separate inode structure for each file.
Figure shows the arrangement of a typical LINUX inode. The information contained
includes access times, access rights and the allocation of data to blocks on the
physical media.
The inode already contains a few block numbers to ensure efficient access to small
files. Access to larger files is provided via indirect blocks, which also contain block
numbers.
Every file is represented by just one inode, which means that, within a file system,
each inode has a unique number and the file itself can also be accessed using this
number.
Directories allow the file system to be given a hierarchical structure. These are also
implemented as files, but the kernel assumes them to contain pairs consisting of a
filename and its inode number.
Each file system starts with a boot block reserved for the code required to boot the
operating system .
All the information which is essential for managing the file system is held in the
superblock which is followed by a number of inode blocks containing the inode
structures for the file system
A new file system can be mounted onto any directory. This original directory is then
known as the mount point and is covered up by the root directory of the new file
system along with its subdirectories and files.
Unmounting the file system releases the hidden directory structure again.
Another important aspect of a file system is data security.
The representation of file systems in the kernel
The actual representation of data in LINUX's memory sticks closely to the logical
structure of a UNIX file system.
The VFS calls the file-system-specific functions for the various implementations to
fill up the structures.
These functions are provided to the VFS via the function register_filesystem().
#ifdef CONFIG_MINIX_FS
        /* make the Minix implementation known to the VFS */
        register_filesystem(&(struct file_system_type)
                {minix_read_super, "minix", 1, NULL});
#endif
the VFS is given the name of the file system ('minix').
The function passed, read_super( ), forms the mount interface: further functions of the
file system implementation will be made known to the VFS via this function.
The function enters the file_system_type structure it has been passed in a singly
linked list, whose beginning is pointed to by file_systems.
Mounting
Before a file can be accessed, the file system containing the file must be mounted.
This can be done using either the system call mount or the function mount_root().
The mount_root() function takes care of mounting the first file system (the root file
system).
It is called by the system call setup after all the file system implementations
permanently included in the kernel have been registered.
The setup call itself is called just once, immediately after the init process is created
by the kernel function init() (file init/main.c).
This system call is necessary because access to kernel structures is not allowed from
user mode (which is the status of the init process).
Every mounted file system is represented by a super_block structure.
These structures are held in the static table super_blocks[] and limited in number to
NR_SUPER.
The superblock is initialized by the function read_super() in the Virtual File System.
This file-system-specific function will have been made known on registering the
implementation with the VFS.
When called, it will contain:
a superblock structure in which the elements s_dev and s_flags are filled in
accordance with Table below,
a character string (in this case void *) containing further mount options for the
file system, and
a silent flag indicating whether unsuccessful mounting should be reported.
This flag is used only by the kernel function mount_root(), as this calls all the
read_super( ) functions present in the various file system implementations
The file-system-specific function read_super() reads its data if necessary from the
appropriate block device using the LINUX cache functions .
The file-system-independent mount flags in the superblock:

Macro             Value   Remarks
MS_RDONLY           1     File system is read only
MS_NOSUID           2     Ignores S bits
MS_NODEV            4     Inhibits access to device files
MS_NOEXEC           8     Inhibits execution of programs
MS_SYNCHRONOUS     16     Immediate write to disk
MS_REMOUNT         32     Changes flags
The superblock contains information on the entire file system, such as block size,
access rights and time of the last change.
In addition, the structure holds special information on the relevant file systems.
For file system modules mounted later, there is a pointer generic_sdp.
The components s_lock and s_wait ensure that access to the superblock is
synchronized.
This uses the functions lock_super() and unlock_super(), which are defined in the
file <linux/locks.h>.
Superblock operations
The superblock structure provides, in the function vector s_op, functions for accessing the
file system, and these form the basis for further work on the file system.
struct super_operations {
void (*read_inode) (struct inode *);
int (*notify_change) (struct inode*, struct iattr *);
void (*write_inode) (struct inode *);
void (*put_inode) (struct inode *);
void (*put_super) (struct super_block *);
void (*write_super) (struct super_block *);
void (*statfs) (struct super_block *, struct statfs *);
int (*remount_fs) (struct super_block *, int *, char *);
};
The functions in the super_operations structure serve to read and write an individual
inode,to write the superblock and to read file system information.
If a superblock operation is not implemented - that is, if the pointer to the operation is
NULL - no further action will take place.
write_super(sb)
The write_super(sb) function is used to save the information of the superblock.
The function will cause the cache to write back the buffer for the superblock: this is
ensured by setting the buffer's b_dirt flag.
The function is used in synchronizing the device and is ignored by read-only file
systems .
put_super(sb)
The Virtual File System calls this function when unmounting file systems,
when it should also release the superblock and other information buffers and/or
restore the consistency of the file system.
In addition, the s_dev entry in the superblock structure must be set to 0 to ensure that
the superblock is once again available after unmounting.
statfs(sb, statfsbuf)
The two system calls statfs and fstatfs call this superblock operation, which fills in the
statfs structure.
This structure provides information on the file system, the number of free blocks and
the preferred block size.
The structure is located in the user address space.
remount_fs(sb, flags, options)
The remount_fs() function changes the status of a file system .
This involves entering the new attributes for the file system in the superblock and
restoring the consistency of the file system.
read_inode(inode)
This function is responsible for filling in the inode structure it has been passed, in a
similar way to read_super().
It is called by the function _iget(), which will already have given the entries i_dev,
i_ino, i_sb and i_flags their contents.
The main purpose of the read_inode() function is to mark the different file types by
entering inode operations in the inode according to the file type.
notify_change(inode, attr)
The changes made to the inode via system calls are acknowledged by
notify_change().
All inode changes are carried out on the local inode structure only, which means that
the computer exporting the file system needs to be informed.
write_inode(inode)
This function saves the inode structure, analogous to write_super().
put_inode(inode)
This function is called by iput() if the inode is no longer required.
Its main task is to delete the file physically and release its blocks.
The inode structure
When a file system is mounted, the superblock is generated and the root inode for the
file system is entered in the component i_mount at the appropriate mount point, that
is, in its inode structure.
The definition of the inode structure is as follows:
struct inode {
        dev_t                   i_dev;
        unsigned long           i_ino;
        umode_t                 i_mode;
        nlink_t                 i_nlink;
        uid_t                   i_uid;
        gid_t                   i_gid;
        dev_t                   i_rdev;
        off_t                   i_size;
        time_t                  i_atime;        /* time of last access */
        time_t                  i_mtime;        /* time of last modification */
        time_t                  i_ctime;        /* time of creation */
        unsigned long           i_blksize;      /* block size */
        unsigned long           i_blocks;       /* number of blocks */
        unsigned long           i_version;      /* DCache version management */
        struct semaphore        i_sem;          /* access control */
        struct inode_operations *i_op;          /* inode operations */
        struct super_block      *i_sb;          /* superblock */
        struct wait_queue       *i_wait;        /* wait queue */
        struct file_lock        *i_flock;       /* file locks */
        struct vm_area_struct   *i_mmap;        /* memory areas */
        struct inode            *i_next, *i_prev;       /* inode linking */
        struct inode            *i_hash_next, *i_hash_prev;
        struct inode            *i_bound_to, *i_bound_by;
        struct inode            *i_mount;       /* mounted inode */
        struct socket           *i_socket;      /* socket management */
        unsigned short          i_count;        /* reference counter */
        unsigned short          i_wcount;       /* number authorized to write */
        unsigned short          i_flags;        /* flags (= i_sb->s_flags) */
        unsigned char           i_lock;         /* lock */
        unsigned char           i_dirt;         /* inode has been modified */
        unsigned char           i_pipe;         /* inode represents pipe */
        unsigned char           i_sock;         /* inode represents socket */
        unsigned char           i_seek;         /* not used */
        unsigned char           i_update;       /* inode is current */
        union {                                 /* file-system-specific information */
                struct pipe_inode_info  pipe_i;
                struct minix_inode_info minix_i;
                void                    *generic_ip;
        } u;
};
In the first section, this holds information on the file.
The remainder contains management information and the file-system-dependent union
u.
In memory, the inodes are managed in two ways. First, they are managed in a doubly
linked circular list starting with first_inode, which is accessed via the entries i_next
and i_prev.
This approach is not particularly efficient, as the complete list of inodes also includes
the 'free', unused inodes, for which the components i_count, i_dirt and i_lock should
all be zero.
The unused inodes are generated by the grow_inodes() function, which is called whenever fewer than a quarter of all inodes are free and fewer than NR_INODE inodes exist in total.
The number of unused inodes and the count of all available inodes are held in the
static variables nr_free and nr_inode respectively.
For fast access, inodes are also stored in an open hash table hash_table[], where
collisions are dealt with via a doubly linked list using the components i_hash_next
and i_hash_prev.
Access to any of the NR_IHASH entries is made through the device and inode
numbers.
The functions for working with inodes are iget(), namei() and iput().
The iget() function supplies the inode specified by the superblock and the inode number.
If the required inode is included in the hash table, the i_count reference counter is
simply incremented.
If it is not found, a 'free' inode is selected (get_empty_inode()) and the
implementation of the relevant file system calls the superblock operation read_inode()
to fill it with information.
The resulting inode is then added to the hash table.
An inode obtained using iget() has to be released using iput(), which decrements the reference counter by 1 and marks the inode structure as 'free' once the counter reaches 0.
The namei() function supplies the inode for the directory that contains the file with the name specified.
All functions return an error code smaller than 0 if they are not successful.
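None of these kernel functions can be called directly from user space, but the effect of several directory entries sharing one inode is easy to observe with stat(1). A small sketch (all paths are scratch files created just for the demonstration):

```shell
# Create a scratch file and a hard link to it, then compare the
# inode numbers and link counts that stat reports.
dir=$(mktemp -d)
touch "$dir/data"
ln "$dir/data" "$dir/alias"        # second directory entry, same inode

stat -c 'inode=%i links=%h' "$dir/data"
stat -c 'inode=%i links=%h' "$dir/alias"

# Both names resolve to the same inode number, and the link
# count (i_nlink on the kernel side) is 2 for both.
rm -r "$dir"
```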
The Inode operations
The inode structure also has its own operations, which are held in the
inode_operations structure and mainly provide for file management.
These functions are usually called directly from the implementations of the
appropriate system calls.
struct inode_operations {
    struct file_operations *default_file_ops;
    int (*create) (struct inode *, const char *, int, int, struct inode **);
    int (*lookup) (struct inode *, const char *, int, struct inode **);
    int (*link) (struct inode *, struct inode *, const char *, int);
    int (*unlink) (struct inode *, const char *, int);
    int (*symlink) (struct inode *, const char *, int, const char *);
    int (*mkdir) (struct inode *, const char *, int, int);
    int (*rmdir) (struct inode *, const char *, int);
    int (*mknod) (struct inode *, const char *, int, int, int);
    int (*rename) (struct inode *, const char *, int, struct inode *, const char *, int);
    int (*readlink) (struct inode *, char *, int);
    int (*follow_link) (struct inode *, struct inode *, int, int, struct inode **);
    int (*bmap) (struct inode *, int);
    void (*truncate) (struct inode *);
    int (*permission) (struct inode *, int);
    int (*smap) (struct inode *, int);
};
create(dir, name, len, mode, res_inode)
This function creates a new file with the given name (of length len) and mode in the directory dir, returning the new inode in res_inode.
bmap(inode, block)
This function supplies the logical block number on the media for the specified block of the file. If it is not implemented, executable files must first be loaded into memory completely, as the more efficient demand paging is not then available.
truncate(inode)
This function is mainly intended to shorten a file, but can also lengthen a file to any
length if this is supported by the specific implementation.
The only parameter required by truncate() is the inode of the file to be amended, with
the i_size field set to the new length before the function is called.
The truncate() function is used at a number of places in the kernel, both by the system
call sys_truncate() and when a file is opened.
It will also release the blocks no longer required by a file.
Thus, the truncate() function can be used to delete a file physically if the inode on the
media is cleared afterwards.
permission(inode, mask)
This function checks the inode to confirm the access rights to the file given by the mask. The possible values for the mask are MAY_READ, MAY_WRITE and MAY_EXEC.
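The same MAY_READ/MAY_WRITE/MAY_EXEC checks surface in user space through the access(2) system call, which the shell exposes as test -r, -w and -x. A small sketch with a scratch file:

```shell
dir=$(mktemp -d); f="$dir/conf"; touch "$f"
chmod 600 "$f"              # read and write for the owner, no execute bits

test -r "$f" && echo readable         # MAY_READ granted
test -x "$f" || echo not-executable   # MAY_EXEC denied (no x bit anywhere)
rm -r "$dir"
```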
smap(inode, sector)
This function is intended to allow swap files to be created: it supplies the logical sector number (not block or cluster) on the media for the specified sector of the file. The memory management function rw_swap_page() requires smap() to prepare to work with a swap file.
The open files themselves are described by file structures:

struct file {
    mode_t f_mode;                    /* access type */
    loff_t f_pos;                     /* file position */
    unsigned short f_flags;           /* open() flags */
    unsigned short f_count;           /* reference counter */
    unsigned short f_reada;           /* read-ahead flag */
    struct file *f_next, *f_prev;     /* links */
    int f_owner;                      /* PID or -PGRP for SIGIO */
    struct inode *f_inode;            /* related inode */
    struct file_operations *f_op;     /* file operations */
    unsigned long f_version;          /* DCache version management */
    void *private_data;               /* needed for tty driver */
};
The file structures are managed in a doubly linked circular list via
the pointers f_next and
f_prev. This file table can be accessed via the pointer first_file.
File operations
The file_operations structure is the general interface for work on
files, and contains the
functions to open, close, read and write files. The reason why these
functions are not held in inode_operations but in a separate
structure is that they need to make changes to the file structure.
The inode's inode_operations structure also includes the
component default_file_ops,
in which the standard file operations are already specified.
struct file_operations {
    int (*lseek) (struct inode *, struct file *, off_t, int);
    int (*read) (struct inode *, struct file *, char *, int);
    int (*write) (struct inode *, struct file *, char *, int);
    int (*readdir) (struct inode *, struct file *, struct dirent *, int);
    int (*select) (struct inode *, struct file *, int, select_table *);
    int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);
    int (*mmap) (struct inode *, struct file *, struct vm_area_struct *);
    int (*open) (struct inode *, struct file *);
    void (*release) (struct inode *, struct file *);
    int (*fsync) (struct inode *, struct file *);
    int (*fasync) (struct inode *, struct file *, int);
    int (*check_media_change) (dev_t);
    int (*revalidate) (dev_t);
};
This structure is also used for sockets and device drivers, as it contains the actual functions operating on sockets and devices. The inode operations, on the other hand, only work with the representation of the socket or device in the related file system, or its copy in memory.
lseek(inode, filp, offset, origin)
The job of the lseek() function is to deal with positioning within the file. If this function is not implemented, the VFS performs the default action: it adjusts the file position f_pos in the file structure when positioning is relative to the start of the file or to the current position; if the file is represented by an inode, the default action can also position relative to the end of the file.
fasync(inode, filp, on)
This function registers the file structure for asynchronous messaging: if the on flag is set, the process is informed (via a signal) when data are received. If on is not set, the process unregisters the file structure from asynchronous messaging.
check_media_change(dev)
This function is only relevant to block devices supporting changeable
media. It tests
whether there has been a change of media since the last operation
on it. If so, the
function will return a 1, otherwise a zero.
The check_media_change() function is called by the VFS function check_disk_change(). If a change of media has taken place, check_disk_change() calls put_super() to remove any superblock belonging to the device, discards all the buffers for the device dev still held in the buffer cache along with all the inodes on this device, and then calls revalidate(). As check_disk_change() requires a considerable amount of time, it is only called when mounting a device. Its return values are the same as for check_media_change().
revalidate(dev)
This function is called by the VFS after a media change has been
recognized, to restore
the consistency of a block device. It should establish and record all
the necessary parameters of the media, such as the number of
blocks, number of tracks and so on.
Ext2 File System
The ext2 or second extended filesystem is a file system for the Linux kernel. The canonical implementation of ext2 is the ext2fs filesystem driver in the Linux kernel.
Because LINUX was initially developed under MINIX, the first LINUX file system was the MINIX file system. However, this file system restricts partitions to a maximum of 64 Mbytes and filenames to no more than 14 characters, so the search for a better file system was not long in starting.
The result was the Ext file system, the first to be designed for LINUX. Although this allowed partitions of up to 2 Gbytes and filenames up to 255 characters, it had the drawbacks that it was slower than its MINIX counterpart and that its simple implementation of free block administration led to extensive fragmentation of the file system.
A file system which is now little used is the Xia file system. This is also based on the MINIX file system and permits partitions of up to 2 Gbytes along with filenames of up to 248 characters; its administration of free blocks in bitmaps and its optimizing block allocation functions make it faster and more robust than the Ext file system.
At the same time, Remy Card and Wayne Davidson presented the Ext2 file system as a further development of the Ext file system. It can by now be considered the LINUX file system, as it is used in most LINUX systems and distributions.
Fast symbolic links store the target name directly in the inode itself, so there is no need to read a data block when accessing such a link. Of course, the space available in the inode is limited, so not every link can be implemented as a fast symbolic link. The maximal size of the target name in a fast symbolic link is 60 characters.
Ext2fs keeps track of the filesystem state. A special field in the superblock is used by the kernel code to indicate the status of the file system. When a filesystem is mounted in read/write mode, its state is set to "Not Clean". When it is unmounted or remounted in read-only mode, its state is reset to "Clean". At boot time, the filesystem checker uses this information to decide if a filesystem must be checked. The kernel code also records errors in this field: when an inconsistency is detected by the kernel code, the filesystem is marked as "Erroneous".
Always skipping filesystem checks may sometimes be dangerous, so Ext2fs provides two ways to force checks at regular intervals. A mount counter is maintained in the superblock. Each time the filesystem is mounted in read/write mode, this counter is incremented. When it reaches a maximal value (also recorded in the superblock), the filesystem checker forces the check even if the filesystem is "Clean". A last check time and a maximal check interval are also maintained in the superblock. These two fields allow the administrator to request periodical checks: when the maximal check interval has been reached, the checker ignores the filesystem state and forces a filesystem check.
Ext2fs offers tools to tune the filesystem behavior. The tune2fs program can be used to modify:
the error behavior. When an inconsistency is detected by the kernel code, the filesystem is marked as "Erroneous" and one of the three following actions can be taken: continue normal execution, remount the filesystem in read-only mode to avoid corrupting the filesystem, or make the kernel panic and reboot to run the filesystem checker.
Mount options can also be used to change the kernel error behavior.
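These superblock fields (mount count, maximum mount count, check interval) can be inspected and adjusted with tune2fs. The sketch below builds a small ext2 image inside an ordinary file, so no real partition and no root access are needed; file names and sizes are arbitrary:

```shell
PATH="$PATH:/sbin:/usr/sbin"      # mkfs/tune2fs often live in sbin

# Build a throwaway ext2 filesystem inside a regular 1 MB file.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=1024 2>/dev/null
mkfs.ext2 -q -F "$img"

# Force a check every 25 mounts and every 7 days.
tune2fs -c 25 -i 7d "$img" >/dev/null

# The superblock now records the new limits.
tune2fs -l "$img" | grep -E 'Maximum mount count|Check interval'
rm "$img"
```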
An attribute allows the users to request secure deletion on files. When such a file is deleted,
random data is written in the disk blocks previously allocated to the file. This prevents
malicious people from gaining access to the previous content of the file by using a disk
editor.
Immutable files can only be read: nobody can write or delete them. This can be used to
protect sensitive configuration files. Append-only files can be opened in write mode but data
is always appended at the end of the file. Like immutable files, they cannot be deleted or
renamed. This is especially useful for log files which can only grow.
Ext2 Physical File Structure
The physical file structure of Ext2 filesystems is made up of block groups. Block groups are
not tied to the physical layout of the blocks on the disk, since modern drives tend to be
optimized for sequential access and hide their physical geometry to the operating system.
The physical structure of a filesystem is represented in this table:
Boot Sector | Block Group 1 | Block Group 2 | ... | Block Group N
Each block group contains a redundant copy of crucial filesystem control information (the superblock and the filesystem descriptors) and also contains a part of the filesystem (a block bitmap, an inode bitmap, a piece of the inode table, and data blocks). The structure of a block group is represented in this table:
Super Block | FS Descriptors | Block Bitmap | Inode Bitmap | Inode Table | Data Blocks
Using block groups is a big win in terms of reliability: since the control structures are replicated in each block group, it is easy to recover from a filesystem whose superblock has been corrupted. This structure also helps performance: by reducing the distance between the inode table and the data blocks, it is possible to reduce the disk head seeks during I/O on files.
Ext2fs directories
In Ext2fs, directories are managed as linked lists of variable length entries. Each entry
contains the inode number, the entry length, the file name and its length. By using variable
length entries, it is possible to implement long file names without wasting disk space in
directories. The structure of a directory entry is shown in this table:
inode number | entry length | name length | filename
As an example, the next table represents the structure of a directory containing three files: file1, long_file_name, and f2:

i1 | 16 | 05 | file1
i2 | 40 | 14 | long_file_name
i3 | 12 | 02 | f2
Performance optimizations
The Ext2fs kernel code contains many performance optimizations, which tend to improve I/O
speed when reading and writing files.
Ext2fs takes advantage of the buffer cache management by performing readaheads: when a
block has to be read, the kernel code requests the I/O on several contiguous blocks. This way,
it tries to ensure that the next block to read will already be loaded into the buffer cache.
Readaheads are normally performed during sequential reads on files and Ext2fs extends them
to directory reads, either explicit reads (readdir(2) calls) or implicit ones (namei kernel
directory lookup).
Ext2fs also contains many allocation optimizations. Block groups are used to cluster together
related inodes and data: the kernel code always tries to allocate data blocks for a file in the
same group as its inode. This is intended to reduce the disk head seeks made when the kernel
reads an inode and its data blocks.
When writing data to a file, Ext2fs preallocates up to 8 adjacent blocks when allocating a new block. Preallocation hit rates are around 75% even on very full filesystems. This preallocation achieves good write performance under heavy load. It also allows contiguous blocks to be allocated to files, which speeds up future sequential reads.
Overview of Ext3, ext4 file systems; Difference between ext2 and ext3 FS;
ext2, ext3 and ext4 are all filesystems created for Linux.
Ext2
This was developed to overcome the limitation of the original ext file system.
On flash drives and USB drives, ext2 is recommended, as it doesn't need the overhead of journaling.
Ext3
Journaling has a dedicated area in the file system, where all the changes are tracked. When the
system crashes, the possibility of file system corruption is less because of journaling.
Ordered mode: only metadata is saved in the journal, and metadata are journaled only after writing the file contents to disk. This is the default.
You can convert a ext2 file system to ext3 file system directly (without backup/restore).
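The conversion simply adds a journal to the existing filesystem, which tune2fs -j can do in place. A sketch on a scratch image file rather than a real partition:

```shell
PATH="$PATH:/sbin:/usr/sbin"      # mkfs/tune2fs often live in sbin

# Build a plain ext2 filesystem (no has_journal feature) in a file.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=8192 2>/dev/null
mkfs.ext2 -q -F "$img"

# Add a journal in place: the image is now an ext3 filesystem.
tune2fs -j "$img" >/dev/null

tune2fs -l "$img" | grep 'features'   # now lists has_journal
rm "$img"
```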
Ext4
Supports huge individual file size and overall file system size.
Directory can contain a maximum of 64,000 subdirectories (as opposed to 32,000 in ext3)
You can also mount an existing ext3 fs as ext4 fs (without having to upgrade it).
Several other new features are introduced in ext4: multiblock allocation, delayed allocation, journal checksums, fast fsck, etc. All you need to know is that these new features have improved the performance and reliability of the filesystem when compared to ext3.
In ext4, you also have the option of turning the journaling feature off.
1.8 File Permissions
Linux permissions dictate three things you may do with a file: read, write and execute. They are referred to in Linux by a single letter each.
For every file we define 3 sets of people for whom we may specify permissions.
owner - a single person who owns the file (typically the person who created the file, but ownership may be granted to someone else by certain users)
group - every file belongs to a single group
others - everyone who is not the owner or a member of the group
Three permissions and three groups of people. That's about all there is to permissions really.
Now let's see how we can view and change them.
View Permissions
To view permissions for a file we use the long listing option for the command ls.
ls -l [path]
ls -l /home/ryan/linuxtutorialwork/frog.png
In the above example the first 10 characters of the output are what we look at to identify
permissions.
The first character identifies the file type. If it is a dash ( - ) then it is a normal file. If
it is a d then it is a directory.
The following 3 characters represent the permissions for the owner. A letter represents the presence of a permission and a dash ( - ) represents the absence of a permission. In this example the owner has all permissions (read, write and execute).
The following 3 characters represent the permissions for the group. In this example
the group has the ability to read but not write or execute. Note that the order of
permissions is always read, then write then execute.
Finally the last 3 characters represent the permissions for others (or everyone else). In
this example they have the execute permission and nothing else.
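A quick way to read the mode string without counting characters by hand is stat with the %A format, which prints the same 10-character string that ls -l shows. A sketch using an arbitrary scratch file:

```shell
dir=$(mktemp -d)
touch "$dir/frog.png"
chmod 741 "$dir/frog.png"   # owner rwx, group r--, others --x (as in the example)

ls -l "$dir/frog.png"       # line starts with: -rwxr----x
stat -c %A "$dir/frog.png"  # prints just the 10-character mode string
rm -r "$dir"
```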
Change Permissions
To change permissions on a file or directory we use a command called chmod. It stands for 'change file mode bits', which is a bit of a mouthful, but think of the mode bits as the permission indicators. chmod's permission argument has three parts: who we are changing the permission for ([ugoa] - user (or owner), group, others, all), whether we are granting or revoking the permission ( + to add, - to remove, = to set exactly), and which permission ([rwx] - read, write, execute).
Don't want to assign permissions individually? We can assign multiple permissions at once.
It may seem odd that as the owner of a file we can remove our ability to read, write and
execute that file but there are valid reasons we may wish to do this. Maybe we have a file
with data in it we wish not to accidentally change for instance. While we may remove these
permissions, we may not remove our ability to set those permissions and as such we always
have control over every file under our ownership.
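The symbolic and numeric forms described above can be sketched end to end on a scratch file (the file name and modes are arbitrary):

```shell
dir=$(mktemp -d); f="$dir/frog.png"; touch "$f"

chmod u+x "$f"     # grant execute to the owner
chmod g-w "$f"     # revoke write from the group
chmod o=r "$f"     # set others to exactly read
stat -c %A "$f"

chmod 640 "$f"     # numeric form: owner rw-, group r--, others ---
stat -c %a "$f"    # prints: 640
rm -r "$dir"
```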
1.9 User Management: Types of Users, Powers of Root
There are three types of user account:
1. Super user (root): uid and gid = 0
2. System users (ftpd, sshd): uid and gid between 1 and 499
3. Regular users (created with the useradd command): uid and gid of 500 or above
[Note: a user created with a uid below 500 (for example between 1 and 100) gets a default umask of 022, like root; normal users, whose uid and gid are higher, have a default umask of 002.]
Super User: The super user is also known as the system administrator. The job of the system administrator involves the management of the entire system, ranging from maintaining user accounts, security and disk space to performing backups.
root: the system administrator's login
The superuser, or root, is a special user account used for system administration. It is given full and complete access to all system resources. The name root is also used for the directory "/", as in "the root directory". This account need not be separately created but comes with every system. Its password is set at the time of installation of Linux, and one logs in as:
login: root
password:
The prompt for root is #, unlike the $ used by non-privileged users.
One can become root by either logging in as user "root" or by typing "su" within a normal
user's login session. The root password is required to become root.
Once logged in as root, you are placed in root's home directory, which can be / or /root. Since the super user has to constantly navigate the file system, the PATH for a super user doesn't include the current directory (.).
Linux uses the Bash shell for normal and system administrative activities. The super user can change the contents or attributes of any file, including its permissions and ownership, and can delete any file even if the directory is write-protected.
Examples of various functions performed by the super user to configure the system
Set the system clock with date command
date -s "11/20/2003 12:48:00"
Set the date to the date and time shown.
MAINTAINING SECURITY
As system administrator ,you have to ensure that the system directories (/bin,/usr/bin/etc/sbin
etc )and the files written in them are protected.
Manage file permissions and ownership
Manage access permissions on both regular and special files as well as directories
Maintain security using access modes such as suid, sgid, and the sticky bit
Linux groups are a mechanism to manage a collection of computer system users. Every Linux user has a unique numerical user ID (UID) and a group ID (GID). Groups can be assigned to logically tie users together for a common security, privilege and access purpose; they are the foundation of Linux security and access. Files and devices may be granted access based on a user's UID or GID.
1.10.Managing Users
Adding User
The easiest way to add a new user is to use the useradd command like this: useradd bart. We
have now created a new user called bart. To assign a password for that user use the command
passwd bart.
/etc/passwd file has several entries that are actually users for programs that need to control
processes or need "special" access to the filesystem.
/etc/passwd and other informative files
The basic user database in a Linux system is the text file, /etc/passwd (called the password
file), which lists all valid usernames and their associated information. The file has one line
per username, and is divided into seven colon-delimited fields: username, password, UID, GID, comment, home directory, and login shell.
user is the username that is used for logging in or by programs. The username is case sensitive on Linux systems and it is recommended to keep special characters out of it.
password is the field where the encrypted password is stored in. The passwd command
encrypts the passwords and stores them in that field. The default encryption algorithm
used is considered rather poor today. It is better to choose shadow passwords and, in that
case, the field will remain blank and all the passwords will be stored in the /etc/shadow
file.
UID is the user ID, a numerical value that is bound to a user. For the root user it is always 0. The UID has to be unique and should have a value between 0 and 4 mil. Usually, for regular users, the UID is greater than 100. All the files in a Linux system have a UID, which determines the ownership of files and processes.
GID is the group ID for the primary group of the user. This is also a numerical value and, for root, also has the value 0. Every user has at least one GID; this field identifies the primary group to which the user belongs. Note that a user can be assigned to several groups. Both the UID and the GID are very important for filesystem security.
comment is a field that holds text information about a user. Usually, you add here the
name of the user but you can also add the phone number, the e-mail address or whatever
you like. Where there are many users to manage, the comment field can really come in
handy.
home defines the home directory of that user. This directory is created automatically by
the useradd command. If you want to change it from here, you should keep in mind that it
has to exist.
shell is the shell that will be used by the user. The default should be fine most of the time. Accounts created for people will usually be assigned the bash shell, while accounts created for programs are given no login shell, which is a nice trick for disallowing logins with that user.
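Since the layout is strictly colon-delimited, the seven fields are easy to pull apart with awk; root's entry exists on every system, so it makes a safe example:

```shell
# Split root's line in /etc/passwd into its colon-delimited fields
# (field 2, the password, is normally just 'x' with shadow passwords).
awk -F: '$1 == "root" {
    printf "user=%s uid=%s gid=%s home=%s shell=%s\n", $1, $3, $4, $6, $7
}' /etc/passwd
```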
chgrp:
This command is used by any system user who is a member of multiple groups. If the user
creates a file, the default group association is the group id of user.
If he wishes to change it to another group of which he is a member issue the command:
chgrp new-group-id file-name
If the user is not a member of the target group, the command fails; only the super user may assign a file to an arbitrary group.
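A sketch of a chgrp call that always succeeds, because it targets the invoking user's own primary group (the file name is arbitrary):

```shell
dir=$(mktemp -d); f="$dir/report"; touch "$f"

grp=$(id -gn)        # the caller's primary group - always a legal target
chgrp "$grp" "$f"    # assign the file to that group
stat -c %G "$f"      # shows the group just set
rm -r "$dir"
```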
userdel userName
userdel Example
To remove the user vivek account from the local system / server / workstation, enter:
# userdel vivek
To remove the user's home directory pass the -r option to userdel, enter:
# userdel -r vivek
The above command will remove all files along with the home directory itself and the user's
mail spool. Please note that files located in other file systems will have to be searched for and
deleted manually.
Complete Example
The following is the recommended procedure to delete a user from a Linux server. First, lock the user account:
# passwd -l username
You can find file owned by a user called vivek and change its ownership as follows:
# find / -user vivek -exec chown newUserName:newGroupName {} \;
This General Public License applies to most of the Free Software Foundation's software. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software, that you receive source code or can get it if you want it, and that you can change the software or use pieces of it in new free programs.
Journaling: A journaling file system is a fault-resilient file system in which data integrity is ensured because updates to directories and bitmaps are constantly written to a serial log on disk before the original disk blocks are updated.
Journaling filesystems write metadata (i.e., data about files and directories) into the
journal that is flushed to the HDD before each command returns.
In the event of a system crash, a given set of updates may have either been
fully committed to the filesystem (i.e., written to the HDD), in which case there is no
problem, or the updates will have been marked as not yet fully committed, in which
case the system will read the journal, which can be rolled up to the most recent point
of data consistency.
This is far faster than a scan of the entire HDD when rebooting, and it guarantees that
the structure of the filesystem is always internally consistent.
Thus, although some data may be lost, a journaling filesystem typically allows a
computer to be rebooted much more quickly after a system crash.
Such downtime can be very costly in the case of big systems used by large
organizations.
The most commonly used journaling filesystem for Linux is the third extended filesystem (ext3fs), which was added to the kernel from version 2.4.16 (released in 2001).
Also featured is the ability for ext2 partitions to be converted to ext3 and vice-versa
without any need for backing up the data and repartitioning.
If necessary, an ext3 partition can even be mounted by an older kernel that has no ext3
support; this is because it would be seen as just another normal ext2 partition and the
journal would be ignored.
1. When a system is first booted, or is reset, the processor executes code at a well-known
location. In a personal computer (PC), this location is in the basic input/output system
(BIOS), which is stored in flash memory on the motherboard.
2. When a boot device is found, the first-stage boot loader is loaded into RAM and
executed. This boot loader is less than 512 bytes in length (a single sector), and its job
is to load the second-stage boot loader.
3. When the second-stage boot loader is in RAM and executing, a splash screen is
commonly displayed, and Linux and an optional initial RAM disk (temporary root file
system) are loaded into memory.
4. When the images are loaded, the second-stage boot loader passes control to the kernel
image and the kernel is decompressed and initialized.
5. At this stage, the kernel checks the system hardware, enumerates the attached hardware devices, mounts the root device, and then loads the necessary kernel modules.
6. When complete, the first user-space program (init) starts, and high-level system
initialization is performed.
That's Linux boot in a nutshell. Now let's explore some of the details of the Linux boot
process.
System startup
The system startup stage depends on the hardware that Linux is being booted on. A bootstrap
environment is used when the system is powered on, or reset. In addition to having the ability
to store and boot a Linux image, these boot monitors perform some level of system test and
hardware initialization. In an embedded target, these boot monitors commonly cover both the
first- and second-stage boot loaders.
Commonly, Linux is booted from a hard disk, where the Master Boot Record (MBR) contains
the primary boot loader. The MBR is a 512-byte sector, located in the first sector on the disk.
After the MBR is loaded into RAM, the BIOS yields control to it.
Stage 1 boot loader
The primary boot loader that resides in the MBR is a 512-byte image containing both
program code and a small partition table (see Figure ). The first 446 bytes are the primary
boot loader, which contains both executable code and error message text.
The next sixty-four bytes are the partition table, which contains a record for each of four
partitions (sixteen bytes each). The MBR ends with two bytes that are defined as the magic
number (0xAA55). The magic number serves as a validation check of the MBR.
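The 446 + 64 + 2 byte layout can be illustrated by assembling a dummy MBR in an ordinary file and checking the signature; everything below is synthetic and touches no real disk:

```shell
img=$(mktemp)
# 446 bytes of loader "code" plus 64 bytes of partition table
# (all zeros in this synthetic example) come first...
dd if=/dev/zero of="$img" bs=1 count=510 2>/dev/null
# ...followed by the two-byte magic number 0xAA55, stored on disk
# as the bytes 55 AA (octal escapes \125 \252 are 0x55 and 0xAA).
printf '\125\252' >> "$img"

wc -c < "$img"                   # 512: exactly one sector
od -An -tx1 -j 510 -N 2 "$img"   # 55 aa: the boot signature
rm "$img"
```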
The job of the primary boot loader is to find and load the secondary boot loader (stage 2) by
looking through the partition table for an active partition.
When it finds an active partition, it scans the remaining partitions in the table to ensure that
they're all inactive. When this is verified, the active partition's boot record is read from the
device into RAM and executed.
Figure: Anatomy of the MBR
The first stage of loading LILO is complete when LILO has printed, in order, each of the letters L-I-L-O. When you see the LILO prompt, you are in the second stage.
When LILO loads itself it displays the word LILO. Each letter is printed before or after
some specific action. If LILO fails at some point, the letters printed so far can be used to
identify the problem.
(nothing)
No part of LILO has been loaded. LILO either isn't installed or the partition on which
its boot sector is located isn't active. The boot media is incorrect or faulty.
L
The first stage boot loader has been loaded and started, but it can't load the second
stage boot loader. The two-digit error codes indicate the type of problem. This
condition usually indicates a media failure or bad disk parameters in the BIOS.
LI
The first stage boot loader was able to load the second stage boot loader, but has
failed to execute it. This can be caused by bad disk parameters in the BIOS.
LIL
The second stage boot loader has been started, but it can't load the descriptor table
from the map file. This is typically caused by a media failure or by bad disk
parameters in the BIOS.
LIL?
The second stage boot loader has been loaded at an incorrect address. This is typically
caused by bad disk parameters in the BIOS.
LIL-
The descriptor table is corrupt. This can be caused by bad disk parameters in the BIOS.
LILO
All parts of LILO have been successfully loaded.
LILO with WIN XP
If you have WINXP installed to MBR on your hard drive, install LILO to the root partition
instead of the MBR. If you want to boot up Linux, you must mark the LILO partition as
bootable. If you are starting with LILO, you can begin editing the configuration file.
After you install LILO on your system, you can make it take over your MBR. As a root user,
type:
# /sbin/lilo -v -v
LILO Configuration File
Given below is a sample /etc/lilo.conf file.
The /etc/lilo.conf File
The sample lilo.conf file shown below is for a typical dual-boot configuration, with Windows
installed on the first partition and Linux on the second. You can probably use this as-is,
except for the image= line and possibly the root= line, depending on where Linux was
installed. Detailed explanation follows.
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
compact
prompt
timeout=50

image=/boot/vmlinuz-2.0.36
    label=linux
    root=/dev/hda2
    read-only

other=/dev/hda1
    label=win
boot=/dev/hda:
Tells LILO where to install the bootloader. In this case, it is going into the master boot
record of the first hard drive, which means LILO will control the boot process of all
operating systems from the start. It could also have been /dev/hda2, the boot sector of
the Linux partition. In that case, the DOS bootloader would need to be in the master
boot record, and booting Linux would require setting the Linux partition active using
fdisk.
map=/boot/map:
The map file is automatically generated by LILO and is used internally. Don't mess
with it.
install=/boot/boot.b:
Tells LILO what to use as the new boot sector. This file contains the "bootstrap" code
that starts your operating system.
compact:
Tells LILO to merge adjacent read requests, which makes it read the hard drive faster.
prompt:
Tells LILO to prompt us at boot time to choose an operating system or enter
parameters for the Linux kernel.
timeout=50:
Tells LILO how long to wait at the prompt before booting the default operating
system, measured in tenths of a second. The configuration shown waits for 5 seconds.
image=/boot/vmlinuz-2.0.36:
The name of a Linux kernel for LILO to boot. The first image listed in the file is the
default, unless you specify otherwise.
label=linux:
The name that is used to identify this image at the LILO: boot prompt. Typing this
name will select this image.
root=/dev/hda2:
Tells LILO where the root (/) file system is (where Linux lives), so that the Linux
kernel can mount it at boot time.
read-only:
Tells LILO to instruct the Linux kernel to initially mount the root file system as
read-only. It will be remounted as read-write later in the boot process. This is the
normal method of booting.
other=/dev/hda1:
Tells LILO to boot an operating system other than Linux. Its value is the partition
where that other operating system lives.
label=win:
Same as the label above, gives you a way to refer to this section.
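Building on the options above, here is a hedged example of extending /etc/lilo.conf with a second, fallback kernel stanza; the file name vmlinuz-2.0.35 and the label linux-old are invented for illustration. After any edit, /sbin/lilo must be re-run so the map file matches the new configuration.

```shell
# Hypothetical addition to /etc/lilo.conf: an older kernel kept as a
# fallback, selectable at the LILO: prompt by typing "linux-old".
# (Kernel file name and label are examples only.)
image=/boot/vmlinuz-2.0.35
    label=linux-old
    root=/dev/hda2
    read-only
```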
GRUB
GRUB performs its whole installation with a single install command and supports MD5-hashed
passwords. If the configuration file contains an error, GRUB falls back to its
command-line prompt instead of failing to boot.
MBR Vs. Root Partition
If you have Windows XP installed to the MBR of your hard drive, install GRUB to the root
partition instead of the MBR. After you install GRUB, you can make it take over your MBR.
As a root user, start the GRUB shell:
# /sbin/grub
Then, at the grub> prompt, run the GRUB command
grub> install (hd1,2)/boot/grub/stage1 (hd1) (hd1,2)/boot/grub/stage2 p
(hd1,2)/boot/grub/menu.conf
Let's take a look at the installation of the first stage in the install command:
install (hd1,2)/boot/grub/stage1 (hd1)
What this says is that GRUB reads the first stage image from the third partition of the
second disk (GRUB counts disks and partitions from zero), and installs it to the MBR of
that same disk.
In the second part of the command, the stage two image is installed:
(hd1,2)/boot/grub/stage2
Finally, the installation is complete with the optional location of the configuration file:
p (hd1,2)/boot/grub/menu.conf
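The (hdN,M) numbering used above is easy to misread, because GRUB legacy counts disks and partitions from zero while Linux device names count partitions from one. The helper below is purely illustrative (grub_to_dev is a name invented here); it converts GRUB notation to the IDE device names used elsewhere in this document:

```shell
#!/bin/sh
# Illustrative helper (invented for this example): convert GRUB legacy
# "(hdN,M)" notation to an IDE device name. GRUB counts disks and
# partitions from 0; the kernel counts partitions from 1.
grub_to_dev() {
    n=${1#\(hd}; n=${n%\)}          # strip "(hd" and ")"
    disk=${n%,*}; part=${n#*,}      # split "1,2" into disk and partition
    letters=abcdefgh
    letter=$(printf %s "$letters" | cut -c$((disk + 1)))
    echo "/dev/hd${letter}$((part + 1))"
}
grub_to_dev "(hd1,2)"   # the GRUB root used above -> prints /dev/hdb3
```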
GRUB Configuration File
Given below is a sample /boot/grub/grub.conf file.
default=0
timeout=10
splashimage=(hd1,2)/grub/splash.xpm.gz
password --md5 [encrypted password]

title Linux
        password --md5 [encrypted password]
        root (hd1,2)
        kernel /vmlinuz-2.6.23-13 ro root=LABEL=/
        initrd /initrd-2.6.23-13.img

title Windows XP
        password --md5 [encrypted password]
        rootnoverify (hd0,0)
        chainloader +1
The default= option tells GRUB which image to boot by default after the timeout
period; entries are counted from 0, so default=0 selects the first title.
The splashimage option specifies the location of the image for use as the background
for the GRUB GUI.
The password option specifies the MD5-hashed password required to gain access to
GRUB's interactive boot options. To generate one, run the tool grub-md5-crypt
as root and copy its output into the password --md5 line of your grub.conf. You
can create separate passwords for each entry in the file.
The initrd option specifies the file that will be loaded at boot time as the initial RAM
disk.
The rootnoverify option tells GRUB to set the root partition without attempting to
mount it; this is used for file systems GRUB cannot read, such as the Windows partition here.
The chainloader +1 line tells GRUB to pass control to the boot loader found in the
first sector of that partition (here the first partition of the first disk), which
in turn loads Windows. '+1' is blocklist notation for the first sector of the
current partition.
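As a hedged example of extending this file, the fragment below adds a single-user rescue entry that reuses the kernel and initrd from the sample above; the entry title is invented here, and the "single" argument is simply passed through to the kernel to request runlevel 1.

```shell
# Hypothetical extra entry for /boot/grub/grub.conf: boots the same
# kernel into single-user mode for rescue work. (Title is an example;
# kernel and initrd names match the sample configuration above.)
title Linux (single-user)
        password --md5 [encrypted password]
        root (hd1,2)
        kernel /vmlinuz-2.6.23-13 ro root=LABEL=/ single
        initrd /initrd-2.6.23-13.img
```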