What are logical and physical block sizes, and how do you find them?

In simple words, the logical block size is the unit the kernel uses for read/write operations, while the physical block size is the unit the disk controller uses for its read/write operations.

Is there any way to display the logical/physical block sizes on your system?

Yep, there are a couple of ways:

1) Fetch this information from 'sysfs' (here 'sda' is the example device).

2) Use the 'blockdev' command to display the same information.
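For example, something like the following should work on most modern kernels ('sda' is only an example device name; substitute your own disk, and note the blockdev calls need root, so they are shown as comments):

```shell
DEV=sda   # example device name; substitute your own disk

# 1) sysfs exposes both sizes under the device's queue directory:
for attr in logical_block_size physical_block_size; do
    f="/sys/block/$DEV/queue/$attr"
    if [ -r "$f" ]; then
        echo "$attr: $(cat "$f")"
    fi
done

# 2) 'blockdev' reports the same values (needs root):
#      blockdev --getss   /dev/$DEV    # logical (sector) size
#      blockdev --getpbsz /dev/$DEV    # physical block size
```

On a typical older SATA disk you will see 512 for both, while newer "Advanced Format" drives report 512 logical / 4096 physical.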

The physical volume label contains the UUID, the "LABELONE" string, and more.

LVM tools scan the first 2048 bytes of a physical volume to locate the physical volume label. The label starts with the string "LABELONE" and also carries the physical volume UUID, among other information, in this region.

I took a copy of this area and read it:

0000000000  |…………….|
0000000016  |…………….|
0000000032  |…………….|
0000000048  |…………….|

0000000432  |…………….|
0000000448  |…………….|
0000000464  |…………….|
0000000480  |…………….|
0000000496  |…………….|
0000000512  |LABELONE……..|
0000000528  |..[. …LVM2 001|
0000000544  |btWJchwdkfUTpdX7|
0000000560  |4m1WkesLF1fI87qF|
0000000576  |..%.2………..|
0000000592  |…………….|
0000000608  |…………….|
0000000624  |…………….|
0000000640  |…………….|
0000000656  |…………….|
0000000672  |…………….|
0000000688  |…………….|
0000000704  |…………….|
0000000720  |…………….|
0000000736  |…………….|

Do you see the "LABELONE" string and the physical volume UUID (btWJch-wdkf-UTpd-X74m-1Wke-sLF1-fI87qF) I referred to, right there in that area?
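If you want to see this for yourself without poking a real PV, here is a harmless sketch that fakes it on a scratch file (the file name /tmp/fake_pv.img is made up for illustration; on a real system you would read the device itself, e.g. `dd if=/dev/sda7 bs=512 count=4 | od -c`):

```shell
# Create a 1 KiB scratch file and plant "LABELONE" at byte offset 512,
# mimicking where the label sat on my real PV above.
truncate -s 1024 /tmp/fake_pv.img
printf 'LABELONE' | dd of=/tmp/fake_pv.img bs=1 seek=512 conv=notrunc status=none

# Dump the second 512-byte sector, just like the hexdump above:
dd if=/tmp/fake_pv.img bs=512 skip=1 count=1 status=none | od -c | head -n 2
```

The first line of output shows the eight label characters at the start of the sector, with the rest of the sector zero-filled.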

Why do we fear when we hear about LVM metadata?

It is very common for people to get scared as soon as they hear about LVM metadata. I really don't know the reason, but I believe it is mainly because they think 'metadata' is not in human-readable form. Could be..

Anyway, first of all let me tell you that LVM metadata is plain ASCII text.

It is that simple to read:

Here is an example of LVM metadata and its entries.

I have a volume group called "MISC_VG"; the metadata of this volume group has the entries below in it:

version = 1

description = "Created *before* executing 'lvcreate -L +100G -n MISC_LV MISC_VG'"

creation_host = "humbles-lap"   # Linux humbles-lap 2.6.34.7-61.fc13.x86_64 #1 SMP Tue Oct 19 04:06:30 UTC 2010 x86_64
creation_time = 1288624674      # Mon Nov  1 20:47:54 2010

MISC_VG {
        id = "5vLkK1-Dzc6-Vbwa-X0fP-AH9D-qEq4-rgAwcu"
        seqno = 1
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "btWJch-wdkf-UTpd-X74m-1Wke-sLF1-fI87qF"
                        device = "/dev/sda7"    # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 419435253    # 200.002 Gigabytes
                        pe_start = 384
                        pe_count = 51200        # 200 Gigabytes
                }
        }

}

Do you think I have to explain the "metadata" entries above? I know the answer is "NO", as they are self-explanatory.

By the way, you can see copies of the LVM metadata under the /etc/lvm/archive and /etc/lvm/backup directories.

Get/retrieve/display the "UUID" of a SCSI device using the "scsi_id" command

To retrieve/display the UUID of a SCSI device, we can use the "scsi_id" command.

The output of the command shows the UUID of the device. The UUID is unique, so it can serve as a primary key for the device. These identifiers are also persistent across reboots.

For example, to retrieve the UUID of my 'sda' device I used the below command:

[root@humbles-lap ]# scsi_id -g /dev/sda
350024e9003a0e80d

This "scsi_id" command is also useful in a SAN environment; that is, it can retrieve the UUID of a LUN (SCSI device) on the host system. The "scsi_id" man page gives more information:

 

–snip–

scsi_id queries a SCSI device via the SCSI INQUIRY vital product data (VPD) page 0x80 or 0x83 and uses the resulting data to generate a value that is unique across all SCSI devices that properly support page 0x80 or page 0x83. If a result is generated it is sent to standard output, and the program exits with a zero value. If no identifier is output, the program exits with a non-zero value.

scsi_id is primarily for use by other utilities such as udev that require a unique SCSI identifier. By default all devices are assumed black listed, the --whitelisted option must be specified on the command line or in the config file for any useful behaviour. SCSI commands are sent directly to the device via the SG_IO ioctl interface.

In order to generate unique values for either page 0x80 or page 0x83, the serial numbers or world wide names are prefixed as follows. Identifiers based on page 0x80 are prefixed by the character 'S', the SCSI vendor, the SCSI product (model) and then the serial number returned by page 0x80. For example:

# /lib/udev/scsi_id --page=0x80 --whitelisted --device=/dev/sda
SIBM 3542 1T05078453

Identifiers based on page 0x83 are prefixed by the identifier type followed by the page 0x83 identifier. For example, a device with a NAA (Name Address Authority) type of 3 (also in this case the page 0x83 identifier starts with the NAA value of 6):

# /lib/udev/scsi_id --page=0x83 --whitelisted --device=/dev/sda
3600a0b80000b174b000000d63efc5c8c

–/snip–

Hope it helps..

Buffer I/O errors on Linux systems

I have noticed these messages on some Linux systems. I was a bit worried when I first saw them; however, I later came to know that these messages are harmless *if* the system is connected to a SAN (e.g. the EMC CLARiiON series) configured in "Active/Passive" mode and *if* the filesystem is working fine.

Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sda, logical block 0
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 0
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 1
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 2
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 3
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 4
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 5
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 6
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 7
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sda, logical block 0

Commands like "vgscan" and "fdisk" can spit out these messages.

These types of SANs contain two storage processors, and in this setup LUNs are serviced through only one of them at a time. That storage processor is called the "active" processor.

A LUN can receive I/O only via the active processor. The other processor is "passive": it acts as a backup, ready to receive I/O if the active controller fails, or if all paths to the LUN (in a multipath setup) through the active controller fail.

Paths to the LUN going via the passive controller are passive paths, and they will generate I/O errors whenever I/O is attempted on them. At boot, the kernel's SCSI mid-layer scans all paths to find devices; it therefore scans both active and passive paths, and the passive paths produce the buffer I/O errors. LVM can also 'spit' these messages if a proper LVM filter is NOT configured.

These messages can be ignored if the filesystem is working fine and you are not facing any other issues because of them, but make sure the messages are reported only against passive paths and NOT active paths.
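Beyond ignoring the noise, you can stop LVM itself from probing the raw paths. Here is a sketch of such a filter in /etc/lvm/lvm.conf, assuming the multipath devices show up as /dev/mapper/mpath* (the device-name patterns are assumptions; adjust them to your own multipath naming):

```
devices {
    # Accept only the multipath devices and reject everything else,
    # so LVM never touches the individual (possibly passive) sd* paths.
    filter = [ "a|^/dev/mapper/mpath|", "r|.*|" ]
}
```

With a filter like this in place, vgscan and friends stop issuing I/O against the passive paths, and the buffer I/O errors from LVM go away.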

So don't panic.. 🙂