Retrieving the UUID of a SCSI device using the “scsi_id” command

To retrieve the UUID of a SCSI device, we can use the “scsi_id” command.

The output of the command shows the UUID of the device. The UUID is unique, so it can be used as a primary key for the device. These identifiers are also persistent across reboots.


For example, to retrieve the UUID of my ‘sda’ device, I used the command below:

[root@humbles-lap ]# scsi_id -g /dev/sda

350024e9003a0e80d
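To do the same for every disk on the host, a small loop works. This is just a sketch: it assumes “scsi_id” is in PATH (as on RHEL 5; on newer udev-based distros it lives at /lib/udev/scsi_id and may need --whitelisted), and it silently skips devices that return no identifier.

```shell
# Print the scsi_id-reported UUID of every sd* block device (sketch).
list_scsi_uuids() {
    for dev in /dev/sd[a-z]; do
        [ -b "$dev" ] || continue                 # skip if no such device
        id=$(scsi_id -g "$dev" 2>/dev/null) && echo "$dev  $id"
    done
    return 0                                      # empty output is not an error
}
list_scsi_uuids
```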


“scsi_id” is especially useful in a SAN environment, where it can retrieve the UUID of a LUN (SCSI device) from the host system. The “scsi_id” man page gives more information:


–snip–


scsi_id queries a SCSI device via the SCSI INQUIRY vital product data (VPD) page 0x80 or 0x83 and uses the resulting data to generate a value that is unique across all SCSI devices that properly support page 0x80 or page 0x83. If a result is generated it is sent to standard output, and the program exits with a zero value. If no identifier is output, the program exits with a non-zero value. scsi_id is primarily for use by other utilities such as udev that require a unique SCSI identifier.

By default all devices are assumed black listed; the --whitelisted option must be specified on the command line or in the config file for any useful behaviour. SCSI commands are sent directly to the device via the SG_IO ioctl interface.

In order to generate unique values for either page 0x80 or page 0x83, the serial numbers or world wide names are prefixed as follows. Identifiers based on page 0x80 are prefixed by the character 'S', the SCSI vendor, the SCSI product (model) and then the serial number returned by page 0x80.

For example:

# /lib/udev/scsi_id --page=0x80 --whitelisted --device=/dev/sda
SIBM 3542 1T05078453

Identifiers based on page 0x83 are prefixed by the identifier type followed by the page 0x83 identifier. For example, a device with a NAA (Name Address Authority) type of 3 (also in this case the page 0x83 identifier starts with the NAA value of 6):

# /lib/udev/scsi_id --page=0x83 --whitelisted --device=/dev/sda
3600a0b80000b174b000000d63efc5c8c


–/snip–
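The man page example above can be decoded by hand. As a sketch (the field meanings below are my reading of the page 0x83 format, not something the post states): the first character of the identifier is the identifier type, and for an NAA identifier the next character is the NAA format value.

```shell
# Decode the page-0x83 identifier from the man page example above.
id="3600a0b80000b174b000000d63efc5c8c"
id_type=$(printf '%s' "$id" | cut -c1)   # 3 = NAA identifier type
naa=$(printf '%s' "$id" | cut -c2)       # 6 = IEEE Registered Extended format
echo "identifier type: $id_type"
echo "NAA value:       $naa"
```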

Hope it helps..


Device mapper multipath

Guys, I mean it from my heart: understanding a multipath solution (especially device-mapper-multipath) is a simple process.

I am most familiar with the multipath solution “device-mapper-multipath”, which ships with the #1 enterprise release, RHEL (Red Hat Enterprise Linux).

The multipath solution is implemented by the kernel module called “dm_multipath”.

[root@dhcp209-115 ~]# lsmod |grep dm_multipath
dm_multipath 56921 3 dm_emc,dm_round_robin
dm_mod 101905 33 dm_snapshot,dm_multipath
scsi_dh 42177 2 dm_multipath,scsi_dh_rdac
[root@dhcp209-115 ~]#

The related package is “device-mapper-multipath”:

[root@dhcp209-115 ~]# rpm -q device-mapper-multipath
device-mapper-multipath-0.4.7-34.el5
[root@dhcp209-115 ~]#

Configuring multipath on a system is a simple task. I would recommend the kbase article below for the steps:

kbase.redhat.com/faq/docs/DOC-3691

I know it is written for RHEL 4, but it works the same way in RHEL 5 🙂
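For a rough idea of what the configuration involves, a minimal /etc/multipath.conf can look like the fragment below. This is an illustrative sketch only, not the kbase article's exact config; the values are common defaults and should be adapted to your array per the DM Multipath guide.

```
# /etc/multipath.conf (minimal sketch; adapt to your storage array)
defaults {
        user_friendly_names yes
}
blacklist {
        # do not multipath local/non-SAN device nodes
        devnode "^(ram|loop|fd|md|sr|scd|st)[0-9]*"
}
```

After editing the file, the usual flow is to load the module, start the multipathd service, and check the result with “multipath -ll”.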

If you have configured multipath properly, the command “multipath -ll” will give output similar to the one below:

[root@dhcp209-115 ~]# multipath -ll
mpath0 (1IET_00010001) dm-2 DGC,RAID 5
[size=200.0G][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=1][active]
 \_ 2:0:0:1 sda 8:0   [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 3:0:0:1 sdb 8:16  [active][ready]
[root@dhcp209-115 ~]#

I won't explain the above output in detail here, as it is already documented at:

sources.redhat.com/lvm2/wiki/MultipathUsageGuide
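One small piece worth calling out (my annotation, not from the guide): in a path line such as “2:0:0:1 sda 8:0”, the first field is the SCSI address host:channel:target:lun, and “8:0” is the major:minor of the underlying block device.

```shell
# Decode the SCSI address from the "multipath -ll" path line above.
addr="2:0:0:1"                          # host:channel:target:lun
host=$(echo "$addr"    | cut -d: -f1)
channel=$(echo "$addr" | cut -d: -f2)
target=$(echo "$addr"  | cut -d: -f3)
lun=$(echo "$addr"     | cut -d: -f4)
echo "host=$host channel=$channel target=$target lun=$lun"
```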

For an end user, the above information is sufficient to grasp the technology and get the desired result. 🙂

I will cover the internals of the same in another blog post.

References:

www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.5/html/DM_Multipath/index.html

dm table for a multipath device explained..

For a long time I had been trying to interpret the “dm” table for a multipath device (here I mean the device-mapper-multipath solution). Some months ago I finally managed to, and I thought I would share it here in case somebody can take advantage of it.

–snip–

0 71014400 multipath 0 0 2 1 round-robin 0 2 1 66:128 1000 65:64 1000 round-robin 0 2 1 8:0 1000 67:192 1000

–/snip–
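Walking the table field by field helps. The positional meanings below are my reading of the dm-multipath target format (verify against the kernel's device-mapper documentation): start sector and length in 512-byte sectors, the target type, then counts for feature args, hardware-handler args, path groups, and the next group to try. Each group then lists its path selector, selector arg count, path count, args per path, and "major:minor repeat_count" for each path (1000 I/Os before switching paths here).

```shell
# Parse the dm table line from the snip above into named fields (sketch).
table="0 71014400 multipath 0 0 2 1 round-robin 0 2 1 66:128 1000 65:64 1000 round-robin 0 2 1 8:0 1000 67:192 1000"
set -- $table
start=$1; length=$2; target=$3
num_features=$4; num_hwh=$5; num_groups=$6; next_group=$7
echo "start sector:      $start"         # offset into the mapped device
echo "length (sectors):  $length"        # 512-byte sectors (~34 GB here)
echo "target type:       $target"        # multipath
echo "feature args:      $num_features"  # 0 = no features listed
echo "hw handler args:   $num_hwh"       # 0 = no hardware handler args
echo "path groups:       $num_groups"    # 2 priority groups
echo "next group to try: $next_group"    # group 1 is tried first
```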

I think this picture explains it nicely..:)

Buffer I/O errors in Linux systems

I have noticed these messages on some Linux systems. I was a bit worried when I first saw them; however, I later came to know that these messages are harmless, “if” the system is connected to a SAN (for example, an EMC CLARiiON series array) configured in “Active/Passive” mode and “if” the filesystem is working fine.

Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sda, logical block 0
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 0
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 1
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 2
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 3
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 4
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 5
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 6
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sdd, logical block 7
Jan 13 13:40:40 humble-node kernel: Buffer I/O error on device sda, logical block 0

Commands like “vgscan” and “fdisk” can spit these messages.

These types of SANs contain two storage processors, and in this setup LUNs are serviced through only one of them at a time. That storage processor is called the “ACTIVE” processor.


The LUN can receive I/O only via the active processor. The other processor is “passive”: it acts as a backup, ready to receive I/O if the active controller fails, or if all paths to the LUN (in a multipath setup) on the active controller fail.

Paths to the LUN going via the passive controller are passive paths, and they generate I/O errors when I/O is attempted against them. At boot, the kernel’s SCSI mid-layer scans all paths to find devices, so it scans both active and passive paths and generates buffer I/O errors for the passive ones. LVM can also ‘spit’ these messages unless a proper LVM filter is configured.

These messages can be ignored if the filesystem is working fine and you are not facing any other issues because of them, but make sure that the messages are reported only against passive paths and NOT active paths.
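As a hedged illustration of the LVM filter mentioned above (the device names are assumptions; adapt them to your setup), the devices section of /etc/lvm/lvm.conf could look like this, so LVM scans only the multipath devices and never probes the raw sd* paths directly:

```
# /etc/lvm/lvm.conf (sketch): accept multipath devices, reject raw paths
filter = [ "a|/dev/mapper/mpath.*|", "r|/dev/sd.*|" ]
```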

So don’t panic. 🙂