iSCSI remote block-level storage


1. A glimpse of iSCSI

iSCSI is an acronym for Internet Small Computer Systems Interface, an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. It provides block-level access to storage devices by carrying SCSI commands over a TCP/IP network. iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. It can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet, and it enables location-independent data storage and retrieval.

The protocol allows clients (called initiators) to send SCSI commands (CDBs) to storage devices (targets) on remote servers. It is a storage area network (SAN) protocol, allowing organizations to consolidate storage into storage arrays while providing clients (such as database and web servers) with the illusion of locally attached SCSI disks. iSCSI was pioneered by IBM and Cisco in 1998 and submitted as a draft standard in March 2000.

In essence, iSCSI allows two hosts to negotiate and then exchange SCSI commands using IP networks. By doing this, iSCSI takes a popular high-performance local storage bus and emulates it over a wide range of networks, creating a storage area network (SAN). Its main competitor is Fibre Channel, but unlike traditional Fibre Channel, which requires dedicated cabling and infrastructure (except in its FCoE, Fibre Channel over Ethernet, form), iSCSI can be run over long distances using existing IP network infrastructure. As a result, iSCSI is often seen as a low-cost alternative to Fibre Channel. However, the performance of an iSCSI SAN deployment can be severely degraded if it is not operated on a dedicated network or subnet (LAN or VLAN), due to competition for a fixed amount of bandwidth.

iSCSI SANs often have one of two objectives:

Storage consolidation

Organizations move disparate storage resources from servers around their network to central locations, often in data centers; this allows for more efficiency in the allocation of storage, as the storage itself is no longer tied to a particular server. In a SAN environment, a server can be allocated a new disk volume without any changes to hardware or cabling.

Disaster recovery

Organizations mirror storage resources from one data center to a remote data center, which can serve as a hot standby in the event of a prolonged outage. In particular, iSCSI SANs allow entire disk arrays to be migrated across a WAN with minimal configuration changes, in effect making storage "routable" in the same manner as network traffic.

2. Architecture of iSCSI


Initiator

An initiator functions as an iSCSI client. An initiator typically serves the same purpose to a computer as a SCSI bus adapter would, except that, instead of physically cabling SCSI devices (like hard drives and tape changers), an iSCSI initiator sends SCSI commands over an IP network. Initiators fall into two broad types:

A software initiator uses code to implement iSCSI. Typically, this happens in a kernel-resident device driver that uses the existing network card (NIC) and network stack to emulate SCSI devices for a computer by speaking the iSCSI protocol. Software initiators are available for most popular operating systems and are the most common method of deploying iSCSI.

A hardware initiator uses dedicated hardware, typically in combination with firmware running on that hardware, to implement iSCSI. A hardware initiator mitigates the overhead of iSCSI and TCP processing and Ethernet interrupts, and therefore may improve the performance of servers that use iSCSI. An iSCSI host bus adapter (more commonly, HBA) implements a hardware initiator. A typical HBA is packaged as a combination of a Gigabit (or 10 Gigabit) Ethernet network interface controller, some kind of TCP/IP offload engine (TOE) technology, and a SCSI bus adapter, which is how it appears to the operating system. An iSCSI HBA can include a PCI option ROM to allow booting from an iSCSI SAN.

An iSCSI offload engine, or iSOE card, offers an alternative to a full iSCSI HBA. An iSOE "offloads" the iSCSI initiator operations for a particular network interface from the host processor, freeing up CPU cycles for the main host applications. iSCSI HBAs or iSOEs are used when the additional performance enhancement justifies the additional expense of using an HBA for iSCSI,[4] rather than using a software-based iSCSI client (initiator). An iSOE may be implemented with additional services such as a TCP offload engine (TOE) to further reduce host server CPU usage.
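On a Linux client, the software initiator can be inspected directly. A minimal sketch, assuming the iscsi-initiator-utils package from section 4.1 below is installed (exact output varies by distribution):

[root@client ~]# lsmod | grep iscsi          # kernel modules of the software initiator, e.g. iscsi_tcp, libiscsi
[root@client ~]# systemctl status iscsid     # the userspace initiator daemon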

Target

The iSCSI specification refers to a storage resource located on an iSCSI server (more generally, one of potentially many instances of iSCSI storage nodes running on that server) as a target. An iSCSI target is often a dedicated network-connected hard disk storage device, but may also be a general-purpose computer, since, as with initiators, software to provide an iSCSI target is available for most mainstream operating systems.
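Both targets and initiators are identified by iSCSI Qualified Names (IQNs). The format is standardized; the two names below are the ones used throughout the walkthrough that follows:

# IQN format: iqn.<year-month>.<reversed-domain>[:<identifier>]
# (this article appends the host name as an extra domain label instead of a colon-separated identifier)
iqn.2017-08.com.example.issic-server    # the target's name (server side)
iqn.2017-08.com.example.lockey          # the initiator's name (client side)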

3. Configuring the target

3.1 Install the target software package

[root@issic-server ~]# yum install -y targetcli

3.2 Start the service and set it to start automatically at boot

[root@issic-server ~]# systemctl enable target
[root@issic-server ~]# systemctl start target

3.3 Enter the targetcli interactive mode and configure the target

[root@issic-server ~]# targetcli 

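Section 5.2 below shows a complete recorded session; as a minimal sketch, the usual sequence inside the targetcli shell looks like this (the backstore name, IQNs, and portal address are the ones used later in this article; /dev/vdb1 as the backing device is an assumption, and section 5.2 uses an LVM volume instead):

/> /backstores/block create lockey:storage /dev/vdb1
/> /iscsi create iqn.2017-08.com.example.issic-server
/> /iscsi/iqn.2017-08.com.example.issic-server/tpg1/luns create /backstores/block/lockey:storage
/> /iscsi/iqn.2017-08.com.example.issic-server/tpg1/acls create iqn.2017-08.com.example.lockey
/> /iscsi/iqn.2017-08.com.example.issic-server/tpg1/portals create 172.25.254.136
/> exit

This creates a block backstore, a target with one TPG, a LUN exporting the backstore, an ACL allowing the client's IQN, and a portal listening on the default port 3260; exiting saves the configuration to /etc/target/saveconfig.json.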

4. Configuring the initiator

4.1 Install the initiator software package

[root@issic-client ~]# yum install iscsi-initiator-utils -y

4.2 Set the IQN of the client

[root@client ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2017-08.com.example.lockey
# InitiatorName must match the key used in the target's acls
[root@client ~]#
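If you change this file, restart iscsid so that the daemon picks up the new initiator name (see also error note 1 at the end of this article):

[root@client ~]# systemctl restart iscsid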

4.3 Show the targets provided by the iSCSI server

[root@client ~]# iscsiadm -m discovery -t st -p 172.25.254.136
172.25.254.136:3260,1 iqn.2017-08.com.example.issic-server

4.4 Log in to one or more targets on the iSCSI server

[root@client ~]# iscsiadm -m node -T iqn.2017-08.com.example.issic-server -p 172.25.254.136 -l
Logging in to [iface: default, target: iqn.2017-08.com.example.issic-server, portal: 172.25.254.136,3260] (multiple)
Login to [iface: default, target: iqn.2017-08.com.example.issic-server, portal: 172.25.254.136,3260] successful.
[root@client ~]# fdisk -l    ## shows the newly attached target device (e.g. /dev/sda)

4.5 Mount your target (if it is not partitioned, create a partition first; if it has no file system, run mkfs first; see the sketch after the example below)

[root@client ~]# fdisk -l | grep sda
Disk /dev/sda: 838 MB, 838860800 bytes, 1638400 sectors
[root@client ~]# df | grep sda
[root@client ~]# mount /dev/sda /mnt
[root@client ~]# df | grep sda
/dev/sda              815788   33056    782732   5% /mnt
[root@client ~]#
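The transcript above mounts /dev/sda directly because the exported volume already carries a file system. If the newly attached device is blank, partition and format it first. A minimal sketch, assuming the device shows up as /dev/sda and an xfs file system is wanted:

[root@client ~]# fdisk /dev/sda         # interactively create a partition (n, then w), e.g. /dev/sda1
[root@client ~]# partprobe /dev/sda     # re-read the partition table
[root@client ~]# mkfs.xfs /dev/sda1     # create a file system on the new partition
[root@client ~]# mount /dev/sda1 /mnt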

4.6 Log out of or delete the target

Log out of the target (this example uses a different IQN and portal than the walkthrough above; substitute your own):

[root@client ~]#  iscsiadm -m node -T iqn.2010-09.com.example:rdisks.demo -p 192.168.0.254 -u

Delete the iSCSI target's record from the local node database:

[root@client ~]# iscsiadm -m node -T iqn.2010-09.com.example:rdisks.demo -p 192.168.0.254 -o delete

5. Extending iSCSI with LVM

5.1 Create an LVM logical volume (server side)

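A minimal sketch of creating the vg0/lv0 volume used in the rest of this section, assuming /dev/vdb1 as the first physical volume; the initial size of 500M is an assumption (section 5.3 extends the volume to 800M):

[root@issic-server ~]# pvcreate /dev/vdb1            # initialize the partition as an LVM physical volume
[root@issic-server ~]# vgcreate vg0 /dev/vdb1        # create volume group vg0
[root@issic-server ~]# lvcreate -L 500M -n lv0 vg0   # create logical volume lv0 (initial size is an assumption)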

5.2 Create a target backed by the LVM volume (server side)

[root@issic-server ~]# targetcli
targetcli shell version 2.1.fb34
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> /backstores/block create lockey:storage /dev/vg0/lv0
Created block storage object lockey:storage using /dev/vg0/lv0.
/> /iscsi create iqn.2017-08.com.example.issic-server
Created target iqn.2017-08.com.example.issic-server.
Created TPG 1.
/> /iscsi/iqn.2017-08.com.example.issic-server/tpg1/luns create /backstores/block/lockey:storage
Created LUN 0.
/> /iscsi/iqn.2017-08.com.example.issic-server/tpg1/acls create iqn.2017-08.com.example.lockey
Created Node ACL for iqn.2017-08.com.example.lockey
Created mapped LUN 0.
/> /iscsi/iqn.2017-08.com.example.issic-server/tpg1/portals create 172.25.254.136
Using default IP port 3260
Created network portal 172.25.254.136:3260.
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
[root@issic-server ~]#


5.3 Extend the LVM volume (server side)

Note: first add a partition named /dev/vdb2 with its partition type set to Linux LVM (a minimal fdisk sketch is shown below), then refresh the partition table before running the LVM commands:
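A sketch of the fdisk dialogue, using fdisk's standard single-letter commands:

[root@issic-server ~]# fdisk /dev/vdb
# n  -> add a new partition (/dev/vdb2)
# t  -> change its type; enter 8e (Linux LVM)
# w  -> write the partition table and quit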

[root@issic-server ~]# partprobe
[root@issic-server ~]# pvcreate /dev/vdb2
  Physical volume "/dev/vdb2" successfully created
[root@issic-server ~]# vgextend vg0 /dev/vdb2
  Volume group "vg0" successfully extended
[root@issic-server ~]# lvextend -L 800M /dev/vg0/lv0
  Extending logical volume lv0 to 800.00 MiB
  Logical volume lv0 successfully resized
[root@issic-server ~]# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  vg0    2   1   0 wz--n- 1.58g 816.00m
[root@issic-server ~]# lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv0  vg0  -wi-ao---- 800.00m
[root@issic-server ~]# fdisk -l

Disk /dev/vda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00013f3e

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    20970332    10484142+  83  Linux

Disk /dev/vdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x3f03c86b

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1            2048     2099199     1048576   83  Linux
/dev/vdb2         2099200     3327999      614400   8e  Linux LVM

Disk /dev/mapper/vg0-lv0: 838 MB, 838860800 bytes, 1638400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


5.4 Check the extension result (client side)

# first log out so that the session can be refreshed
[root@client ~]# iscsiadm -m node -T iqn.2017-08.com.example.issic-server -u
Logging out of session [sid: 2, target: iqn.2017-08.com.example.issic-server, portal: 172.25.254.136,3260]
Logout of [sid: 2, target: iqn.2017-08.com.example.issic-server, portal: 172.25.254.136,3260] successful.
## then log back in and check the result with fdisk -l
[root@client ~]# iscsiadm -m node -T iqn.2017-08.com.example.issic-server -p 172.25.254.136 -l
Logging in to [iface: default, target: iqn.2017-08.com.example.issic-server, portal: 172.25.254.136,3260] (multiple)
Login to [iface: default, target: iqn.2017-08.com.example.issic-server, portal: 172.25.254.136,3260] successful.
[root@client ~]# fdisk -l
# on success, the sda device now shows its increased size

Error notes:

1. Error when starting or restarting iscsi:

[root@client ~]# systemctl restart iscsi
# ...authentication failed...

If this happens, run the following commands in order:

[root@client ~]# systemctl restart iscsid.service
[root@client ~]# systemctl restart iscsi

2. Error when running pvcreate on a newly added partition:

[root@issic-server ~]# pvcreate /dev/vdb2
  Physical volume /dev/vdb2 not found
  Device /dev/vdb2 not found (or ignored by filtering).
[root@issic-server ~]# fdisk -l

If this happens, refresh the partition table first and then run pvcreate again:

[root@issic-server ~]# partprobe