Domain 3.1 – Configure FC SAN Storage

Objective 3.1.1 – Identify FC SAN Hardware Components

Storage Area Network Concepts

A storage area network (SAN) is a specialized high-speed network that connects computer systems, or host servers, to high performance storage subsystems. The SAN components include host bus adapters (HBAs) in the host servers, switches that help route storage traffic, cables, storage processors (SPs), and storage disk arrays.
A SAN topology with at least one switch present on the network forms a SAN fabric.
To transfer traffic from host servers to shared storage, the SAN uses the Fibre Channel (FC) protocol that packages SCSI commands into Fibre Channel frames.
To restrict server access to storage arrays not allocated to that server, the SAN uses zoning. Typically, zones are created for each group of servers that access a shared group of storage devices and LUNs. Zones define which HBAs can connect to which SPs. Devices outside a zone are not visible to the devices inside the zone.
Zoning is similar to LUN masking, which is commonly used for permission management. LUN masking is a process that makes a LUN available to some hosts and unavailable to other hosts.

Ports

In the context of this document, a port is the connection from a device into the SAN. Each node in the SAN, such as a host, a storage device, or a fabric component has one or more ports that connect it to the SAN. Ports are identified in a number of ways.
WWPN (World Wide Port Name)
A globally unique identifier for a port that allows certain applications to access the port. The FC switches discover the WWPN of a device or host and assign a port address to the device.
Port_ID (or port address)
Within a SAN, each port has a unique port ID that serves as the FC address for the port. This unique ID enables routing of data through the SAN to that port. The FC switches assign the port ID when the device logs in to the fabric. The port ID is valid only while the device is logged on.
When N-Port ID Virtualization (NPIV) is used, a single FC HBA port (N-port) can register with the fabric by using several WWPNs. This method allows an N-port to claim multiple fabric addresses, each of which appears as a unique entity. When ESX/ESXi hosts use a SAN, these multiple, unique identifiers allow the assignment of WWNs to individual virtual machines as part of their configuration.
Storage System Types
ESX/ESXi supports different storage systems and arrays.
The types of storage that your host supports include active-active, active-passive, and ALUA-compliant.
Active-active storage system
Allows access to the LUNs simultaneously through all the storage ports that are available without significant performance degradation. All the paths are active at all times, unless a path fails.
Active-passive storage system
A system in which one storage processor is actively providing access to a given LUN. The other processors act as backup for the LUN and can be actively providing access to other LUN I/O. I/O can be successfully sent only to an active port for a given LUN. If access through the active storage port fails, one of the passive storage processors can be activated by the servers accessing it.
Asymmetrical storage system
Supports Asymmetric Logical Unit Access (ALUA). ALUA-compliant storage systems provide different levels of access per port. ALUA allows hosts to determine the states of target ports and prioritize paths. The host uses some of the active paths as primary and others as secondary.
Overview of Using ESX/ESXi with a SAN
Using ESX/ESXi with a SAN improves flexibility, efficiency, and reliability. Using ESX/ESXi with a SAN also supports centralized management, failover, and load balancing technologies.
The following are benefits of using ESX/ESXi with a SAN:
You can store data securely and configure multiple paths to your storage, eliminating a single point of failure.
Using a SAN with ESX/ESXi systems extends failure resistance to the server. When you use SAN storage, all applications can instantly be restarted on another host after the failure of the original host.
You can perform live migration of virtual machines using VMware vMotion.
Use VMware High Availability (HA) in conjunction with a SAN to restart virtual machines in their last known state on a different server if their host fails.
Use VMware Fault Tolerance (FT) to replicate protected virtual machines on two different hosts. Virtual machines continue to function without interruption on the secondary host if the primary one fails.
Use VMware Distributed Resource Scheduler (DRS) to migrate virtual machines from one host to another for load balancing. Because storage is on a shared SAN array, applications continue running seamlessly.
If you use VMware DRS clusters, put an ESX/ESXi host into maintenance mode to have the system migrate all running virtual machines to other ESX/ESXi hosts. You can then perform upgrades or other maintenance operations on the original host.
The portability and encapsulation of VMware virtual machines complements the shared nature of this storage. When virtual machines are located on SAN-based storage, you can quickly shut down a virtual machine on one server and power it up on another server, or suspend it on one server and resume operation on another server on the same network. This ability allows you to migrate computing resources while maintaining consistent shared access.

Understanding VMFS Datastores

To store virtual disks, ESX/ESXi uses datastores, which are logical containers that hide specifics of storage from virtual machines and provide a uniform model for storing virtual machine files. Datastores that you deploy on storage devices typically use the VMware Virtual Machine File System (VMFS) format, a special high-performance file system format that is optimized for storing virtual machines.
A VMFS datastore can run multiple virtual machines. VMFS provides distributed locking for your virtual machine files, so that your virtual machines can operate safely in a SAN environment where multiple ESX/ESXi hosts share the same VMFS datastore.
Use the vSphere Client to set up a VMFS datastore in advance on a block-based storage device that your ESX/ESXi host discovers. A VMFS datastore can be extended to span several physical storage extents, including SAN LUNs and local storage. This feature allows you to pool storage and gives you flexibility in creating the datastore necessary for your virtual machine.
You can increase the capacity of a datastore while virtual machines are running on the datastore. This ability lets you add new space to your VMFS datastores as your virtual machine requires it. VMFS is designed for concurrent access from multiple physical machines and enforces the appropriate access controls on virtual machine files.
Metadata Updates
A VMFS datastore holds virtual machine files, directories, symbolic links, RDM descriptor files, and so on. The datastore also maintains a consistent view of all the mapping information for these objects. This mapping information is called metadata.
Metadata is updated each time the attributes of a virtual machine file are accessed or modified when, for example, you perform one of the following operations:
Creating, growing, or locking a virtual machine file
Changing a file’s attributes
Powering a virtual machine on or off
Objective 3.1.2 – Identify How ESX Server Connections are Made to FC SAN Storage

To restrict server access to storage arrays not allocated to that server, the SAN uses zoning. Typically, zones are created for each group of servers that access a shared group of storage devices and LUNs. Zones define which HBAs can connect to which SPs. Devices outside a zone are not visible to the devices inside the zone.

Zoning is similar to LUN masking, which is commonly used for permission management. LUN masking is a process that makes a LUN available to some hosts and unavailable to other hosts. Usually, LUN masking is performed at the SP or server level.

Objective 3.1.3 – Describe ESX Server FC SAN Storage Addressing

Understanding Fibre Channel Naming

In Fibre Channel SAN, a World Wide Name (WWN) uniquely identifies each element in the network, such as a Fibre Channel adapter or storage device.
The WWN is a 64-bit address consisting of 16 hexadecimal digits, for example:
20:00:00:e0:8b:8b:38:77 or 21:00:00:e0:8b:8b:38:77
The WWN is assigned to every Fibre Channel SAN element by its manufacturer.

When using Fibre Channel to connect to back-end storage, VMware ESX requires a Fibre Channel switch; using more than one switch provides redundancy. The Fibre Channel switches form the “fabric” of the Fibre Channel network by connecting multiple nodes together. Disk arrays in Storage Area Networks (SANs) are among the main devices connected to a Fibre Channel network, along with servers and tape drives. Storage processors (SPs) aggregate physical hard disks into logical volumes, also called LUNs, each with its own LUN number identifier. World Wide Names (WWNs) are assigned by the manufacturer to Host Bus Adapters (HBAs), a concept similar to MAC addresses on network interface cards (NICs). Zoning, LUN masking, and pathing are the methods the Fibre Channel switches and the SAN storage processors use to control host access to the LUNs. At the SP, LUN masking controls LUN visibility per host, typically keyed on the host's WWN.

At the Fibre Channel switch, zoning (soft or hard, as described in Objective 3.1.4) controls which targets and SPs a host can see, while LUN masking controls LUN visibility on a per-host basis. The VMkernel addresses a LUN using the following syntax:

vmhba<adapter#>:<target#>:<LUN#>:<partition#>, for example vmhba1:0:0:1 (adapter 1, target 0, LUN 0, partition 1)
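To see the identifiers a host has actually discovered, you can list its storage paths from the service console or the vSphere CLI. A minimal sketch (vicfg-mpath is the remote vCLI equivalent of esxcfg-mpath; output formatting varies by version):

esxcfg-mpath -l

Each path entry shows the adapter, target, and LUN components of the address described above.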

Objective 3.1.4 – Describe the Concepts of (a) Zoning and (b) LUN Masking

a) Fibre Channel zoning is the partitioning of a Fibre Channel fabric into smaller subsets to restrict interference, add security, and simplify management. While a SAN makes several virtual disks (LUNs) available, each system connected to the SAN should be allowed access to only a controlled subset of those LUNs. Zoning applies only to the switched fabric topology (FC-SW); it does not exist in simpler Fibre Channel topologies.

Zoning is sometimes confused with LUN masking, because both serve similar goals. LUN masking, however, works on Fibre Channel level 4 (i.e. on the SCSI level), while zoning works on level 2. This allows zoning to be implemented on switches, whereas LUN masking is performed on endpoint devices – host adapters or disk array controllers.

Zoning is also different from VSANs, in that each port can be a member of multiple zones, but only one VSAN. VSAN (similarly to VLAN) is in fact a separate network (separate sub-fabric), with its own fabric services (including its own separate zoning).

There are two main methods of zoning, hard and soft, that combine with two sets of attributes, name and port.

Soft zoning restricts only the fabric name service, so that a device is shown only the subset of devices it is allowed to see. Therefore, when a server looks at the content of the fabric, it sees only the devices it is allowed to see. However, any server can still attempt to contact any device on the network by address. In this way, soft zoning is similar to the computing concept of security through obscurity.

In contrast, hard zoning restricts actual communication across a fabric. This requires efficient hardware implementation (frame filtering) in the fabric switches, but is much more secure.

Zoning can also be applied to either switch ports or end-station names. Port zoning restricts specific switch ports from seeing unauthorized ports. WWN zoning (also called name zoning) restricts access by a device’s World Wide Name (WWN). With port zoning, even when a device is unplugged from a switch port and a different device is plugged in, the new device still has access to the zone; the fact that the device’s WWN changed is ignored. With WWN zoning, when a device is unplugged from a switch port and plugged into a different port (perhaps on a different switch), it still has access to the zone, because the switches check only the device’s WWN; the specific port the device connects to is ignored. This is more flexible, but WWNs can be easily spoofed, reducing security.

Currently, the combination of hard and WWN zoning is the most popular. Because port zoning is non-standard, it usually requires a homogeneous SAN (all switches from one vendor).

To bring the created zones together for ease of deployment and management, a zoneset (also called a zoning configuration) is employed. A zoneset is merely a logical container for the individual zones that are intended to be enforced at the same time. A zoneset can contain WWN zones, port zones, or a combination of both (hybrid zones). The zoneset must be “activated” within the fabric (i.e. distributed to all the switches and then simultaneously enforced). Switches may contain more than one zoneset, but only one zoneset can be active in the entire fabric.

b) Logical Unit Number masking, or LUN masking, is an authorization process that makes a Logical Unit Number available to some hosts and unavailable to other hosts. The security benefits are limited, because with many HBAs it is possible to forge source addresses (WWNs/MACs/IPs). LUN masking is mainly implemented not as a security measure per se, but rather as protection against misbehaving servers corrupting disks belonging to other servers. For example, Windows servers attached to a SAN will, under some conditions, corrupt non-Windows (Unix, Linux, NetWare) volumes on the SAN by attempting to write Windows volume labels to them. By hiding the other LUNs from the Windows server, this can be prevented, since the Windows server does not even realize the other LUNs exist.

Objective 3.1.5 – Configure LUN Masking

Mask Paths
You can prevent the ESX/ESXi host from accessing storage devices or LUNs or from using individual paths to a LUN. Use the vSphere CLI commands to mask the paths.
When you mask paths, you create claim rules that assign the MASK_PATH plug-in to the specified paths.
Procedure
1
Check what the next available rule ID is.
esxcli corestorage claimrule list
The claim rules that you use to mask paths should have rule IDs in the range of 101 – 200. If this command shows that rules 101 and 102 already exist, you can specify 103 for the rule to add.
2
Assign the MASK_PATH plug-in to a path by creating a new claim rule for the plug-in.
esxcli corestorage claimrule add -P MASK_PATH
For information on command-line options, see esxcli corestorage claimrule Options.
3
Load the MASK_PATH claim rule into your system.
esxcli corestorage claimrule load
4
Verify that the MASK_PATH claim rule was added correctly.
esxcli corestorage claimrule list
5
If a claim rule for the masked path exists, remove the rule.
esxcli corestorage claiming unclaim
6
Run the path claiming rules.
esxcli corestorage claimrule run
After you assign the MASK_PATH plug-in to a path, the path state becomes irrelevant and is no longer maintained by the host. As a result, commands that display the masked path’s information might show the path state as dead.
Example: Masking a LUN
In this example, you mask LUN 20 on targets T1 and T2, accessed through storage adapters vmhba2 and vmhba3.
#esxcli corestorage claimrule list
#esxcli corestorage claimrule add -P MASK_PATH -r 109 -t location -A vmhba2 -C 0 -T 1 -L 20
#esxcli corestorage claimrule add -P MASK_PATH -r 110 -t location -A vmhba3 -C 0 -T 1 -L 20
#esxcli corestorage claimrule add -P MASK_PATH -r 111 -t location -A vmhba2 -C 0 -T 2 -L 20
#esxcli corestorage claimrule add -P MASK_PATH -r 112 -t location -A vmhba3 -C 0 -T 2 -L 20
#esxcli corestorage claimrule load
#esxcli corestorage claimrule list
#esxcli corestorage claiming unclaim -t location -A vmhba2
#esxcli corestorage claiming unclaim -t location -A vmhba3
#esxcli corestorage claimrule run
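To remove the masking later, delete the MASK_PATH claim rules and let the default plug-in reclaim the paths. The following is a sketch based on the rule IDs and adapters from the example above; adjust them for your environment:

#esxcli corestorage claimrule delete -r 109
#esxcli corestorage claimrule delete -r 110
#esxcli corestorage claimrule delete -r 111
#esxcli corestorage claimrule delete -r 112
#esxcli corestorage claimrule load
#esxcli corestorage claiming unclaim -t location -A vmhba2
#esxcli corestorage claiming unclaim -t location -A vmhba3
#esxcli corestorage claimrule run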

Objective 3.1.6 – Scan for New LUNs

Perform Storage Rescan
When you make changes in your SAN configuration, you might need to rescan your storage. You can rescan all storage available to your host. If the changes you make are isolated to storage accessed through a specific adapter, perform rescan for only this adapter.
Use this procedure if you want to limit the rescan to storage available to a particular host or accessed through a particular adapter on the host. If you want to rescan storage available to all hosts managed by your vCenter Server system, you can do so by right-clicking a datacenter, cluster, or folder that contains the hosts and selecting Rescan for Datastores.
Procedure
1
In the vSphere Client, select a host and click the Configuration tab.
2
In the Hardware panel, select Storage Adapters, and click Rescan above the Storage Adapters panel.
You can also right-click an individual adapter and click Rescan to rescan just that adapter.
Important
On ESXi, it is not possible to rescan a single storage adapter. If you rescan a single adapter, all adapters are rescanned.
3
To discover new disks or LUNs, select Scan for New Storage Devices.
If new LUNs are discovered, they appear in the device list.
4
To discover new datastores or update a datastore after its configuration has been changed, select Scan for New VMFS Volumes.
If new datastores or VMFS volumes are discovered, they appear in the datastore list.
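A rescan can also be triggered from the command line. A sketch, where vmhba2 is an example adapter name; on the ESX service console use esxcfg-rescan, and with the vSphere CLI use vicfg-rescan:

esxcfg-rescan vmhba2
vicfg-rescan vmhba2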


Objective 3.1.7 – Determine and Configure the Appropriate Multi-Pathing Policy

Objective 3.1.8 – Differentiate Between NMP and Third-party MPP

VMware Multipathing Module
By default, ESXi provides an extensible multipathing module called the Native Multipathing Plug-In (NMP).
Generally, the VMware NMP supports all storage arrays listed on the VMware storage HCL and provides a default path selection algorithm based on the array type. The NMP associates a set of physical paths with a specific storage device, or LUN. The specific details of handling path failover for a given storage array are delegated to a Storage Array Type Plug-In (SATP). The specific details for determining which physical path is used to issue an I/O request to a storage device are handled by a Path Selection Plug-In (PSP). SATPs and PSPs are sub plug-ins within the NMP module.
Upon installation of ESXi, the appropriate SATP for an array you use will be installed automatically. You do not need to obtain or download any SATPs.
Subtopics
VMware SATPs
Storage Array Type Plug-Ins (SATPs) run in conjunction with the VMware NMP and are responsible for array-specific operations.
ESXi offers a SATP for every type of array that VMware supports. It also provides default SATPs that support non-specific active-active and ALUA storage arrays, and the local SATP for direct-attached devices. Each SATP accommodates special characteristics of a certain class of storage arrays and can perform the array-specific operations required to detect path state and to activate an inactive path. As a result, the NMP module itself can work with multiple storage arrays without having to be aware of the storage device specifics.
After the NMP determines which SATP to use for a specific storage device and associates the SATP with the physical paths for that storage device, the SATP implements the tasks that include the following:
Monitors the health of each physical path.
Reports changes in the state of each physical path.
Performs array-specific actions necessary for storage fail-over. For example, for active-passive devices, it can activate passive paths.
VMware PSPs
Path Selection Plug-Ins (PSPs) run with the VMware NMP and are responsible for choosing a physical path for I/O requests.
The VMware NMP assigns a default PSP for each logical device based on the SATP associated with the physical paths for that device. You can override the default PSP.
By default, the VMware NMP supports the following PSPs:
Most Recently Used (VMW_PSP_MRU)
Selects the path the ESX/ESXi host used most recently to access the given device. If this path becomes unavailable, the host switches to an alternative path and continues to use the new path while it is available. MRU is the default path policy for active-passive arrays.
Fixed (VMW_PSP_FIXED)
Uses the designated preferred path, if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the host cannot use the preferred path, it selects a random alternative available path. The host reverts to the preferred path as soon as that path becomes available. Fixed is the default path policy for active-active arrays.
Caution
If used with active-passive arrays, the Fixed path policy might cause path thrashing.
VMW_PSP_FIXED_AP
Extends the Fixed functionality to active-passive and ALUA mode arrays.
Round Robin (VMW_PSP_RR)
Uses a path selection algorithm that rotates through all available active paths enabling load balancing across the paths.
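The path selection policy can be changed per device in the vSphere Client (Manage Paths) or from the command line. A sketch using the vSphere 4.x esxcli namespace, where naa.xxx stands for the device identifier shown in the list output (replace it with the real value):

esxcli nmp device list
esxcli nmp device setpolicy --device naa.xxx --psp VMW_PSP_RR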

Domain 3.2 – Configure iSCSI SAN Storage

Objective 3.2.1 – Identify iSCSI SAN Hardware Components

Objective 3.2.2 – Determine Use Cases for Hardware vs. Software iSCSI Initiators

  • You can use both a hardware and a software iSCSI initiator within VMware. Both will do the job, but there are some differences:
    • Software iSCSI initiator
      The software iSCSI initiator uses code in the VMkernel and requires only regular NICs in your ESX host. A dedicated NIC is best, but using a VLAN is possible as well. The main benefit of the software iSCSI initiator is its low cost (a regular NIC or VLAN) while providing most of the functionality needed in most environments.
    • Hardware iSCSI initiator
      The hardware initiator provides some extra functionality and imposes less of a performance penalty on the system processor than the software initiator, because IP packet handling is done on the iSCSI hardware initiator rather than on the system processor. Hardware initiators also allow booting from the iSCSI SAN. Generally, only the most demanding setups require a hardware initiator, and in those environments a Fibre Channel SAN is another option.
  • Configure the iSCSI Software Initiator
    When you need an iSCSI software initiator, you need to:
  • Create a VMkernel port for physical network adapters
    • Select an ESX host
    • Select the tab “Configuration”
    • Select “Networking”
    • Select “Add Networking”
    • Select “VMkernel”
    • Select “Create a virtual switch”
    • Select the NICs
    • Go to “Port Group Properties” and enter a friendly name under Network label
    • Enter the IP settings
    • Finish
  • Enable the software iSCSI initiator
    • Select an ESX host
    • Select the tab “Configuration”
    • Select “Storage Adapters”
    • Select the iSCSI Initiator
    • Select properties
    • Click “Enabled”
  • If you use multiple network adapters, activate multipathing on your host using the port binding technique. You can find more about multipathing on page 33.
  • If needed, enable Jumbo Frames
    Jumbo Frames must be enabled for each vSwitch through the vSphere CLI. Also, if you use an ESX host, you must create a VMkernel network interface enabled with Jumbo Frames. This can only be done from the Command Line.
    • To set the MTU size for the vSwitch
      vicfg-vswitch -m <MTU> <vSwitch>
    • To check whether the change succeeded, you can use the command:
      vicfg-vswitch -l
    • To create a Jumbo frames enabled VMkernel interface:
      esxcfg-vmknic -a -i <ip address> -n <netmask> -m <MTU> <port group name>

Make sure that you create the VMkernel interface on the Jumbo Frames-enabled vSwitch. To check whether the VMkernel interface is Jumbo Frames enabled:

esxcfg-vmknic -l
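As a concrete illustration, assuming a vSwitch named vSwitch1, a port group named iSCSI, and an MTU of 9000 (all example values), the sequence could look like this:

vicfg-vswitch -m 9000 vSwitch1
vicfg-vswitch -l
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 -m 9000 iSCSI
esxcfg-vmknic -l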

Objective 3.2.3 – Configure the iSCSI Software Initiator

Objective 3.2.4 – Configure Dynamic/Static Discovery

Objective 3.2.5 – Configure CHAP Authentication


Objective 3.2.6 – Configure VMkernel Port Binding for iSCSI Software Multi-Pathing



Objective 3.2.7 – Discover LUNs


Objective 3.2.8 – Identify iSCSI Addressing in the Context of the Host


Domain 3.3 – Configure NFS Datastores

Objective 3.3.1 – Identify the NFS Hardware Components

Network Attached Storage
ESXi supports using NAS through the NFS protocol. The NFS protocol enables communication between an NFS client and an NFS server.
The NFS client built into ESXi lets you access the NFS server and use NFS volumes for storage. ESXi supports only NFS Version 3 over TCP.
You use the vSphere Client to configure NFS volumes as datastores. Configured NFS datastores appear in the vSphere Client, and you can use them to store virtual disk files in the same way that you use VMFS-based datastores.
Note
ESXi does not support the delegate user functionality that enables access to NFS volumes using non-root credentials.
Figure 8‑4, NFS Storage, depicts a virtual machine using an NFS volume to store its files. In this configuration, the host connects to the NFS server, which stores the virtual disk files, through a regular network adapter.
The virtual disks that you create on NFS-based datastores use a disk format dictated by the NFS server, typically a thin format that requires on-demand space allocation. If the virtual machine runs out of space while writing to this disk, the vSphere Client notifies you that more space is needed. You have the following options:
Free up additional space on the volume so that the virtual machine continues writing to the disk.
Terminate the virtual machine session. Terminating the session shuts down the virtual machine.
Caution
When your host accesses a virtual machine disk file on an NFS-based datastore, a .lck-XXX lock file is generated in the same directory where the disk file resides to prevent other hosts from accessing this virtual disk file. Do not remove the .lck-XXX lock file, because without it, the running virtual machine cannot access its virtual disk file.

Objective 3.3.2 – Explain ESX Exclusivity for NFS Mounts


Mounting NFS Volumes
In ESXi, the model for accessing NFS storage that holds ISO images (used as virtual CD-ROMs for virtual machines) is different from the model used in ESX Server 2.x.
ESXi supports VMkernel-based NFS mounts. The model is to mount the NFS volume containing the ISO images through the VMkernel NFS functionality. All NFS volumes mounted in this way appear as datastores in the vSphere Client.

Objective 3.3.3 – Configure ESX/ESXi Network Connectivity to the NAS Device

Set Up VMkernel Networking
Create a VMkernel network adapter for use as a vMotion interface or an IP storage port group.
Procedure
1
Log in to the vSphere Client and select the host from the inventory panel.
2
Click the Configuration tab and click Networking.
3
In the Virtual Switch view, click Add Networking.
4
Select VMkernel and click Next.
5
Select the vSwitch to use, or select Create a virtual switch to create a new vSwitch.
6
Select the check boxes for the network adapters your vSwitch will use.
Select adapters for each vSwitch so that virtual machines or other services that connect through the adapter can reach the correct Ethernet segment. If no adapters appear under Create a new virtual switch, all the network adapters in the system are being used by existing vSwitches. You can either create a new vSwitch without a network adapter, or select a network adapter that an existing vSwitch uses.
7
Click Next.
8
Select or enter a network label and a VLAN ID.
Option
Description
Network Label
A name that identifies the port group that you are creating. This is the label that you specify when attaching a virtual adapter to this port group while configuring VMkernel services such as vMotion and IP storage.
VLAN ID
Identifies the VLAN that the port group’s network traffic will use.
9
Select Use this port group for vMotion to enable this port group to advertise itself to another host as the network connection where vMotion traffic should be sent.
You can enable this property for only one vMotion and IP storage port group for each host. If this property is not enabled for any port group, migration with vMotion to this host is not possible.
10
Choose whether to use this port group for fault tolerance logging.
11
On an IPv6-enabled host, choose whether to use IP (Default), IPv6, or IP and IPv6 networking.
This option does not appear on hosts that do not have IPv6 enabled. IPv6 configuration cannot be used with dependent hardware iSCSI adapters.
12
Click Next.
13
Select Obtain IP settings automatically to use DHCP to obtain IP settings, or select Use the following IP settings to specify IP settings manually.
If you choose to specify IP settings manually, provide this information.
DHCP cannot be used with dependent hardware iSCSI adapters.
a
Enter the IP address and subnet mask for the VMkernel interface.
b
Click Edit to set the VMkernel Default Gateway for VMkernel services, such as vMotion, NAS, and iSCSI.
c
On the DNS Configuration tab, the name of the host is entered by default.
The DNS server addresses that were specified during installation are also preselected, as is the domain.
d
On the Routing tab, provide the VMkernel gateway information.
A gateway is needed for connectivity to machines not on the same IP subnet as the VMkernel. The default is static IP settings.
e
Click OK, then click Next.
14
If you are using IPv6 for the VMkernel interface, select one of the following options for obtaining IPv6 addresses.
Obtain IPv6 addresses automatically through DHCP
Obtain IPv6 addresses automatically through router advertisement
Static IPv6 addresses
15
If you choose to use static IPv6 addresses, complete the following steps.
a
Click Add to add a new IPv6 address.
b
Enter the IPv6 address and subnet prefix length, and click OK.
c
To change the VMkernel default gateway, click Edit.
16
Click Next.
17
Review the information, click Back to change any entries, and click Finish.
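On an ESX host, the same VMkernel port can also be created from the service console. A sketch with example names only (vSwitch1, uplink vmnic2, port group IPStorage, and an example IP address):

esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A IPStorage vSwitch1
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 IPStorage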

Objective 3.3.4 – Create an NFS Datastore

Create an NFS-Based Datastore
You can use the Add Storage wizard to mount an NFS volume and use it as if it were a VMFS datastore.
Prerequisites
Because NFS requires network connectivity to access data stored on remote servers, before configuring NFS, you must first configure VMkernel networking.
Procedure
1
Log in to the vSphere Client and select the host from the Inventory panel.
2
Click the Configuration tab and click Storage in the Hardware panel.
3
Click Datastores and click Add Storage.
4
Select Network File System as the storage type and click Next.
5
Enter the server name, the mount point folder name, and the datastore name.
Note
When you mount the same NFS volume on different hosts, make sure that the server and folder names are identical across the hosts. If the names do not match exactly, for example, if you enter share as the folder name on one host and /share on the other, the hosts see the same NFS volume as two different datastores. This might result in a failure of such features as vMotion.
6
(Optional) Select Mount NFS read only if the volume is exported as read only by the NFS server.
7
Click Next.
8
In the Network File System Summary page, review the configuration options and click Finish.
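An NFS datastore can also be mounted from the command line with esxcfg-nas (or vicfg-nas with the vSphere CLI). A sketch with example values only: nfs01 is a hypothetical NFS server, /vols/iso its exported folder, and ISO_library the datastore name.

esxcfg-nas -a -o nfs01 -s /vols/iso ISO_library
esxcfg-nas -l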

Domain 3.4 – Configure and Manage VMFS Datastores

Objective 3.4.1 – Identify VMFS File System Attributes

  • Store virtual disk files on high-performance shared storage such as Fibre Channel or iSCSI SAN.
  • Online addition or removal of nodes: Add or delete a VMware ESX/ESXi instance from a VMFS volume without pausing or halting the processing of other VMware ESX/ESXi instances.
  • On-disk file locking: Ensure that the same virtual machine is not powered on by multiple servers at the same time.
  • Shared data file system: Enable multiple instances of VMware ESX/ESXi to read and write from the same storage location concurrently. Because virtual machines are hardware independent and portable across servers, VMFS ensures that individual servers are not single points of failure and enables resource balancing across multiple servers.
  • Logical volume manager: Manage the interaction between the physical storage arrays and VMFS with flexibility and reliability.
  • Dynamic VMFS volume resizing: Aggregate multiple storage disks into a single VMFS volume. Add new heterogeneous LUNs to a VMFS volume on the fly.
  • VMFS Volume Grow: Grow existing LUNs (on arrays that support this feature) and resize VMFS volumes on the fly.
  • Automatic volume re-signaturing: Simplify the use of array-based snapshot technology. Re-signaturing automatically recognizes snapshot VMFS volumes.
  • Partial online operation: VMFS volumes continue to function even if some LUNs are lost.
  • Raw device mapping: Optionally, map SAN LUNs directly to a virtual machine to enable application clustering and array-based snapshot technology while retaining the manageability benefits of VMFS.
  • Write-through I/O: Ensure precise recovery of virtual machines in the event of server failure. Write-through I/O enables virtual machines to have the same recovery characteristics as a physical system running the same operating system.
  • Boot from SAN: Run multiple instances of VMware ESX/ESXi on diskless configurations of blade and rack-mount servers by booting from SAN. Simplify backups and disaster recovery by eliminating the need to separately back up locally attached server disks.
  • High performance: Optimized for virtual machine I/O. Store and access the entire virtual machine state efficiently from a centralized location, with virtual disk performance close to native SCSI.
  • Adaptive block sizing: Uses the large block sizes favored by virtual disk I/O and a sub-block allocator for small files and directories.
  • Increased number of VMware ESX/ESXi hosts per VMFS volume: Connect up to 32 VMware ESX/ESXi installations to a single VMFS volume.
  • Extended block size and file limits: Run even the most data-intensive production applications such as databases, ERP, and CRM in virtual machines.
    • Maximum volume size: 64 TB
    • Maximum virtual disk size: 2 TB
    • Maximum file size: 2 TB
    • Block size: 1 MB to 8 MB
  • Built-in storage access multipathing: Ensure shared storage availability with SAN multipathing for Fibre Channel or iSCSI SAN, and NIC teaming for NAS.
  • Hot add and hot extend virtual disks: Add virtual disks to, or extend the virtual disks of, a running virtual machine non-disruptively to increase available resources.
  • Distributed journaling: Recover virtual machines faster and more reliably in the event of server failure.
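To inspect these attributes on an existing datastore, you can query the file system from the command line with vmkfstools. A sketch, where datastore1 is an example datastore name:

vmkfstools -P -h /vmfs/volumes/datastore1

The output includes the VMFS version, block size, capacity, free space, and the extents backing the volume.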

Objective 3.4.2 – Determine the Appropriate Datastore Location/Configuration for Given Virtual Machines

View the Virtual Machine Configuration File Location
You can view the location of the virtual machine configuration and working files. This information is useful when you are configuring backup systems.
Prerequisites
Verify that you are connected to the vCenter Server or ESX/ESXi host on which the virtual machine runs.
Verify that you have access to the virtual machine in the vSphere Client inventory list.
Procedure
1
In the vSphere Client inventory, right-click the virtual machine and select Edit Settings.
2
Click the Options tab and select General Options.
3
Record the location of the configuration and working files and click OK to close the dialog box.

Objective 3.4.3 – Determine Use Cases for Multiple VMFS Datastores

Objective 3.4.4 – Create/Configure VMFS Datastores



Objective 3.4.5 – Attach Existing Datastore to New ESX Host

If an existing datastore (formatted etc.) is present and visible to the ESX host you can add it to the “Storage” view by:

  • Select an ESX host
  • Select the tab “Configuration”
  • Select “Storage”
  • Click “Refresh” in the upper right corner
  • After the refresh, the datastore should appear.

Objective 3.4.6 – Manage VMFS Datastores


Objective 3.4.7 Group/Unmount/Delete Datastores



Objective 3.4.8 – Grow VMFS Volumes


Making LUN Decisions
You must plan how to set up storage for your ESX/ESXi systems before you format LUNs with VMFS datastores.
When you make your LUN decision, keep in mind the following considerations:
Each LUN should have the correct RAID level and storage characteristic for the applications running in virtual machines that use the LUN.
One LUN must contain only one VMFS datastore.
If multiple virtual machines access the same VMFS, use disk shares to prioritize virtual machines.
You might want fewer, larger LUNs for the following reasons:
More flexibility to create virtual machines without asking the storage administrator for more space.
More flexibility for resizing virtual disks, doing snapshots, and so on.
Fewer VMFS datastores to manage.
You might want more, smaller LUNs for the following reasons:
Less wasted storage space.
Different applications might need different RAID characteristics.
More flexibility, as the multipathing policy and disk shares are set per LUN.
Use of Microsoft Cluster Service requires that each cluster disk resource is in its own LUN.
Better performance because there is less contention for a single volume.
When the storage characterization for a virtual machine is not available, there is often no simple method to determine the number and size of LUNs to provision. You can experiment using either a predictive or adaptive scheme.
Multipathing and Path Failover
When transferring data between the host server and storage, the SAN uses a technique known as multipathing. Multipathing allows you to have more than one physical path from the ESX/ESXi host to a LUN on a storage system.
Generally, a single path from a host to a LUN consists of an HBA, switch ports, connecting cables, and the storage controller port. If any component of the path fails, the host selects another available path for I/O. The process of detecting a failed path and switching to another is called path failover.
ESX/ESXi and SAN Use Cases
You can perform a number of tasks when using ESX/ESXi with a SAN.
Using ESX/ESXi in conjunction with a SAN is effective for the following tasks:
Maintenance with zero downtime
When performing ESX/ESXi host or infrastructure maintenance, use VMware DRS or vMotion to migrate virtual machines to other servers. If shared storage is on the SAN, you can perform maintenance without interruptions to the users of the virtual machines.
Load balancing
Use vMotion or VMware DRS to migrate virtual machines to other hosts for load balancing. If shared storage is on a SAN, you can perform load balancing without interruption to the users of the virtual machines.
Storage consolidation and simplification of storage layout
If you are working with multiple hosts, and each host is running multiple virtual machines, the local storage on the hosts might no longer be sufficient, and external storage is required. Choosing a SAN for external storage results in a simpler system architecture along with other benefits.
Start by reserving a large LUN and then allocate portions to virtual machines as needed. LUN reservation and creation from the storage device needs to happen only once.
Disaster recovery
Having all data stored on a SAN facilitates the remote storage of data backups. You can restart virtual machines on remote ESX/ESXi hosts for recovery if one site is compromised.
Simplified array migrations and storage upgrades
When you purchase new storage systems or arrays, use storage vMotion to perform live automated migration of virtual machine disk files from existing storage to their new destination without interruptions to the users of the virtual machines.
