
01.05.2014
I've been doing a lot of work with Openfiler, a well-regarded open-source storage appliance software package. As I run across configuration issues not covered in their manual, and figure out some of the advanced features, I'll post notes here.
I'm also going to do some benchmarks with Openfiler running head to head against dedicated iSCSI arrays. Properly done, using shared storage can be cheaper, faster, and better than using local direct-attached storage.
Filers often offer advanced snapshot capabilities, which make it easy to back up and restore files. Under the hood, most filers are just a server running a stripped-down operating system with a lot of disks. So look at what you have already and find a server with plenty of disks (if you already have disks, why buy more?). If you're not sure whether your server has a dedicated RAID controller, reboot it and watch the prompts during boot. Even if you didn't buy the manual, the Openfiler people have posted excerpts of it for both the text-based installation and the graphical installation, complete with screenshots, so there's no point in duplicating that effort here. One warning: partition manually and leave unallocated space for client storage. If you skip this and use automatic partitioning (the default), you will wind up with an Openfiler system where you are UNABLE TO USE THE GUI TO MAKE VOLUME GROUPS.
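If you want to check from the console whether you actually left unpartitioned space, a quick sketch, assuming (hypothetically) that your system disk is /dev/sda:

    fdisk -l /dev/sda    # print the partition table; compare the last partition's end against the disk size to see what is left unpartitioned

Whatever space does not belong to any partition is what you can later hand to a volume group from the GUI.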
Web-based GUI operations should be done as the user named openfiler, which is created automatically. Probably the easiest way to change the password is to log in to the GUI as the user openfiler; the default password is password.
Do a default install of Openfiler 2.3 from media (I used the 32-bit version; I've seen the same behavior on 64-bit, too).
If you are stuck here, you can do a one-time fix to clear the error and get the LUN(s) to attach to the target (which will allow clients to use them again).
To permanently fix this on Openfiler 2.3, make three tweaks at the command line, logged in as root. After making these three changes, reboot the Openfiler, and your clients will see their targets again.
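As a minimal sketch of the one-time fix mentioned above, and assuming (this is my reading, not gospel) that the underlying problem is that the LVM volumes simply aren't active when the target daemon comes up:

    vgchange -ay                        # activate all LVM volume groups
    /etc/init.d/iscsi-target restart    # restart the iSCSI target service so it re-attaches the LUNs

If that clears the error for you, it points at the service start order as the thing the permanent tweaks need to fix.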
If you want to use Openfiler as an iSCSI target, you have to turn the iSCSI target server on. To recap, for my simplistic example, my Openfiler server has five 285 GB disks on a hardware RAID controller. So the first thing you need to do is add a partition for Openfiler to use to hold the client storage.
So the point is, you now have a partition on the physical disk in which to build a volume group for client use.
Now you have a volume group that will hold logical volumes: the actual "fake" drives, or LUNs, that the iSCSI target server will offer to clients. If you are making logical volumes for virtualization use, you will be storing virtual machines in these.
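If you prefer the command line to the GUI, these steps map onto standard LVM commands. A sketch, where /dev/sda4, vg_storage, and lun0 are hypothetical names of my own choosing:

    pvcreate /dev/sda4                      # mark the partition as an LVM physical volume
    vgcreate vg_storage /dev/sda4           # create a volume group on top of it
    lvcreate -L 100G -n lun0 vg_storage     # carve out a 100 GB logical volume to serve as a LUN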
Note that, like most of us, you are probably using a software initiator (you don't have a fancy TCP-offload or hardware-initiator NIC that does some of the work for you), so iSCSI processing will cost CPU cycles on both ends. If this turns out to be a hassle, either on a client or on the Openfiler host, think about getting hardware initiators that do iSCSI packet processing on the NIC itself.
Even though the iscsi-target server is now reporting the new, larger size, clients may still not see it until they rescan. The usual issue you come across is that you think you've done all these steps, but you can't see the targets or get to a LUN.
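For example, on a Linux client running the standard open-iscsi initiator, you can re-discover and rescan like this (the target IP here is hypothetical):

    iscsiadm -m discovery -t sendtargets -p 192.168.10.5   # ask the Openfiler which targets it offers
    iscsiadm -m node -R                                    # rescan logged-in sessions so size changes show up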
On version 2.3 of Openfiler, after you get a target and some LUNs behind it, everything often goes kerflooey after an Openfiler reboot.
An aside for anyone running OCFS2 on this storage: the O2CB cluster service is a set of modules and in-memory file systems that are required to manage OCFS2 services and volumes. If the value of hangcheck_reboot is equal to or greater than 1, the hangcheck-timer module restarts the system; if the parameter is set to zero, the module will not restart the node.
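These hangcheck parameters are set when the module loads. A hedged example of a modprobe configuration line, with illustrative values of the kind commonly seen in OCFS2 guides (tune them for your own environment):

    # /etc/modprobe.conf (values are illustrative, not a recommendation)
    options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180 hangcheck_reboot=1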
When you have created an ESXi environment and want to work with features such as vMotion and High Availability, you will need shared storage in your environment. To get started, create a virtual machine (in ESXi or Workstation), attach the ISO file, and power on. The minimum disk requirement is 10 GB, so make sure to modify the hard disk size before you start the Openfiler installation. First install the operating system on a single hard disk, and then, once installation is done, add the hard disks that will contain your NFS and iSCSI partitions. For the rest of the setup process I have only placed images in this article and added explanations for those parts that require it.
During the setup you will initialize your empty drive and you will be able to automatically partition the disk for the operating system.
When you access the web-based UI for the first time, you will see a message that the certificate is untrusted. After you log in for the first time, change the administrator password for the virtual machine. The second task is to configure from which hosts or networks your Openfiler installation can be accessed. At this point you should add to your virtual machine the virtual hard disks that you want to use to store the NFS and iSCSI data. Access the Volumes tab and select Block Devices from the menu on the right to manage the partitions for your disks. Once the NFS volume is configured, we must create a share that we can make available to our ESXi servers.


As you can see in the image below, you must configure a Primary Group and configure ReadWrite access for NFS for your servers or for the entire subnet. Your NFS configuration on the Openfiler server is now done, so it's time to connect from the ESXi host to this new share.
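Behind the GUI, Openfiler maintains a standard NFS exports file. Roughly what the resulting entry looks like, where the path and subnet are hypothetical and the option flags depend on what you tick in the GUI:

    /mnt/vg_nfs/nfs_vol/share 192.168.10.0/255.255.255.0(rw,no_root_squash)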
The datastore name should be the same on all the ESXi servers in your cluster so that virtual machines can run on all your hosts, for example with vMotion and High Availability.
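If you'd rather script the mount on each host, classic ESX/ESXi 4.x includes the esxcfg-nas utility. A sketch, where the server address, export path, and datastore label are my own assumptions:

    esxcfg-nas -a -o 192.168.10.5 -s /mnt/vg_nfs/nfs_vol/share NFS-datastore01   # add the NFS datastore
    esxcfg-nas -l                                                                # list NFS datastores to verify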
Once you have added the adapter, it will show up in the list of storage controllers as vmhba33 or higher.
After adding the target, you will be warned that you must rescan the adapter to be able to see the devices that are available. In the configuration you have set up there is no additional security, so when you connect other machines with iSCSI software to your network, they could access the storage target too. There are many low-cost iSCSI storage solutions available, such as Openfiler, for which I have written a configuration article here.
Select the NIC(s) you want to use for your iSCSI VMkernel connection (ensure that it has network connectivity through to your iSCSI target, ideally on its own separate network segment; this is strongly advised). Type in a meaningful ‘Network Label’; none of the other options are required, so leave them unchecked. As outlined in the screenshot below, select the ‘iSCSI Software Adapter’ and then click on ‘Properties’.
Once you’ve added in the details of your iSCSI target, you’ll see its IP address appear in the ‘Send Targets’ list.
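The same configuration can be done from the ESX 4.x service console. A hedged sketch, assuming your software iSCSI adapter came up as vmhba33 and the target sits at 192.168.10.5 (both assumptions):

    esxcfg-swiscsi -e                            # enable the software iSCSI initiator
    vmkiscsi-tool -D -a 192.168.10.5 vmhba33     # add a Send Targets (dynamic discovery) address
    esxcfg-rescan vmhba33                        # rescan the adapter so the new LUNs appear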
I am trying to implement a SQL Server 2008 R2 failover cluster using vSphere 4.1 and an EMC CX4-240 SAN. My name is Simon Seagrave, and I am a London (UK) based Senior Technology Consultant and Technical Marketing Manager working for EMC.
In this section I will be covering different types of storage (see below), and I will also be discussing the Virtual Machine File System (VMFS) and how to manage it. There are a number of ways to attach storage to an ESXi server; I will first cover local storage and HBA controllers.
The second disk is a 500 GB Seagate disk attached to vmhba1; again, it is a block SCSI type disk, and this is where I keep some of my virtual machines.
By selecting the Paths tab you can expand to see how the disk is connected; if this device were multipathed, you would see all the paths listed here, and as you can see we have one active path. Here is an example of a multipathed server I have at work: in the targets line you can see I have 17 devices but 34 paths. In this example I have a Brocade 825 HBA controller connected to an IBM XIV SAN; also notice the World Wide Names (WWNs), which are associated with fibre disks. If you have a server in which you can hot-swap hard disks, you can rescan that particular controller to pick up new or replaced drives: just right-click on the controller and select Rescan, as seen in the image below. I have already covered some of the SAN disk usage in the section above; when you install a fibre HBA controller, you should be able to see it in the storage adapter section of the ESXi server, and the image below shows an ESXi server with a Brocade HBA installed. The last piece of information is the naming convention on the controllers: the ESXi server uses a special syntax so that the VMkernel can find the storage it is using, and the same syntax is used for local storage, SAN, and iSCSI systems.
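As an example of that naming convention, a vSphere 4.x runtime name breaks down like this:

    vmhba1:C0:T0:L0
    # vmhba1 = the storage adapter (HBA)
    # C0     = the channel
    # T0     = the target on that channel
    # L0     = the LUN number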
Here is a multipathing screenshot of one of the disks; I will be covering multipathing in the next section and showing you how to set it up and configure it in my test environment. You can get more details about your LUNs in the storage adapter panel, as below. Here you can see three attached LUNs: two 390 GB LUNs and one 341 GB LUN. Also notice that I have 3 devices but 6 paths, which means I am using multipathing.
All of this can also be viewed from the Events tab; you can see me disconnecting and then reconnecting the network cable, and the events can be captured by third-party software like BMC Patrol and then monitored with the rest of your environment. VMFS is VMware's own proprietary filesystem; it supports directories and a maximum of 30,720 files in a VMFS volume, with up to 256 VMs in a volume.
Whenever an ESXi server powers on a virtual machine, a file-level lock is placed on its files; the ESXi server will periodically go and confirm that the VM is still functioning (still running the virtual machine and locking the files), which is known as updating its heartbeat information. VMFS uses a distributed journal recording all the changes to the VMFS; if a crash does occur, it uses the journal to replay the changes and thus does not carry out a full fsck check, which can prove much quicker, especially on large volumes.
You should always format new volumes via the vSphere client; this keeps partitions aligned so that I/O does not cross track boundaries, a practice known as disk alignment. One big word of warning: you have huge potential to screw things up pretty badly in VMware, so make note of any warnings and make sure that your data is secured (backed up) before making important changes; there is no rollback feature in VMware.
You can use the entire disk or make the VMFS volume smaller than the LUN size if you wish, but it is not easy to get that free space back, and that's if anyone remembers where it was. You will be required to set a datastore name during the format process. VMFS volumes are known by four values: volume label, datastore label, vmhba syntax, and UUID.
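You can see the label-to-UUID mapping yourself from the ESXi console. An illustration (the UUID and dates below are made up):

    ~ # ls -l /vmfs/volumes
    drwxr-xr-t    1 root  root  1400 Jan  5 2014 4a6f1b2c-18d2a9f0-7c44-001b21857b36
    lrwxr-xr-x    1 root  root    35 Jan  5 2014 datastore1 -> 4a6f1b2c-18d2a9f0-7c44-001b21857b36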
There are a number of properties that you can change on an already-created volume: increase its size, rename it, or manage the paths (as seen in the iSCSI multipathing section). Just right-click on the volume and select Properties and you should see the screen below; it also details various information on the volume, such as the maximum file size and the file system version number. There are a number of places where you can get details on the storage within your ESXi server, and this is where vCenter has advantages over the vSphere client. Another nice view in vCenter is the folder (in my case the Production folder) -> Storage View screen; here you can look at either the report or the maps. Again, the map has many options, and it looks nice in your documentation.
The rest of the diagrams I have used in this section come from the vSphere client software; vCenter gives you additional views and the added bonus of seeing all the ESXi server views in one place, where otherwise you would have to log in to each ESXi server.
Back to Openfiler: kernel patches are like this, and there's at least one for Openfiler 2.3 that needs a reboot to take effect as of this writing (Nov 2009). The iscsi-target service will not properly restart after a reboot, period, whether or not you are using XenServer as an iSCSI client against the Openfiler target. Openfiler is a good choice for setting up a storage appliance that provides shared storage over NFS or iSCSI. There is also a pre-built appliance available for download, but it is not always updated to the latest version, and I have had mixed results in the past with importing it into different environments.
But I have made a short installation guide below with the most important steps highlighted.
Other steps such as selecting your keyboard or time zone are things you can figure out yourself. If you want to configure a fixed IP address then you can do that later in the web based management UI or you can do it now as part of the installation process.


Be aware that this password will not be the password that you will use to administer the appliance from the web-based UI.
Once it is rebooted, you will see on the console the URL you can use to manage the appliance. Since the virtual machine was installed with a self-signed certificate for HTTPS, the browser's certificate warning is normal behavior, and you can proceed anyway.
You can provide individual IP addresses for your servers, or you can specify an entire network, as I have done in the image below. You can add the extra hard disks while the virtual machine is powered on (on ESXi, not on Workstation), and Openfiler will detect them automatically. From this point onwards I will first create the entire NFS configuration, and when that is done I will create the iSCSI disks and configuration. Specify a name for the volume, a description, and the amount of disk space from the partition to use. You can perform this from the vSphere Client (Configuration tab - Storage - Add Storage) or from the vSphere Web Client (select your server, Related Objects tab - Datastores, and click the create-a-new-datastore icon).
From the Openfiler web-based management interface, select the Volumes tab, and from the menu on the right select Volume Groups.
If the volume group you have just created is not selected, select it from the drop-down list and click Change. The most important thing to watch out for here is to select the block type, so that the volume can be used as an iSCSI target disk. If you connect the iSCSI target on your other ESXi servers, they will detect the new VMFS datastore when you rescan the adapter. In my environment I purchased two local disk drives (500 GB) to use as local storage; there are many companies that have no requirement for an expensive SAN solution, and thus a server with a large amount of storage is purchased. When you open a vSphere client, select the ESXi server -> Configuration and then Storage Adapters, and you will be presented with a list of the storage controllers attached to your system. If you look closely enough you can see the disk type and model number, in case you have to replace the disk like for like; in my case it is an ST3500418AS Seagate disk. We will set up a multipath device in my iSCSI section later, along with the various options that come with multipathing.
VMware ESXi server also supports its own LUN masking, in case your SAN does not support this already.
You can see the WWN of the HBA controller and the attached LUNs; clicking on the Paths tab displays all the LUNs attached to this controller. The image below is my current iSCSI setup; as mentioned in my test environment section, I am using Openfiler as an iSCSI server.
If an ESXi server fails to update these dynamic locks or its heartbeat region of the VMFS file system, then the virtual machine files are forcibly unlocked. The vSphere client automatically ensures that disk alignment takes place when you format a VMFS volume in the GUI.
Block sizes do not greatly affect performance, but they do affect maximum virtual disk (VMDK) file sizes. It is possible to create multiple VMFS volumes on a single LUN, but this can affect performance because it can impose a LUN-wide SCSI reservation that temporarily blocks access to multiple VMFS volumes when only one VMFS volume might need locking, so try to keep one VMFS volume per LUN. Volume labels need to be unique to ESXi servers, whereas datastore labels need to be unique to vCenter. In older versions of ESX you had to increase VMFS volumes using extents; it is now recommended that you use the Increase button on the Properties window (see below image). You can double-check whether you have any free space by looking at the Extent Device panel in the right-hand window: the device capacity should be greater than the primary partition size. Also make a note of the disk name in the left panel, as this is the list you will be presented with when increasing the volume size.
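For reference, the VMFS-3 block sizes and the maximum virtual disk file size each one allows:

    1 MB block size -> 256 GB maximum file size
    2 MB block size -> 512 GB maximum file size
    4 MB block size -> 1 TB maximum file size
    8 MB block size -> 2 TB (minus 512 bytes) maximum file size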
One of the nice features of vCenter is the map diagrams that you can view; there are many options you can change, as you can see in the below image, so that you can display just the information you need. However, the diagram can become messy if you have lots of ESXi servers, storage, etc. There is no need to partition the hard disk fully, since you will add empty hard disks later that will hold your NFS and iSCSI data. For the name I have used NFS so that it's easy to identify what it's going to be used for later.
I find it most convenient to use a separate virtual disk, with only one partition and one volume, for each type of access. If you extend a LUN beyond 2 TB after connecting it to ESX, you will have to restore from tape or blow away the LUN, as it will be unavailable. This storage can be used to create virtual machines; please note it does not allow you to use features such as vMotion, as you require shared storage for those, but it is an ideal solution for a small company wishing to reduce the number of servers within their environment. As I mentioned above, you can add additional LUNs by simply rescanning the HBA controller (providing LUN masking is set up correctly on the SAN). VMFS has been designed to work with very large files such as virtual disks; it also fully supports multiple access, which means more than one ESXi server can access the same LUN formatted with VMFS without fear of corruption. SCSI reservations are used to perform the file and LUN locking.
This is pretty critical for features like VMware HA: if locking were not dynamic, the locks would remain in place when an ESXi server failed, and HA would not work.
Note, however, that if you are going to be using RDM files to access raw or native LUNs, disk alignment should be done within the guest operating system.
The default file system type used here is XFS; you could also select ext4 or another type, but since we are accessing the data via NFS it doesn't really matter.
You can clearly read on the share's configuration page that it will only be enabled when you set up a primary group for the share. An ESXi server can handle up to 256 LUNs per controller, from LUN 0 to LUN 255; on some SANs, LUN 0 is a management LUN and should not be used unless you want to run management software within your virtual machine.
VMware improved VMFS in version 3 to significantly reduce the number and frequency of these reservations; remember, these locks are dynamic, not fixed or static.


