EDIT 1: All XtremIO bricks ship with both 8Gb FC and 10Gb iSCSI. You do not have to specify that when ordering.
EDIT 2: EMC has asked me to remove the listing of default passwords.
EDIT 3: The XBricks ship with XtremIO Software on them, but it’s still highly recommended to upgrade to the latest GA code during installation.
Often when customers buy a new storage array they don’t get to see the entire installation process. Usually that’s not a big deal: it’s a bit tricky, not that exciting, and mostly made up of tasks that are performed once and never needed again. But when installing our XtremIO brick in the Varrow lab I thought it might be fun to walk through the installation of that system. If you’re like me, you think installing new gear is fun, so hopefully this is interesting to see.
Note that I’m also doing a training course for PluralSight on XtremIO that shows this process in more detail with lab videos. So look for that coming soon.
If you are unfamiliar with XtremIO or need a refresher, you can check out the post I did here when it originally launched late last year.
The physical install is very easy. Below is the layout for a single XBrick.
As you can see, from the bottom up it goes Controller 1, disk enclosure, Controller 2, and then two battery backup units. When you install this you need to confirm your cabling. It’s well documented and simple… just confirm it. I won’t go through it in detail since the documentation lays it out as simply as wiring up a Blu-ray player. The Infiniband ports go to the other controller in a single-brick install, or to the IB switches in a multi-brick install. The SAS connections go to the DAE.
Each brick has both 8Gb FC and 10Gb iSCSI ports; you don’t have to choose between them when ordering. Power from each controller goes to the battery backup units.
It’s also highly recommended you lay out the physical gear as in the diagram above. You do name things based on where they are, such as Controller 1 being on the bottom. Obviously Controller 1 doesn’t know if it’s on the bottom but it will help you remember which one is which later.
Also note that there is an option for a physical XMS (XtremIO Management Server). This is an additional 1U server that runs the management UI for all of the XBricks in your cluster, but most people will just use the virtual XMS. To me the virtual one is better since you can run it as a VM and protect it like you would any other VM with tools such as VMware HA. If the XMS is down for any reason the cluster will continue to service hosts; you just can’t make any changes until it is back up. If you completely lose the XMS and need to reinstall, it’s not a big deal: you just point the new XMS at your existing cluster and it reads in all the configuration information.
Here is a two-XBrick configuration.
Notice that the main difference is the addition of the two Infiniband switches in the middle. When you have a single XBrick the two controllers are cabled to each other for the backplane, but when you go to multiple XBricks you connect them to the Infiniband switches.
Before you begin you’ll need to gather some information, namely IP addresses. You’ll need 5 total IPs for one brick:
You’ll also want to get the default logins for the different steps. While the install documentation for XtremIO tells you which user account to log in as in each step, it does not tell you the password. EMC has this documented in KB 000172817 on support.emc.com. The title of the article is “XtremIO: Default System / Cluster Access Credentials for XtremIO”. If you are not a partner you may not have access to that article, so I’ll give you the ones you need for the install here. Note that these are current as of GA code 2.2.3-17:
REMOVED AT THE REQUEST OF EMC
Let’s get going! Confirm that your cabling is correct and everything is powered on.
This can be a little odd so follow along. On the back of each controller is a Tech interface. Think of it like a console interface but over SSH. The Tech interface has a hardcoded IP address of 169.254.254.1/20 (255.255.240.0).
Every controller has the same IP on the Tech interface! Don’t connect them to a switch or you’ll get duplicate IP problems!
The idea is that you will connect your notebook directly to the Tech interface on each controller, one at a time. These interfaces do not need to be connected once the initial configuration is done. If you’re doing this in a lab, like me, you can connect both to a switch but only activate one port at a time.
Connect your notebook to the Tech interface on Controller 1, the one on the bottom of the stack. Assign the IP 169.254.254.2/20 to your NIC, SSH to 169.254.254.1, and log in as xinstall.
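On a Linux notebook, that connection sketch looks like the following. The IPs come from the Tech interface scheme described above; the NIC name is an assumption, so substitute your own:

```shell
# Hypothetical NIC name -- substitute your notebook's actual interface.
NIC="eth0"
TECH_IP="169.254.254.1"    # hardcoded on every controller's Tech port
MY_IP="169.254.254.2/20"   # a free address on the same 169.254.240.0/20 subnet

# Put your NIC on the Tech subnet, then SSH in as xinstall.
sudo ip addr add "$MY_IP" dev "$NIC"
ssh "xinstall@$TECH_IP"
```

Remember to remove the temporary address (or deactivate the port) before moving on to the next controller, since they all answer on the same Tech IP.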
You’ll be presented with the menu shown in the screenshot. We’ll perform the base configuration here for each controller. Select Option 2 – Configuration.
You’ll be prompted for the following information:
Once the script configures the controller, I suggest you run Option 3 and then Option 4 to confirm the configuration is correct. You’ll also see the configuration script assign some IPs to the interfaces ib0 and ib1; these are the back-end Infiniband interfaces and their IPs are assigned automatically. Go ahead and Exit.
Now go and do the same thing on the second controller in the brick, using the same cluster name but a different controller ID and IP information.
At this point your two controllers (or more, if you have more bricks) have their basic configuration and should be talking to each other. The next step is to install the XMS.
As I said earlier, the XtremIO Management Server (XMS) is the management front-end for your XtremIO cluster. You have one of these per cluster, not per brick. You have the option to order a physical XMS, but most will prefer the virtual one. The configuration is basically the same. The only difference is that with the virtual XMS we do the initial configuration using the VM’s console in vCenter. If you have the physical XMS you’ll connect to the appliance’s Tech port just like we did in the controller config above. Otherwise they are the same… in fact, if you look at the XMS VM you’ll see there are two NICs but only one is connected. The disconnected NIC is actually the Tech interface, which isn’t used by the VM. So you can see that it’s the same software image for both.
The Physical XMS Tech interface has the same IP scheme as the controllers: 169.254.254.1/20
The XMS is not in the data path at all. It’s just there for visibility and management changes. If it goes down the cluster continues to operate just fine. You can even delete the XMS, deploy a new one, and connect it to your cluster. For this reason I like running the virtual XMS with resiliency features like VMware HA. By default it uses 2 vCPUs, or I’d recommend VMware FT… but not yet…
To deploy the XMS you first go to support.emc.com and download the latest version. This will be in OVA format. You then deploy that OVA just as you would any other. I’m not going to go through that here as that’s a common task. The XMS has the following configuration:
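Deploying the OVA can be done through the vSphere Client’s Deploy OVF Template wizard, or scripted with VMware’s ovftool. As a sketch, an ovftool deployment might look like this; the VM name, datastore, portgroup, OVA filename, and vCenter path are all hypothetical placeholders:

```shell
# Hypothetical example using VMware's ovftool -- every name and path below
# is a placeholder; substitute your own environment's values.
ovftool \
  --name=xms01 \
  --datastore=datastore1 \
  --network="Management Network" \
  XtremIO_XMS.ova \
  'vi://administrator@vcenter.lab.local/LabDC/host/LabCluster/'
```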
Once the XMS is deployed and booted open the VM console and login as xinstall and choose Option 2 for Configuration. You’ll be prompted for the following information:
When finished display the configuration and confirm that it is correct and then Exit.
The XtremIO controllers don’t actually ship with the XtremIO software on them. They only ship with a basic Linux OS installed, so you have to deploy the actual XtremIO OS to them. Before you can do that you need to go to support.emc.com and download the latest version. Once you’ve downloaded the file you need to copy it to the XMS. You can do this using whatever SCP tool you prefer. You just need to copy it over and put it in the /var/lib/xms/images/ directory on the XMS. I normally use the root account for authentication.
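For example, copying the bundle with plain scp might look like this. The filename matches the GA code version mentioned earlier; the XMS IP is a hypothetical placeholder:

```shell
# Copy the downloaded software bundle into the XMS image directory.
# 10.0.0.50 is a hypothetical XMS management IP -- use your own.
scp upgrade-to-2.2.3-17.tgz root@10.0.0.50:/var/lib/xms/images/
```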
After you copy the XtremIO Software file to the XMS you’ll need to SSH to the XMS and login as xinstall. From here you want to choose Option 6, Fresh Install.
You’ll be prompted for some information:
It may seem odd that the filename for the XtremIO software is upgrade-to-<version>.tgz, but you use the same file for both new installs and upgrades. You aren’t doing anything wrong.
Once you give it the correct information the script does a number of things; you can see a lot of it in the screenshot above. It will go out and confirm communication to all expected bricks and controllers. If you tell it to expect 2 bricks and it only finds 2 controllers instead of 4, it will error out; if that happens, confirm everything is cabled correctly. A little bit of advice: I’ve found that I can get through this process so fast that I reach this point before my controllers have had a chance to establish communication. Therefore I’ll often wait 10 minutes between finishing the controller configuration and this step to give them time.
Assuming everything works fine it’ll install the software to all controllers and reboot them. When it is done choose Exit.
Almost done! The last major step is to create your XtremIO cluster, meaning bringing all of your bricks and controllers together. To do this, SSH to the XMS and log in as xmsadmin. Oddly enough, you’ll next be presented with a Username: prompt; log in to that as tech. This is the XMS management shell, where you can execute CLI commands.
Once at the CLI enter the command:
create-cluster expected-number-of-bricks=<i> sc-mgr-host="<j>" cluster-name="<k>"
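For example, for a single-brick cluster the command might look like this; the IP and cluster name are hypothetical placeholders, so substitute your own values:

```shell
# Hypothetical values: one brick, a management IP, and a cluster name.
create-cluster expected-number-of-bricks=1 sc-mgr-host="10.0.0.51" cluster-name="varrow-xio01"
```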
This process may take some time to complete…10 or 15 minutes. Just watch it and confirm there are no errors.
Once the process is done I like to run a show-clusters-info command to confirm the configuration. At this point you are basically done. You can point a web browser to the IP address of the XMS and login as admin to make sure the XMS is monitoring and managing your cluster.
There are a few more minor steps such as configuring call home support but at this point you have a working, functional, super-fast all flash array.
While it might seem like there are a lot of steps here there really aren’t. You’re just standing up your controllers, management station, and tying it all together. Now that I’ve done a few of these I’ve found I can rebuild our XtremIO in the lab in under 45 minutes, start to finish. Now that doesn’t include time for racking the gear, cabling it, configuring the network, or connecting hosts but it shows that the array itself is very easy to stand up.
Soon I’ll be posting more articles and videos on using and managing XtremIO. It’s amazingly simple.