Shared storage for clusters with vRealize Orchestrator – Part 1

Introduction

I was recently working on a customer requirement to provision shared storage on Dell VNX eNAS for their middleware cluster solution using vRealize Orchestrator. The business requirement was to provision the storage and pass the share information to the Chef run for the application cookbooks. The storage should be decommissioned along with the VM. I will cover the high-level framework I used, which can be applied to any shared storage provisioning. Since this is a lengthy topic, I will cover it across multiple posts. In this post, I will explain the core storage operations. The next post will cover the business logic and the vRA side of the solution.

Solution Design

Dell VNX provides a plugin for its storage solution. The plugin provides all the API calls you would need. I used some basic building blocks to stitch this solution together. At a high level:

  • I used the vRO plugin to provide the foundation for the solution.
  • The plugin ships some OOB configuration workflows to add the adapters.
  • VNX connection objects were stored in the config element.
  • I stored some metadata about the storage connections in the resource element. The resource element is used to provide the mapping between storage adapter, filesystem name, and data center location.
  • I used my custom actions and workflows to execute the storage calls.
  • The solution in vRO gets triggered by a vRA subscription.

For this article, I will assume two DCs, Sydney and Brisbane, with each DC having a production and a non-production setup.

Plugin

Plugin installation is a standard process using the vRO Control Center, so I am not going to cover the details here.

The plugin provides a number of OOB workflows and actions for storage configuration and consumption. Surprisingly, the OOB actions and workflows were substandard; most of the workflows have errors. However, you can use the “Add VNX File Adapter” workflow in the VNX library for the file adapter configuration. Once added, the file adapters and their related storage elements can be viewed under “Dell MS VNX Plugin”.

Config Elements

Once the adapters are configured, you can add these adapters as “VNX:VNXFileAdapter” objects in the config elements. This will make our lives easier once we start writing code to create and modify the shares.

Resource Element

When writing the workflow, we need to access the filesystem name associated with each file adapter. Since this is static data, I am going to store it as JSON for easy access. Below is my sample JSON.

{
    "Sydney":{
        "Non-Production":{
            "vdm":"vdm_sydney_nonprod",
            "fileSystem":"np_cluster"
        },
        "Production":{
            "vdm":"vdm_sydney_prod",
            "fileSystem":"pr_cluster"
        }
    },
    "Brisbane":{
        "Non-Production":{
            "vdm":"vdm_brisbane_nonprod",
            "fileSystem":"np_cluster"
        },
        "Production":{
            "vdm":"vdm_brisbane_prod",
            "fileSystem":"pr_cluster"
        }
    }
}
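For illustration, here is a sketch of how this resource element could be consumed: parse the JSON and index it by data center and environment. The helper name `getStorageMapping` is my own, not a plugin API, and the JSON is inlined here rather than read from the resource element.

```javascript
// Minimal sketch: look up the vdm and filesystem for a DC/environment pair.
// In vRO the JSON would be read from the resource element (e.g. via
// Server.getResourceElement and its content); it is inlined here for
// illustration only.
var storageMap = {
    "Sydney": {
        "Non-Production": { "vdm": "vdm_sydney_nonprod", "fileSystem": "np_cluster" },
        "Production":     { "vdm": "vdm_sydney_prod",    "fileSystem": "pr_cluster" }
    },
    "Brisbane": {
        "Non-Production": { "vdm": "vdm_brisbane_nonprod", "fileSystem": "np_cluster" },
        "Production":     { "vdm": "vdm_brisbane_prod",    "fileSystem": "pr_cluster" }
    }
};

function getStorageMapping(dc, environment) {
    //Fail fast if the mapping is missing rather than passing undefined downstream
    if (!storageMap[dc] || !storageMap[dc][environment]) {
        throw "No storage mapping found for " + dc + "/" + environment;
    }
    return storageMap[dc][environment];
}

var mapping = getStorageMapping("Sydney", "Production");
// mapping.vdm is "vdm_sydney_prod", mapping.fileSystem is "pr_cluster"
```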

vRO Actions & Workflows

vRO actions implement the heart of this solution. The customer is using vRA Developer Tools (aka the IaC Tool Chain) as an Infrastructure as Code solution. When you use the developer tools, it is recommended that code be written in actions where possible, as it is easier to maintain the code base with the toolchain.

The art of writing actions as objects

Let’s quickly cover the 101 of writing actions as service objects before we dive into the code for the solution.

This is one of the most efficient ways of writing code in vRO. It was originally proposed by one of our peers, Marwin Ma (who is also an author on our blog). It’s a detailed topic and might need a post of its own.

In this approach, a base service action is created as one big function that acts as an object, and we write different modules within this function that can be called as methods on that object. Let’s take an example to simplify.

function dcService() {
    //#region VALIDATION
    this.validateInit = function() {
        if (!dc) { throw _CustomException("Unable to retrieve data center!!!"); }
    };
    //#endregion

    //Get the DC name
    this.dcName = function() {
        return dc.name;
    };

    //#region INIT
    var _CustomException = System.getModule("au.fluffyclouds.service").CustomException(arguments.callee.name); //LOAD MODULE
    var dc = System.getModule("au.fluffyclouds.service").getdc(); //LOAD MODULE
    this.validateInit();
    //#endregion
}
return dcService;

The above example is my data center service with just one module to get the DC name. Now I can call this action as an object in a workflow and get the DC name via a method. Let’s look at the sample code below.

var dcSvc = System.getModule("au.fluffyclouds.service").dcService();
var dcSvcObj = new dcSvc();
var dcName = dcSvcObj.dcName();
System.log(dcName);

The above code is just a hypothetical example, but this pattern can be used to consolidate code and classify it into services.
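Stripped of the vRO module calls, the pattern is plain JavaScript closures: private state lives in closure variables, public methods are attached to `this`, and validation runs at construction time. The following is my own minimal sketch of that idea (no vRO dependencies, and the names are illustrative):

```javascript
// Minimal sketch of the "action as a service object" pattern in plain
// JavaScript. The dc object is passed in here instead of being loaded
// from a vRO module, purely so the example is self-contained.
function dcService(dc) {
    //#region VALIDATION
    this.validateInit = function() {
        if (!dc) { throw "Unable to retrieve data center!!!"; }
    };
    //#endregion

    //Get the DC name
    this.dcName = function() {
        return dc.name;
    };

    //#region INIT
    this.validateInit(); //runs once, at construction time
    //#endregion
}

// Usage: instantiate the service, then call its methods.
var dcSvcObj = new dcService({ name: "Sydney" });
var dcName = dcSvcObj.dcName(); // "Sydney"
```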

Back on storage track

Let’s get back to our original topic. Similar to the above example, I wrote a NAS service action. This action has the following methods.

    this.isTreeQuotaEmpty = function(){
    }
    this.getVNXFileAdapter = function(){
    } 
    this.getVNXFileSystemName = function(){
    }
    this.getNFSExportName = function(){
    }
    this.createQtree = function(){
    }
    this.createNFSExport = function(hosts){
    }
    this.deleteQtree = function(){
    }
    this.deleteNFSExport = function(){
    } 
    this.isQtreeDuplicate = function(){
    }
    this.isNFSExportDuplicate = function(){
    }

Create Tree Quota

Before creating the NFS export, you need to create a tree quota. Use the API call

createTreeQuotaExt(String fsName, String path, Number filesSoftLimit, Number filesHardLimit, Number spaceSoftLimit, Number spaceHardLimit)

Unfortunately, the API returns void, so I return a boolean value indicating whether the operation was successful.

this.createQtree = function(shareName, shareSize) {
    var qtreeCreated = false;
    try {
        System.log("Creating Qtree for " + shareName + " with size: " + shareSize);
        //Create a qtree
        vnxFileAdapter.createTreeQuotaExt(vnxFileSystemName, shareName, 0, 0, 0, shareSize);

        //Verify the Qtree has been created
        var qTreesFound = this.isQtreeDuplicate();

        if (qTreesFound) {
            System.log("Qtree created successfully");
            qtreeCreated = true;
        } else {
            System.log("Errors creating Qtree");
        }
        return qtreeCreated;
    } catch (e) {
        System.warn("Qtree creation failed... deleting any orphan qtrees created during the process...");
        this.deleteQtree();
        throw ("Error Creating Qtree: " + e);
    }
};
vnxFileAdapter = //<Get File Adapter from the config element created in the "Config Elements" section>
vnxFileSystemName = //<Get File System Name from the resource element listed in the "Resource Element" section>

I use some additional methods in my code, like “isQtreeDuplicate”. It uses the below API to get a list of all the qtrees and matches them against the share name specified by the user, returning a boolean for a match. In case the operation fails, I delete any orphan qtrees created during the process.

vnxFileAdapter.listTreeQuotasByFilesystemId(vnxFileSystemName);
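As an illustration, the duplicate check can be sketched as a name match over the returned quota list. The adapter below is mocked so the sketch is self-contained; the `path` property on each returned quota is an assumption about the plugin’s return shape, not a documented contract.

```javascript
// Sketch of a duplicate check: list all tree quotas on the filesystem and
// look for one whose path matches the requested share name. In vRO the
// adapter would be the VNX:VNXFileAdapter object; here it is mocked, and
// the `path` property is an assumed shape for the listed quotas.
function isQtreeDuplicate(adapter, fileSystemName, shareName) {
    var quotas = adapter.listTreeQuotasByFilesystemId(fileSystemName) || [];
    for (var i = 0; i < quotas.length; i++) {
        if (quotas[i].path === shareName) {
            return true; //an existing qtree matches the requested share name
        }
    }
    return false;
}

//Mocked adapter for illustration only.
var mockAdapter = {
    listTreeQuotasByFilesystemId: function(fsName) {
        return [{ path: "app_cluster_01" }, { path: "app_cluster_02" }];
    }
};

isQtreeDuplicate(mockAdapter, "pr_cluster", "app_cluster_01"); // true
isQtreeDuplicate(mockAdapter, "pr_cluster", "app_cluster_99"); // false
```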

Create NFS Export

Once the tree quota has been created, the following API call can be used to create the NFS export.

createNfsExport(String mover, String[] roHosts, String[] rwHosts, String[] rootHosts, String[] accessHosts, String path)

I added some additional validation and return in my method.

this.createNFSExport = function(hosts) {
    var isNFSCreated = false;
    try {
        vnxFileAdapter.createNfsExport(vnxFileAdapter.name, null, null, hosts, hosts, vnxFileSystemName + "/" + shareName);

        //Verify the export has been created
        var isNFSFound = this.isNFSExportDuplicate();
        if (isNFSFound) {
            System.log("NFS exports created successfully");
            isNFSCreated = true;
        } else {
            System.log("Errors creating NFS exports");
        }
        return isNFSCreated;
    } catch (ex) {
        this.deleteNFSExport();
        throw ("Error Creating NFS Export: " + ex);
    }
};
var hosts = //array of strings with both cluster node IPs, e.g. ["10.0.0.1","10.0.0.2"]

I followed a similar pattern as before.

  • Check if the NFS export was created by running “this.isNFSExportDuplicate()”.
  • If the operation fails, clean up any orphan exports created during the process using “this.deleteNFSExport()”.

I used the below API calls for the respective operations.

vnxFileAdapter.listNfsExports(vnxFileAdapter.name);
vnxFileAdapter.deleteNfsExport(vnxFileAdapter.name,nfsLogPath);
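To illustrate the cleanup path, the two calls above can be combined into a guarded delete: list the exports on the mover, find the one matching our share path, and remove it. The helper name `deleteNFSExportIfPresent` is my own, the adapter is mocked so the sketch is self-contained, and the `path` property on the listed exports is an assumption about the plugin’s return shape.

```javascript
// Sketch of the cleanup path used when NFS export creation fails: find
// the export matching our share path and delete it. In vRO the adapter
// would be the VNX:VNXFileAdapter object; here it is mocked, and the
// `path` property on each listed export is an assumed shape.
function deleteNFSExportIfPresent(adapter, moverName, exportPath) {
    var nfsExports = adapter.listNfsExports(moverName) || [];
    for (var i = 0; i < nfsExports.length; i++) {
        if (nfsExports[i].path === exportPath) {
            adapter.deleteNfsExport(moverName, exportPath);
            return true;  //export found and deleted
        }
    }
    return false;  //nothing to clean up
}

//Mocked adapter for illustration only.
var mockAdapter = {
    _exports: [{ path: "pr_cluster/app_cluster_01" }],
    listNfsExports: function(mover) { return this._exports; },
    deleteNfsExport: function(mover, path) {
        this._exports = this._exports.filter(function(e) { return e.path !== path; });
    }
};

deleteNFSExportIfPresent(mockAdapter, "vdm_sydney_prod", "pr_cluster/app_cluster_01"); // true
```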

Now that the basic service constructs are created, we will discuss the business logic of this solution in the next post.

stay tuned….
