Archive for the ‘Data Storage – Cloud’ Category

Storage as a Service – Clouds of Data

May 26, 2010

Storage as a Service (SaaS) – How in the world do you use it?

There is a very good reason why cloud storage has so much hype: it simply makes sense. It has an array of attractive use case models. It has a wide range of potential scope and purpose, making it as flexible as the meaning of the bits stored. But most importantly, it has a good business model that has attracted some major names into the market sector.

If you read the blog posts and articles, most will say that Cloud Storage will never be accepted due to the lack of security and accountability. The end result is that many CISOs and CIOs have decided that it is just too difficult to prove due diligence for compliance. As a result, they have not widely embraced the cloud model. While this is a real concern, it is not the whole story. As a matter of fact, most folks are actually using Cloud Storage within their environment; they just don’t recognize it as such. This article is intended to provide some insight into the use models of SaaS as well as some of the technical and business considerations that need to be made in moving to a SaaS environment.

Types of SaaS Clouds

It is commonly accepted that there are two types of clouds: public and private. It is the position of this architect that there are in reality three major types of clouds and a wide range of manifestations of them. There are reasons for this logic, and the following definitions will clarify why.

Public SaaS Clouds

Public clouds are clouds that are provided by open internet service providers. They are truly public in that they are equally available to anyone who is willing to put down a credit card number and post data to the repository. Examples of this are Google, Amazon & Storage Planet. While this is a popular model, as attested by its use, many are saying the honeymoon is fading amid issues of accountability, reports of lost data and a lack of assurances for the security and integrity of content.

Semi-Private SaaS Clouds

These are clouds that are more closed in that they usually require some sort of membership or prior business subscription. As a result, the service is typically less open to the general public. The definition of semi-private also covers a wide range of embodiments. Examples include network service providers such as cable and telco companies; slightly more closed, an educational cloud for higher-education institutions to store, post and share vast quantities of content; and, most closed of all, government usage, where for example a county provides a SaaS cloud service to the various agencies within its area of coverage.

Private SaaS Clouds

These are the truly private SaaS services that are totally owned and supported by a single organization. The environment is totally closed to the outside world, and access is typically controlled with the same level of diligence as corporate resource access. The usual requirements are that the user has secure credentials and that his or her department is billed for usage through some type of cost center.

As indicated earlier, these can occur in a variety of embodiments, and in reality there is no hard categorization between them. Rather, there is a continuum of characteristics that ranges from truly private to truly public.

While placing data up into a truly public cloud would cause most CISOs and CIOs to cringe, many are finding that semi-private and private clouds are entirely acceptable in dealing with issues of integrity, security and compliance. Concern about the security and integrity of content is one thing. Another, thornier issue is knowing exactly where your data is in the cloud. Is it in New York? California? Canada? Additionally, if the SaaS provider is doing due diligence in protecting your data then it is replicating that data to a secondary site. Where is that? India? As you can see, a totally public cloud service presents a large set of issues that prevent serious, large-scale use. Performance is also often a real issue. This is particularly the case for critical data or for system restores, when the disappointed systems administrator finds that it will be a day and a half before the system is back on line and operational. These are serious issues that are not easily addressable in a true public cloud environment. Semi-private and private clouds, on the other hand, can often answer these requirements and can provide fairly solid reporting about the security and location of posted content.

The important thing to realize is that it is not all or nothing. A single organization may use multiple clouds for various purposes, each with a different range of scope and usage. As an example, the figure below shows a single organization that has two private clouds, one of which is used exclusively by a single department and one of which spans the whole organization. Additionally, that same organization may have semi-private clouds that are used for B2B exchange of data in partnerships, channel relationships, etc. Then finally, the organization may have an e-Commerce site that provides a fairly open public cloud service for its customer and prospect communities.

Figure 1. Multiple tiered Clouds

If you really boil it down, you come to a series of tiered security environments that control what type of data gets posted, by whom and for what purpose. Other issues include data type and size as well as performance expectations. Again, in a Semi-private to private usage model these issues can effectively be addressed in a fashion that satisfies both user and provider. The less public the service, the more stringent the controls for access and data movement and the tighter the security boundaries with the outside world.

It is for this reason that I think truly public SaaS clouds have too much stacked against them to be taken as a serious tool for large off-site data repositories. Rather, I think that organizations and enterprises will more quickly embrace semi-private and private Cloud storage because they offer a more tractable environment in which to address the issues mentioned earlier.

There are also different levels of SaaS offerings. These can vary in complexity and offered value. As an example, a network drive service might be handy for storing extra data copies but might not be too handy as a tool for disaster recovery. As a result, most SaaS offerings can be broken into three major categories.

  • Low level – Simple Storage Target
    – Easy to implement
    – Low integration requirements
    – Simple network drive
  • Mid level – Enhanced Storage Target
    – VTL or D2D
    – iSCSI
    – Good secondary ‘off-site’ use model
  • High level – Hosted Disaster Recovery
    – VM failover
    – P2V Consistency Groups
    – Attractive to SMB sector

As one moves from one level to the next the need for more control and security becomes more important. As a result, the higher the level of SaaS offering the more private it needs to be in order to satisfy security and regulatory requirements.

The value of the first Point of Presence in SaaS

As traffic leaves a particular organization or enterprise it typically enters a private WAN and at some point crosses a boundary to the public Internet. Often these networks are depicted as clouds. We of course realize that in reality there is a topology of networking elements that handles the various issues of data movement. These devices are often switches or routers that operate at L2 or L3, and each imposes a certain amount of latency on the traffic as it moves from one point to another. As a result, the latency profile for accessing data in a truly public SaaS becomes longer and less predictable as the variables increase. The figure below illustrates this effect. As data traverses the Internet it intermixes with other data flows at the various points of presence where these network elements route and forward data.

Figure 2. Various ‘points of presence’ for SaaS

In a semi-private or a private cloud offering, the situation is much more controlled. In the case of a network provider, they are the very first point of presence, or ‘hop’, that their customer’s traffic crosses. It only makes sense that hosting a SaaS service at that POP will offer significantly better and more controlled latency, and as a result far better throughput, than will a public cloud service somewhere out on the network. Also consider that the bandwidth of the connection to that first POP will be much higher than the average aggregate bandwidth that would be realized to a public storage provider on the Internet. If we move to a private cloud environment, such as one hosted by a university as a billed tuition service for its student population, very high bandwidth can be realized with no WAN technologies involved. Obviously, the end-to-end latency in this type of scenario will be minimal when compared to pushing the data across the public Internet. This, in addition to the security and control issues mentioned above, will in the opinion of the author result in dramatic growth in semi-private and private SaaS.
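As a rough illustration of why that first POP matters, the sketch below (Python, with entirely hypothetical hop counts, per-hop latencies, link speed and transfer window) shows how round-trip time grows with hop count and how a window-based transfer slows accordingly; it is a back-of-the-envelope model, not a measurement.

# Rough illustration: how added hops inflate round-trip time (RTT) and cap
# throughput for window-based transfers. All numbers are hypothetical.

def effective_throughput_mbps(link_mbps, window_bytes, rtt_ms):
    """Throughput is limited by the link or by window / RTT, whichever is lower."""
    window_limited = (window_bytes * 8 / 1_000_000) / (rtt_ms / 1000.0)
    return min(link_mbps, window_limited)

WINDOW = 64 * 1024          # 64 KB transfer window (assumed)
LINK_MBPS = 100             # access link speed (assumed)

scenarios = {
    "first-hop POP (semi-private)": {"hops": 1,  "ms_per_hop": 2.0},
    "regional public cloud":        {"hops": 8,  "ms_per_hop": 4.0},
    "distant public cloud":         {"hops": 18, "ms_per_hop": 6.0},
}

for name, s in scenarios.items():
    rtt = 2 * s["hops"] * s["ms_per_hop"]            # out and back
    tput = effective_throughput_mbps(LINK_MBPS, WINDOW, rtt)
    hours_per_tb = (1_000_000 * 8) / (tput * 3600)   # 1 TB = 1e6 MB
    print(f"{name:32s} RTT {rtt:6.1f} ms  ~{tput:6.1f} Mb/s  ~{hours_per_tb:5.1f} h/TB")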

Usage models for SaaS

Now that we have clarified how SaaS can be embodied, what would someone use it for? The blatant response of ‘to store data, stupid’ is not sufficient. Most certainly that is an answer, but it turns out that the use case models are much more varied and interesting. At this point, I think that it is fruitful to discern between two major user populations – Residential & Business, with business including education and government institutions. The reason for the division is the degree of formality in usage. In most residential use models, there are no legal compliance issues like SOX or HIPAA to deal with. There may be confidentiality and security issues, but as indicated earlier these issues are easier to address in a semi-private or private SaaS.

Business and Institution use models

Virtual Tape Library SaaS

The figure below illustrates a simple VTL SaaS topology. The basic premise is to emulate a physical tape drive across the network with connectivity provided as an iSCSI target to the initiator, which is the customer’s backup software. With the right open system VTL, the service can be as easy as a new iSCSI target that is discovered and entered into the backup server. With no modifications to existing practices or installed software, the service matches well with organizations that are tape oriented in practice and are looking for an effective means of secondary off site copies. Tapes can be re-imported back across the network to physical tape if required in the future.

Figure 3. A simple VTL SaaS

D2D SaaS

Disk-to-disk SaaS offerings basically provide an iSCSI target of a virtual disk volume across the network. In this type of scenario the customer’s existing backup software simply points to the iSCSI target for D2D backup or replication. Again, the benefit is that because the volume is virtualized and hosted, it effectively addresses off-site secondary data store requirements. In some instances, with CPE installed, it can even be used in tandem with next-generation technologies like continuous data protection (CDP) and data reduction, which moves the offering towards the Hosted Disaster Recovery end of the spectrum. The figure below shows a D2D SaaS service offering with two customers illustrated. One is simply using the service as a virtual disk target. The other has installed CPE that is running CDP and data reduction, resulting in a drastic reduction in the overall required bandwidth.
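A rough comparison, using assumed figures rather than vendor data, of the sustained bandwidth needed to keep the off-site copy current with and without data reduction on the CPE:

# Back-of-the-envelope comparison of the bandwidth needed to keep an off-site
# D2D copy current, with and without data reduction. Numbers are illustrative
# assumptions, not vendor figures.

daily_change_gb = 200          # data changed per day (assumed)
backup_window_h = 8            # hours available to move it (assumed)
reduction_ratio = 10           # 10:1 combined dedupe/compression (assumed)

def required_mbps(gb, hours):
    return gb * 8 * 1000 / (hours * 3600)   # GB -> megabits, spread over window

raw = required_mbps(daily_change_gb, backup_window_h)
reduced = required_mbps(daily_change_gb / reduction_ratio, backup_window_h)

print(f"Without data reduction: ~{raw:6.1f} Mb/s sustained")
print(f"With {reduction_ratio}:1 reduction:   ~{reduced:6.1f} Mb/s sustained")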

Figure 4. A D2D SaaS

Collaborative Share SaaS

Another use model that has been around for a long time is collaborative sharing. I say this because I can remember, more than ten years ago, placing a file up on an FTP server and then pasting the URL into an email that went out to a dozen or so recipients, rather than plugging up the email servers with multiple copies of large attachments. Engineers have a number of things in common regardless of discipline. First is collaboration. A close second, though, is the amount of data that they typically require in order to collaborate. This type of model is very similar to the FTP example except that it is enhanced with a collaborative portal that might even host real-time web conferencing services. The storage aspect, though of primary importance to the collaboration, is now a secondary supporting service that is provided in a unified fashion out to the customer via a web portal. The figure below shows an example of this type of service. Note that in reality there is no direct link between the SaaS and the Web Conferencing application. Instead they are unified and merged by a front-end web portal that the customer sees when using the service. On the back end, a simple shared virtual network drive is provided that receives all content posted by the collaborative team. Each member may have their own view and sets of folders, for instance, and each can share them with one individual, with a group, or with everyone. This type of service makes a lot of sense for this type of community of users. In fact, any user community that regularly exchanges large amounts of data would find value in this type of use model.

Figure 5. A Collaborative Share Service

Disaster Recovery as a Service (DRaaS)

There are times when the user is looking for more than simple storage space. There is a problem that is endemic in small and medium business environments today: there is minimal if any resident IT staff and even less funding to support back-end secondary projects like disaster recovery. As a result many companies have BC/DR plans that are woefully inadequate and that would often leave them with major or even total data loss in the event of a key critical system failure. For these types of companies, using an existing network provider for warm-standby virtual data center usage makes a lot of sense. The solution would most probably require CPE to be installed, but after that point it could offer a turnkey DR plan that could be tested at regularly scheduled intervals for a per-event fee.

The big advantage of this approach is that the customer can avoid expanding IT staff while addressing an issue of primary importance: the preservation of data and system uptime.

Obviously, this type of service offering requires a provider who is taking SaaS seriously. A data center is required where virtual resources are leased out and hosted for the customer, as well as the IT staff required to run the overall operations. As shown by the prevalence of vendors providing this type of service, even with the overhead it has an attractive business model that only improves with an expanded customer base.

Figure 6. DRaaS implementation


Residential Use Models

PC Backup & Extra Storage

This type of SaaS offering is similar to the virtual disk service (D2D) mentioned above. The important difference is that it is not iSCSI based. Rather, it is a NAS virtual drive that is offered to the customer through some type of web service portal. Alternatively, it could be offered as a mountable network drive via Windows Explorer™. The user would then simply drag the folders that they want to store in the cloud onto that network drive. If they use backup software, they can with a few simple modifications copy data into the cloud by pointing the backup application to the virtual NAS drive. Additionally, this type of service could support small and medium businesses that are NAS oriented from a data storage architecture perspective. In the figure below, a NAS SaaS is illustrated with a residential user who is using the service to store video and music content. Another user is a small business that is using the service for NAS-based D2D backup. Both customers see the service as a mapped network drive (e.g. F: or H:). For the residential customer it is a drive that content can be saved to; for the business customer it is a NAS target for its backup application.
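For the residential case, the experience can be as simple as copying folders onto the mapped drive. The sketch below is a minimal Python illustration, assuming a hypothetical Z: mapping and folder names; it is not any provider's client software.

# Minimal sketch: copy local folders onto a NAS SaaS share that the portal has
# exposed as a mapped network drive. The drive path and folders are hypothetical.

import shutil
from pathlib import Path

NAS_DRIVE = Path(r"Z:\cloud-backup")      # hypothetical mapped NAS share
FOLDERS = [Path.home() / "Pictures", Path.home() / "Music"]

for folder in FOLDERS:
    target = NAS_DRIVE / folder.name
    # dirs_exist_ok (Python 3.8+) lets repeated runs refresh the copy in place
    shutil.copytree(folder, target, dirs_exist_ok=True)
    print(f"Copied {folder} -> {target}")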

Figure 7. NAS SaaS

Collaborative Share

More and more, friends and family are not only sharing content, but creating it as well. Most of it is pictures, music and video – all files of considerable size. This results in a huge amount of data that needs to be stored, but that also needs to be referenceable in order to be shared with others. The widely popular YouTube™ is a good example of such a collaborative service. Another example is FaceBook™, where users can post pictures and video to their walls and share them with others as they see fit. As shown in the figure below, SaaS is an embedded feature of the service. The first user posts content into the service, thereby using the SaaS feature. The second user then receives the content in a streaming CDN fashion. The first user would post the content via the web service portal (i.e. their wall). The second user would initiate the real-time session via the web service portal by clicking on the posted link and view the content via their locally installed media player. Aside from the larger industry players, there is a demand for more localized, community-based collaborative shares for art and book communities, student populations, or even local business communities.

Figure 8. Collaborative Share for Residential

Technologies for SaaS

The above use models assume the use of underlying technologies to move the data, reduce it and store it. These are then merged with supporting technologies such as web services, collaboration and perhaps content delivery to create a unified solution for the customer. Again, this could be as simple as a storage target where data storage is the primary function, or it could be as complex as a full collaboration portal where data storage is more ancillary. In each instance, the same basic technologies come into play. From the point of view of the customer, only the best will do; from the point of view of the provider, the goal is to deliver what will meet the level of service required. This dichotomy, as in most business models, is resolved by an equitable compromise that uses the technologies below to satisfy the interests of the user as well as those of the provider. The end result is a tenable set of values and benefits to all parties, which is the sign of a good business model.

Disk Arrays

Spinning disks have been around almost as long as modern computing itself. We all know the familiar spinning and clicking (now oh so faint!) on our laptops as the machine churns through data on its relentless task of providing the right bits at the right time. Disk technology has come a long way as well. The MTBF ratings for even lower-end drives are dramatically higher than those of the original ‘platter’ technologies. Still, this is the Achilles’ heel. This is the place where the mechanics occur, and where mechanics occur – particularly high-speed mechanics – failure is one of the realities that must be dealt with.

I was surprised to learn just how common it is that just a bunch of disks are set up and used for cloud storage services. The reason is simple: cost. It is far more cost effective to place whole disk arrays out for leasing than it is to take that same array and sequester a portion of it for parity or mirroring. As a result, many cloud services offer best-effort service, and with smaller services that pretty much works – particularly if the IT staff is diligent with backups. As the data volume grows, however, this approach will not work, as the probability of failure will outweigh the ability to pump the data back into the primary. That exact limit is related to the network speed available, and since most organizations do not have infinite bandwidth, that limit is a finite number.
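Two quick back-of-the-envelope checks make the point; the drive count, failure rate, data volume and bandwidth below are assumptions, not measurements.

# Two quick sanity checks for a "just a bunch of disks" service: how likely is
# at least one drive loss in a year, and how long would re-ingesting the data
# take over the customer's link? All inputs are assumed values.

drive_count = 200          # drives in the unprotected pool (assumed)
annual_failure_rate = 0.03 # 3% AFR per drive (assumed)
data_tb = 50               # customer data to restore (assumed)
link_mbps = 100            # available network bandwidth (assumed)

# Probability that at least one drive fails within the year
p_any_failure = 1 - (1 - annual_failure_rate) ** drive_count

# Time to pump the data back in at full line rate (best case, no overhead)
restore_hours = data_tb * 8_000_000 / link_mbps / 3600

print(f"P(at least one drive failure / year): {p_any_failure:.1%}")
print(f"Best-case restore time over {link_mbps} Mb/s: {restore_hours:,.0f} hours")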

Now one could go through the math to figure the probability of data loss and gamble, or one could invest in RAID and be serious about the offering being provided. As we shall see later on, there are technologies that assist in the economic feasibility. In my opinion, it would be the first question I asked someone who wanted to provide me a SaaS offering – first, before backup and replication or anything else: will my data be resident on a RAID array? If so, what type? Another question to ask is whether the data is replicated. If so, the next question is how many times and where?

Storage Virtualization

While a SaaS offering could be created with just a bunch of disk space, allocation of resources would have very rough granularity and the end result would be an environment that is drastically over-provisioned. The reason for this is that as space is leased out, the resource is ‘used’ whether it holds data or not. Additionally, as new customers are brought on line to the service, additional disk space must be acquired and allocated in a discrete fashion. Storage virtualization overcomes this limitation by creating a virtual pool of storage resources that can consist of any number and variety of disks. There are several advantages brought about by the introduction of this type of technology, the most notable being thin provisioning – which, from a service provider standpoint, is something as old as service offerings themselves. As an example, network service providers do not build their networks to be provisioned to 100% of the potential customer capacity 100% of the time. Instead, they analyze traffic patterns and engineer the network to handle the particular occurrences of peak traffic. The same might be said of a thinly provisioned environment. Instead of allocating the whole chunk of disk space at the time of the allocation, a smaller thinly provisioned chunk is set up but the larger chunk is represented back to the application. The system then monitors and audits the usage of the allocation and, according to high-water thresholds, allocates more space to the user based on some sort of established policy. This has obvious benefits in a SaaS environment, as only very seldom will a customer purchase and use 100% of the space at the outset. The gamble is that the provider keeps enough storage resources within the virtual pool to accommodate any increases. Given that most providers are very familiar with this type of practice in bandwidth provisioning, it is only a small jump to apply that logic to storage.
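Below is a minimal sketch of the thin-provisioning idea, with invented names and thresholds: the customer sees the full advertised volume while physical backing grows in steps whenever usage crosses a high-water mark. It illustrates the policy, not any vendor's implementation.

# Sketch of a thinly provisioned volume with a high-water-mark growth policy.
# Names, thresholds and sizes are illustrative assumptions.

class ThinVolume:
    def __init__(self, advertised_gb, initial_gb, high_water=0.8, grow_step_gb=50):
        self.advertised_gb = advertised_gb   # what the customer sees
        self.backed_gb = initial_gb          # what is physically allocated
        self.used_gb = 0.0
        self.high_water = high_water
        self.grow_step_gb = grow_step_gb

    def write(self, gb, pool):
        """Record a write; expand physical backing from the pool if needed."""
        if self.used_gb + gb > self.advertised_gb:
            raise ValueError("write exceeds advertised capacity")
        self.used_gb += gb
        # Grow backing while usage sits above the high-water mark
        while (self.used_gb > self.backed_gb * self.high_water
               and self.backed_gb < self.advertised_gb):
            grant = min(self.grow_step_gb, pool["free_gb"],
                        self.advertised_gb - self.backed_gb)
            if grant <= 0:
                raise RuntimeError("pool exhausted: add physical disk ('margin call')")
            pool["free_gb"] -= grant
            self.backed_gb += grant

# Customer buys 1 TB but the provider only backs 100 GB up front.
pool = {"free_gb": 500}
vol = ThinVolume(advertised_gb=1000, initial_gb=100)
vol.write(250, pool)
print(f"used={vol.used_gb} GB, physically backed={vol.backed_gb} GB, "
      f"pool free={pool['free_gb']} GB")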

Not all approaches to virtualization are the same, however. Some implementations are done at the disk array level. While this approach does offer pooling and thin provisioning, it only does so at the array level or within the array cluster. Additionally, the approach is closed in that it only works with that disk vendor’s implementation. Alternatively, virtualization can be performed above the disk array environment. This approach more appropriately matches a SaaS environment in that the open-system approach allows any array to be encompassed into the resource pool, which better leverages the SaaS provider’s purchasing power. Rather than getting locked into a particular vendor’s approach, the provider has the ability to commoditize the disk resources and hence obtain better pricing points.

There are also situations called ‘margin calls’. These are scenarios that can occur in thinly provisioned environments where the data growth is beyond the capacity of the resource pool. In those instances, additional storage must physically be added to the system. With array-based approaches, this can run into issues such as spanning beyond the capacity of the array or the cluster. In those instances, in order to accommodate the growth, the provider needs to migrate the data to a new storage system. With the open-system approach, the addition of storage is totally seamless and can occur with any vendor’s hardware. Additionally, implementing storage virtualization at a level above the arrays allows for very easy data migration, which is useful in handling existing data sets.

Data Reduction Methods

This is a key technology for the provider’s return on investment. Remember that here, storage is the commodity. In typical Cloud Storage SaaS offerings the commodity is sold by the gigabyte. Obviously, if you can retain 100% of the customer’s data and only store ten or twenty percent of the bits, the delta is revenue back to you as return on investment. If you are then able to take that same technology and leverage it not only across all subscribers but across all content types as well, it becomes something of great value to the overall business model of Storage as a Service. The key to the technology is that the data reduction is performed at the disk level. Additionally, the size of the bit sequence is relatively small (512 bytes) rather than the typical block sizes. As a result, the comparative set is large (the whole SaaS data store) while the sample is small (512 bytes). The end result is that as more data is added to the system, the context of reference widens correspondingly, meaning that the probability that a particular bit sequence will match another in the repository increases.
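The following is a conceptual sketch, not any vendor's algorithm, of fixed-block deduplication at a 512-byte granularity: each chunk is hashed and only previously unseen chunks consume physical space, so the reduction ratio improves as the repository (the comparative set) grows.

# Illustrative fixed-block deduplication at 512-byte granularity. Conceptual
# sketch only; real products add indexing, compression and much more.

import hashlib

CHUNK = 512

class DedupeStore:
    def __init__(self):
        self.chunks = {}          # digest -> chunk bytes
        self.logical_bytes = 0    # what customers think they stored
        self.physical_bytes = 0   # what actually hit the disk

    def ingest(self, data: bytes):
        for i in range(0, len(data), CHUNK):
            block = data[i:i + CHUNK]
            digest = hashlib.sha256(block).hexdigest()
            self.logical_bytes += len(block)
            if digest not in self.chunks:
                self.chunks[digest] = block
                self.physical_bytes += len(block)

store = DedupeStore()
store.ingest(b"A" * 4096 + b"B" * 4096)   # first customer's data
store.ingest(b"A" * 4096 + b"C" * 2048)   # second customer repeats the A blocks
ratio = store.logical_bytes / store.physical_bytes
print(f"logical={store.logical_bytes} B, physical={store.physical_bytes} B, "
      f"reduction ~{ratio:.1f}:1")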

But beware, data reduction is not a panacea. Like all technologies it has its limitations and there is the simple fact that some data just does not de-duplicate well. There is also the fact that the data that is stored by the customer is in fact manipulated by an algorithm and abstracted in the repository. This means that some issues of regulatory legal compliance may come into play with some types of content. For the most part however, these issues can be dealt with and data reduction can play a very important role in SaaS architectures, particularly in the back end data store.

Replication of the data

If you are doing due diligence and implementing RAID rather than selling space on ‘just a bunch of disks’, then you’re most probably the type that will go further and create secondary copies of the primary data footprint. If you do this, you also probably want to do it on the back end so as not to impact the service offering. You also probably want to use as little network resource as possible to keep that replicated copy up to date. Here, technologies like Continuous Data Protection and thin replication can assist in getting the data into the back end and performing the replication with minimal impact to network resources.

Encryption

There are more and more concerns about placing content in the cloud. Typically these concerns come from business users who see it as a major compromise of security policy. Individual end users are also raising concerns around confidentiality of content. Encryption cannot solve the issue by itself, but it can go a long way towards it. It should be noted, though, that with SaaS, encryption needs to be considered in two aspects. First is the encryption of data in motion – protecting the data as it is posted into and pulled out of the cloud service. Second is the encryption of data at rest – protecting the content once it is resident in a repository. The first is addressed by methods such as SSL/TLS or IPSec. The second is addressed by encryption at the disk level or prior to disk placement.
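As a minimal illustration of the at-rest half of the problem, the sketch below encrypts a block before it would be written to the repository. It assumes the third-party Python ‘cryptography’ package and says nothing about any particular provider's implementation; data in motion would be handled separately by SSL/TLS or IPSec at the transport layer.

# Sketch of encrypting content before it lands in the repository (data at rest).
# Requires the 'cryptography' package (pip install cryptography).

from cryptography.fernet import Fernet

# In practice the key lives in a key-management system, never beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"customer backup block"
stored_blob = cipher.encrypt(plaintext)     # what actually gets written to disk
recovered = cipher.decrypt(stored_blob)     # on a read/restore request

assert recovered == plaintext
print(f"stored {len(stored_blob)} encrypted bytes for {len(plaintext)} plaintext bytes")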

Access Controls

Depending on the type and intention of the service, access controls can range from relatively simple (e.g. user name and password) to complex (e.g. RSA token based). In private cloud environments, normal user credentials for enterprise or organization access would be the minimum requirement; likely there will be additional passwords or perhaps even token-based authentication to access the service. For semi-private clouds the requirements are likely to be less intense but, again, can be stringent if needed. There may also be a wide range in the level of access requirements. As an example, for a backup service there only needs to be an iSCSI initiator/target binding and a monthly report on usage that might be accessible over the web. In other services such as collaboration, a higher-level portal environment will need to be provided – hence the need for a higher-level access control or log-on. Needless to say, some consideration will need to be made for access to the service, even if it is only for the minimal task of data separation and accounting.

The technologies listed above are not ‘required’ – as pointed out above, just a bunch of disks on the network could be considered cloud storage – nor is the list exhaustive. But if the provider is serious about the service offering and also serious about its prospect community, it will make investments in at least some if not all of them.

Planning for the Service

There are two perspectives to cover here. The first is that of the customer. When IT organizations start thinking about using cloud services they are either attempting to reduce cost or bypass internal project barriers. Most of these will plan on using the service to answer requirements for off site storage. Secondary sites are not cheap, particularly if the site is properly equipped as a data center. If this does not already exist, it can be a prime motivator for moving secondary or even tertiary data copies into a cloud service.

There are a number of questions and concerns that should be raised prior to using such a service, though. The IT staff should create a task group to assemble a list of questions, requirements and qualifications covering what they expect out of the service. Individuals from various areas of practice should be engaged in this process. Examples are Security, Systems Administration, DB Administration, IT Audit, Networking, etc. – the list can be quite extensive. But it is important to be sure to consider all facets of the IT practice in regards to the service in question. In the end, a form should be created that can be filled out in dialogs with the various providers being entertained. Tests and pilots are also a good thing to arrange if they can be done. It is important to get an idea of how fast data can be pumped into the cloud. It is also very important to know how fast it can be pulled out as well. At the very least, the service should be closely monitored by both storage and networking staff to be certain that the service works according to the SLA (if there is one) and is not decaying in performance over time or with increases in data. In either instance, communication with the SaaS provider is then in order and may involve technical support and troubleshooting or service expansion. In any event, it should be realized that a SaaS service package, just like the primary data footprint, is not a static thing; and it usually does not shrink!

Some sample questions that might be asked of a SaaS vendor are the following:

Is the data protected by RAID storage?

Is the data replicated? If so, how many times and where will copies be located?

Is the data encrypted in movement? At rest?

What is the estimated ingestion capacity rate? (i.e. how much data can be moved in an hour into the cloud)

What is the estimated restore time? (i.e. how much data can be moved off of the cloud in an hour)

(The two questions above may require an actual test.)

What security measures are taken at the storage sites (both cyber and physical)?

These are only a few generic-level questions that can help in getting the process started. You will quickly find that once you start bringing individuals from various disciplines into the process, the list can get large and may need to be optimized and pared down. Once this process is complete, it is good to set up a review committee that will meet with the various vendors and move through the investigation process.

From the perspective of the SaaS provider the issues are similar, as it is in the provider’s best interest to meet the needs of the customer. There is a shift in perspective from using the service to providing it, however. There are two ways that this can occur. The first instance is where a prospective SaaS provider already has an existing customer base that it is looking to provide a service to. In this case the data points are readily available. A survey needs to be created that will assemble the pertinent data points, and it then needs to be filled out by the various customers of the service. Questions that might be asked are: what is your backup environment like, what is the size of the full data repository, what is the size of the daily incremental backup, can you provide an estimated growth rate, and what is your network bandwidth capacity? Once the data is assembled, it can be tallied up and sizing can occur in a rather accurate fashion.

The second method applies to a prospective provider who does not yet have a known set of data for existing customers. Here some assumptions must be made on a prospective business model. It needs to be determined what the potential target market is for the service launch. Once those numbers are reached, a range or average needs to be figured for many of the data points above to create a typical customer profile. It is important that this profile is well defined and well known. The reason is that as you add new customers onto the service, you can in the course of the service profile survey identify a relative size for each customer (i.e. 1 standard profile, or 3.5 times the standard profile). With that information, predicting service impact and scaling becomes much easier. From there the system can be sized according to those metrics with an eye to future growth. Capacity is added as the service deployment grows.
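As a sketch of how the standard-profile arithmetic might work, the following Python fragment sizes year-one capacity from a handful of prospects expressed as multiples of a hypothetical standard profile; every figure in it is an assumption to be replaced by real survey data.

# Sketch of sizing a new SaaS offering from a "standard customer profile".
# All numbers are placeholders for real survey results.

standard_profile = {
    "full_backup_gb": 500,       # size of a full repository (assumed)
    "daily_incremental_gb": 25,  # daily change rate (assumed)
    "annual_growth": 0.30,       # 30% growth per year (assumed)
}

# Prospects expressed as multiples of the standard profile (e.g. 3.5x)
prospect_multiples = [1.0, 1.0, 2.0, 3.5, 0.5, 1.5]
retention_days = 30

def required_capacity_gb(profile, multiple, years=1):
    base = profile["full_backup_gb"] + profile["daily_incremental_gb"] * retention_days
    return base * multiple * (1 + profile["annual_growth"]) ** years

total_gb = sum(required_capacity_gb(standard_profile, m) for m in prospect_multiples)
print(f"Year-one capacity target: ~{total_gb / 1000:.1f} TB "
      f"for {len(prospect_multiples)} customers")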

As a storage solution provider, my company will assist prospective SaaS providers in doing this initial sizing exercise. As an example, in the first case above we assisted a prospect in the creation of the service requirements survey as well as helped in actually administering it. Afterwards, we worked interactively with the provider to size out the appropriate system to meet the requirements of the initial offering. Additionally, we offered scaling information as well as regular consultative services so that the offering is scaled properly.

Like all service offerings, SaaS is only as good as its design. Someone can go out and spend the highest dollar on the ‘best’ equipment, be somewhat slipshod in the way the system is sized and implemented, and end up with a mediocre service offering. On the other hand, one can get good cost-effective equipment, size and implement it with care, and wind up with a superior offering. The message here is that the key to success in SaaS is in the planning, both for the customer and for the provider.


Infiniband and its unique potential for Storage and Business Continuity

February 18, 2010

It’s one of those technologies that many have only had cursory awareness of. It is certainly not a ‘mainstream’ technology in comparison to IP, Ethernet or even Fibre Channel. Those who have awareness of it know Infiniband as a high-performance compute clustering technology that is typically used for very short interconnects within the data center. While this is true, its uses and capacity have been expanded into many areas that were once thought to be out of its realm. In addition, many of the distance limitations that have prevented its expanded use are being extended, in some instances to rather amazing distances that rival the more Internet-oriented networking technologies. This article will look closely at this networking technology from both historical and evolutionary perspectives. We will also look at some of the unique solutions that are offered by its use.

Not your mother’s Infiniband

The InfiniBand (IB) specification defines the methods and architecture of the interconnect that establishes the interconnection of the I/O subsystems of the next generation of servers, otherwise known as compute clustering. The architecture is based on a serial, switched fabric that currently defines link bandwidths between 2.5 and 120 Gbit/s. It effectively resolves the scalability, expandability, and fault tolerance limitations of the shared bus architecture through the use of switches and routers in the construction of its fabric. In essence, it was created as a bus extension technology to supplant the aging PCI specification.

The protocol is defined as a very thin set of zero copy functions when compared to thicker protocol implementations such as TCP/IP. The figure below illustrates a comparison of the two stacks.

Figure 1. A comparison of TCP/IP and Infiniband Protocols

Note that IB is focused on providing a very specific type of interconnect over a very high reliability line of fairly short distance. In contrast, TCP/IP is intended to support almost any use case over any variety of line quality for undefined distances. In other words, TCP/IP provides robustness for the protocol to work under widely varying conditions. But with this robustness comes overhead. Infiniband instead optimizes the stack to allow for something known as RDMA or Remote Direct Memory Access. RDMA is basically the extension of the direct memory access (DMA) from the memory of one computer into that of another (via READ/WRITE) without involving the server’s operating system. This permits a very high throughput, low latency interconnect which is of particular use to massively parallel compute cluster arrangements. We will return to RDMA and its use a little later.

The figure below shows a typical IB cluster. Note that both the servers and storage are assumed to be relative peers on the network. There are differentiations in the network connections, however. HCAs (Host Channel Adapters) are the adapters and drivers that support host server platforms. TCAs (Target Channel Adapters) are the I/O subsystem components such as RAID or MAID disk subsystems.

Figure 2. An example Infiniband Network

In its most basic form, the IB specification defines the interconnect as point-to-point 2.5 GHz differential pairs (signaling rate) – one transmit and one receive (full duplex) – using LVDS and 8B/10B encoding. This single-channel interconnect delivers 2.5 Gb/s of signaling bandwidth and is referred to as a 1X interconnect. The specification also allows for the bonding of these single channels into aggregate interconnects to yield higher bandwidths. 4X defines an interface with 8 differential pairs (4 per direction); the same for fiber, 4 transmit and 4 receive. 12X defines an interface with 24 differential pairs (12 per direction); the same for fiber, 12 transmit and 12 receive. The table below illustrates various characteristics of the various channel classes including usable data rates.

Table 1.
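Since the table itself is not reproduced here, a quick calculation (Python, covering only the SDR generation described above) shows how the usable data rate follows from the signaling rate once the 8B/10B encoding overhead is removed for the 1X, 4X and 12X link widths.

# Signaling rate vs. usable data rate for SDR InfiniBand link widths,
# accounting for 8B/10B encoding (8 data bits carried per 10 line bits).

LANE_SIGNALING_GBPS = 2.5      # single SDR lane
ENCODING_EFFICIENCY = 8 / 10   # 8B/10B overhead

for width in (1, 4, 12):       # 1X, 4X, 12X link widths
    signaling = width * LANE_SIGNALING_GBPS
    usable = signaling * ENCODING_EFFICIENCY
    print(f"{width:>2}X: {signaling:5.1f} Gb/s signaling -> {usable:5.1f} Gb/s usable")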

Also note that the technology is not standing still. The graph below illustrates the evolution of the IB interface over time.

Figure 3. Graph illustrating the bandwidth evolution of IB

As the topology above in figure 2 shows however, the effective distance of the technology is limited to single data centers. The table below provides some reference to the distance limitations of the various protocols used in the data center environment including IB.

Table 2.

Note that while none of the other technologies extend much further from a simplex link perspective, they do have well established methods of transport that can extend them beyond the data center and even the campus.

This lack of extensibility is changing for Infiniband however. There are products that can extend its supportable link distance to tens, if not hundreds of Kilometers, distances which rival well established WAN interconnects. New products also allow for the inter-connection of IB to the other well established data center protocols, Fibre Channel and Ethernet. These new developments are expanding its potential topology thereby providing the evolutionary framework for IB to become an effective networking tool for next generation Business Continuity and Site Resiliency solutions. In figure 4 below, if we compare the relative bandwidth capacities of IB with Ethernet and Fibre Channel we find a drastic difference in effective bandwidth both presently and in the future.

Figure 4. A relative bandwidth comparison of various Data Center protocols

Virtual I/O

With a very high bandwidth low latency connection it becomes very desirable to use the interconnect for more than one purpose. Because of the ultra-thin profile of the Infiniband stack, it can easily accommodate various protocols within virtual interfaces (VI) that serve different roles. As the figure below illustrates, a host could connect virtually to its data storage resources over iSCSI (via iSER) or native SCSI (via SRP). In addition it could run its host IP stack as a virtual interface as well. This capacity to provide a low overhead high bandwidth link that can support various virtual interfaces (VI) lends it well to interface consolidation within the data center environment. As we shall see however, in combination with the recent developments in extensibility, IB is becoming increasingly useful for a cloud site resiliency model.

Figure 5. Virtual Interfaces supporting different protocols

Infiniband for Storage Networking

One of the primary uses for data center interconnects is to attach server resources to data storage subsystems. Original direct storage systems were connected to server resources via internal busses (i.e. PCI) or over very short SCSI (Small Computer System Interface) connections, known as Direct Attached Storage (DAS). This interface is at the heart of most storage networking standards and defines the behaviors of these protocols between hosts (initiators) and I/O devices (targets). An example for our purposes is a host writing data to or reading data from a storage subsystem.

Infiniband has multiple models for supporting SCSI (including iSCSI). The figure below illustrates two of the block storage protocols used, SRP and iSER.

Figure 6. Two IB block storage protocols

SRP (SCSI RDMA Protocol) is a protocol that allows remote command access to a SCSI device. The use of RDMA avoids the overhead and latency of TCP/IP, and because it allows for direct RDMA write/read it is a zero-copy function. SRP never made it into a formal ratified standard; defined by ANSI T10, the latest draft is rev. 16a (6/3/02).

iSER (iSCSI Extensions for RDMA) is a protocol model defined by the IETF that maps the iSCSI protocol directly over RDMA and is part of the ‘Data Mover’ architecture. As such, iSCSI management infrastructures can be leveraged. While most say that SRP is easier to implement than iSER, iSER provides enhanced end-to-end management via iSCSI management. Both protocol models, to effectively support RDMA, possess a peculiar function that results in all RDMA being directed towards the initiator. As such, a SCSI read request translates into an RDMA write command from the target to the initiator, whereas a SCSI write request translates into an RDMA read from the target to the initiator. As a result, some of the functional requirements for the I/O process shift to the target, which provides offload to the initiator or host. While this might seem strange, if one thinks about what RDMA is, it only makes sense to leverage the direct memory access of the host. This results in a very efficient use of Infiniband for data storage.

Another iteration of a storage networking protocol over IB is Fibre Channel (FCoIB). In this instance, the SCSI protocol is embedded into the Fibre Channel interface, which is in turn run as a virtual interface inside of IB. Hence, unlike iSER and SRP, FCoIB does not leverage RDMA but runs the Fibre Channel protocol as an additional functional overhead. FCoIB does however provide the ability to incorporate existing Fibre Channel SAN’s into an Infiniband network. The figure below illustrates a network that is supporting both iSER and FCoIB, with a Fibre Channel SAN attached by a gateway that provides interface between IB and FC environments.

Figure 7. An IB host supporting both FC & native IB interconnects

As can be seen, a legacy FC SAN can be effectively used in the overall systems network. Add to this high availability and you have a solid solution for a hybrid migration path.

If we stop and think about it, data storage is second only to compute clustering as an ideal usage model for Infiniband. Even so, the use of IB as a SAN is a much more real-world usage model for the standard IT organization. Not many IT groups are doing advanced compute clustering, and those that do already know the benefits of IB.

Infiniband & Site Resiliency

Given the standard offered distances of IB, it is little wonder that it has not been often entertained for use in site resiliency. This however, is another area that is changing for Infiniband. There are now technologies available that can extend the distance limitation out to hundreds of kilometers and still provide the native IB protocol end to end. In order to understand the technology we must first understand the inner mechanics of IB.

The figure below shows a comparison between IB and TCP/IP reliable connection. The TCP/IP connection shows a typical saw tooth profile which is the normal result of the working mechanics of the TCP sliding window. The window starts at a nominal size for the connection and gradually increases in size (i.e. Bytes transmitted) until a congestion event is encountered. Depending on the severity of the event the window could slide all the way back to the nominal starting size. The reason for this behavior is that TCP reliable connections were developed in a time when most long distance links were far more unreliable and of less quality.

Figure 8. A comparison of the throughput profiles of Infiniband & TCP/IP

If we take a look at the Infiniband throughput profile, we find that the sawtooth pattern is replaced by a square profile: the transmission instantly goes to 100% of the offered capacity and maintains that rate until an event occurs which results in a halt to the transfer. Then, after a period of time, it resumes at 100% of the offered capacity. This event is termed buffer starvation, where the sending Channel Adapter has exhausted its available buffer credits, which are calculated from the available resources and the bandwidth of the interconnect (i.e. 1X, 4X, etc.). Note that the calculation does not include any significant concept of latency. As we covered earlier, Infiniband was originally intended for very short, highly dependable interconnects, so the variable of transmission latency is so slight that it can effectively be ignored within the data center. As a result, the relationship of buffer credits to available resources and offered channel capacity resulted in a very high throughput interconnect that seldom ran short of transmit buffer credits – provided things were close.

As distance is extended, things become more complex. This is best realized with the familiar bucket analogy. If I sit on one end of a three-foot ribbon and you sit on the other end, and I have a bucket full of bananas (analogous to the data in the transmit queue) whereas you have a bucket that is empty (analogous to your receive queue), we can run the analogy. As I pass you the bananas, the distance is short enough to allow for a direct hand-off. Remembering that this is RDMA, I pass you the bananas at a very fast predetermined speed (the speed of the offered channel) and you take them just as fast. After I pass you the bananas, you pass me a quarter to acknowledge that the bananas have been received (analogous to the completion queue element shown in figure 1). Now imagine that there is someone standing next to me who is providing me bananas at a predetermined rate (the available processing speed of the system). Also, he will only start to fill my bucket if the following two conditions exist: 1) my bucket is empty and 2) I give him the quarter for the last bucket. Obviously, the time required end to end will impact that rate. If the resulting rate is equal to the offered channel, we will never run out of bananas and you and I will be very tired. If that rate is less than the offered channel speed, then at some point I will run out of bananas. At that point I will need to wait until my bucket is full before I begin passing them to you again. This is buffer starvation. In a local scenario, we see that the main tuning parameters are a) the size of our buckets (available memory resources for RDMA) and b) the rate of the individual placing bananas into my bucket (the system speed). If these parameters are tuned correctly, the connection will be of very high performance (you and I will move a heck of a lot of bananas). The further we are from that optimal set of parameters, the lower the performance profile will be, and an improperly tuned system will perform dismally.

Now let’s take that ribbon and extend it to twelve feet. As we watch the following scenario unfold, it becomes obvious why buffer starvation limits distance. Normally, I would toss you a banana and wait for you to catch it. Then I would toss you another one. If you missed one and had to go pick it up off of the ground (the bruised banana is a transmission or reception error), I would wait until you were ready to catch another one. This in reality is closer to TCP/IP. With RDMA, I toss you the bananas just as if you were sitting next to me. What results is a flurry of bananas in the air, all of which you catch successfully because, hey – you’re good. (In reality, it is because we are assuming a high-quality interconnect.) After I fling the bananas, however, I need to wait to receive my quarter and until my bucket is in turn refilled. At twelve feet, if nothing else changes, we will be forced to pause far more often while my bucket refills. If we move to twenty feet the situation gets even more skewed. We can tune certain things like the depth of our buckets or the speed of the replenishment, but these become unrealistic as we stretch the distance farther and farther. This is what in essence has kept Infiniband inside the data center.*

*Note that the analogy is not totally accurate with the technical issues but it is close enough to give you a feel of the issues at hand.

Now what would happen if I were to put some folks in between us who had reserve buckets for bananas I send to you and you were to do the same for bananas you in turn send to me? Also, unlike the individual who fills my bucket who deals with other intensive tasks such as banana origination (the upper system and application), this person is dedicated one hundred percent to the purpose of relaying bananas. Add to this the fact that this individual has enough quarters to give me for twice the size of his bucket, and yours in turn as well. If we give them nice deep buckets we can see a scenario that would unfold as follows.

I would wait until my bucket was full, then I would begin to hand off my bananas to the person in front of me. If this individual were three feet from me, I could hand them off directly as I did with you originally. Better than that, I could simply place the bananas in their bucket and they would give me a quarter each time I emptied mine. The process repeats until their bucket is full. They then can begin throwing the bananas to you. While we are at it, why should they toss directly to you? Let’s put another individual in front of you who is also completely focused – but instead of being focused on tossing bananas, they are focused on catching them. Now, if these intermediaries’ buckets are roughly four times the size of yours and mine, and the relay occurs over six feet out to your receiver at the same rate as I hand them off, we in theory should never run out of bananas. There would be an initial period of the channel filling and the use of credit, but after that initial period the channel could operate at optimal speed, with the initial offset in reserve buffer credits being related to the distance, or latency, of the interconnect. The reason for the channel fill is that the relay has to wait until their bucket is full before they can begin tossing; importantly, after that initial fill point they will continue to toss bananas as long as there are some in the bucket. In essence, I always have an open channel for placing bananas and I always get paid, and can in turn pay the guy who fills my bucket only on the conditions mentioned earlier.

This buffering characteristic has led to a new class of devices that can provide significant extension to the distance offered by Infiniband. Some of the latest systems can provide buffer credits equivalent to one full second, which is A LOT of time at modern networking speeds. If we add these new devices and another switch to the topology shown earlier we can begin to realize some very big distances that become very attractive for real time active-active site resiliency.
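To put rough numbers on the analogy, the sketch below (Python, assuming a 4X SDR link at 8 Gb/s usable and roughly 5 microseconds of fiber propagation delay per kilometer, both assumptions) estimates the bandwidth-delay product, that is, how much data is in flight and must be covered by buffer credits, at various distances.

# The banana analogy in numbers: how much in-flight data (and therefore buffer
# credit) a link must cover as distance grows. Assumes a 4X SDR link (8 Gb/s
# usable) and ~5 microseconds of one-way fiber propagation delay per km.

USABLE_GBPS = 8.0
US_PER_KM = 5.0        # one-way propagation delay per km of fiber (approx.)

for km in (0.1, 1, 20, 100):
    rtt_s = 2 * km * US_PER_KM / 1_000_000
    in_flight_bytes = USABLE_GBPS * 1e9 / 8 * rtt_s   # bandwidth-delay product
    print(f"{km:6.1f} km: RTT {rtt_s * 1e6:8.1f} us, "
          f"buffer needed ~{in_flight_bytes / 1024:10.1f} KiB")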

Figure 9. An extended Infiniband Network

As a case in point, the figure above shows an Infiniband network that is extended out to support data centers that are 20 km apart. The systems at each end, using RDMA, effectively regard each other as local and, for all intents and purposes, as being in the same data center. This means that versions of fault tolerance and active-active high availability that would otherwise be out of the question are now quite feasible to design and operate in practice. A common virtualized pool of storage resources using iSER allows for seamless treatment of data and brings a reduced degree of fault dependency between the server and storage systems. Either side could experience failure at either the server or storage system level and still be resilient. Adding further systems redundancy for both servers and storage locally on each side provides further resiliency, as well as providing for off-line background manipulation of the data footprint for replication, testing, etc.

Figure 10. A Hybrid Infiniband network

In order for any interface consolidation effort to work in the data center the virtual interface solution must provide for a method of connectivity to other forms of networking technology. After all, what good is an IP stack that can only communicate within the IB cluster? A new generation of gateway products provide for this option. As shown in the figure above, gateway products exist that can tie IB to both Ethernet and Fibre Channel topologies. This allows for the ability to consolidate data center interfaces and still provide for general internet IP access as well as connectivity to traditional SAN topologies and resources such as Fibre Channel based storage arrays.

While it is clear that Infiniband is unlikely to become a mainstream networking technology, it is also clear that there are many merits to the technology that have kept it alive and provided enough motivation (i.e. market) for its evolution into a more mature architectural component. With the advent of higher-speed Ethernet and FCoE, as well as the current development of lower-latency profiles for DC Ethernet, the longer-range future of Infiniband may be similar to that of Token Ring or FDDI. On the other hand, even with these developments, the technology may be more likened to ATM, which, while far from mainstream, is still used extensively in certain areas. If one has the convenience of waiting for these trends to sort themselves out, then moving to Infiniband in the data center may be premature. However, if you are one of the many IT architects faced with intense low-latency performance requirements that need to be addressed today and not some time in the future, IB may be the right technology choice for you. It has been implemented by enough organizations that best practices are fairly well defined. It has matured enough to provide for extended connectivity outside of the glass house, and gateway technologies are now in place that can provide connectivity out into other, more traditional forms of networking technology. Infiniband may never set the world on fire, but it has the potential to put out fires that are currently burning in certain high-performance application and data center environments.

Data Storage: The Foundation & potential Achilles Heel of Cloud Computing

November 17, 2009

In almost anything that you read about Cloud Computing, the statement that it is ‘nothing new’ is usually made at some point. The statement then goes on to qualify Cloud Computing as a cumulative phenomenon that serves more as a single label for a multi-faceted substrate of component technologies than as a single new technology paradigm. All of them used together comprise what could be defined as a cloud. As the previous statement makes apparent, the definition is somewhat nebulous. Additionally, I could provide a long list of the component technologies within the substrate that could ‘potentially’ be involved. Instead, I will filter out the majority and focus on a subset of technologies that could be considered ‘key’ components in making cloud services work.

If we were to try to identify the most important component out of this substrate, most would agree that it is something known as virtualization. In the cloud, virtualization occurs at several levels. It can range from ‘what does what’ (server & application virtualization) to ‘what goes where’ (data storage virtualization) to ‘who is where’ (mobility and virtual networking). When viewed as such, one could even come to the conclusion that virtualization is the key enabling technology upon which all other components either rely on or embody in some subset of functionality.

As an example, at the application level, Web Services and Service Oriented Architecture serve to abstract and virtualize the application resources required to provide a certain set of user exposed functions. Going further, whole logical application component processes can be strung together in a workflow to create an automated, complex business process that can be kicked off by the simple submittal of an online form on a web server.

If we look further underneath this, we can identify another set of technologies where the actual physical machine is host to multiple resident 'virtual machines' (VMs) which house different applications within the data center. Additionally, these VMs can migrate from one physical machine to another or invoke clones of themselves that can in turn be load balanced for improved performance during peak demand hours. At first this was a more or less local capability limited to the physical machines within the Data Center, but recently advances have been made through the use of something known as 'stretch clustering' to enable migrations to remote Data Centers or secondary sites in response to primary site failures and outages. This capability has been a great enabler of prompt Disaster Recovery plans for key critical applications that absolutely need to stay running and accessible.

In order for the above remote VM migration to work, however, there needs to be consistent representation of and access to data. In other words, the image of the working data that VM #1 has access to at the primary site needs to be available to VM #2 at the secondary site. Making this occur with traditional data storage management methods is possible, but extremely complex, inefficient and costly.

Virtualization is also used within storage environments to create virtual pools of storage resources that can be used transparently by the dependent servers and applications. Storage Virtualization not only simplifies data management for virtualized services but also provides the actual foundation for all of the other forms of virtualization within the cloud, in that the data needs to be always available to the dependent layers within the cloud. Indeed, without the data, the cloud is nothing but useless vapor.

This is painfully evident in some of the recent press around cloud failures, most notably the T-Mobile Sidekick failure that resulted from the failure of Microsoft's Danger subsidiary to back up key data prior to a storage upgrade that was being performed by Hitachi. Many T-Mobile users woke up one morning to find that their calendars and contact lists were non-existent. After some time, T-Mobile was forced to tell many of their subscribers that the data was permanently lost and not recoverable. This particular incident has had a multi-level reverberation that impacted T-Mobile (the mobile service provider), Microsoft Danger (the data management provider), Hitachi (the company performing the storage upgrade) and finally the thousands of poor mobile subscribers who arguably bore the brunt of the failure. To be fair, Microsoft was able to restore most of the lost data, but only after days had passed. Needless to say, the legal community is now abuzz over potential lawsuits and some are already in the process of being filed.

The reasons for the failure are not really the primary purpose of the example. The example is intended to illustrate two things. First, while many think that Cloud Computing somehow takes us beyond the traditional IT practices, it does not. In reality, Cloud Computing builds upon them and is in turn dependent upon them for proper intended functionality. The responsibility for performing those practices can be vague, however, and needs to be clearly understood by all parties. Second, Cloud Computing without data is severely crippled, if not totally worthless. After all, the poor T-Mobile subscriber did not know who to meet or call, or even how to call to cancel or reschedule (unless they took the time to copy all of that information locally to the PDA, and some did). What good is next generation mobile technology if you have no idea of where to be or who to contact!

If we view it as such, then it could be argued that proper data storage management is the key foundation and enabler for Cloud Computing. If this is the case, then it needs to be treated as such when the services are being designed. You often hear that security should not be an afterthought; it needs to be considered in every step of a design process. This is most definitely true. The point of this article is that the same thing needs to be said for data storage and management.

The figure below illustrates this relationship. The top layer, which represents the user, leverages mobility and virtual networking to provide access to resources anywhere, anytime. Key enabling technologies such as 3G or 4G wireless and Virtual Private Networking provide secure, almost ubiquitous connectivity into the cloud where key resources reside.

Figure 1. Cloud Virtualization Layers

In the next layer the enabling services are provided by underlying applications. Some may be atomic, like simple email, in that they provide a single function from a single application. More and more, however, services are becoming composite in that they may depend on multiple applications acting in concert to complete whole business processes. These types of services are typically SOA enabled in that they follow process flows defined by an overarching policy and rule set maintained and driven by the SOA framework. In these types of services there is a high degree of inter-dependency which, while enabling richer service offerings, also creates areas of vulnerability that can become critical outages if one of the component applications in the process flow were to suddenly become unavailable. To accommodate this, many SOA environments provide recovery workflows which allow for graceful rollback of a particular transaction, as sketched below. Optimally, any failure of a component application should be totally transparent to the composite service. If a server that is providing the application were to fail, another server should be ready to take over that function and allow the service's process flow to proceed uninterrupted.
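To make the recovery workflow idea concrete, here is a minimal Python sketch of a composite process with compensating rollback steps. The step names and handlers are purely illustrative and are not drawn from any particular SOA product:

    # Minimal sketch of a composite (SOA-style) service flow with a
    # compensating rollback path. Step names and handlers are illustrative only.

    class Step:
        def __init__(self, name, do, undo):
            self.name, self.do, self.undo = name, do, undo

    def run_workflow(steps):
        """Run each step in order; on failure, undo completed steps in reverse."""
        completed = []
        for step in steps:
            try:
                step.do()
            except Exception as err:
                print(f"step '{step.name}' failed ({err}); rolling back")
                for done in reversed(completed):
                    done.undo()
                return False
            completed.append(step)
        return True

    def book_shipping():
        raise RuntimeError("shipping service unavailable")

    steps = [
        Step("reserve stock", lambda: print("stock reserved"),
                              lambda: print("stock released")),
        Step("charge card",   lambda: print("card charged"),
                              lambda: print("charge refunded")),
        Step("book shipping", book_shipping, lambda: None),
    ]
    run_workflow(steps)

When the shipping step fails, the two completed steps are undone in reverse order, which is the graceful rollback behavior described above.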

The layer below the service application layer provides this transparent resiliency and redundancy. Here, physical servers host multiple virtual machines, which can provide redundant and even load balanced application service environments.

In the figure below, we see that these added features provide the resource abstraction that allows one VM to step in when another fails so that a higher level business process flow can proceed without a glitch. Additionally, applications can be load balanced to allow for scale and higher capacity.

Figure 2. VMs set up in a Fault Tolerant configuration

As we pointed out earlier, however, this apparent Nirvana of application resiliency can only be achieved if consistent data is available to both systems at the time of the failover at the VM level. In the case of a transaction database, the secondary VM should ideally be able to capture the latest exchange so as to allow the application to proceed without interruption. In other words, the data has to have 'full transactional integrity'. At the very least, the user may have to fill out once again the form page they are currently working on. Without access to the data, any and all resiliency provided by the higher layers is null and void. The figure below builds upon Figure 2 to illustrate this.

Figure 3. Redundant Data Stores key to service resiliency

As the user interacts with the service, they ideally should be totally oblivious to any failures within the cloud. As we see in the figure above, however, this can only be the case if there are data repositories, consistent up to the current transaction, that the failover VM can mount so it can carry on the user service with as little interruption as possible. Doing this with traditional Direct Attached Storage (DAS) is a monumental task that is prone to vulnerabilities; achieving transactional integrity in this approach is difficult. The use of Storage Virtualization helps solve this complexity by creating one large virtual storage space that can be leveraged at the logical level by multiple server resources within the environment. Shown below, this virtualized storage space can be divided up and allocated by a process known as provisioning. Once these logical storage spaces (LUNs) are created, they can be allocated not only to physical servers but to individual VMs as well, in support of any higher level fault tolerance. The value of this is that failure at the VM level is totally independent of failure at the data storage level.

Figure 4. Failure mode independence
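As a rough illustration of the provisioning idea, the toy Python sketch below carves LUNs out of a virtualized pool and hands them to servers or individual VMs. All names and sizes are hypothetical:

    # Toy model of carving LUNs out of a virtualized storage pool and handing
    # them to servers or individual VMs. All names and sizes are hypothetical.

    class StoragePool:
        def __init__(self, capacity_gb):
            self.capacity_gb = capacity_gb
            self.luns = {}                      # lun name -> (size_gb, owner)

        def provision(self, name, size_gb, owner):
            """Allocate a logical volume (LUN) from the pool to a server or VM."""
            used = sum(size for size, _ in self.luns.values())
            if used + size_gb > self.capacity_gb:
                raise RuntimeError("pool exhausted")
            self.luns[name] = (size_gb, owner)
            return name

        def reassign(self, name, new_owner):
            """On VM failover the surviving VM simply mounts the same LUN."""
            size_gb, _ = self.luns[name]
            self.luns[name] = (size_gb, new_owner)

    pool = StoragePool(capacity_gb=10_000)
    pool.provision("lun-db01", 1_000, owner="vm-primary")
    pool.reassign("lun-db01", new_owner="vm-standby")   # VM failed over; data untouched

The point of the reassign step is the failure independence described above: when a VM fails over locally, the surviving VM simply mounts the same LUN and the data set itself is never touched.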

As shown in Figure 4, most VM level failures can be addressed at the local site. As a result, the failover VM can effectively mount the primary data store. Data consistency is not an issue in this case because it is the exact same data set. In instances of total site failure, the secondary site must take over completely, and the secondary storage must be used. It was pointed out earlier that this secondary store must have complete transactional integrity with the primary store and the dependent application. In a remote secondary site scenario designed for disaster recovery, maintaining up to the minute traditional data backups is cost prohibitive and logistically impossible. Consequently, reliable backup data is in many instances 12 hours old or greater.

Newer storage technologies come into play here that allow for a drastic reduction in the amount of data that has to be copied, as well as optimization of the methods for doing so.

Thin Provisioning

One of the major reasons for the difficulties noted in the previous section is the prevalence of overprovisioning in the data storage environment. This seems counterintuitive: if there is more and more data, how can data storage environments be overprovisioned? It occurs because of the friction between two sets of demands. When installing a server environment, one of the key steps is the allocation of the data volume. This is done at install time and is not an easy allocation to adjust once the environment has been provisioned. As a result, most administrators will weigh the risk and downtime of increasing volume size later against the cost of storage. In the end they will typically choose to over provision the allocation so that they do not have to be concerned about any issues with storage space later on.

This logic is fine in a static example. However, if we consider this practice in light of Business Continuity and Disaster Recovery, it becomes problematic and costly. The reason is that traditional volume management and backup methods require the backup of the whole data volume. This is the case even if the application is only actually using 20% of the allocated disk space. Size translates to WAN bandwidth, and suddenly disk space is not so cheap.

Storage virtualization enables something known as thin provisioning. Because the virtualized storage environment abstracts the actual data storage from the application environment, it can allocate a much smaller physical space than the application believes it has. The concept of pooling allows the virtualized environment to allocate additional space as the data store requirements of the application environment grow. This is all transparent to the application, however. The end result is a much more efficient data storage environment, and the need to re-configure the application environment is eliminated. The figure below illustrates an application that has been provisioned for 1 Terabyte of data storage. The storage virtualization environment, however, has only allocated 200 Gigabytes of actual storage, an 80% reduction in the physical storage consumed.

Figure 5. Thin Provisioning & Replication
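Here is a minimal sketch of the thin provisioning mechanism, assuming a hypothetical 1 TB virtual volume and 128 MB extents that are allocated only when they are first written:

    # Sketch of a thin-provisioned volume: the application sees a 1 TB volume,
    # but physical extents are only allocated as blocks are actually written.
    # Extent size and numbers are illustrative.

    EXTENT_MB = 128

    class ThinVolume:
        def __init__(self, virtual_gb):
            self.virtual_gb = virtual_gb
            self.allocated = set()              # extent indexes backed by real disk

        def write(self, offset_mb):
            """Back the extent containing this write with real storage, on demand."""
            self.allocated.add(offset_mb // EXTENT_MB)

        @property
        def physical_gb(self):
            return len(self.allocated) * EXTENT_MB / 1024

    vol = ThinVolume(virtual_gb=1024)            # application believes it owns 1 TB
    for offset in range(0, 200 * 1024, EXTENT_MB):
        vol.write(offset)                        # roughly 200 GB actually written
    print(f"virtual: {vol.virtual_gb} GB, physical: {vol.physical_gb:.0f} GB")
    # physical comes out at about 200 GB, an ~80% reduction versus the provisioned size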

The real impact comes when considering this practice in Business Continuity and Disaster Recovery. At the primary site, only the allocated portion of the virtualized data store needs to be replicated for business continuity at the local site, something that is termed thin replication. For disaster recovery purposes the benefit translates directly into an 80% reduction in the WAN usage required to provide full resiliency. It now becomes possible not only to seriously entertain network based DR (as opposed to the 'tape and truck' method), but to perform the replications multiple times during the day rather than once at the end of the day during off hours. Two things enable this: first, the drastic reduction in the data being moved, and second, the fact that the server is removed from these tasks by the storage virtualization. This means that the application server environment can be up 24/7 and support a more consistent Business Continuity and Disaster Recovery practice.
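The bandwidth argument is easy to quantify. The worked example below assumes a hypothetical 100 Mbps WAN link and compares replicating the full 1 TB provisioned volume against only the roughly 200 GB actually in use:

    # Worked example: time to replicate a full 1 TB volume versus only the
    # ~200 GB actually in use, over a hypothetical 100 Mbps WAN link.

    def transfer_hours(size_gb, link_mbps):
        bits = size_gb * 8 * 1e9                # decimal GB for simplicity
        return bits / (link_mbps * 1e6) / 3600

    for label, size_gb in (("full volume", 1000), ("thin replica", 200)):
        print(f"{label:>12}: {transfer_hours(size_gb, 100):.1f} hours at 100 Mbps")
    # full volume : ~22.2 hours, effectively impossible inside a business day
    # thin replica: ~4.4 hours, so multiple replication windows per day become realistic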

Continuous Data Protection (CDP)

The next of these technologies is Continuous Data Protection. CDP is based on the concept of splitting writes to disk into a separate data journal volume. This process is illustrated below. While the write to primary storage occurs as normal, a secondary write occurs which is replicated into the CDP data journal. This split can occur in the host, within the Storage Area Network, in an appliance, or in the storage array itself. If the added process is handled by the host (via a write splitter agent), the host must support the additional overhead.

Figure 6. Continuous Data Protection split on writes to disk

If the split is done in the disk array, the journal must be local within that array or within an array that is local; hence its use in DR is somewhat limited. If the split occurs within the SAN fabric or in an appliance, the CDP data journal can be located in a different location from the primary store. This can be supported in multiple configurations, but the main point is that on primary storage failover there is a consistent data set with full transactional integrity available, and the secondary VM can take over in as transparent a fashion as possible regardless of the site at which it is located.
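The split-write concept itself is simple. The following is a minimal sketch of the idea; in a real deployment the split is performed by a host agent, the fabric, an appliance or the array, not by application code, and the journal is a dedicated volume rather than an in-memory list:

    # Minimal sketch of the CDP write-split idea: every write lands on primary
    # storage and is also appended, with a timestamp, to a separate journal.

    import time

    class CdpWriter:
        def __init__(self):
            self.primary = {}                   # block address -> data
            self.journal = []                   # ordered (timestamp, address, data)

        def write(self, address, data):
            self.primary[address] = data                      # normal write path
            self.journal.append((time.time(), address, data)) # split copy to journal

    store = CdpWriter()
    store.write(0x10, b"customer record v1")
    store.write(0x10, b"customer record v2")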

Figure 7. CDP and its use in creating ongoing DR Backup

As shown above, with less than the original volume size, data consistency can be provided at whatever granularity the administrator requires for historical purposes, and up to the minute for real time recovery with data journaling. Also consider that disk space is cheap in comparison to bandwidth, and even cheaper in comparison to lost business. With only the used disk deltas being copied, far less bandwidth is used. Additionally, with a complete consistent data set always available, offline backups can occur to archive Virtual Tape Libraries (VTL) or directly to tape at any time, even during production hours, to provide for complete DR compliance in the event of total catastrophe at the primary site.
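Building on the sketch above, point-in-time recovery amounts to replaying the journal onto a baseline copy up to the chosen moment. Again, this is only an illustration of the concept, with the journal assumed to be time-ordered:

    # Continuing the CDP sketch: rebuild a consistent image of the volume as it
    # looked at any chosen point in time by replaying the journal up to that moment.

    def recover_as_of(journal, baseline, timestamp):
        """Apply journaled writes (in order) up to 'timestamp' onto a baseline copy."""
        image = dict(baseline)
        for when, address, data in journal:
            if when > timestamp:
                break
            image[address] = data
        return image

    # e.g. image = recover_as_of(store.journal, {}, chosen_recovery_time)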

Data De-Duplication

Full traditional backups will usually store a majority of redundant data. This means that every subsequent full image will consist mostly of redundant data that was already contained in the last full image. Replicating this data seems pointless, and it is.* Data de-duplication works on the assumption that most of the data that moves into backup is repetitive and redundant. While CDP, by its very nature of operation, works well towards reducing this for database and file based environments, most tape based backups will simply save the whole file if any change has been recorded (typically detected by size or last modification date).

*There may be instances where certain types of data cannot be de-duplicated due to regulatory requirements. Be sure that the vendor can support such exceptions.

Data de-duplication works at the sub-block level to identify only the sections of a file that have changed, and thereby backs up only the delta sub-blocks while maintaining complete consistency of not only the most recent version but also of all archived versions of the file. (This is accomplished by in-depth indexing that occurs at the time of de-duplication and preserves all versions of the data file for complete historical consistency.) As an example, when a file is first saved, the de-duplication ratio is obviously 1:1 because this is the first time that data is saved. Over time, however, as subsequent backups occur, file based repositories can realize de-duplication ratios as high as 30:1. A toy sketch of the sub-block principle follows, and the chart after it illustrates some of the potential reduction ratios for different types of data files.
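The following toy example illustrates the sub-block principle using fixed-size 4 KB blocks and SHA-256 fingerprints; real products typically use variable-size chunking and far more sophisticated indexing:

    # Toy illustration of sub-block de-duplication: split data into fixed-size
    # blocks, fingerprint each block, and store only blocks not already held.

    import hashlib

    BLOCK = 4096
    block_store = {}                            # fingerprint -> block data

    def backup(data: bytes):
        """Return the block recipe for this version and the count of new blocks stored."""
        recipe, new_blocks = [], 0
        for i in range(0, len(data), BLOCK):
            chunk = data[i:i + BLOCK]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in block_store:
                block_store[digest] = chunk     # only changed blocks consume space
                new_blocks += 1
            recipe.append(digest)
        return recipe, new_blocks

    v1 = b"".join(bytes([i]) * BLOCK for i in range(64))   # 64 distinct blocks
    v2 = v1[:-BLOCK] + bytes([255]) * BLOCK                # one block modified
    _, stored_v1 = backup(v1)                   # first pass stores all 64 blocks (1:1)
    _, stored_v2 = backup(v2)                   # second pass stores only the 1 changed block
    print(stored_v1, stored_v2)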

Document type                   De-dupe ratio     Reduction in backed-up data
New working documents           2:1 to 5:1        50% to 80% less data
Active working documents        10:1 to 20:1      90% to 95% less data
Archived inactive documents     30:1              97% less data
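The 'less data' percentages in the chart follow directly from the ratio itself, since the reduction is 1 - 1/ratio. A quick check:

    # The 'less data' percentages follow directly from the de-dupe ratio:
    # reduction = 1 - 1/ratio. A quick check against the chart above:

    for ratio in (2, 5, 10, 20, 30):
        print(f"{ratio}:1 -> {100 * (1 - 1 / ratio):.0f}% less data backed up")
    # 2:1 -> 50%, 5:1 -> 80%, 10:1 -> 90%, 20:1 -> 95%, 30:1 -> 97%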

As can be seen, these technologies can drastically reduce the amount of data that you need to move over the wire to provide data consistency as well as greatly reduce the storage requirements for maintaining that consistency. The result is an ROI that is unprecedented and simply cannot be found in traditional storage and networking investments.

In reality, de-duplication reduction ratios occur in ranges. More active data will show lower reduction ratios than data that is largely historical. As a data set matures and goes into archive status, the data reduction ratio becomes quite high because there is no further change to the data pattern within the file. This leads to the point that data de-duplication is best done at various locations, not only across the data's end to end path but from a life cycle perspective as well. For instance, de-duplication provides great value in reducing WAN usage for remote site backups if the function is performed at the remote site. It also finds value within the replication and archive process, particularly to a VTL or tape store, knowing that what goes onto that medium can typically be viewed as static and is for archive purposes.

Some of the newer research in the industry is around the management of the flow of data through its life cycle. As new data is created, its usage factor is high, as is the amount of change it undergoes. Imagine a new document that is created at the beginning of a standards project. As the team moves through the flow of the project the document is modified. There may even be multiple versions of the same document at the same time, all of which would be considered valid to the overall project.

Figure 8. Project Data Life Cycle

As the project matures and the standard solidifies, however, more and more of these documents become 'historical' and no longer change. Even the final valid document that the project delivers as its end product will not change without due process and notification. Then, at such a time, the whole parade begins anew. The main point is that as these pieces of data age, they should be moved to more cost effective storage: as the de-duplication ratio for a piece of data gets higher, that data should be moved to a cheaper tier. Eventually, that piece of data would end up in a VTL where it would act as a template for de-duplication against all further input to those final archives. The end result is a reduction in the amount of data as well as a lowering of the overall retention cost.
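As a sketch of that policy, the snippet below migrates a data set to progressively cheaper tiers as its de-duplication ratio climbs; the tier names and thresholds are purely illustrative:

    # Sketch of the life-cycle policy described above: as a data set ages and its
    # de-duplication ratio climbs, migrate it to progressively cheaper storage.
    # Tier names and thresholds are purely illustrative.

    TIERS = [            # (minimum de-dupe ratio, target tier)
        (30, "VTL / tape archive"),
        (10, "capacity-optimized disk"),
        (0,  "primary (performance) disk"),
    ]

    def place(dedupe_ratio: float) -> str:
        """Pick the cheapest tier whose threshold this data set has reached."""
        for threshold, tier in TIERS:
            if dedupe_ratio >= threshold:
                return tier
        return TIERS[-1][1]

    for ratio in (2, 12, 35):
        print(f"de-dupe {ratio}:1 -> {place(ratio)}")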

While it may be true that data storage is the key foundation and consequently the Achilles heel of Cloud Computing services, there are technologies available to enable data storage infrastructures to step up to the added requirements of a true cloud service environment. This is why the term Cloud Storage makes me uneasy when I hear it used without any qualification. Consider, after all, that any exposed disk in a server attached to a cloud could be called 'cloud storage'. Just because it is 'storage in the cloud' does not mean that it is resilient, robust, or cost effective. Consequently, I prefer to differentiate between 'Cloud Storage' (i.e. storage as a cloud service) and 'storage architectures for cloud services', which are the technologies and practices of data storage management that support all cloud services (of which Cloud Storage is one). The technologies reviewed in this article enable storage infrastructures to provide the resiliency and scale required for truly secure and robust data storage solutions for cloud service infrastructures. Additionally, they help optimize the IT cost profile from both capital and operational expense perspectives. These technologies also work towards vastly improving the RPO (Recovery Point Objective) and RTO (Recovery Time Objective) of any Business Continuity and Disaster Recovery plan. As cloud computing moves into the future, its fate will depend upon the integrity of the data on which it operates. Cloud service environments, and perhaps the companies that provide or use them, will succeed or fail based on whether or not they are built upon truly solid data management practices and solutions. The technology exists for these practices to be implemented. As always, it is up to those who deploy the service to make sure that they consider secure and dependable storage in the overall plan for Business Continuity and Disaster Recovery as well as business regulatory compliance.