Building Application and Data Availability without SAN


Flexible Storage Sharing provides great capabilities to reduce capital and operational expenditures. In a previous blog entry I described how to commoditize high availability and storage using Flexible Storage Sharing, and later we saw in this article how to add an extra node to the cluster. In this blog entry I am going to describe my next step, which was to have a database instance running on each cluster node. The idea is to provide resiliency by keeping a mirror of my data and redo logs on at least two servers. With this approach, each node holds a local copy of the database it is running plus a mirror for another instance. This is the architecture I will be using:

blog3_picture1_0.png
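To give an idea of how such a mirrored layout can be built on top of FSS, below is a minimal sketch of exporting the local disks and creating cluster-wide mirrored volumes for one of the instances. The disk, disk group, volume and mount point names are hypothetical, and the exact options (including how to force each mirror onto a different server) may differ depending on your Storage Foundation release:

# Export the local DAS disks so they become visible to the whole cluster (run on each node)
vxdisk export node1_disk1 node1_disk2

# Create a shared disk group that spans the exported disks from all nodes
vxdg -s init ora1dg node1_disk1 node1_disk2 node2_disk1 node2_disk2

# Create mirrored volumes for data and redo logs; with two mirrors,
# a full copy of the database ends up on two different servers
vxassist -g ora1dg make ora1_data 500g nmirror=2
vxassist -g ora1dg make ora1_redo 10g nmirror=2

# Put a cluster file system on the data volume and mount it on all nodes
mkfs -t vxfs /dev/vx/rdsk/ora1dg/ora1_data
cfsmntadm add ora1dg ora1_data /oradata1 all=
cfsmount /oradata1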

Each node has a flash card that is used with the new SmartIO feature to provide a cache that accelerates database reads. There is no SAN involved here, as I am using the internal storage capacity of each server (up to 25 internal HDDs). From the database high availability point of view, a service group has been created for each instance. Each instance has a preferred node where it will normally run, but it will be able to run on any server in case of failure. This is a screenshot of the configuration:

blog3_picture2.png
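For the SmartIO part, the flash card on each node is turned into a local cache area that serves the read cache. The two commands below are only a rough illustration; the device and cache area names are hypothetical and the sfcache options may vary between releases:

# Create a local cache area on the node's flash device (run on each node)
sfcache create -t VxFS ssd0 cachearea1

# Verify the cache area and its state
sfcache list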

Note that I am using a pre-release of Veritas Operations Manager (VOM) 6.1, which will include support for FSS. There is a new folder where CFS and FSS clusters are included and visualized. This gives us great visibility into the configuration, including volumes, mount points, number of mirrors, etc.:

blog3_picture3.png

The mount points are always available on all three cluster nodes, so I have created a parallel service group for each disk group and its mount points. On top of that we have a failover service group for each database instance, as we can see in the service group dependencies:

blog3_picture4.png
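At the VCS level this translates into a failover service group per instance, with a preferred node in the AutoStartList and a dependency on the parallel group that provides the disk group and mount points. A minimal sketch, using hypothetical group, resource and path names:

haconf -makerw

# Failover group for the first instance; node1 is the preferred node
hagrp -add ora1_grp
hagrp -modify ora1_grp SystemList node1 0 node2 1 node3 2
hagrp -modify ora1_grp AutoStartList node1

# Oracle resource managed by the VCS agent for Oracle
hares -add ora1_db Oracle ora1_grp
hares -modify ora1_db Sid ora1
hares -modify ora1_db Owner oracle
hares -modify ora1_db Home /u01/app/oracle/product/12.1.0/dbhome_1
hares -modify ora1_db Enabled 1

# The database group can only come online where its storage group is already online
hagrp -link ora1_grp ora1_storage_grp online local firm

haconf -dump -makero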

So my next question was what the performance was going to be. Now each host needs to run its own database and benchmark while also holding a mirror for another database. These are the results:

blog3_picture5.png

Compared with my first tests, the performance of each database is still very consistent, and I get 216K transactions per minute in total across my three-node cluster.

Something really nice about this configuration is that I can adapt my layout to the performance that is needed. Using 7 HDDs for data plus 1 HDD for redo logs on each of the servers gave me 91K transactions per minute:

blog3_picture6.png
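Adapting the layout is just a matter of how the volumes are created. As a rough, hedged example (the disk group, volume names and sizes are made up), the 7+1 split per server could look something like this, striping the data volume across seven spindles and keeping the redo logs on a dedicated disk, still with two mirrors so each copy lives on a different server:

# Data volume striped across 7 columns and mirrored twice across the cluster
vxassist -g ora1dg make ora1_data 500g layout=stripe-mirror ncol=7 nmirror=2

# Redo log volume on its own spindle, also mirrored
vxassist -g ora1dg make ora1_redo 20g nmirror=2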

Carlos Carrero.-

