HPWorld 98 & ERP 98 Proceedings
HP World ’98

Leveraging Fibre Channel Storage

Rob Young

CLARiiON

Coslin Drive

Southboro, MA 01772

Tel. 508 480 7428

FAX 508 480 7950

ryoung@clariion.com

 

You can profit from using this paper even if you don’t use any of the solutions shown here:

The information in this paper can be used to improve your negotiating leverage on other large-scale storage solutions. The additional leverage can dramatically lower your cost for such solutions.

    1. Key points:

  1. All very large storage solutions are built by connecting modular RAID components.
  2. Fibre Channel provides a powerful and flexible way to connect RAID components.
  3. This paper shows how to use Fibre Channel to connect multiple, independent disk arrays in parallel and configure them to meet virtually all centralized storage requirements while providing high availability, operational flexibility, and extremely high performance.

    2. Summary:

Fibre channel has benefits in both small and large scale solutions. This paper focuses on large-scale solutions where Fibre Channel has profoundly changed the options available. Its flexible connectivity enables a new paradigm in large-scale, centralized storage that is difficult to implement with SCSI.

This paper explains the paradigm shift and shows how to leverage multiple Fibre Channel disk arrays in parallel to achieve the major objectives of large-scale, centralized storage: high availability, operational flexibility, and extremely high performance.

Even enterprises that choose to continue with their present enterprise storage paradigm can achieve very significant discounts on their next storage purchase by demonstrating their familiarity with these options to their storage vendor.

  3. Organization of this paper:

    Section 4: What Fibre Channel is:

    Describes the features of Fibre Channel relevant to this paper

    Section 5: Why the key to any large-scale solution is connecting modules in parallel:

    Lays the groundwork for understanding how Fibre Channel connectivity can have such a large impact on large-scale solutions.

    Section 6: Why Fibre Channel is the best way to connect the modules to build large solutions:

    6.1 Defines what the ideal connectivity would look like

    6.2 Defines the module connectivity options available today:
    Inside-the-box,
    SCSI outside-the-box,
    Fibre Channel outside-the-box

    6.3 Explains why Fibre Channel outside-the-box comes closest to the ideal.

    6.4 Explains why the other differences between inside-the-box and outside-the-box connectivity are not major considerations

    Section 7: How to exceed "inside-the-box" performance using Fibre Channel

    Section 8: How to implement special features using standard RAID arrays and Fibre Channel:

    Explains how advanced data center requirements are better met with this new solution:
    Disaster tolerance (remote mirroring and replication)
    Data snapshots (sometimes called triple mirroring)
    Online resource reallocation between unlike platforms
    MVS to UNIX data transfer
    High speed online backup
    File sharing
    Ease of storage management

    Section 10: Conclusion

  4. What is Fibre Channel?

    Fibre Channel is a network technology optimized for storage. It can be used both to connect disks to RAID controllers and to connect RAID controllers to servers.

    The focus of this paper is how Fibre Channel improves the options for large-scale solutions by connecting multiple RAID arrays to multiple servers. Whether the RAID arrays themselves use Fibre Channel or SCSI to connect to the disks is not relevant to this paper.

    The future direction of Fibre Channel is to enable full function "Storage Area Networks" that use switched fabric to connect storage and servers across the enterprise. However, a key point of this paper is that this future capability is really just the icing on the cake: the most important impact of Fibre Channel is already on the market today. The only networking capability needed to fundamentally change large-scale storage options is the Fibre Channel hub, which is already a production-quality solution.

    Fibre Channel uses hubs to connect many servers to many disk arrays flexibly and reliably. This connectivity has fundamentally changed the options available to implement large-scale centralized storage solutions.

    1. What is a Fibre Channel hub?

    A Fibre Channel hub does the same thing an Ethernet hub does. A Fibre Channel hub is a component that enables online insertion and removal of servers and storage, and allows simplified star cabling instead of daisy-chain cabling.

    The hub logically connects all the devices connected to it into a Fibre Channel Arbitrated Loop. In the event that a connection stops working, the hub automatically disconnects that component from the logical loop so that the rest of the devices continue to communicate normally. This is in contrast to a bus technology such as SCSI where if one cable or terminator fails, the entire bus is rendered inoperable.
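The bypass behavior can be sketched in a few lines of code (a hypothetical model for illustration, not vendor firmware): the logical loop is simply the set of enabled ports, and a failed port drops out without disturbing the others.

```python
# Hypothetical sketch of FC-AL hub port bypass (illustration, not vendor code).
class Hub:
    def __init__(self, devices):
        self.ports = {d: True for d in devices}   # device -> port enabled

    def fail(self, device):
        # The hub detects a dead link and bypasses that port automatically.
        self.ports[device] = False

    def loop(self):
        # The logical loop is just the remaining enabled devices, in order.
        return [d for d, ok in self.ports.items() if ok]

hub = Hub(["server-A", "server-B", "array-1", "array-2"])
hub.fail("array-1")
print(hub.loop())   # the surviving devices keep communicating
```

Contrast this with a SCSI bus, where one bad cable or terminator takes the whole bus down.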

  5. Why the key to any large-scale solution is connecting modules in parallel:

    The state of the art in silicon components limits how big a single processing unit (such as a controller) can be. The only practical way to exceed that limit is to divide the processing between multiple sub-units that work in parallel. For example, no single-processor computer can match the performance of a Symmetric Multi-Processing (SMP) computer, which leverages multiple processing sub-units in parallel. The same principles dictate that the most powerful storage solutions be constructed from multiple RAID modules.

    Inside large computers or inside large storage subsystems there are multiple modules. In the case of the large computer, one module does not have access to internal registers in another module.

    In the case of storage, one RAID module does not have access to the disks of another RAID module. For that reason, the RAID module must be fully redundant inside and have two independent external connections, so that no single failure can cut off all access to a disk drive.

    Definition: RAID module:

    The RAID module building block consists of a pair of controllers with power, packaging, and cooling to hold disk drives. The RAID module is what directly controls a set of disk drives and controls RAID functions such as RAID 1/0, RAID 3 or RAID 5.

  6. Why Fibre Channel is the best way to connect the modules to build large solutions

    This section:

    …defines what the ideal connectivity would look like

    …defines the module connectivity options available today:
    Inside-the-box,
    SCSI outside-the-box,
    Fibre Channel outside-the-box

    …explains why Fibre Channel outside-the-box comes closest to the ideal.

    …explains why the other differences between inside-the-box and outside-the-box connectivity are not major considerations

    6.1 What ideal connectivity looks like

Connectivity ideals

  • Never fails
  • Minimized total solution cost
  • Never a bottleneck
  • Easy, non-disruptive changes
  • Accommodates any combination of servers and RAID modules

    [Illustrations not reproduced: "Conceptual Ideal," "Realistic Availability," and "Realistic availability and performance" configurations.]

    Comparison to conceptual ideal:

    Ideal: Never fails
    Conceptual ideal: No single point of failure in hardware or software: no matter what happens to A, B keeps running.
    Realistic configuration: Each cluster has no single point of failure and can survive simultaneous failures, one in each cluster.

    Ideal: Never a bottleneck
    Conceptual ideal: Any application can compete for all of the connectivity performance and all of the RAID module performance.
    Realistic configuration: Each application is protected from the performance demands of another application. This is most like the real world: people seldom combine OLTP and Data Warehousing on the same server cluster or on the same large-scale storage subsystem.

      6.2 Module connectivity options available today

    The example on the following page shows the different connectivity options as they apply to a specific centralized storage requirement.

    [Illustration not reproduced: the requirement implemented with each option (inside-the-box, SCSI outside-the-box, and Fibre Channel outside-the-box), annotated with the following characteristics:]

    • Simple connections
    • Scales well to many servers and high bandwidth
    • Non-disruptive changes
    • High performance per server HBA

    • No single point of failure
    • Accommodates any number of RAID modules
    • Minimized storage cost
    • High RAID module performance per investment
      6.3 How Fibre Channel comes closest to the ideal module connectivity

        Fibre Channel advantages over SCSI: scaling and ease of management

        The following example illustrates some of the difficulties with creating and managing very large configurations with SCSI, and shows how Fibre Channel addresses these issues.

          The example is a four-node cluster sharing four standard RAID arrays. For availability, each RAID array must have two separate paths to each server. To take advantage of the RAID array performance, each RAID array must have at least 50 MB/sec of bandwidth to the servers while the other RAID arrays are working equally hard. This is accomplished by giving each RAID module a pair of UltraSCSI connections to all the servers, not shared with any of the other RAID arrays. Since there are four RAID arrays, there are four pairs of UltraSCSI connections, so each server must have eight HBAs.
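The arithmetic behind this SCSI example, using the figures from the text (a pair of 40 MB/sec UltraSCSI buses per array):

```python
# Arithmetic behind the SCSI example: figures taken from the text above.
arrays = 4
paths_per_array = 2            # redundant A/B connections for availability
ultrascsi_mb_s = 40            # one UltraSCSI bus
bandwidth_per_array = paths_per_array * ultrascsi_mb_s   # MB/sec per bus pair
hbas_per_server = arrays * paths_per_array               # dedicated buses
print(bandwidth_per_array, hbas_per_server)   # 80 MB/sec, 8 HBAs per server
```

Eight HBAs per server is what makes the SCSI version expensive and hard to manage; the Fibre Channel version needs only two.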

          Illustration of Fibre Channel scaling advantages over SCSI

          Attribute: Performance scaling and management
          Parallel SCSI: Lack of fair arbitration: devices at the front of the bus get top priority, making performance unbalanced if more than three or four devices share a bus. 80 MB/sec per bus pair.
          Fibre Channel: Fair arbitration: all devices on a Fibre Channel arbitrated loop get their fair share of the 100 MB/sec bandwidth. 200 MB/sec per hub pair.

          Attribute: Price/performance
          Parallel SCSI: 8 HBA ports per server required for performance, adding to server cost.
          Fibre Channel: 2 HBA ports per server, minimizing server cost.

          Attribute: Capacity expansion
          Parallel SCSI: Requires downtime to extend sensitive SCSI buses.
          Fibre Channel: No downtime required.

          Attribute: Complexity, reliability
          Parallel SCSI: Many points of possible failure: 16 Y-cables; 32 stiff, 68-conductor cables; confusing cabling; difficult fault isolation.
          Fibre Channel: Fewer points of possible failure: 16 flexible 4-conductor cables and a pair of hubs; simple point-to-point cabling; automatic fault isolation via hubs.

          Attribute: "Over the river" physical replication for disaster tolerance
          Parallel SCSI: Not supported without proprietary bus extender hardware.
          Fibre Channel: Available at 2,000 meters now, 10 km soon; 30 km single-hop connections available with special hardware.

          Attribute: Manageability
          Parallel SCSI: Changes require advance planning and downtime. Distance constraints affect computer room layout. Distance and the number of connections per bus limit scaling.
          Fibre Channel: Online device insertion and removal. Relaxed distance constraints. Scaling virtually unaffected by computer-room distances and the number of servers and arrays.

        Summary of advantages over SCSI

        Both the Fibre and SCSI solutions above provide large capacity and very high performance. However, the Fibre solution is vastly easier to change and manage. The Fibre solution takes up fewer HBA slots in the servers, often allowing less expensive servers, which in turn leads to lower software license fees.

        Fibre Channel advantages over inside-the-box connectivity

          Total Hardware Cost

          Leveraging outside-the-box connectivity, Fibre Channel allows standard RAID arrays to be used as the RAID modules and standard hubs to be used for many-to-many connectivity. This dramatically reduces the purchase cost of the storage and the connectivity.

            In the illustrations above, Fibre Channel allows two fewer HBAs to be used in each server of the "Other Cluster" because each server needs to connect to only two hubs. In contrast, the inside-the-box solution requires four HBAs per server so that each server can have redundant connections to each storage subsystem. This lowers HBA cost and, more importantly, may lower server cost if HBA slots are scarce.

            Outside-the-box connectivity allows chassis capacity and RAID modules to be purchased as needed, one RAID array at a time. With a single large chassis, by contrast, the capacity required only occasionally matches the capacity of the chassis exactly; the next incremental requirement forces the purchase of a new chassis, and on average over time the last chassis added is only 50% utilized. Fibre Channel therefore reduces the average investment required over time by eliminating the large chassis that is twice as big as the requirement. Delaying each investment until it is needed also takes advantage of declining storage prices.
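A small worked example with illustrative (made-up) capacities shows the effect: buying modest arrays incrementally tracks the requirement far more closely than a large chassis can.

```python
# Illustrative (made-up) numbers: incremental arrays vs. one large chassis.
need_gb = 900                      # capacity actually required today
array_gb = 250                     # capacity of one standard RAID array
big_chassis_gb = 2000              # capacity of one large chassis

arrays_bought = -(-need_gb // array_gb)           # ceiling division: 4 arrays
incremental_capacity = arrays_bought * array_gb   # capacity paid for
print(incremental_capacity, big_chassis_gb)       # 1000 GB vs. 2000 GB bought
```

With these numbers the incremental buyer pays for roughly half the capacity the large-chassis buyer does, which is the "twice as big as the requirement" effect described above.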

          Application Performance

          Because either solution keeps the number of server connections low, either solution can scale to very high performance levels bounded only by how much money is to be spent. Leveraging the cost advantage of outside-the-box connectivity, Fibre Channel allows more performance to be purchased for any fixed level of investment compared to inside-the-box solutions.

            Because the "inside-the-box" solution keeps the clusters completely isolated, there is no danger that peak performance demands from the application on one cluster, such as for data warehousing, will take away from the performance necessary on the other cluster, such as for online transaction processing.

            1. Evidence for "outside-the-box" performance advantages

            The Transaction Processing Performance Council (http://www.tpc.org) exists to facilitate fair comparisons between alternative vendor solutions. It is supported by a large number of vendors and is probably the most successful organization of its type in the industry.

            At the time of this writing, to the best of our knowledge, all TPC benchmark results published that use RAID arrays have used multiple, independent RAID arrays connected via SCSI. While none have been completed using Fibre Channel, the same results can be expected if SCSI is replaced by Fibre Channel.

          Manageability

          Either solution looks much the same from the point of view of the database administrator. But other aspects of managing storage are improved by the flexibility and low cost of outside-the-box connectivity using Fibre Channel.

            Unified storage solution for both centralized and distributed storage requirements: If outside-the-box connectivity is used, the centralized solution can use the same RAID arrays that are also used in distributed locations across the enterprise. Centralized support resources will therefore be familiar with the storage used in distributed environments, because it is the same storage used in the centralized solutions.

            Inside-the-box connectivity offers a different alternative: the same storage solution used in the mainframe environment can be used in the centralized open systems environment. However, the MVS and open systems environments are very different and are usually staffed by different people, for reasons going far beyond the selection of storage devices. This is therefore a less significant opportunity for synergy than the opportunity outside-the-box solutions provide to make all open systems RAID arrays the same across both centralized and distributed environments.

            Easier performance planning and management: Unwanted interactions between independent application project groups, such as data warehousing and OLTP, can be eliminated. More performance can be purchased for a fixed investment, lessening the burden of optimizing the available hardware.

            Easier asset management: All open systems hardware can be re-allocated as time goes by. Disk arrays no longer used centrally can be re-deployed for distributed applications or vice versa.

            Easier capacity planning: More capacity can be purchased for any fixed budget, reducing the need to know exactly how much will be needed. Capacity can be added incrementally as needed without major chassis purchases.

          Availability

        Either of these solutions can provide highly reliable connectivity for large-scale clusters. Outside-the-box connectivity with Fibre meets the ideal of no-single-point-of-failure connectivity. With inside-the-box connectivity, both redundant data channels are in the same chassis and have a potential single point of failure in the software that controls the data paths, particularly centralized buffering or caching.

      6.4 Why other differences between "inside-the-box" and "outside-the-box" connectivity are of little importance

        Inside-the-box connectivity does not change how data placement is managed

        From a database administrator's point of view, neither inside-the-box nor outside-the-box connectivity makes storage easy. The administrator cannot blindly assign tables to storage partitions and expect to achieve high application performance. In either case, the administrator must consciously spread the heavily accessed data fairly evenly between the RAID modules. Otherwise, if one RAID module and its disk drives are overused, they become the bottleneck while the other RAID modules and their disk drives sit idle much of the time.

        Inside-the-box connectivity has not been shown to reduce "storage management" expense

        It has been said that storage management costs are several times the cost of the storage itself. However, there is little evidence to suggest that these costs are related to whether inside-the-box or outside-the-box connectivity is selected for centralized storage.

          Analyst studies showing significant storage management savings from centralizing storage and administrative control over it apply equally to inside-the-box and outside-the-box methods of building the centralized storage pools.

          No matter which storage system is used, most storage management costs are unrelated to the storage decision. They involve the tasks of managing online data. Backup is by far the biggest cost. The activities associated with backup, and virtually all backup tools on the market, are independent of the storage decision.

          In the few areas where there are management cost differences between outside-the-box with Fibre Channel and inside-the-box connectivity, outside-the-box connectivity is easier to manage for the reasons described above.

        Limited benefit to being able to split a RAID module between different application groups

        Outside-the-box connectivity using hubs does not allow two separate non-clustered platforms to be simultaneously attached to the same RAID module. Future connectivity using switches will support this. What is the impact of the present lack of this feature in "outside-the-box" connectivity solutions?

          In the ideal world, it would be nice for the data warehouse to be able to take advantage of excess compute and storage resources owned by the transaction processing system during off peak hours. As a practical matter, the benefits of complete insulation between applications exceed the missed opportunity to utilize idle resources. If the missed opportunity were so important, enterprises would put their data warehouse and their transaction processing systems on the same server cluster so that server resources for one application could be used by another during off peak hours.

          Generally the most important thing in performance management is to make sure that enough performance is always there to maintain acceptable performance levels during peak loads. Thus equipment purchases are driven by the "worst-case" scenario where all applications are working at once. Therefore the potential use of "idle" resources is of low importance when deciding how much equipment must be purchased.

        Inside-the-box connectivity does not enable complex functions inside storage

        At the 50,000-foot overview level, it would appear that having multiple RAID modules linked together inside a subsystem would allow more efficient interaction between RAID modules, just as multi-processor computers have highly efficient communication between processing modules. However, the opportunities for peer-to-peer interactions between RAID modules are limited.

          The software layers on the server (such as volume managers, file systems, operating systems, databases, and applications) expect storage to simply store and retrieve blocks without performing any other function. Storage must never do anything that catches the software by surprise. Examples of common functions that can safely be carried out internal to the storage include making extra copies or snapshots. However, the importance of internal connectivity for these functions is limited by the fact that the same functions can be accomplished using inexpensive server software and trivial amounts of server resources.

        Limited impact of making the same bytes available to different platforms

    Two different servers in the same cluster can safely share read/write access to the same bytes on storage thanks to the clustered lock manager and the fact that the software and operating system is the same.

    To our knowledge, neither Oracle nor any other database supports byte sharing between two different operating system platforms, such as the example shown at the left (illustration not reproduced).

    Before any such capability is supported, Fibre Channel switches will allow "outside-the-box" connectivity to give heterogeneous hosts reliable simultaneous access to the same storage. This would be implemented to allow the option of sharing a single disk array between multiple, heterogeneous hosts, thereby making it easier to combine enough requirements to justify purchasing the disk array. It seems fair to assume that if Oracle were to support this form of solution, it would be done in a manner that is compatible with standard Fibre Channel RAID arrays. This would benefit Oracle by making the function applicable to as many environments as possible.

    7. How to exceed "inside-the-box" performance using Fibre Channel "outside-the-box" connections

    The key to leveraging Fibre Channel for higher performance is the price/performance advantage of standard RAID arrays over inside-the-box solutions. The cost per IOPS and cost per MB/sec of RAID module controller performance is lower with outside-the-box solutions, as is the cost per physical disk. Therefore, for the same investment that would have been made in an inside-the-box storage solution, more performance can be purchased with the outside-the-box alternative leveraging standard RAID arrays.

      Rather than configuring standard RAID arrays to achieve the minimum configuration that meets the usable capacity goal, instead buy as much performance as you can given the investment you would have made in inside-the-box storage. This results in higher storage performance and higher application performance.

      Buying performance

      Buy additional RAID controller performance by buying additional RAID arrays, even if that means you will have several arrays that are only half filled with disks. This also reduces the cost of future capacity expansion to just the cost of the add-on disk drives.

        Buy enough disks to provide the desired back-end performance level. If the performance goal is dominated by read operations, you can still use RAID 5 with the extra disks, thereby not only getting more back-end disk performance but also more usable capacity than is required, or than would have been purchased with the inside-the-box solution using RAID 1 or RAID 1/0. If the performance goal is dominated by write operations and configuring the extra disk capacity using RAID 5 is not adequate, then use RAID 1/0. RAID 1/0 achieves more random write performance from a fixed number of physical disk drives than RAID 5.
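The RAID 1/0 recommendation rests on the classic write-penalty figures: a small random write costs four back-end I/Os on RAID 5 (read data, read parity, write data, write parity) but only two on RAID 1/0 (write both mirror halves). With hypothetical spindle counts:

```python
# Back-end disk I/Os per small random host write (classic rule-of-thumb figures).
raid5_ios_per_write = 4     # read data + read parity + write data + write parity
raid10_ios_per_write = 2    # write both halves of the mirror

disks, iops_per_disk = 20, 100          # hypothetical spindles and per-disk IOPS
backend_iops = disks * iops_per_disk
raid5_writes = backend_iops // raid5_ios_per_write     # random writes/sec
raid10_writes = backend_iops // raid10_ios_per_write   # random writes/sec
print(raid5_writes, raid10_writes)   # 500 vs. 1000: twice the write rate
```

The same spindles deliver roughly twice the random-write rate under RAID 1/0, which is why it is the fallback when RAID 5 with extra disks is not enough.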

        Buy enough cache and I/O buffer. Buy enough write cache to hold the largest burst of writes that will be generated. The size of the largest burst can often be reduced by performing database checkpoints more often, thereby putting fewer writes in each burst. Once this goal has been met, money that might have been spent on read cache in an inside-the-box solution can be redirected to server memory and used for I/O buffering there. Very large read caches are generally more effective in the server than in the storage. You can take advantage of the large memory capability of HP/UX 11.
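A sizing sketch with assumed numbers (the dirty-page rate and checkpoint interval are illustrative, not measured):

```python
# Hypothetical sizing: the write cache must absorb the largest checkpoint burst.
dirty_mb_per_min = 60              # rate at which the database dirties pages (assumed)
checkpoint_interval_min = 10       # assumed checkpoint interval
burst_mb = dirty_mb_per_min * checkpoint_interval_min   # write cache needed
# Checkpointing twice as often halves the burst the cache must hold:
halved_burst_mb = dirty_mb_per_min * (checkpoint_interval_min // 2)
print(burst_mb, halved_burst_mb)   # 600 MB vs. 300 MB of write cache
```

This is the trade described in the text: more frequent checkpoints buy a smaller write cache.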

        Buy enough storage connectivity to achieve the desired bandwidth. Each pair of hubs provides close to 200 MB/sec. If more is necessary, use more than one pair. When using two pairs of hubs with one very large cluster, connect all the servers to all the hubs, but connect half the storage to one pair of hubs and half the storage to the other pair of hubs.
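The hub-pair count follows from a ceiling division (the 500 MB/sec target is a hypothetical requirement; the 200 MB/sec per pair is the figure from the text):

```python
# Hub pairs needed for a target aggregate bandwidth (figures from the text).
mb_s_per_hub_pair = 200            # roughly 2 x 100 MB/sec arbitrated loops
target_mb_s = 500                  # hypothetical bandwidth requirement
hub_pairs = -(-target_mb_s // mb_s_per_hub_pair)   # ceiling division
print(hub_pairs)   # 3 pairs of hubs
```

With more than one pair, the text's rule applies: connect all servers to all hubs, but split the storage evenly across the pairs.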

      Configuring high availability

      For high-availability environments, each RAID array must be fully redundant, including two controllers, A and B. Each server should have two HBAs, A and B, so that no single HBA failure can cut the server off from the data. Use at least two hubs per cluster, A and B. Connect all the HBA A's and RAID controller A's to hub A. Connect all the HBA B's and RAID controller B's to hub B.
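The A/B cabling rule above can be expressed as a tiny helper (hypothetical, for illustration only):

```python
# Sketch of the A/B cabling rule: everything labeled A goes to hub A,
# everything labeled B to hub B, so no single failure isolates the data.
def wiring(servers, arrays):
    plan = {"hub-A": [], "hub-B": []}
    for s in servers:
        plan["hub-A"].append(f"{s}/HBA-A")
        plan["hub-B"].append(f"{s}/HBA-B")
    for a in arrays:
        plan["hub-A"].append(f"{a}/ctrl-A")
        plan["hub-B"].append(f"{a}/ctrl-B")
    return plan

plan = wiring(["srv1", "srv2"], ["array1"])
print(plan["hub-A"])   # ['srv1/HBA-A', 'srv2/HBA-A', 'array1/ctrl-A']
```

Losing any one HBA, controller, or hub still leaves a complete path on the other letter.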

    8. How to implement special features using standard RAID arrays and Fibre Channel:

      Across-the-country disaster tolerance

    The most common and most powerful "across the country" disaster tolerance solution is completely unrelated to the choice of storage connectivity. The solution is database-level replication.

    Database-level replication is implemented by server software such as Oracle Replication Server. The replication software copies the transaction to the remote server, which runs a completely separate instance of the database. The remote database is also administered separately.

    This solution has several advantages over alternative methods of replication that depend on storage-based connectivity.

      Across-the-river disaster tolerance

      For across-the-river distances, storage-level remote mirroring provides a simpler, although less powerful, method of remote replication than full database-level replication. By late 1998, 10 km distances will be commercially available using a single hop between hubs or switches. Today a single hop across 30 km can be achieved for about $8,000 using a Fibre Channel repeater.

        Inside-the-box connectivity solutions often manage this function within storage. However, it can just as well be managed at the host level using utilities such as HP MirrorDisk/UX or Veritas Volume Manager. These software options are generally much less expensive than storage-specific software used with inside-the-box connectivity. The management load on the host is trivial compared to the savings that can be achieved by using outside-the-box storage connectivity.

        The outside-the-box solution is even more attractive when making the entire cluster disaster tolerant. HP calls this "Wide Availability." Disaster-tolerant clusters in HP/UX require two data centers if server software is used for replication, and server software is typically used for replication with outside-the-box connectivity. In contrast, using "inside-the-box" connectivity to replicate data instead of letting the server do it requires three data centers to achieve a disaster-tolerant cluster. Thus the outside-the-box solution is less costly to implement and maintain.

        [Illustrations not reproduced: "Inside-the-box remote mirroring" and "Outside-the-box remote mirroring."]

      Data snapshots (for Y2K testing, fast recovery from database corruption)

      Data snapshots create a separate copy of the active database, then split off one copy so it remains fixed over time. This is useful for creating year 2000 testing environments. It can also provide fast recovery from database corruption: in the event of corruption, you switch back to last night's snapshot and then bring it up to the minute by applying today's journaled events. This is much faster than going back to the last full backup and doing a complete restore from tape.

        The function is basically the same as the remote mirroring solution above, but it can be done locally. Therefore HP MirrorDisk/UX or Veritas Volume Manager can provide the function very economically in conjunction with commonly available disk arrays used with outside-the-box connectivity.

      High speed online backup

      Backup is a logical function, not a physical function. Server software must be used to interpret the structure of the data during the backup. This is necessary to allow subsets of the data to be restored while most of the data is left in its current state. For example, individual file recovery is a common cause for restores, far more common than catastrophic storage system failure.

        One partial exception: With a little logical management, the bulk of a table backup consists of simply copying all the data in the table sequentially off a raw partition. As with file backup, the best solutions are completely compatible with any RAID array on the market today.

        Extremely fast online backup has been demonstrated using tools that operate at a logical level on the server rather than at the "physical level" on the storage. Operating at the logical level allows them to work automatically regardless of the underlying layers: how volumes have been partitioned, what logical volume manager has been used, or what storage has been used. This makes them easier, safer, and more universally applicable than tools that operate directly at the physical level. These tools also operate without the need to take a data snapshot, thereby cutting in half the amount of storage required.

        Examples of logical level high speed backup:

        Spectralogic Alexandria: 505 GB/hour demonstrated

        StorageTek REELBackup in conjunction with StorageTek dbBRZ

        Veritas NetBackup used with Oracle, Sybase, or Informix
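At rates like the Alexandria figure above, full-backup windows are easy to estimate (the 1,000 GB database size is hypothetical):

```python
# Backup-window estimate at the demonstrated rate quoted above.
rate_gb_per_hour = 505             # Spectralogic Alexandria demonstrated rate
database_gb = 1000                 # hypothetical database size
hours = database_gb / rate_gb_per_hour
print(round(hours, 1))   # roughly a two-hour window for a terabyte
```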

      MVS to UNIX data transfer

      Multi-platform "Storage Area Network" (SAN) capabilities may not exist until early 1999. Before you can proceed with confidence to use outside-the-box connectivity, you must have some confidence that there are other equally good or better ways to transfer data even without this "SAN" solution.

        All shipping or announced "inside-the-box" data transfer methods can be used with "outside-the-box" open systems storage. All existing transfer methods leveraging inside-the-box connectivity don’t know and don’t care whether transferred data gets stored on JBOD, independent disk arrays, or inside-the-box connectivity storage.

        These transfer methods only care that the open systems SERVER has a connection to the mainframe STORAGE solution. Once the data gets into the server, it is written to storage using the standard operating system methods, which don’t know and don’t care what brand of storage is used.

        Storage-independent transfer methods are also both faster and less expensive than proprietary tools using inside-the-box connectivity. These tools use special adapters on mainframe I/O channels to get data off the mainframe and feed it to the open systems SERVER at very high speed. Examples: 1) CNT FileSpeed: 30+ GB/hour, jointly marketed by CNT and IBM; 2) StorageTek Network Executive: 20+ GB/hour, over 3,000 copies installed.

      Online resource re-allocation

      Re-allocate entire disk arrays between clusters by unplugging them from one hub and plugging them into another. This was not possible online with SCSI.

        Re-allocate disks between clusters by removing from one disk array and inserting into another. (Plug in a new disk array if there was no empty chassis capacity available on the target platform. This was not possible online with SCSI.)

        This may seem kludgey compared to software-based re-allocation using inside-the-box connectivity. However, in either case, physically moving the resource takes only a few minutes, while the planning and data management takes hours or longer and is identical regardless of which connectivity solution is used. So at best, software-based re-allocation saves minutes from a process that takes hours.
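Illustrative numbers make the point (both durations are assumptions, not measurements):

```python
# Scale of the saving from software-based re-allocation (illustrative numbers).
move_minutes = 5                   # unplug an array from one hub, plug into another
planning_minutes = 4 * 60          # planning and data management, either way
saving_pct = move_minutes / (move_minutes + planning_minutes) * 100
print(round(saving_pct))   # the physical move is a tiny fraction of the job
```

Even if software re-allocation eliminated the move entirely, it would shave only a few percent off the whole process.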

        In either case, the data must be re-loaded by the server after the storage is moved. For example, the on-disk database image made by Oracle on the Sun platform is not readable by Oracle software running on HP/UX or NT.

      File sharing

    File sharing, also known as "network attached storage" or "network file systems," uses industry-standard file system protocols like NFS to share files between different platforms. There are three basic options for constructing file sharing solutions:

    1. Use file sharing software with general purpose servers with general purpose storage
    2. Use special purpose real time operating systems and hardware using proprietary interfaces to general purpose inside-the-box connectivity storage
    3. Use special purpose systems that integrate the server with the storage

    To take advantage of the inherent advantages of outside-the-box Fibre Channel connectivity, use option one above. It results in solutions that a) cost less (in $/MB and $/SpecNFS), b) go faster (SpecNFS), and c) are easier to manage, because i) data can be backed up using industry-standard backup tools that run either on the general purpose server or anywhere on the network (no new backup procedure to implement), and ii) they use the same servers and operating systems already in use for applications (no new hardware or operating system software to learn).

    10. Conclusion

    Fibre Channel outside-the-box connectivity enables a new paradigm in large-scale, centralized storage that is difficult to implement with SCSI.

    Multiple Fibre Channel disk arrays can be used in parallel to achieve the major objectives of large-scale, centralized storage.

    Even enterprises that choose to continue with their present enterprise storage paradigm can achieve very significant discounts on their next storage purchase by demonstrating their familiarity with these options to their storage vendor.


    The product names mentioned herein are the trademarks or trade names of their respective owners.

    ©Copyright 1998 Interex. All rights reserved.