
There are a number of discussions, blogs, and articles comparing Internet SCSI (iSCSI), Fibre Channel over Ethernet (FCoE), and Fibre Channel (FC). Many of them share a common belief that FCoE and FC are better suited as core data center storage area networks (SANs) and that iSCSI is ideal for Tier 2 storage or for SAN deployments in remote or branch office (ROBO) and small and medium business (SMB) environments. That is because iSCSI is characterized as low-performing, lossy, and unpredictable. In this blog I will tackle the misinformation around iSCSI performance as compared to FC and FCoE. I will also compare the effective efficiency of the various SAN protocols, since efficiency is an aspect of performance.

Both iSCSI and FCoE share the same 10 Gigabit Ethernet (10GbE) transport. However, the perception is that TCP/IP overhead makes iSCSI inefficient compared to FCoE and FC (both having a better payload-to-packet-size ratio), thus leading to lower performance and efficiency. Figure 1 shows the protocol efficiency calculations for iSCSI (both 1.5K MTU and 9K MTU), FC, and FCoE (2.5K MTU). It can be seen that when jumbo frames are enabled, iSCSI has the best protocol efficiency.
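As a rough illustration of how a protocol-efficiency figure like the one in Figure 1 can be derived, the sketch below computes a simple payload-to-wire-bytes ratio. The header sizes and the per-frame accounting are my own simplified assumptions (Dell's exact assumptions are not stated), so the percentages are indicative only, but they show why jumbo-frame iSCSI comes out ahead.

```python
# Illustrative payload-to-wire-bytes calculation (simplified; header sizes are nominal
# assumptions, and per-PDU iSCSI headers, preamble/inter-frame gap, etc. are ignored).

def efficiency(data_bytes, overhead_bytes):
    """Fraction of bytes on the wire that carry SCSI data."""
    return data_bytes / (data_bytes + overhead_bytes)

ETH = 14 + 4        # Ethernet header + FCS
IP_TCP = 20 + 20    # IPv4 + TCP headers carried in every iSCSI data frame

cases = {
    "iSCSI, 1500-byte MTU": efficiency(1500 - IP_TCP, IP_TCP + ETH),
    "iSCSI, 9000-byte MTU": efficiency(9000 - IP_TCP, IP_TCP + ETH),
    "FC (2112-byte payload)": efficiency(2112, 36),         # ~36 bytes of FC framing
    "FCoE (~2.5K frame)": efficiency(2112, 36 + 14 + ETH),  # FC frame plus FCoE/Ethernet framing
}

for name, value in cases.items():
    print(f"{name}: {value:.1%}")
```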

Figure 1: Protocol efficiency comparisons

Regarding performance, the claim that iSCSI is low-performing might have been true when 1 Gbps was the maximum throughput available per iSCSI port (whereas FC was delivering 2 Gbps, 4 Gbps, and 8 Gbps per port), but with the availability of 10GbE, the commonly held belief that iSCSI performance is not up to par with FCoE or FC is no longer true. The Office of the CTO at Dell conducted a series of performance tests to compare 10GbE iSCSI, FCoE, and 4 Gb FC. To ensure similar workloads, the application throughput was limited to 4 Gb. The host bus adapters (HBAs) used for the different protocols were as follows: a 10GbE network interface card (NIC) with iSCSI offload for iSCSI traffic; a 10GbE converged network adapter (CNA) for FCoE traffic; and a 4 Gbps FC HBA for Fibre Channel traffic. The goal of the testing was to capture achieved throughput and CPU utilization for a given SAN protocol.

The protocol efficiency comparisons from Figure 1 might be theoretical in nature; Figure 2 shows results from an I/O workload study comparing the throughput of 10GbE iSCSI, FCoE, and 4 Gb FC HBAs. To keep the results easy to visualize, they show the throughput achieved when the application generated 4 Gb of throughput. It can be seen clearly that iSCSI outperforms FCoE and FC for both read and write operations across various I/O block sizes.

Figure 2: Throughput performance comparisons (MB/s)

Along with capturing the throughput, let's examine the host CPU utilization to better assess the performance and efficiency of each SAN protocol. All the host adapters include hardware-based offload capability to process the protocol-specific traffic, minimizing use of CPU resources. Figure 3 shows the effective CPU utilization for various workloads. It can be seen from this figure that all the host adapters have similar CPU utilization metrics, again reinforcing the fact that iSCSI is as efficient as FCoE and FC.

Finally, Figure 4 shows throughput efficiency, defined as MBps/%CPU, for the various storage protocols. The chart shows 10GbE iSCSI having the best throughput efficiency across the workload types, clearly outperforming FCoE and FC.

From the test results we can confidently conclude that iSCSI as a SAN protocol is not lower-performing or inefficient compared to FC or FCoE. On the contrary, iSCSI outperforms both FC and FCoE. Customers who are planning to purchase storage for their data centers can consider an iSCSI SAN a viable option, knowing iSCSI performance is on par with or even better than FCoE and FC. Also, customers considering unifying their data center networks over Ethernet can start doing so now with iSCSI. While FCoE can also deliver storage traffic over Ethernet, it is still under development and is not ready for prime time.

Figure 3: CPU utilization (%) for iSCSI offload, FCoE, and FC

Figure 4: Overall protocol throughput efficiency (MBps/%CPU) for iSCSI offload, FCoE, and FC
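For readers who want the Figure 4 metric spelled out: throughput efficiency here is simply achieved throughput divided by host CPU utilization. A minimal sketch follows; the numbers are placeholders rather than Dell's measured values, which appear only in the charts.

```python
# Minimal sketch of the "throughput efficiency" (MBps per %CPU) metric from Figure 4.
# The throughput/CPU pairs below are placeholders, not the measured results.

def throughput_efficiency(throughput_mbps, cpu_util_percent):
    """MB/s delivered per percentage point of host CPU consumed."""
    return throughput_mbps / cpu_util_percent

results = {                      # (MB/s, %CPU) -- hypothetical 8 KB read workload
    "10GbE iSCSI offload": (480.0, 9.5),
    "10GbE FCoE CNA":      (470.0, 10.0),
    "4Gb FC HBA":          (420.0, 10.5),
}

for adapter, (mbps, cpu) in results.items():
    print(f"{adapter}: {throughput_efficiency(mbps, cpu):.1f} MB/s per %CPU")
```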

Defining The Terms


I want to try to avoid the "yeah, but" or fanboi comments from the outset. First, I understand FCoE much, much better than I understand iSCSI, so there may be some specifics or details that I am missing, and I highly encourage corrections or additions. My motive here is to examine the technologies in as detached and unbiased a manner as possible to get to the true performance numbers. Also, I'm looking here at the question of performance. By itself, performance is a Pandora's box of "it depends," and I understand and accept that burden from the get-go. Performance, like price, must be handled as a purchase criterion in context, so I'm not suggesting that any recommendations be made solely upon any one element over another. Having said that, what exactly are the performance concerns we should have with iSCSI vs. FCoE?

The Nitty Gritty


At first glance, it appears that FCoE provides a more efficient encapsulation method using standard transmission units. There is no need to travel as far up and down the OSI layer stack, for example, which means that there is less processing required on either end of a point-to-point network for dealing with additional headers. If you're new to this, think of it this way: you have a letter you want to send to Santa Claus. You write your letter, place it in an envelope, and then drop it in the mail. That letter then arrives at the North Pole (if you addressed it properly) and Santa's helpers open the letter and hand it to him. That's the FCoE metaphor. (Actually, here's a much better and more visually appealing description.)

How many layers? The TCP/IP metaphor (with respect to layers) means that you have to take that letter to Santa Claus and then place it into a larger envelope, and then put that larger envelope into a box before sending it on its way. The extra layers of packing and unpacking take time and processing power.

iSCSI requires more packing and unpacking in order to get to the letter, the argument goes, so over time Santa would in theory be able to open fewer letters in the same amount of time. There is evidence to suggest that this conventional wisdom may be misleading, however. There are a lot of factors that can affect performance, to the degree that a properly tuned iSCSI system can outperform an improperly configured FC system. In fact, an iSCSI storage system can actually outperform an FC-based product depending on factors more important than bandwidth, including the number of processors, host ports, cache memory, and disk drives, and how wide they can be striped (Inverted.com).

Ujjwal Rajbhandari from Dell wrote a blog piece comparing the performance of iSCSI, FCoE, and FC in which he found that iSCSI's efficiency can be profound, especially when enabling jumbo frames. Dell's measurements are somewhat difficult to place in context, however. While the article was written in late October 2009, only 4Gb throughput was used even though FCoE cards running at line speed had been available for more than half a year. (The graphs are also difficult to turn into meaning: one of them doesn't really make much sense at all, as it appears that CPU utilization is a continuum from reading to writing rather than a categorization of activities.) It seems to me that the whole point of understanding protocol efficiencies becomes salient as the speeds increase. The immediate question I have is this: if Dell points out that iSCSI efficiencies at 1GbE are inappropriate to compare against faster FC speeds, why would Dell compare slower FC speeds and efficiencies to 10Gb iSCSI? For instance, when moving from 4Gb to 8Gb HBAs, even within a pure 4Gb switching environment using 4Gb storage, the overall throughput and bandwidth efficiency can increase significantly due to improved credit handling.

Nevertheless, there is plenty of evidence to suggest that iSCSI performance is impressive. In February, Frank Berry wrote an article about how Intel and Microsoft are tweaking iSCSI for enterprise applications, improving CPU efficiency as well as blasting through some very impressive IOPS numbers. Stephen Foskett has a very interesting article on how it was done and rightfully asks the more important question: can your storage handle the truth? Now, it's very easy to get sidetracked into other aspects of an FCoE/iSCSI decision tree. "Yeah, but" becomes very compelling to say, but for our purposes here we're going to stick with the performance question.
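To make the "tuning matters more than the wire protocol" argument concrete, here is a back-of-the-envelope bottleneck model: end-to-end throughput is capped by the narrowest stage in the path, not by encapsulation overhead alone. All numbers are hypothetical examples, not benchmark results.

```python
# Hypothetical bottleneck model: effective throughput is the minimum of the wire rate
# (after protocol overhead), the controller limit, and the aggregate disk stripe.

def end_to_end_mbps(wire_mbps, protocol_eff, controller_mbps, disk_mbps_each, disk_count):
    return min(wire_mbps * protocol_eff, controller_mbps, disk_mbps_each * disk_count)

# A well-tuned 10GbE iSCSI system with a wide stripe (10GbE ~ 1250 MB/s raw) ...
iscsi = end_to_end_mbps(1250, 0.96, controller_mbps=1000, disk_mbps_each=70, disk_count=24)

# ... versus a 4Gb FC system (4Gb FC ~ 400 MB/s raw) limited by a narrow stripe.
fc = end_to_end_mbps(400, 0.98, controller_mbps=1000, disk_mbps_each=70, disk_count=6)

print(f"iSCSI example: {iscsi:.0f} MB/s, FC example: {fc:.0f} MB/s")
```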

How much performance is enough?

Ultimately the question involves the criteria for data center deployment. How much bandwidth and throughput does your data center need? Are you currently getting 4 GB/s of storage bandwidth in your existing infrastructure? There is more to SAN metrics than IOPS, of course; you need to take it hand-in-hand with latency (which is where the efficiency question comes into play). Additionally, there is the question of how well the iSCSI target drivers have been written and tuned. So, obviously iSCSI can be highly tuned to deliver jaw-dropping performance when given the right circumstances (the sketch below ties the basic metrics together). The question that comes to mind, then, is how performance scales.
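Since bandwidth, IOPS and latency get conflated easily, the following sketch ties the standard identities together (throughput = IOPS x block size; Little's law relates IOPS, latency and outstanding I/Os). The example numbers are arbitrary.

```python
# Standard identities relating SAN metrics; example values are arbitrary.

def throughput_mb_s(iops, block_size_kb):
    """Throughput in MB/s for a given IOPS rate and block size."""
    return iops * block_size_kb / 1024.0

def outstanding_ios(iops, latency_ms):
    """Little's law: I/Os that must be in flight to sustain a given IOPS at a given latency."""
    return iops * (latency_ms / 1000.0)

print(throughput_mb_s(25_000, 8))   # 25k IOPS of 8 KB I/O is only ~195 MB/s
print(outstanding_ios(25_000, 2))   # ...and needs ~50 I/Os in flight at 2 ms latency
```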

How does performance scale?


iSCSI best practices require a completely separate iSCSI VLAN or network, which helps dedicate bandwidth to SAN traffic. Nevertheless, what's not clear is what happens to performance at larger scales:

What happens with boot-from-SAN (e.g., PXE) environments?
What is the theoretical maximum node count?
What is the practical maximum node count?
What is the effect of in-flight security (e.g., encryption) upon performance?
What is the threshold for performance degradation?
How does scaling affect the performance of the IQN server/management?
Where is the retransmission threshold for congestion, and what is the impact on the performance curve?

This is where my limited experience with iSCSI is likely to get me into trouble. I'm having a hard time finding the answers to those questions as they relate to 10Gb iSCSI, so I'm open to input and clarification.

Bottom line.
Even with these additional questions about issues that affect performance, it's clear that iSCSI does have the performance capability for data center storage traffic. There are other considerations, of course, and I'll be addressing them over time. Nevertheless, I think it's quite clear that, all things being equal (and yes, I know, they never are), iSCSI can easily put up the numbers to rival FCoE.

Why FCoE? Why not just NAS and iSCSI?


Scott Lowe recently wrote a good post on FCoE; his thoughts are here. The comments from his readers are comments I've heard from others as well, so I posted a response in the comments, but I think Scott and I don't have the same readership (and perhaps those who do may not read the comments).

This is an important dialog, IMHO, and I thought my response was worth posting, as I've gotten loads of questions like this also. If you're interested in this thread, I suggest reading Scott's posts and the comments. If you want to see my take, read on. (From my comment on Scott's blog post:)

Guys, the multi-hop thing is a bit old news - I did a big post on this when the FCoE spec was done (June 3rd): http://virtualgeek.typepad.com/virtual_geek/2009/06/fcoe-ratified.html This covers it in gory detail. The specific issue is that pre-standard initiators and targets were missing something called FIP (FCoE Initialization Protocol). The gen 1 HBAs from QLogic and Emulex were really more for early interop, plugfests, and development, and I believe (I know this for a fact for the QLogic 8000 series - and I would fully expect the same from Emulex) they are not software-upgradable to the FC-BB-5 standard that includes FIP.

BTW - we caught flack at EMC for not natively supporting FCoE earlier on the array targets, but this was why - the standard simply wasn't ready. It was ready for host to FCoE switch to FC switch to FC target. Now, it's getting ready for array targets. Personally, that's why I disagreed with the approach of taking the QLE8000 series card (with custom pre-FIP-standard elements), putting it into a FAS head, and calling that a solution. While that was going on (and making marketing noise - but frankly a move that doesn't help the customer, because now they have a FAS head that needs a heavy hardware maintenance window to do a PCIe card upgrade), we were busy doing interop and working on the standard at the standards body (look at the meeting minutes; they are all public). We're now, of course, developing an UltraFlex I/O module for FCoE, which is hot-swappable.

But back to the larger question - why FCoE? People who know me know I'm a SUPER fan of NAS and iSCSI, and naturally am biased in that direction, but as I've worked with more and more customers, I have a growing understanding of the why. NFS and iSCSI are great, but there's no getting away from the fact that they depend on TCP retransmission mechanics (and in the case of NFS, potentially even higher in the protocol stack if you use it over UDP - though this is not supported in VMware environments today). Because of the intrinsic model of the protocol stack, the higher you go, the longer the latencies in various operations. One example (and it's only one): this means failure handling always takes seconds, and normally many tens of seconds, for loss of connection/state (assuming that the target fails over instantly, which is not the case for most NAS devices). Doing it in shorter timeframes would be BAD, as in this case the target is an IP, and for an IP address to be non-reachable for seconds is NORMAL.

There's also the fact that anything dependent on TCP/IP will have scenarios that depend on ARPs, which can take time. This isn't a secret. Look at the NetApp TR-3428 (and upcoming TR-3749) and EMC H6337 docs, which spell out the timeouts for NFS datastores on FAS and Celerra platforms respectively - these are in many tens of seconds (refer to the latest; currently it adds up to 125 seconds), and for iSCSI, if you read the VMware guides, the recommendation is 60 seconds. FCoE expects most transmission-loss handling to be done at the Ethernet layer, via 802.1Qbb (STILL NOT A STANDARD!) for lossless congestion handling and legacy CRC mechanisms for line errors. This means milliseconds - and in fact in many cases microseconds - of link-state sensitivity. [A rough sketch of these timescales follows after this comment.]

Also, whereas we are seeing 30x performance increases for solid-state disk on devices without filesystems, we see 4-6x in cases where they support a filesystem. That doesn't mean filesystems (or NAS devices) are bad, but it highlights that one answer isn't the answer all the time, for all workloads, all SLAs, and all use cases. These ARE NOT showstoppers for many, many (most?) applications and many, many use cases, but they are for some - and often, those are applications with hyper-stringent SLAs - but we want to virtualize everything, every application possible, right?

All FCoE adapters and switches can also be used for iSCSI and NAS, so don't think of it as an either/or, but an and. It means that it is possible to whittle the use cases that can't use an Ethernet storage transport down to near zero (it's not zero, because there will always be mainframes and whatnot). The ultimate point on this (this being the point that it's not an FC HBA, but rather a NIC feature) is that Intel has committed to supporting the final result of 802.1Qbb and then doing a software initiator - at that point, FCoE support will just be an attribute of every commodity NIC and switch on the market. Everyone in the FC HBA/switch market is rushing to it not because they want proprietary, but rather because we're reaching the inflection point where if you're not doing this, you're going to be out of business (maybe not today, but at a relatively near tomorrow). The FCoE idea is important (again, as a NIC/switch feature) because it means that convergence (wire once; use for LAN/NAS/iSCSI/FCoE) is then applicable to a broader market, which only accelerates the broader use of Ethernet storage - something many people (me included) want to see come sooner rather than later. There's also a lesser IT value proposition of maintaining and integrating with existing tools and processes. I only say lesser because frankly, if there's a better way, it can over time change a process.

Remember - this is coming from someone who: a) loves NAS; b) loves iSCSI (came from an iSCSI startup);

c) works for a storage company that is in the NAS, iSCSI, FC, and FCoE (and heck, COS and CAS as well) business - we just do what our customers tell us they need. At least in my personal experience, our customers are asking for FCoE for those reasons.
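As referenced in the comment above, here is a rough sketch comparing the failure-detection windows being discussed. The NFS (125 s) and iSCSI (60 s) figures are the ones cited in the comment; the link-layer figure is a placeholder for "milliseconds of link-state sensitivity," and the SLA budget is purely hypothetical.

```python
# Order-of-magnitude comparison of the failure-detection windows discussed above.
# 125 s (NFS) and 60 s (iSCSI) are the values cited in the comment; the FCoE value is
# a placeholder for "milliseconds of link-state sensitivity".

failure_detection_window_s = {
    "NFS datastore (TCP + ARP + failover timers)": 125.0,
    "iSCSI (VMware-recommended timeout)": 60.0,
    "FCoE (Ethernet-layer link/loss handling)": 0.005,
}

sla_budget_s = 1.0  # hypothetical per-incident outage budget for a stringent-SLA application

for transport, window in failure_detection_window_s.items():
    verdict = "within" if window <= sla_budget_s else "exceeds"
    print(f"{transport}: {window:g} s ({verdict} a {sla_budget_s:g} s budget)")
```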

Continuing the FCoE Discussion


Tuesday, December 9, 2008 in Storage by slowe | 17 comments

A few weeks ago I examined FCoE in the context of its description as an I/O virtualization technology in my discussion of FCoE versus MR-IOV. (Despite protestations otherwise, I'll continue to maintain that FCoE is not an I/O virtualization technology.) Since that time, I have read a few more posts about FCoE in various spots on the Internet: "Is FCoE a viable option for SMB/Commercial?", "Is the FCoE Starting Pistol Aimed at iSCSI?", and "Reality Check: The FCoE Forecast". Tonight, after reading a blog post by Dave Graham regarding FCoE vs. InfiniBand, I started thinking about FCoE again, and I came up with a question I want to ask. I'm not a storage expert, and I don't have decades of experience in the storage arena like many others who write about storage. The question I'm about to ask, then, may just be the uneducated ranting of a fool. If so, you're welcome to enlighten me in the comments. Here's the question: how is FCoE any better than iSCSI? Now, before your head explodes with disbelief at the horror that anyone could ask that question, let me frame it with more questions. Note that these are mostly rhetorical questions, but if the underlying concepts behind these questions are incorrect you are, again, welcome to enlighten me in the comments. Here are the framing questions that support my primary question above:
1. FCoE is always mentioned hand-in-hand with 10 Gigabit Ethernet. Can't iSCSI take advantage of 10 Gigabit Ethernet too?
2. FCoE is almost always mentioned in the same breath as "low latency" and "lossless operation." Truth be told, it's not FCoE that's providing that functionality, it's CEE (Converged Enhanced Ethernet). Does that mean that FCoE without CEE would suffer from the same problems as iSCSI?
3. If iSCSI were running on a CEE network, wouldn't it exhibit predictable latencies and lossless operation like FCoE?

These questions, and the thoughts behind them, are not necessarily mine alone. In October Stephen Foskett wrote:

And iSCSI isn't done evolving. Folks like Mellor, Chuck Hollis, and Storagebod are lauding FCoE at 10 gigabit speeds, but seem to forget that iSCSI can run at that speed, too. It can also run on the same CNAs and enterprise switches.

If those Converged Network Adapters (CNAs) and enterprise switches are creating the lossless CEE fabric, then iSCSI benefits as much as FCoE. Dante Malagrino agrees on the Data Center Networks blog: "I certainly agree that Data Center Ethernet (if properly implemented) is the real key differentiator and enabler of Unified Fabric, whether we like to build it with iSCSI or FCoE." It seems to me that all the things FCoE has going for it (10 Gigabit speeds, lossless operation, low-latency operation) are equally applicable to iSCSI, as they are functions of CEE and not of FCoE itself. So, with that in mind, I bring myself again to the main question: how is FCoE any better than iSCSI?

You might read this and say, "Oh, he's an FCoE hater and an iSCSI lover." No, not really; it just doesn't make any sense to me how FCoE is touted as so great while iSCSI is treated like the redheaded stepchild. I have nothing against FCoE; just don't say that it's an enabler of the Unified Fabric. (It's not. CEE is what enables the Unified Fabric.) Don't say that it's an I/O virtualization technology. (It's not. It's just a new transport option for Fibre Channel Protocol.) Don't say that it will solve world hunger or bring about world peace. (It won't, although I wish it would!) Of course, despite all these facts, it's looking more and more like FCoE is VHS and iSCSI is Betamax. Sometimes the best technology doesn't always win...

17 comments

1.

Justin on Tuesday, December 9, 2008 at 3:32 am

Scott, please forgive me if I'm explaining stuff you've heard before and am insulting your intelligence, but from my experience the big deal with FCoE vs. iSCSI is that one works at Layer 2 and one works at Layer 3 (respectively). iSCSI is SCSI commands and data (1), encapsulated in TCP packets (2), sent using IP (3), over Ethernet frames (4). FCoE is SCSI commands and data (1), encapsulated in FC frames (2), sent over Ethernet frames (3). It's just inherently less overhead. FCoE is essentially SCSI over Ethernet, whereas iSCSI is SCSI over IP. Similarly, there does exist FCIP, which is comparable to iSCSI although it's used for a completely different purpose (tunneling FC from site to site). So there's less work to do for FCoE, and this means less work to do either in the software driver for it, or in the adapter. Also of note is that today a hardware iSCSI adapter will run you about the same as an FC HBA, so there's not that much cost savings. The other important point is that FCoE will work with existing FC storage arrays with no modification, which means you can start your unified fabric while still maintaining your old FC infrastructure. I do think that 10Gb iSCSI will be better than 1Gb iSCSI, obviously, but FCoE will be better. There will be support for existing enterprise storage arrays in much greater abundance with FCoE than iSCSI, and your performance will be better straight off the bat (a 4Gb FC array via FCoE will be way faster than a 1Gb iSCSI array on 10GbE). That being said, iSCSI will always be routable whereas FCoE won't be, but do you really want to be routing your storage traffic? I think the market will eventually decide which technology will win, and just from the way I see it I'm betting on FCoE due to its compatibility. Your thoughts?

2.

Rodos on Tuesday, December 9, 2008 at 5:12 am

Great topic Scott. There is a thread on VMTN about this at the moment, and it would be great if people who are thinking about this would contribute. It's at http://communities.vmware.com/message/1119046 - Rodos

3.

Rodos on Tuesday, December 9, 2008 at 5:19 am

FCoE is better than iSCSI when you need to integrate with FC fabrics that may already exist. It's a fruit comparison, but not apples to apples. One use case: if you have your existing FC storage fabric but want to bring in new equipment, you can use FCoE at the access layer and then transport it over FC in the core to get to the existing SANs. Not many SANs natively support FCoE, but yes, they do support iSCSI. However, many don't allow FC and iSCSI either at the same time or to the same LUNs. But my point is more about transition and integration into existing environments. Just a thought as to one difference.

4.

Gary on Tuesday, December 9, 2008 at 5:33 am

I see where the confusion comes from here. The main difference between iSCSI and FCoE is that iSCSI uses the TCP/IP stack and FCoE runs directly on Ethernet. Ethernet has no flow control, therefore the new converged switches need to add flow control to the traffic (a bit of a Wikipedia-lookup answer, but the interpretation is important). Like you I am not a storage guru, but as for the other main activities associated with FC, zoning and mapping, the new converged switches will need to perform this process. iSCSI does not have these facilities, although they can be mimicked using access lists.

FCoE is not routable, unlike iSCSI, so all devices from initiator to target need to be converged (FCoE-aware) devices to create a fabric. The nice thing about iSCSI is that it can be implemented on an IP network. Its performance is going to be tied to the setup and any hardware used to provide the transport (doing iSCSI in hardware rather than in software provides benefits, as does tech like jumbo frames). iSCSI will also inherit any issues with the IP network it is running on top of. FCoE is going to add an additional cost to the infrastructure at any site in which it is deployed. Converged switches will not be cheap, and compatible HBAs will still be required. Whether the HBA will also be converged is another question (can I run FCoE and TCP/IP over the same port?). The bit of knowledge I am missing is how converged are these devices? Once the initiator connects to the converged switch, is the transport between these switches also converged (carrying both TCP/IP networking and FCoE traffic)? Even further, if this is the case, then how do the storage admins feel about their storage path being polluted with other traffic?

I would almost consider that FCoE only really provides a few benefits: 1. Brings FC to 10gig. 2. Reduces the number of deployed devices (network + fabric vs. converged). 3. Changes the medium to a lower-cost alternative and enables storage pathing over infrastructure that might be more readily available (gets rid of fibre runs and replaces them with copper). I'm probably wrong about a lot of this stuff, so some clarification would help both myself and other readers of Scott's (excellent) blog. Gary

5.

Gary on Tuesday, December 9, 2008 at 6:24 am

The difference between iSCSI and FCoE is mostly down to the transport. FCoE uses Ethernet as the transport and iSCSI uses TCP/IP. Ethernet needs to be extended to support flow control (already part of the TCP/IP standard). FCoE requires a converged device to perform the following: 1. Map MAC address to WWN. 2. Perform zoning/mapping/masking functions. 3. Create a fabric between initiator and target. The nice thing about iSCSI is that the network doesn't need to change; standard switching and routing can provide a route from initiator to target (routing being the key word, as Ethernet is not routable). iSCSI does not provide the zoning/mapping/masking functions, but some of this can be achieved via access lists and clever VLAN tagging. FCoE also supports VLAN tagging, so logical separation of traffic is still possible (rather than the guaranteed physical separation a fabric provides). Adapters will also be converged, so both TCP/IP and FCoE can use the same medium.

This is where I think the standard helps some implementations but hinders others. Here's my thinking: if you converge the two transports then you need to have some kind of QoS in place to ensure the storage path is not interrupted. Storage admins like to know where the bottlenecks can exist in the transport (fabric) to identify throughput issues. The converged devices will need a management system for both QoS and throughput analysis to satisfy the needs of the storage admins and networking teams. It's great that FCoE reduces the number of devices, the amount of cabling, and power/cooling requirements, but at the same time it is bad that data paths are shared to the degree available, as it can lead to bleed between the networking guys and the storage guys. I would expect that a lot of early implementations will still keep the storage and networking paths logically separated, i.e., a network card for TCP/IP traffic and an HBA for FCoE, with separate trunks/paths all the way through the infrastructure (probably using VLAN tagging). It's the only way to guarantee to both networking and storage teams that their traffic has equal priority.

I work with a relatively small setup (VMware, blades, and NetApp). I'm not a storage guru. I currently utilise both iSCSI and FC in my environment. FCoE would not change much for me, but I can see the datacenter taking a big advantage. The standard isn't going to be the issue; it's the management of converged traffic that will be the big one. It's similar to when voice/video came onto TCP/IP: suddenly there was traffic that needed priority. Voice/video is easier to manage as we know bandwidth requirements in advance. Storage is generally not so uniform. 10Gb will quickly become 5Gb data / 5Gb storage, or a similar weighting. At least this way we can guarantee the throughput to each of the different teams. Gary

6.

slowe on Tuesday, December 9, 2008 at 7:34 am

Justin, I'll grant you that FCoE has less overhead; there's no doubt about that. But I really have to question the compatibility of FCoE with existing FCP: how does having to use new adapters (CNAs) and new switches (Cisco Nexus or equivalent) equal compatibility? Sure, it's still FC, but you'll need a bridge to join FCoE and FCP fabrics. I think a lot of people overlook the fact that new CNAs and new fabric switches are required. As for performance, of course a 4Gb FC array via FCoE over 10Gb Ethernet will be better than a 1Gb iSCSI array over 10Gb Ethernet. The bottleneck here is the array, not the fabric or the transport. You're betting on FCoE for compatibility, but to me iSCSI seems much more compatible.

Rodos, as I mentioned to Justin, it seems to me that we'll need an FCoE-to-FCP bridge to join the physical fabrics into a single logical fabric. This bridge will introduce less latency and overhead than an iSCSI-to-FCP bridge, but it will be an additional piece of equipment that will be required. FCoE does seem much more of a transitional technology than anything else, hence my comment about VHS vs. Betamax. VHS was not the best technology, but it won. Will we see the same with FCoE and iSCSI?

7.

Roger Lund on Tuesday, December 9, 2008 at 11:06 am

slowe, I tend to agree with you. Additionally, it is very easy to scale iSCSI: whereas you are stuck with one controller on an EMC/NetApp FC array, each iSCSI array (EQL) has its own controller. Therefore, if you have three racks of EMC or three racks of EQL (12 per rack), the EMC arrays each have two controllers (or at least most that I have seen do), whereas the EQL iSCSI setup would have something like 36 controllers, vs. three EMC controllers for the same amount of storage. Now, even if you had 8Gb FC, wouldn't you be limited to 8Gb x 4 ports x 3 controllers = 96Gb? To make it even, let's say you had iSCSI SANs with 10Gb controllers: 10Gb x 2 ports x 36 controllers = 720Gb. Hence, if you had six top-end switches at 10Gb, connected to 36 10Gb SANs all on the same switch backplane, wouldn't the EQL have better throughput than FC over Ethernet?
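A quick check of the arithmetic in the comment above, using the commenter's own hypothetical port and controller counts:

```python
# Aggregate front-end bandwidth under the commenter's hypothetical configuration.

def aggregate_gbps(port_speed_gbps, ports_per_controller, controllers):
    return port_speed_gbps * ports_per_controller * controllers

fc_total    = aggregate_gbps(8, 4, 3)     # 8Gb FC, 4 ports/controller, 3 controllers -> 96 Gb/s
iscsi_total = aggregate_gbps(10, 2, 36)   # 10GbE iSCSI, 2 ports/array, 36 controllers -> 720 Gb/s

print(fc_total, iscsi_total)  # 96 720
```

Of course this counts raw front-end ports only; as the reply below notes, whether hosts and initiators can actually drive that aggregate is a separate question.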

8.

Stu on Tuesday, December 9, 2008 at 11:07 am

Scott, sure, there is new equipment (CNA, CNS), but from a management standpoint the new servers get added to the existing FC fabric and can be zoned and given access just as if they were more FC nodes; this is the easy interoperability. There are plenty of customers running hundreds or thousands of nodes of FC. For customers that don't have a large investment in FC, iSCSI is a good solution (and sure, it will be able to take advantage of 10GbE and CEE). iSCSI has done very well in the commercial/SMB space, but the ecosystem and tools for large environments (hundreds of nodes) haven't developed yet. Two paths to get customers to the 10GbE converged network. -Stu

9.

slowe on Tuesday, December 9, 2008 at 12:24 pm

Roger, thanks for your comment. Not all iSCSI arrays scale in exactly the same fashion, so some of what you are discussing may be specific to Dell/EQL. In addition, not all iSCSI initiators will scale overall throughput linearly with more links (think VMware ESX's software initiator, for example). In this regard, I will say that FC (and presumably FCoE) have the advantage.

Stu, easy interoperability between FC and FCoE I will grant. As you describe, the ability to manage the FC and FCoE fabrics in much the same fashion, perhaps from the same tools, is quite compelling for large FC shops. But interoperability is not the same as compatibility, and to say that FC is compatible with FCoE is, in my opinion, incorrect. Perhaps I'm being a stickler, but if I can't plug an FC HBA into an FCoE fabric then they're not compatible. Interoperable, yes, but not compatible. Thanks to both of you for your comments!

10.

Nate on Tuesday, December 9, 2008 at 2:25 pm

FCoE may be appealing to current FC users because they want interoperability, but I don't see it having significant value over iSCSI for anyone new to the storage market. FCoE may use Ethernet, but that doesn't make it the easy plug into the network that iSCSI is. Particularly for SMBs and SMEs that may not have dedicated storage teams, the ability to avoid learning an all-new network is huge. A standard switch running standard configs is perfect for iSCSI, not so for FCoE. Routability is a big deal: when you want to replicate data between data centers or even across a WAN link, it's nice not to have to take an extra step of tunneling or conversion. iSCSI maintains a leg up on cost as well, because you don't need special switches. FCoE may get you to the same number of switches as iSCSI, but not necessarily the same commodity level of switches. HBAs are also not needed in many scenarios. If your servers aren't heavily loaded (which most aren't) they can easily handle the little bit of extra work to run a software initiator. The Microsoft iSCSI initiator is fantastic, with great MPIO. I'm biased because I was an early iSCSI adopter (started buying EqualLogic back when they were just EqualLogic), but I don't see any value in FCoE other than giving FC clingers a platform from which to claim they are keeping up with the times. 10Gb iSCSI would have meant the end of FC, so they had to jump on the Ethernet bandwagon.

11.

Roger Lund on Tuesday, December 9, 2008 at 3:19 pm

slowe, correct, and I think that really the largest bottleneck becomes the SAN and/or the server(s). But I think that iSCSI is very flexible today, and will be more so in the future.

12.

Jose L. Medina on Tuesday, December 9, 2008 at 5:31 pm

Scott: I agree with you: I can't see any reason to replace iSCSI with FCoE. I think FCoE is another strategy to assure new business for storage and networking vendors. Personally, I have been using iSCSI for years in ESX environments without any special knowledge of pure storage networking, and it works well for me! FCoE hides the manifest incapacity (or lack of desire) of networking manufacturers to improve Ethernet with the QoS capabilities a storage network requires. I'm sure iSCSI over a serious data center Ethernet can provide the same solutions as FCoE without the expensive knowledge and management of an FCxx network.

13. Trackback from Dave Graham's Weblog on Tuesday, December 9, 2008 at 6:04 pm
14.

Dan McConnell on Tuesday, December 9, 2008 at 11:38 pm

Scott, great question! It's always fun cutting through the spin of the day to get to reality. I appreciate the post and your thoughts/insights, as they do cut through the spin cycle. Apologies up front for the length of the post, but I'm getting caught up on much of the great discussion in the thread. So, on to the questions:

1. FCoE is always mentioned hand-in-hand with 10 Gigabit Ethernet. Can't iSCSI take advantage of 10 Gigabit Ethernet too?
A->> In short, yes. iSCSI will function on both non-DCB-enabled and DCB-enabled 10Gb Ethernet. For those that don't need DCB or don't want to invest in/replace their infrastructure with DCB-enabled switching, iSCSI will run just fine on standard 10Gb Ethernet (or 1Gbps Ethernet for that matter, unlike FCoE, which requires 10Gbps Ethernet). For those that desire the DCB functionality, iSCSI will sit on top of a DCB-enabled network and take full advantage of what DCB provides. (Side note: DCB, Data Center Bridging, = CEE.)

2. FCoE is almost always mentioned in the same breath as low latency and lossless operation. Truth be told, it's not FCoE that's providing that functionality, it's CEE (Converged Enhanced Ethernet). Does that mean that FCoE without CEE would suffer from the same problems as iSCSI?
A->> DCB-enabled networking (NICs, switches, and storage arrays) is required for FCoE; FCoE will not work without it. The reason for this is that FCoE itself does not include a mechanism for ensuring reliable delivery. It therefore requires that functionality to exist in the network (i.e., flow control for Ethernet), which is what a DCB-enabled network infrastructure is targeted to provide. iSCSI, on the other hand, has its own method for ensuring reliable transfer in the protocol layer (i.e., TCP). This enables iSCSI to run reliably on standard non-DCB-enabled Ethernet switches (or remotely, for that matter).

3. If iSCSI was running on a CEE network, wouldn't it exhibit predictable latencies and lossless operation like FCoE?
A->> Yes.

Catching up on some of the interesting points/statements in the comments: Justin mentioned some additional work required for iSCSI. This additional work (i.e., TCP) is what ensures reliable delivery in non-DCB-enabled networks. FCoE pushes this work into the network, which is why it requires DCB-enabled NICs, switches, and storage devices. I would argue that for many typical workloads this additional processing is not noticeable. But in either case, if it is a pain point, iSCSI HBAs are available that offload this additional work. With an iSCSI HBA, the host-side processing is equivalent to FC or FCoE (all enter under a similar storage stack). I guess one way of looking at it is as follows: both FCoE and iSCSI can leverage optimized HBAs (DCB-enabled FCoE CNAs or iSCSI-offload HBAs) and DCB-enabled switches to achieve similar performance, but iSCSI also has the flexibility to use standard NICs with standard non-DCB networks.

As far as Rodos's point about fitting into existing FC frameworks: one question that comes to mind is whether those frameworks are integrating manageability for the Ethernet switches/networks. I would guess that both FCoE and iSCSI are in the same boat here. Justin also brought up an interesting point that iSCSI is routable where FCoE won't be. This has some interesting implications today with routing's ability to enable remote mirroring/DR. I would also suspect that it may become an even more interesting differentiator with the growth of cloud computing.

I guess I'll wind down with a tie to Nate's point. FCoE might be appealing as a bridge back into existing Fibre Channel, but if the storage guys already have to swap out their network infrastructure toward Ethernet, iSCSI's flexibility to solve both ends of the cost/performance question, and the fact that it is already here, would seem to give it a leg up. -Dan

15.

Aneel on Wednesday, December 10, 2008 at 12:58 am

I'm not a storage guy either, at all - 100% data networking background. For q2: FCoE without CEE is a non-thing. Practically, just consider FCoE short for FCoCEE. Getting FC into the Ethernet frame and getting lossless, non-blocking, PFC, etc., capabilities into Ethernet were just steps in making FCo(C)E(E) a viable technology. And q3: as things stand today with the standards in progress, iSCSI would ride on the lower-order priority queue in CEE and not get the same non-blocking behavior, etc., that FCoE will. A software hack or specialized CNAs could change that, but none are being publicly discussed AFAIK.

16.

Jeff Asher on Wednesday, December 10, 2008 at 5:13 am

Let me start by saying that I am, and have been for a long time, an IP/Ethernet proponent, and I regularly tell organizations not to invest in new FC infrastructure if they don't already have one. It just doesn't seem to make financial sense with the Data Center Ethernet initiatives in play at the moment. However... while the technology debates are fun, other aspects must be considered for this technology to be deployed and accepted. At most large organizations, politics drives technology decisions at least as much as the merits of the technologies being considered. This is sad, but mostly true. The technologies being debated here actually intensify the political debate.

Fibre Channel over Fibre Channel (FCoFC?) solves a political problem. FCoE creates a political problem. FC switches and infrastructure are popular not only because of their benefits and, in some cases, technical superiority over legacy Ethernet, but because the storage group often got to own the switches and infrastructure rather than the network group. One group owning the infrastructure from end to end had the benefit of that group being able to manage all aspects of storage without dependence on another group. Service levels could theoretically improve, and political empire builders were happy because they owned more stuff and possibly more people. I've seen many 4Gbps FC deployments where iSCSI was more than adequate technically and the financial benefits were simply not debatable, because the storage groups did not trust/like the network operations groups.

FCoE throws a kink in things because the network operations groups are more likely to own the switches rather than the storage groups. This breaks the end-to-end model and theoretically would drive down service levels because of the interfaces required between the two operations groups (I actually believe service levels would increase in well-run shops, but that is another debate). The problem is that while 10Gb and DCE benefit both iSCSI and FCoE, they have the same political problems that have slowed the adoption of iSCSI in large enterprises. If the storage group doesn't get to own the infrastructure from end to end, they are going to stick to FC regardless of the benefits of doing something else. And no, role-based access controls for management don't cut it in terms of the political problem. Is this view cynical? Probably; however, it was developed not just from my own experience but from that of many people I've talked to at many customers, various manufacturers, and resellers. Again, I say all this despite living clearly in the Ethernet camp.

17.

Nate on Monday, December 15, 2008 at 10:14 am

Jeff, that is a good point. I would venture also that the political reasoning hampering iSCSI and FCoE in the large enterprise is what makes the two technologies more appealing in the SMB and SME market. The smaller shops are less likely to have the luxury of dedicating teams of people to only storage, so they need crossover knowledge. I personally think iSCSI offers more accessible crossover knowledge due to the fact that it can run on any network. The one way around the political issue for the larger folks is still to run a separate physical network. Cost effective? No. Most efficient? No. Like you said, in a well-run shop the two teams working together should actually be better, but we all know in some cases they'll still want to run their own gear. Basically at that point iSCSI and FCoE just become enablers of 10Gb rather than convergence. That's sort of OK, though, as I see it. When I first built out my iSCSI SAN I did so on the same standard Cisco switches I was using in the data network, but kept it physically separate. I didn't have a political reason, of course, unless I wanted to be in a political battle with myself, since I also work on the data network; I just knew the data network was peaked out and not ideal to handle the load. Now we are upgrading the infrastructure and bringing the SAN back into the mix. That's the kind of flexibility I like about iSCSI.

Is Unified Fabric an Inevitability?


Friday, February 20, 2009 in Gestalt, Storage by slowe | 11 comments

So here's another "thinking out loud" post. This time, I'm thinking about Fibre Channel over Ethernet (FCoE) and unified fabric. I was going back through a list of blog posts and articles that I wanted to read and think on, and I came across a link to Dave Graham's article titled "Moving a Fabric Forward: FCoE Adoption and Other Questions." His blog entry was partially in response to my FCoE discussion post, and it got me thinking again.

It seems like anytime someone talks about FCoE, they end up also talking about unified fabric. After having read a number of different articles and posts regarding FCoE, I can see where FCoE would be attractive to shops with significant FCP installations. In my mind, though, this doesn't necessarily mean unified fabric. Given the political differences in organizations (think the storage team and the networking team), how likely is it that an organization may adopt FCoE, but not unified fabric? Or how likely is it that an organization may adopt FCoE, intending it to be a transitional technology leading to unified fabric, but never actually make it all the way? So here's my question: is unified fabric an inevitability? (Oh, and here's a related question: most people cite VoIP as proof that the unified fabric is inevitable. More so than anything else, I believe VoIP's success was a reflection of the rising importance of TCP/IP networking. If so, does that give iSCSI an edge over FCoE? Is iSCSI the VoIP of the storage world?)

11 comments

1.

Stephen Foskett on Friday, February 20, 2009 at 10:09 am

I wouldn't say that a unified fabric is necessarily an inevitability, but I do think that the Data Center Bridging protocols and converged network adapters are inevitable. So whether you want to unify storage and networking or use FCoE, you will definitely have the capability to unify them. And once you have an FCoE-capable network, there will be an inevitable pull towards using it.

2.

Marc Farley on Friday, February 20, 2009 at 1:00 pm

Hi Scott, I think FCoE is likely to survive and thrive mostly because iSCSI isn't getting the same amount of technology and market development. iSCSI is mostly cooked, and even though there may be the potential to develop it further, nobody seems to have the motivation to do it. The amount of money spent to develop and sell technology matters a great deal, and the storage industry is investing in FCoE today. Unified fabric is another matter. This seems to be mostly Cisco's initiative. If it is slow selling initially, and if Cisco's R&D expenses for it are high and the IT market contracts (as it appears it might be doing), the question is how long Cisco will continue to invest in it. During that time, competitors may be able to come up with alternative products that don't require as much investment and are less disruptive. If it turns out to be a big money loser for Cisco by mid-2010, it might not make it.

3.

David Magda on Friday, February 20, 2009 at 7:12 pm

I would think it's the opposite: I think iSCSI is likely to have a higher market share, mainly because it's routable. The storage industry may be investing in FCoE, but they've already invested in iSCSI as well. NetApp and EMC have targets (as does Sun in their 7000 line), and initiators are available for all OSes as well as VMware. I don't think it's going to be either/or; different people will choose different things for their needs. I think things will generally standardize on Ethernet as well, though InfiniBand will have a decent minority stake in specialized markets. They're talking Ethernet at speeds faster than 10Gb, but IB is already at 24 and 96 Gb. Some people need that.

4.

TimC on Sunday, February 22, 2009 at 9:16 pm

The real question is, if we're going to unify the fabric, why are we using Ethernet? Why not InfiniBand? You can run all of the same protocols at about 4x the bandwidth.

5.

Omar Sultan on Monday, February 23, 2009 at 2:21 am

Scott: I think unified fabric will continue to gain momentum because of the potential to reduce TCO by simplifying infrastructure, and also for the functional advantages of having all your initiators be able to talk to all of your targets. That being said, I don't believe this needs to be an either/or debate. If you fast-forward a few years, I think the typical enterprise DC will have a mix of FCoE, iSCSI, and FCP. Each has its own place, and I think they can happily co-exist the same way FC SANs and filers co-exist today. If customers do not have existing FC SANs and are not good candidates for FC, then my guess would be they will either go with iSCSI or wait for native-FCoE targets. I am not sure I can think of a scenario where I would see a customer deploy a parallel, dedicated 10GbE FCoE network (i.e., deploy FCoE but not as a unified fabric). I am not sure there is an upside for the storage team, and I am pretty sure the network team would throw up all over it. Omar Sultan, Cisco

6.

slowe on Monday, February 23, 2009 at 7:31 am

Omar, thanks for your response. Given that FCoE is inherently compatible with FCP (to my understanding they are almost identical except for the physical transport), it seems reasonable to me that an organization may deploy FCoE as an extension to an existing FCP SAN but not necessarily move to unified fabric (at least, not initially). I'd be interested to hear why you don't think that is reasonable. Can you share your thoughts?

7.

Nate on Monday, February 23, 2009 at 12:45 pm

Is iSCSI the VoIP of the storage world? My answer: yes. Let's look at VoIP for a moment. What makes it great is not that it runs on Ethernet, but that it runs on the TCP/IP stack. I don't think unified fabric has much value to organizations unless the stack is also unified. By running on the TCP/IP stack, VoIP could happen with existing network equipment. iSCSI holds the same advantage (you also get routability). To unify with FCoE you need special (read: expensive) network equipment. FCoE almost seems like a gimmick from storage and networking vendors. I can see some value as a transitional product, but I just don't see how it stands on its own.

8.

Kosh on Tuesday, February 24, 2009 at 7:42 pm

Hi, I'm the infrastructure architect for a large and nationally recognized financial institution. We expect to converge storage and application networks at the fabric layer (i.e., Layer 1) eventually, and yes, converged voice and data via VoIP is seen as the strategic forerunner. Our next-generation network plans will be 10G end-to-end, and we expect to run storage over that, for both OS and data. We already qualify some Silver- and Bronze-class applications over NFS and iSCSI with GigE and LACP, and expect to be able to use FCoE in the future for Gold-class applications. We would expect to maintain separation at Layer 2 and above, via further deployment of 802.1Q and related protocols end-to-end, e.g., MPLS VPNs. Our timeframe for this is the next 3-5 years, i.e., once the Cisco Nexus has reached the same level of maturity and $/port that made the 6509 so attractive.

9.

Kosh on Tuesday, February 24, 2009 at 7:46 pm

I should add that at a recent storage summit I was speaking with other enterprise infrastructure managers and architects. We had a show of hands on various storage fabrics: FC2: everyone using. FC4: almost everyone using. FC8: almost no one using it or interested. 10G: almost no one using it, but everyone interested. Architects don't always get our way, but that's the way our informal poll showed us leaning.

10.

David Magda on Tuesday, March 3, 2009 at 9:38 pm

FCoE may be inherently compatible with FCP, but it seems that the switch companies will want you to buy special Ethernet+FCoE switches to get a unified fabric. In EMC's own words: http://www.youtube.com/watch?v=EZWaOda8mVY#t=3m40s

FCoE: Divergence vs convergence


Splitter!

By Chris Mellor, posted in Storage, 25th June 2009 13:53 GMT

Comment: FCoE seems to be a harbinger of network divergence rather than convergence. After discussion with QLogic and hearing about 16Gbit/s Fibre Channel and InfiniBand as well as FCoE, ideas about an all-Ethernet world seem as unreal as the concept of a flat earth.

This train of thought started when talking with Scott Genereux, QLogic's SVP for worldwide sales and marketing. It's not what he said but my take on that, and it began when Genereux's EMEA market director sidekick Henrik Hansen said QLogic was looking at developing 16Gbit/s Fibre Channel products. What? Doesn't sending Fibre Channel over Ethernet (FCoE) and 10Gbit/s, 40Gbit/s and 100Gbit/s Ethernet negate that? Isn't Fibre Channel (FC) development stymied because all FC traffic will transition to Ethernet? Well, no, not as it happens, because all FC traffic and FC boxes won't transition to Ethernet. We should be thinking FCaE - Fibre Channel and Ethernet - and not FCoE.

FC SAN fabric users have no exit route into Ethernet for their FC fabric switches and directors and in-fabric SAN management functions. The Ethernet switch vendors, like Blade Network Technologies, aren't going to take on SAN storage management functions. Charles Ferland, BNT's EMEA VP, said that BNT did not need an FC stack for its switches. All it needs to do with FCoE frames coming from server or storage FCoE endpoints is route the frames correctly, meaning a look at the addressing information but no more. Genereux said QLogic wasn't going to put an FC stack in its Ethernet switches. There is no need to put an FC stack in Ethernet switches unless they are going to be an FCoE endpoint and carry out some kind of storage processing. Neither BNT nor QLogic see their switches doing that. Cisco's Nexus routes FCoE traffic over FC cables to an MDS 9000 FC box. Brocade and Cisco have the FC switch and director market more or less sewn up, and they aren't announcing a migration of their SAN storage management functionality to Ethernet equivalents of their FC boxes, although, longer term, it has to be on Brocade's roadmap with the DCX.

Genereux and Hansen said that server adapters would be where Ethernet convergence would happen. The FCoE market is developing much faster than iSCSI did, and all the major server and storage vendors will have FCoE interfaces announced by the end of the year. OK, so server Ethernet NICs and FC host bus adapters (HBAs) could turn into a single CNA (Converged Network Adapter) and send out FC messages on Ethernet. Where to? They go to an FC-capable device, either a storage product with a native FC interface or an FCoE switch, like QLogic's product or Brocade's 8000, a top-of-rack switch which receives general Ethernet traffic from servers and splits off the FCoE frames to send them out through FC ports. There's no end-to-end convergence here, merely a convergence onto Ethernet at the server edge of the network. And even that won't be universal. Hansen said: "There is a market for converged networks and it will be a big one. (But) converged networking is not an answer to all... Our InfiniBand switch is one of our fastest-growing businesses... Fibre Channel is not going away; there is so much legacy. We're continuing to develop Fibre Channel. There's lots of discussion around 16Gbit/s Fibre Channel. We think the OEMs are asking for it... Will Ethernet replace InfiniBand? People using InfiniBand believe in it. Converged networking is not an answer to everyone."

You get the picture. These guys are looking at the continuation of networking zones with, so far, minor consolidation of some FC storage networking at the storage edge onto Ethernet. Is QLogic positioning FCoE as an FC SAN extension technology? It seems that way. Other people suggest that customer organisational boundaries will also inhibit any end-to-end convergence onto Ethernet. Will the FC storage networking guys smoothly move over to lossless and low-latency Ethernet even if end-to-end FCoE products are there? Why should they? Ethernet, particularly the coming lossless and low-latency version, is new and untried. Why fix something that's not broken? What is going to make separate networking and storage organisational units work together?

Another question concerns native FCoE interfaces on storage arrays. If FC SAN storage management functions are not migrating to Ethernet platforms, then they stay on FC platforms which do I/O over FC cables to/from storage arrays with FC ports. So what is the point of array vendors adding FCoE ports? Are we looking at the possibility of direct FCoE communication between CNA-equipped servers and FCoE-equipped storage arrays - simple FCoE SANs, conceptually similar to iSCSI SANs? Do we really need another block storage access method? Where's the convergence here, with block storage access protocols splintering into iSCSI, FCoE and FC, as well as InfiniBand storage access in supercomputing and high-performance computing (HPC) applications?

Effectively, FCoE convergence means just two things. First, and realistically, server-edge convergence, with the cost advantages being limited to that area - to a total cost of ownership comparison between NICs + HBAs on the one hand and CNAs on the other, with no other diminution in your FC fabric estate. The second, and possible, thing is a direct FCoE link between servers and FCoE storage arrays with no equivalent of FC fabric SAN management functionality. This could come if IBM adds FCoE ports to its SVC (SAN Volume Controller) so that it can talk FCoE to accessing servers and to the storage arrays it manages. Another possible alternative would be for HDS to add FCoE interfaces to its USP-V and USP-VM controllers, which virtualise both HDS and other vendors' storage arrays.

If customers have to maintain a more complex Ethernet - one doing general LAN access, WAN access, iSCSI storage and FCoE storage, and possibly server clustering - as well as their existing FC infrastructure, then where is the simplicity that some FCoE adherents say is coming? FCoE means, for the next few years, minimal convergence (and that limited to the server edge) and increased complexity. Is that a good deal? You tell me.
