FCoE? Thanks, but No Thanks!
I may be a bit “slow on the uptake”, but I’m struggling to understand industry claims that FCoE (Fibre Channel over Ethernet) is superior to sending storage traffic over Fibre Channel. As a 34-year IT industry veteran and SAN storage specialist, I believe the only thing Ethernet data communications and SAN fabric transmissions have in common is the label “network”. So I’m puzzled why anyone feels “Unified Computing” is a more desirable solution for either Ethernet or SAN traffic. (Other than vendors who want you to buy their FCoE products.)
For the past couple of years we’ve been flooded with claims that “Unified Computing” (a.k.a. Fibre Channel over Ethernet, or FCoE) is superior to separate Ethernet and SAN fabric networks. Webcasts and the trade press are awash with comments about the benefits and advantages of this new technology. If you believe everything you read, FCoE should simply be sweeping the industry, making segregated Ethernet and SAN fabrics a thing of the past. It’s not.
Why not? When I examined some of the claims in greater detail, they just don’t add up. The following is a matrix of popular “benefits” presented for FCoE, along with my response explaining why I question the validity of each claim.
Claimed Benefit | Response |
Reduces the number of adapters and cables that are deployed | On the surface this sounds logical, but it really doesn’t hold up if you think about it. If a network (LAN or SAN) is designed for 30% average throughput with spikes of up to 70%, then it will still need two cables to support the configuration (70% + 70% = 140% of a single cable’s capacity; see the back-of-the-envelope sketch after this table). Unless your system is relatively small and/or the network is seriously underutilized, multiple cables will still be required. In addition, FCoE will require some type of Quality-of-Service mechanism to ensure one service will not “starve” the other, adding both complexity and expense. |
Higher performance from 10Gbps network | This is a compelling argument only if performance is compared to 4Gbps Fibre Channel. But why make that comparison, when 8Gbps FC is the current standard? Due to its more efficient protocol, 8Gbps FC performance is very similar to that of 10Gbps Ethernet. More significantly, now that 16Gbps Fibre Channel is shipping, FCoE over 10Gbps Ethernet is the technology playing “catch-up”. |
40Gbps and 100Gbps Ethernet interfaces are coming | This is a meaningless claim unless you’re doing extreme computing. 8Gbps Fibre Channel has been shipping for a couple of years, yet it is still being adopted at a leisurely pace. If there is no rush to upgrade from 4Gbps to 8Gbps FC (a 100% increase), why will there be a rush to deploy 40Gbps Ethernet (a 300% increase) or 100Gbps Ethernet (a 900% increase) over 10Gbps Ethernet? Even 16Gbps Fibre Channel is a 60% increase over 10Gbps Ethernet. 20Gbps and 40Gbps Infiniband have also been around for quite a while; if raw channel speed were a major industry requirement, why hasn’t Infiniband become a dominant network technology? |
More efficient 64b/66b encoding | If throughput is crucial, there is a logical argument for using 10Gbps FCoE (which uses 64b/66b encoding) rather than 4Gbps or 8Gbps Fibre Channel (which use the less efficient 8b/10b encoding). However, 16Gbps Fibre Channel (and above) employs 64b/66b encoding too, so this “benefit” is no longer relevant. |
Greater flexibility | Hmmm… I’m not certain how merging two dissimilar technologies onto a single network medium will provide “greater flexibility”. In most cases just the opposite occurs. |
Lower power and cooling | Since their component count, general circuit layout, and optical drivers are very similar, just what is it that makes FCoE have “lower power and cooling”? (Please don’t say that it’s because it needs fewer cables. Passive Fibre cabling really doesn’t consume much power!) 🙂 |
Simplified Infrastructure | This might be true, as long as you’re running low demand systems that only require a single cable. However, if traffic load needs two or more cables, then all bets are off. |
Better compatibility with virtualized servers | Why? How does running multiple virtual servers over FCoE provide better compatibility than running them over Fibre Channel with NPIV? What unique attribute makes FCoE more compatible? |
Availability of network security tools | This is an interesting argument. The reason we have more Ethernet security tools is that, as an externally facing technology, Ethernet attracts more people trying to hack it. It is true that Fibre Channel has fewer security tools, but if those tools are sufficient to provide excellent storage security, why does having more of them matter? |
Lower cost | Really? What numbers were they looking at? A quick search on Google Shopping shows both FCoE NICs and 8Gbps HBAs are roughly priced the same. Several months ago we also estimated the total cost of an enterprise architecture using both technologies, and found that the FCoE configuration ran about 50% higher than 8Gbps Fibre Channel! So much for being less expensive! |
Familiarity within the enterprise | True, but what does familiarity have to do with it? There are lots of people familiar with copying data to DVDs, but that doesn’t make DVDs a better choice for data center backup and recovery. A specialized application like NetBackup or TSM will do a far better job of enterprise backup and recovery, even if only a few IT backup specialists are familiar with them. “Dumbing down” an IT operation to save money is a questionable tactic if user performance is sacrificed in the process. |
Interface with the Cloud | In what way? The TCP/IP protocol is not native to the WAN communications infrastructure, so 10Gbps Ethernet must be converted into something else on each end, just like Fibre Channel. For an internal Cloud connection, TCP/IP is not native to the SAN storage either, so 10Gbps Ethernet traffic must be converted into a block storage format and back in the array as well. |
Simplified management and integration with tools | Whoever claimed this as a “benefit” apparently knew little about the breadth and depth of storage management tools available on the market today. |
No proprietary tools needed to install | I have no idea what proprietary tools they’re referring to for installing Fibre Channel. Last time I did a Fibre Channel installation we used exactly the same tools that were used for high-speed Ethernet interconnections. |
Lossless Ethernet | Hmmm… If I push the Ethernet standard far enough to compensate for its inherent “best effort” characteristics, doesn’t it just end up looking a lot like the Fibre Channel Protocol (which is a well-established, proven technology)? |
Operational efficiencies and performance enhancements | If I run FCP (or any protocol) over any other protocol I incur two types of delays – conversion latency, and the consumption of extra CPU cycles. How does adding overhead improve either efficiency or performance? |
People and skill consolidation | This is an argument typically presented by people with a limited understanding of the complexity of modern SAN storage. Ethernet LANs and FC SAN fabrics have very little in common other than that both carry data traffic. Assigning Ethernet LAN specialists to manage an enterprise SAN fabric makes no more sense than having SAN specialists manage corporate network communications. |
Ubiquitous computing | This is a benefit? Stored data is the most valuable asset a corporation or agency owns. While it may be important to offer ubiquitous computing to the user community, maintaining, protecting, and optimizing data assets should be a carefully orchestrated activity performed by highly trained storage specialists! |
Cost-effective network | Do your own comprehensive cost comparison and see if you agree. My estimate indicated that identical functionality from 10Gbps FCoE would cost around 150% of the price of an equivalent 8Gbps Fibre Channel configuration. |
Pervasive skill set | Like the “people and skill consolidation” myth above, this is based on a misguided assumption that operating a SAN fabric is somehow similar to operating an Ethernet data communications network. It is not. |
Simplified interoperability | This may be true – if you can tolerate the latency and performance penalties associated with having one technology host another. As long as server farms are fairly small and storage requirements are modest, making performance compromises for the sake of convenience isn’t an issue. However, it rapidly grows in difficulty as stored data volume increases. |
Reduces capital and operational costs | As above, do your own price estimates for identical functionality from 10Gbps Ethernet and 8Gbps FC. I think you may be surprised. |
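For anyone who wants to check the arithmetic behind the cabling, encoding, and speed-increase rows above, here is a minimal back-of-the-envelope sketch in Python. The 30%/70% utilization targets are the same figures used in the table; the 8.5, 10.3125, and 14.025 Gbaud line rates are the commonly quoted nominal figures for 8Gbps FC, 10Gbps Ethernet, and 16Gbps FC, so treat the results as rough approximations rather than measured throughput.

```python
# Back-of-the-envelope checks for the cabling, encoding, and speed rows above.
# All figures are illustrative assumptions or commonly quoted nominal line
# rates, not measured results.
import math

def links_needed(peak_utilizations):
    """Equal-capacity links required if the traffic classes peak at the same time."""
    return math.ceil(sum(peak_utilizations))

# LAN + SAN consolidated onto one medium, each designed for 30% average / 70% peak:
print(links_needed([0.70, 0.70]))        # -> 2 (consolidation still needs two cables)

def usable_gbps(line_rate_gbaud, payload_bits, coded_bits):
    """Payload rate left over after line-encoding overhead."""
    return line_rate_gbaud * payload_bits / coded_bits

# Commonly quoted nominal line rates (approximate):
print(usable_gbps(8.5, 8, 10))           # 8Gbps FC,  8b/10b  -> ~6.8 Gbps usable
print(usable_gbps(10.3125, 64, 66))      # 10GbE,     64b/66b -> ~10.0 Gbps usable
print(usable_gbps(14.025, 64, 66))       # 16Gbps FC, 64b/66b -> ~13.6 Gbps usable

def pct_increase(new, old):
    """Percentage increase, quoted the same way as the 4Gbps -> 8Gbps '100%' figure."""
    return (new - old) / old * 100

print(pct_increase(8, 4))                # 100%  4Gbps -> 8Gbps FC
print(pct_increase(16, 10))              #  60%  10GbE -> 16Gbps FC
print(pct_increase(40, 10))              # 300%  10GbE -> 40GbE
print(pct_increase(100, 10))             # 900%  10GbE -> 100GbE
```

The point isn’t the exact decimals; it’s that the consolidation math and encoding overhead are easy enough to check yourself before accepting a vendor’s claim at face value.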
What seems to be missing from these discussions is:
- The vulnerability created by carrying both data communications and storage traffic over the same medium. If there is an external attack on the Ethernet network, all computing activities will be brought to a halt. If there is a critical firmware bug, both data and SAN traffic are impacted. Troubleshooting becomes much more complex and time-consuming.
- The importance of keeping dissimilar technologies separate so they’re allowed to evolve at their own pace. If both storage traffic and data communications are dependent upon Ethernet, then each is constrained by the evolution of the other. If one requires more capacity and the other doesn’t, you’re forced to buy the consolidated infrastructure in its entirety.
- Dissimilar skill sets and areas of responsibility managed by different IT specialists. Ask most LAN specialists how to zone a fabric or allocate LUNs and you’ll get a blank stare. Ask most SAN specialists how to configure a router or use a packet sniffer, and you’ll probably get a similar response. SAN storage and SAN fabric management are inextricably linked activities. Splitting areas of responsibility between a LAN group and a SAN group is a recipe for operational inefficiency, troubleshooting complexity, and reduced staff productivity.
- If industry adoption of FCoE were as widespread as claimed, why do IT industry research groups keep reporting sluggish sales? And why do Fibre Channel equipment sales remain robust?
I have no illusions here; there are lots of things I don’t know, so I could be wrong about this too. If you feel there are other compelling reasons why FCoE will dominate the industry, I’d love to hear them.
Randy Cochran
Posted on November 28, 2011, in Topology and tagged 10Gbps Ethernet, 16Gbps, 8Gbps, FCoE, Fibre Channel, Fibre Channel over Ethernet, iSCSI, SAN, SAN storage, Unified Computing, Unified Networking.
6 Comments
Randy, you bring up some good points.
FCoE can (and will) be used (or abused), as is the case with any technology, which further reinforces your thesis.
However, as a 34-year IT veteran (I’m only a 30-year veteran 😉 ), you probably recall some of the claims of the mid-to-late ’90s, when the then-upstart Fibre Channel was being touted as a unified LAN and SAN network capable of carrying both IP traffic (aka IPFC, which some vendors implemented for a few customers) and storage (SCSI_FCP/FCP + FCSB2/FICON), plus other ULPs for different uses including VI/RDMA, AVI, etc. Likewise, InfiniBand was being touted or positioned for both internal and external connectivity, challenging PCI.
Even though Fibre Channel was capable of supporting IP (depending on whose switches/directors were used), only a handful of IT organizations used it. Likewise, even though current Fibre Channel (and presumably, in the future, FCoE) can and does support protocol intermix mode (PIM), where FCP and FICON can coexist concurrently, not all environments choose to deploy the technology that way.
FCoE is not a replacement per se for FC; rather, it is an adjacent path for those ready to make the transition while preserving the functionality of FC. That said, on a relative basis FCoE is at the maturity level where FC was in the late ’90s and early 2000s. Back then Fibre Channel had a bright future despite people saying it was dead; the same holds for FCoE today.
Fibre Channel is not going away tomorrow, next year, or for many years (maybe a decade). Many environments need or are more comfortable with FC. Fibre Channel took several years to be fully adopted and deployed in some environments, and it will take equally long to be replaced or transitioned to something else. Likewise, I’m still amazed at the FUD I hear about Fibre Channel, as well as about Ethernet, FCoE, and IP for that matter, some of which is decades-old FUD.
Here is my point: just like traditional Ethernet and IP, or Fibre Channel, or FCoE for that matter, it will come down to how the user or designer decides to use it. That is where leveraging best practices and creating new ones come into play. Generally speaking, there are no bad technologies (there are bad implementations, of course); however, there are bad deployments and uses of them. So even though FCoE can be used to converge, just as Fibre Channel was touted by some to be able to do, and some will use it that way, I suspect (and hope) that most will leverage the commonality of components rather than trying to get to a single-cable infrastructure.
Hope all is well
Cheers gs
Author: “Cloud and Virtual Data Storage Networking” (CRC Press), “The Green and Virtual Data Center” (CRC Press) and “Resilient Storage Networking” (Elsevier)
Thanks for your comment.
I have no specific bone to pick with FCoE. It’s a clever use of technology and certainly will have a place in the greater scheme of things.
What I’m concerned about is the “technological fog” surrounding this approach and the lack of storage specialists questioning these outlandish claims. As a long-time storage consultant, I have the opportunity to work on some pretty sizable SAN projects. On many of these I’m both stunned and dismayed by the number of “bright young technologists” who just accept what the vendors are saying without questioning its accuracy or relevance. As a result, FCoE is now starting to show up in some “Big Data” designs where it is a pretty poor fit.
After 30+ years, I’ve learned every technology (new or old) should be analyzed for its appropriateness to each project. What may work well for one implementation could deliver lackluster performance on another. To use an old consulting phrase (since I’m an old consultant), “it all depends”.
What I’d also like to see is a bit more balance in the trade press. Too many IT architects seem to be perfectly satisfied taking this “marketware” at face value. According to a quote attributed to Nazi propagandist Joseph Goebbels, “If you tell a lie big enough and keep repeating it, people will eventually come to believe it.” While this may increase vendor equipment sales, it does little to advance the industry or improve our clients’ ability to compete in a global economy.
Anyway I doubt my ramblings will change the course of history, but I had to “run up a flag” and ask if anyone had actually stopped to think about these bogus claims.
Have a great holiday season!
Randy Cochran
Randy, I do find it interesting that you point out the “marketware” and the use or abuse of technology, and yet you tie in the “big data” theme.
Not that there is anything wrong with “big data”; it is all around us and has been for years if not decades. However, it is also on a buzzword-bingo hype cycle in some circles (particularly storage) that could rival its predecessors on the circuit, including virtualization, ILM, SAN, compliance, green, cloud, and converged…
No worries, I get where you are coming from as well as the concern about use of technology or should I say misuse and abuse along with buzzword bingo.
After all, some of us were or have been involved with big data, big bandwidth, big backup, and other topics since before they were trendy and popular on the buzzword bingo circuit.
Ok, carry on…
Cheers
gs
It is nice to see you speaking from the other side, Randy. I have nothing against FCoE either. I like that you took the role of the devil’s advocate.
Happy holidays!
Why did Ethernet win over Token Ring, or Windows over OS/2? Technology is driven by popularity. I’m putting my money on FCoE; management will look at it as saving money on infrastructure and won’t want to know if it’s the best solution.
Perhaps. There have been many products that dominated their field because of powerful and relentless marketing, not technical capabilities. However, the jury is still out on whether FCoE will be a big winner or just another blip in the trend line. It was being heavily promoted primarily for 1) speed, and 2) infrastructure cost savings. The speed advantage disappeared when 16Gbps FC emerged, and many of the claimed cost savings turned out to be more “smoke and mirrors” than fact. There are other forces at work here as well, so it’s too early to say whether FCoE will be a long-term success or failure.
Even more interesting now is a “dark horse” in the race – Infiniband. After languishing in the shadows for years, Infiniband is now gaining visibility due to the high speed and bandwidth demands of Big Data.
It’s a fascinating industry to be involved in. Take care.