Hold up, I'm confused. How are you doing Storage Area NETWORK over SAS? I could have a SAN/NAS that uses SAS disks, and has a SAS connection between the disk tray and the controller, but surely the SAN itself has to be using a network-based protocol to connect to client machines, otherwise it's not a SAN?
You could have a disk array + controller that had multiple SAS ports and it could be connected to multiple clients, but you could only connect as many clients as there were SAS ports - thus it is DIRECT attached storage surely? My Ethernet based SAN can be plugged into a switch, and thus connected to as many clients as I like (via other switches) so it's a Storage Area NETWORK.
Happy to be hit with a CLUE bat if I've misunderstood.
So let me summarise:
Going down the FC route. This would require: SAN > FC switch > Server > Core switch? (Seemingly expensive.)
Going down the iSCSI route. This would require: SAN/NAS > Core switch over Ethernet. (Inexpensive compared to fibre.)
Those are the two main routes, from my understanding. Both will have central management, and both can back up to tape drives. I need to determine whether a SAN or NAS would be the best solution for our environment. I shall do more research.
From my understanding, if you want many TBs of storage, SAN is the best option to go down. Is this correct?
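To make the iSCSI route concrete, here's roughly what attaching a Linux client to an iSCSI SAN looks like using the open-iscsi tools. The portal address and target IQN below are made up for illustration; your array's documentation will give you the real ones:

```shell
# Ask the array which targets it exposes (portal IP is hypothetical)
iscsiadm -m discovery -t sendtargets -p 192.168.50.10:3260

# Log in to one of the reported targets (IQN is hypothetical too);
# the LUN then shows up as an ordinary local block device, e.g. /dev/sdb
iscsiadm -m node -T iqn.2001-05.com.example:storage.vol0 \
         -p 192.168.50.10:3260 --login
```

Everything past the switch is plain Ethernet/TCP, which is where the cost saving over FC comes from.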
I would still recommend a separate switch for the iSCSI stuff, as pushing it all over your core switch, even in a VLAN, will put a lot of load on the switch and may slow it down due to congestion. I have used a separate VLAN on a SAN before, but that was with very beefy core switches and not much usage of the SAN: only a couple of servers.
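For what it's worth, a dedicated storage VLAN is only a few lines of switch config. A hypothetical Cisco IOS-style sketch (the VLAN number and port are made up):

```
! Keep iSCSI on its own VLAN so storage traffic never contends
! with normal LAN traffic on the same broadcast domain
vlan 50
 name ISCSI-STORAGE
!
interface GigabitEthernet0/10
 description SAN controller port
 switchport mode access
 switchport access vlan 50
```

A separate physical switch goes one better, of course, since then the storage traffic isn't even sharing backplane capacity with the LAN.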
SAN or NAS depends on how you want to access your storage. A SAN offers raw block space that you allocate to servers, and all file access goes through those servers. A NAS presents a network share itself, so you don't go through a server to get at the files; it is usually connected to the main network like a server would be. You can spec large amounts of storage for both. The Sun S7000 people keep talking about is both a SAN and a NAS, and can take drive shelves for lots of storage.
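The practical difference shows up in how a host consumes the storage. A sketch, assuming a Linux host and made-up device/share names: with SAN (block) storage the server formats and owns the disk itself, while with NAS (file) storage the array has already done that and just exports a share:

```shell
# SAN (block): the LUN arrives as a raw disk; this server formats,
# mounts and owns it, and other clients reach the files via this server
mkfs.ext4 /dev/sdb                 # hypothetical LUN presented by the array
mount /dev/sdb /mnt/vmstore

# NAS (file): the array exports a ready-made filesystem; any client
# can mount the share directly, no intermediate server needed
mount -t nfs filer.example.local:/export/shared /mnt/shared
```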
I'm in most of the day, give me a call if you want to chat.
Yes, you will need HBAs for Fibre Channel, but you'll also want additional NICs for your servers if using iSCSI, so that you can dedicate them to storage traffic. You don't want your file access being held up in the network queue behind someone's funny cat video. SAS would also need HBAs; it's just an add-in card per server, and which ones you need depends on your chosen solution. I would second avoiding Fibre Channel: it is very expensive in comparison to what you can get with iSCSI or SAS, which, given the cheaper interface tech, can be a higher-end, more capable model.
As Duke mentions, you really want a unified device. Look at something like the Oracle S7000 series, any of the NetApps (make sure they give you licenses for everything up front, though!), or the new range of EMC VNXe/VNX models (depending on how big you need to be). You could also look at something like the HP P4000 series with an HP NAS gateway, or a Dell EqualLogic with a Dell NAS gateway. The NAS gateway takes the underlying block-level (SAN) storage and presents it as file-level CIFS/NFS/whatever (NAS) storage. You may also want to look at Dell's Compellent offerings.
I know from when I looked at Compellent, before they were bought by Dell, that they had a very interesting product (essentially block-based, although you could get a NAS head based on their own customised Nexenta build), and they had a 100% support-centre satisfaction rating; certainly everyone I spoke to rated their support team very highly. How that'll change now they're part of Dell is anyone's guess.
Something my boss made clear is that we want something scalable. I've just seen on Dell's website that they sell just the chassis (Dell EqualLogic PS6000XV iSCSI SAN). This would allow us to simply buy the storage we need. However, I have noticed that it is just a SAN, not a NAS or a unified device. It also allows for 10GbE.
The initial idea for this is that we want our servers to be servers, and our storage to be storage. We think that when the kids log on and log off, the servers take big hits at those times and struggle to cope with the load.
Fibre Channel is not all bad. It can be expensive, but I have found failover to redundant paths much quicker with VMware compared to iSCSI. It also has better flow control and uses significantly less CPU than iSCSI (unless you have an iSCSI HBA, which is also expensive).
My new storage is iSCSI, although if cost wasn't an issue I'd still be using FC.
I am moving to a new SAN. I can live-migrate the VM storage from one to the other without taking anything offline, and as the servers remain the same VMs, no client reconfiguration is needed. Unified/NAS features are not something I would even look at on a SAN.