It's very likely your NAS traffic will need to talk to devices on the core switch (other servers) so I don't see how you'd avoid it. If you're using the SAN/NAS for virtualisation (i.e. virtual machine storage), or perhaps to host OLTP-style databases then yes I would definitely put it on another VLAN away from my normal network traffic, but it could still be on the same physical switch?
In an ideal world you'd be able to keep different traffic on different physical switches - and for connecting a SAN to VMware/Xen/Hyper-V hosts it would certainly be possible - but other than that I think iSCSI via the core switch should be reasonable?
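To give a rough idea of what separating iSCSI onto its own VLAN on the core switch might look like, here's a hypothetical Cisco IOS-style sketch (the VLAN ID and port numbers are made up, and the exact syntax - especially for jumbo frames - varies by platform):

```
! Hypothetical sketch: dedicated VLAN 20 for iSCSI on the core switch
vlan 20
 name ISCSI-STORAGE
!
interface GigabitEthernet0/10
 description Server iSCSI NIC
 switchport mode access
 switchport access vlan 20
!
interface GigabitEthernet0/11
 description SAN controller port
 switchport mode access
 switchport access vlan 20
```

If every device on the path supports it, enabling jumbo frames (MTU 9000) on the storage VLAN is usually worth doing for iSCSI throughput, but check how your particular switch configures MTU (some do it globally rather than per-port).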
It's not even an AD import - the SAN/NAS communicates directly with one of your domain controllers and runs queries in real time, the same way a Windows server would. I can set permissions on a file or folder hosted on the NAS just the same way I would on a file or folder hosted on a Windows server, and when someone on the domain tries to access the file/folder, the NAS will query their AD groups and work out the permissions.
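In practice that means the standard Windows ACL tools work against the NAS share exactly as they would against a Windows file server. A hypothetical example (the share path and group name here are made up):

```
:: Grant a domain group modify rights, inherited by subfolders and files,
:: on a folder hosted on the NAS - same icacls syntax as any Windows share
icacls \\nas01\staff\reports /grant "MYDOMAIN\Staff-Teachers:(OI)(CI)M"

:: List the resulting ACL; the NAS resolves the group against AD in real time
icacls \\nas01\staff\reports
```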
BTW, what make are your switches?
Also, if you fancy a trip up the motorway to West Yorkshire, you're more than welcome to come and have a look at what you can do. We use both the NAS and SAN facilities of our Oracle boxes, so can demonstrate both to you.
Thought I'd chip into this thread. First things first: there are only two forms of connectivity for SANs - Fibre Channel or Ethernet. SAS is only used for DAS setups (Direct Attached Storage). If you don't have an existing Fibre Channel setup (which you don't, since you're asking about it), it's easier and cheaper to go for iSCSI/Ethernet-based connectivity. If you've made do without Fibre Channel until now, you won't miss it, and you'll save the cost of Fibre Channel HBAs and Fibre Channel switches/directors. As others have discussed, there is the possibility of utilising a converged network infrastructure (Fibre Channel over Ethernet, or FCoE), but unless you're running something like the nice fancy Cisco Nexus range of switches I doubt your kit is capable.
So iSCSI/Ethernet SANs. This tends to be where a large segment of the SAN market is heading. Having tendered for a new (large) SAN this summer, and running two different SANs here, I've had the pleasure of looking at virtually every manufacturer on the market (IBM, Hitachi, Sun/Oracle, EMC, NetApp, HP EVAs, Dell, Dell EqualLogic, LeftHand (HP P4000), Pillar, Huawei/Symantec, 3Par, Compellent and lots of others). As mentioned previously, a unified storage system combining NAS and SAN functionality gives you the best of both worlds. You get to serve files directly from the device, while still getting to do block-level access as well. There are lots of other nice things you can look for as well - automated tiering between storage tiers (moving commonly accessed files to fast disks and rarely accessed files to slow disks), snapshotting (allowing quick access to previous versions), replication, and finally the use of SSDs/cache to speed everything up.
Like lots of other people I'm a big fan of the Oracle S7000 series of SANs. They're reasonably priced, with a very easy-to-use GUI and good performance. The best thing you can do in this marketplace, however, is play the manufacturers off against each other. Unless you're getting 60+% off list, you haven't got a good enough deal. Dell have just bought Compellent, and EMC have just launched a new unified product targeted at SMBs, so there are some very good deals available.
We currently run a Sun/Oracle 7110 and an EMC Celerra NS-480. Both are connected to a physically separate storage network that consists of 2 x Juniper EX2500 24-port 10GbE switches. Each server then has an Intel X520-DA2 10GbE card, and we use direct-attached twinax SFP+ cables from the servers to the switches. The Sun 7110 has around 2TB raw while the EMC has around 120TB raw. Unless you're pushing lots and lots of data, a good gigabit switch would be fine for most SANs. A separate switch is preferable, but a VLAN on your core switch can also work.
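For anyone wondering what the server side of an iSCSI setup like this involves, here's a hypothetical sketch using the standard Linux open-iscsi initiator (the portal IP address is made up):

```
# Discover the targets the SAN advertises on the storage network
iscsiadm -m discovery -t sendtargets -p 192.168.20.10

# Log in to the discovered target(s)
iscsiadm -m node --login

# The LUN then shows up as an ordinary block device (e.g. /dev/sdb)
# which you can partition/format like local disk
lsblk
```

Windows is similarly painless via the built-in iSCSI Initiator control panel - point it at the portal address and the LUN appears in Disk Management.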
Finally, I know you mentioned at the start about buying the SAN/disk shelves part empty and filling them up later. If you can, buy any disk shelves full. They'll be a lot, lot cheaper to buy full at the start than to fill later, and unfortunately you can't just buy the disks and put them in yourself. I don't know of any vendor that lets you buy the empty disk caddies, and some of them use Fibre Channel disks, which makes it even harder!
If you've got any questions feel free to ask or drop me a PM
We have 2 SANs. One is a Sun 7110, which is now supported by Oracle, though I have heard they will be dropping the hardware side of things; luckily Hitachi will support anyone with an Oracle SAN, which is good news as we also have a Hitachi SMS 100. The SMS 100 gives us 6.4TB of formatted space for student and staff data and is connected directly to our DCs and user servers via iSCSI. The Sun 7110 is our virtualisation backend, with 2 (soon to be 3) Dell 2950s attached via Ethernet through 2 D-Link gigabit managed switches (working OK at the moment, but we may change to Juniper or similar spec).
12k for both of them, which should last us the next ten years if we get as much use out of them as we have from our previous server hardware (nearly 10 years on Windows Server 2000, yes, and solid as a rock).
Both are working very well - very quick - and will suit our needs when we use them in anger in four weeks' time, when we migrate over to them.
I can highly recommend them if that is the way you want to go.
I looked at EMC's fibre SANs, but to be honest I think unless you have huge storage needs and lots of students (more than 2000), iSCSI will suffice.
Just to clear the support thing up: Oracle are continuing with the 7000 series unified storage range, with new hardware recently released as well, so no problems with ongoing support for those :-)
What they have dropped is their high-end Sun SANs, which were rebadged versions of someone else's SANs, I believe. They are, however, about to launch some new high-end gear, but that really isn't in the ballpark for schools.
What issue(s) are you trying to solve? Performance woes? Manageability? Maintenance costs?
I generally don't recommend FC for smaller shops. Yes, it's the king of security and performance, but do you really need it, and do you want to pay for it?
Some parts of my shops are FC (we're big on EMC and eminently satisfied); with multi-TB transactional databases we need the performance. But for 4 of our satellite sites -- with 100% Windows and less than 50 servers in each -- a NAS-based solution (EMC Celerras, in fact) fills the bill nicely and leaves out the complications of additional fibre work. There's already 10G and gigabit and network expertise in place, so it's been mostly a plug-and-play operation. Several senior leaders "thought" we needed fibre, but I talked them out of it and now they're glad I did.
For FC work, if you really want uptime, you'll want two switches, with all critical servers dual-pathed to each in case one dies, and to permit regular firmware updates. That means switch ports for twice the number of server connections (which you will pay for), some sort of multipathing (EMC PowerPath or similar for Windows or Unix, native multipathd for Linux), connections to each FC appliance, and a dual-port FC HBA in each server. Sketch it out, add up the costs and see if you think it's worth it.
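On the Linux multipathd side, here's a hypothetical /etc/multipath.conf fragment to show the shape of it (the vendor/product strings and retry values are made up - use the settings your array vendor publishes):

```
# Hypothetical fragment: group all paths to a LUN into one device and
# fail back to the preferred path as soon as it recovers
defaults {
    path_grouping_policy    multibus
    failback                immediate
}
devices {
    device {
        vendor          "ACME"
        product         "FC-Array"
        path_checker    tur
        no_path_retry   12
    }
}
```

With that in place each LUN appears once (as /dev/mapper/mpathX) rather than once per path, and a dead switch or HBA just drops paths instead of losing the disk.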
The NAS alternative is a second GigE NIC in each server, extra GigE blades in your network switches, and a separate VLAN for the NAS traffic to reduce contention. Do you really think you'll overwhelm that? Do you have severe contention with your existing server-based files?
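On the server side, putting that second NIC on the NAS VLAN is a few lines of iproute2 on Linux - a hypothetical sketch (interface name, VLAN ID and address are made up, and you'd persist this in your distro's network config rather than run it by hand):

```
# Tag the second NIC onto the NAS VLAN and give it an address
ip link add link eth1 name eth1.20 type vlan id 20
ip addr add 192.168.20.5/24 dev eth1.20
ip link set eth1 up
ip link set eth1.20 up
```

Storage mounts then go via the 192.168.20.x addresses, keeping NAS traffic off the front-side network entirely.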
Coraid makes some interesting ATA-over-Ethernet (AoE) kit I've been reading about, but I have no direct experience there. I keep hoping for a new site / new project to cough up some R&D money.