Hardware Thread: SAN Solution (Page 2 of 2, posts 16 to 22)
  #16 (torledo)
    I hardly think Red Hat going with KVM is in any way a serious problem for the Xen product (open or commercial implementations). KVM has a different architecture to both Xen and ESX, and many industry experts don't consider it an enterprise-grade virtualization solution at all.

    It's become quite obvious that the wins in virtualization are in the value-added areas for those trying to catch up with VMware. VMware with VI3 has such a strong grip on the enterprise virtualization market that the only thing competitors can do is offer a disruptive technology. Citrix are trying to do this by using their considerable presence in the thin-client space to push a complete virtual desktop solution. XenServer as a standalone product is a legitimate competitor to VMware for the cost-conscious department, but because it lacks VMware's maturity as a corporate solution, it's very much a case of Citrix developing add-ons for the product and pitching it at a price that appeals to those reluctant to fork out for full-blown VMware. Microsoft are strictly going after the small-business market with Hyper-V; it's really an additional feature for those looking to upgrade from 2003 rather than in any way a serious tilt at VMware's dominance. Red Hat is the same: I think the move to KVM is driven primarily by KVM being built into the Linux kernel. I'm not sure how this affects their support for Xen, but KVM isn't a serious play either.

    One thing's for sure: the new competitiveness will bring VMware enterprise prices down, Citrix XenApp will mature, particularly in the realm of management functionality, and with both products likely to come down in price to grab market share among SMBs... who needs or wants KVM?

  #17 (CyberNerd)
    torledo,
    Xen wasn't a competitor in most people's eyes until Citrix bought it, added a GUI, and suddenly it's a VMware rival. There isn't much difference between what Citrix offer and what Red Hat offered in RHEL 5, other than the GUI and the price. Although KVM currently lacks paravirtualisation support, I suspect it will get major backing because it is already fully integrated into the kernel. Every distro on kernel 2.6.20 or later will have KVM support, and Red Hat are confident it will quickly catch up with Xen. I don't think KVM is something to be underestimated just yet.
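
    Since KVM ships in the mainline kernel (2.6.20 onwards), checking whether a given box can actually use it is straightforward. A minimal sketch, assuming a standard Linux layout with /proc/cpuinfo and /dev/kvm - none of these details come from the thread itself:

        # Minimal sketch: does this Linux host expose what KVM needs?
        # Assumes standard /proc and /dev paths; purely illustrative.
        import os

        def kvm_ready():
            # Hardware virtualisation extensions (Intel VT-x or AMD-V) appear as CPU flags.
            with open("/proc/cpuinfo") as f:
                cpuinfo = f.read()
            has_hw_virt = "vmx" in cpuinfo or "svm" in cpuinfo

            # The kvm / kvm_intel / kvm_amd modules create /dev/kvm once loaded.
            has_dev_kvm = os.path.exists("/dev/kvm")
            return has_hw_virt, has_dev_kvm

        if __name__ == "__main__":
            hw, dev = kvm_ready()
            print("CPU virtualisation extensions:", "yes" if hw else "no")
            print("/dev/kvm present (kvm module loaded):", "yes" if dev else "no")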

  #18 (dhicks)
    Sorry, we seem to have hijacked another thread talking about general virtualisation! I'm not that bothered if Red Hat stop supporting Xen:

    - CentOS 5.1 is perfectly adequate at the moment, and should be so until hardware advances so much that it no longer installs on new machines.

    - I can move to another distribution that uses Xen - the only reason I moved from Ubuntu in the first place is that 8.04 running Xen kept on segfaulting (seriously - segfaulting? In the 21st century? What's with that?).

    - Moving between virtual machine technologies isn't that tricky anyway - tar the filesystem up, untar it on the new host, sort out the kernel, and away you go (rough sketch below).
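
    A minimal sketch of that tar-and-move approach, assuming a guest whose root filesystem is mounted on the host at /mnt/guest-root; all paths and names here are illustrative, not taken from this thread:

        # Sketch of moving a guest by archiving its filesystem. Paths are placeholders.
        import tarfile

        SRC_ROOT = "/mnt/guest-root"          # old guest's root fs, mounted on the host
        ARCHIVE  = "/tmp/guest-root.tar.gz"   # intermediate archive to copy across

        # 1. On the old host: tar up the guest's filesystem.
        with tarfile.open(ARCHIVE, "w:gz") as tar:
            tar.add(SRC_ROOT, arcname=".")

        # 2. Copy the archive to the new host (scp/rsync), then untar it there into a
        #    fresh disk image or LVM volume, e.g.:
        # with tarfile.open(ARCHIVE, "r:gz") as tar:
        #     tar.extractall("/mnt/new-root")

        # 3. "Sort out the kernel": give the new hypervisor's guest config a kernel it
        #    can boot (a domU kernel for Xen, or a distro kernel plus normal bootloader
        #    for full virtualisation), fix /etc/fstab device names if they change, and
        #    define the guest on the new host.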

    --
    David Hicks

  #19 (sahmeepee)
    Sorry to go back on topic again

    We've got an iSCSI SAN here with 4x1Gbit connections into a Gbit switch (aggregated to provide effectively a 4gbit connection). If I'd been looking to boot off the SAN I would have thought about fibre channel more seriously (fibre and iSCSI HBA cards are similarly priced), but that's not really on the horizon right now. It runs about twice as fast as the local disks in our fastest servers do, so I'm happy with that.
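
    For a rough sense of the ceilings involved, a back-of-envelope sketch (assumed figures only; real throughput depends on TCP/IP and iSCSI overhead, how the aggregation spreads flows, and the disks behind the controller):

        # Back-of-envelope link ceilings; illustrative numbers, not measurements from this thread.
        def gbit_to_mb_per_s(gbit):
            return gbit * 1000 / 8  # decimal gigabits per second to MB/s, ignoring overhead

        iscsi_aggregate = 4 * gbit_to_mb_per_s(1)   # 4 x 1 Gbit Ethernet, aggregated
        fc_4g = gbit_to_mb_per_s(4)                 # a single 4 Gbit FC link

        print(f"4 x 1 Gbit iSCSI ceiling: ~{iscsi_aggregate:.0f} MB/s before TCP/IP + iSCSI overhead")
        print(f"4 Gbit FC ceiling:        ~{fc_4g:.0f} MB/s before framing/encoding overhead")
        # Both sit near 500 MB/s on paper. In practice 4 Gbit FC delivers roughly
        # 400 MB/s per direction after 8b/10b encoding, while the Ethernet bundle loses
        # a little more to TCP/IP and iSCSI, and a single flow may be pinned to one 1 Gbit link.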

    Total cost of the SAN including the 3com Gbit switch was £6100, giving 6TB of SATA in RAID6 with 8 disk slots spare for SAS or SATA. There are cheaper chassis out there which would probably do the job acceptably, but it sounds like money is no object in your case! My selection was more at the budget end between DotHill, Infortrend (Eonstor) and AC&NC (Jetstor)...
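
    As a worked example of where a usable figure like 6TB comes from (the drive count and size here are assumptions, not details from the post above):

        # RAID usable-capacity rule of thumb; drive counts and sizes are illustrative.
        def usable_tb(drives, size_tb, level):
            if level == "RAID5":
                return (drives - 1) * size_tb   # one drive's worth of parity
            if level == "RAID6":
                return (drives - 2) * size_tb   # two drives' worth of parity
            if level == "RAID10":
                return drives * size_tb / 2     # everything mirrored
            raise ValueError(level)

        # e.g. eight 1TB SATA drives in RAID6 -> (8 - 2) x 1TB = 6TB usable
        print(usable_tb(8, 1, "RAID6"))   # 6.0
        print(usable_tb(8, 1, "RAID10"))  # 4.0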

  #20 (CyberNerd)
    Another hijack - iSCSI vs Fibre Channel noob questions...

    How does an aggregated 4Gbit iSCSI connection compare in bandwidth to 4Gbit Fibre Channel with no TCP/IP overhead? Faster? Twice as fast? Four times?

    Can Fibre Channel and iSCSI be combined on the SAN, with some machines using iSCSI (for file shares) and some using Fibre Channel (for boot)?

    If iSCSI uses TCP/IP (that's the cost saving..?), then why do I need an iSCSI card rather than a network card?

  #21 (sahmeepee)
    Quote Originally Posted by CyberNerd:
    Can Fibre Channel and iSCSI be combined on the SAN, with some machines using iSCSI (for file shares) and some using Fibre Channel (for boot)?

    I'm not aware of any SANs with iSCSI and fibre connections on the same chassis - maybe someone else has seen one. You could have two SANs, but then you would also need two lots of switching (fibre and Ethernet) and two lots of connectivity in your servers (fibre plus Gbit Ethernet on top of your existing network card), so there would likely be no cost saving over just using fibre.

    Quote Originally Posted by CyberNerd:
    If iSCSI uses TCP/IP (that's the cost saving..?), then why do I need an iSCSI card rather than a network card?

    You only need an iSCSI HBA if you plan to boot from an iSCSI SAN. That's because Windows typically accesses iSCSI using drivers and config settings loaded from your Windows installation... and those files are on the disk you're not yet connected to! What the HBA does is give your server access to the SAN before Windows starts to boot. As you suggest, many people don't bother with an iSCSI HBA if they are only using the SAN to store data accessed after Windows has booted up.
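
    To make the software-initiator route concrete, here is a sketch of data-only iSCSI access over an ordinary NIC using Linux's open-iscsi tools (the portal address and target IQN are made up, and Windows has its own software initiator with different commands):

        # Data-only iSCSI via a plain NIC and the open-iscsi software initiator (Linux).
        # Portal IP and target IQN are placeholders, not values from this thread.
        # Boot-from-iSCSI would still need an HBA or a NIC with iSCSI boot firmware.
        import subprocess

        PORTAL = "192.168.10.50:3260"                    # assumed SAN portal address
        TARGET = "iqn.2008-06.com.example:storage.lun1"  # assumed target IQN

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # Discover the targets offered by the portal, then log in to the one we want.
        run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
        run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
        # After login the LUN shows up as an ordinary block device (e.g. /dev/sdX),
        # ready to partition, format and mount like local storage.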

  #22 (torledo)
    Quote Originally Posted by CyberNerd:
    Another hijack - iSCSI vs Fibre Channel noob questions...

    How does an aggregated 4Gbit iSCSI connection compare in bandwidth to 4Gbit Fibre Channel with no TCP/IP overhead? Faster? Twice as fast? Four times?

    Can Fibre Channel and iSCSI be combined on the SAN, with some machines using iSCSI (for file shares) and some using Fibre Channel (for boot)?

    If iSCSI uses TCP/IP (that's the cost saving..?), then why do I need an iSCSI card rather than a network card?

    iSCSI and Fibre Channel would normally represent different (physical) storage fabrics. Boot over SAN can be accomplished with either FC or iSCSI, and application data, e.g. Exchange stores and SQL databases, can sit on iSCSI or FC (and certainly, in the case of Exchange, CIFS). As for file sharing: both iSCSI and FC can present block storage to servers, which in turn present that storage to client machines using file-sharing protocols. I see little benefit or need to split a single server's data storage requirements between two different SAN technologies, although doing multi-protocol in a single chassis, and choosing how and through which fabric type boot and application data are accessed, is certainly a viable option.

    I'd certainly mix and match disk types in the SAN. Boot LUNs could be SAS or Fibre Channel, high-IOPS applications could use Fibre Channel disks, and secondary storage could be SATA. This can all be done with a super-expensive storage virtualization node that can connect to different vendors' storage subsystems (high-end stuff, with a price tag to match), and it can also be done at the low end with single-vendor storage.

    At our level, devices do exist that do multi-protocol for the purposes of choosing the appropriate connection type. Say you had a storage controller (not mentioning any vendors here) that supports the full house of FC, iSCSI and CIFS/NFS: the storage that sits behind the controllers can be carved up and allocated, based on best practice, to the appropriate front-end connections (FC, iSCSI or NAS). The disks and disk shelves take their cues from the disk controllers, which can connect to more than one fabric type. If you have a redundant disk controller doing multi-protocol across an array of ports, then performance is a big concern; if you have separate storage controllers for separate storage arrays, each connecting to FC or iSCSI, you've got no such concerns. But if I had two fabrics, I personally would assign each server to one or the other rather than connect one server to both. If I wanted an Exchange server to connect to a boot LUN for the OS and a 'data' LUN, I'd settle on a single fabric type (FC or iSCSI), have a dedicated shelf of mirrored boot disks for all servers that boot from SAN, and use separate shelves (possibly of a different disk type) to host the application's data storage (typically RAID 5 or RAID 10).
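
    Purely as an illustration of that kind of carve-up (every LUN name, shelf and size here is invented, not taken from the thread), the allocation could be written down as a simple map of LUNs to fabric, disk tier and RAID level:

        # Illustrative SAN layout map; all names, shelves and sizes are made up.
        layout = {
            "exchange-boot": {"fabric": "FC",    "shelf": "shelf1-SAS",  "raid": "RAID1",  "size_gb": 72},
            "exchange-data": {"fabric": "FC",    "shelf": "shelf2-FC",   "raid": "RAID10", "size_gb": 500},
            "sql-data":      {"fabric": "FC",    "shelf": "shelf2-FC",   "raid": "RAID5",  "size_gb": 300},
            "file-archive":  {"fabric": "iSCSI", "shelf": "shelf3-SATA", "raid": "RAID5",  "size_gb": 2000},
        }

        # Sanity-check the rule of thumb from the post: each server sticks to one fabric.
        fabrics_per_server = {}
        for lun, spec in layout.items():
            server = lun.split("-")[0]
            fabrics_per_server.setdefault(server, set()).add(spec["fabric"])
        for server, fabrics in fabrics_per_server.items():
            assert len(fabrics) == 1, f"{server} spans more than one fabric"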

    If I then added an ISA server and chose not to invest in costly FC HBAs, I'd connect the ISA box to my separate iSCSI SAN and, if required, have it boot from the iSCSI SAN. (ISA is just an example; I'm not saying it's an app I'd consider for SAN placement.)

    The issue of having costly, iSCSI-specific adapters doesn't factor in any more: onboard network adapters work fine for dedicated iSCSI use. It used to be the case that you had to buy an Adaptec card to do iSCSI over copper or fibre; not any more, and that's why iSCSI is so attractive to small organisations: onboard server network adapters, off-the-shelf gigabit switches, cheap arrays... and hey presto. FC still kicks butt when it comes to performance, mind, and FC will always be cooler.
