SAN Solution

Hi Ari,

Guest to SAN connection:

a. The ESX host has a Fibre Channel HBA driver installed
b. The ESX host is configured so that it can see all volumes
in the SAN
c. The ESX host makes the appropriate SAN volume available as
a "raw device mapping"
d. The guest mounts the SAN volume as a local drive

**NOTE: no two servers, physical or virtual, should attempt to mount the
same SAN volume at the same time. Data corruption will result**
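For reference, step (c) above is done from the ESX service console with
vmkfstools; the device name and datastore path below are placeholders,
so adjust them to your environment:

```
# Create a raw device mapping file that points at the SAN LUN.
# vmhba1:0:0:0 and the datastore path are hypothetical examples.
# -r = virtual-compatibility RDM; use -z for physical/passthrough mode.
vmkfstools -r /vmfs/devices/disks/vmhba1:0:0:0 \
    /vmfs/volumes/storage1/myguest/myguest-rdm.vmdk
```

The guest then attaches the mapping file as an ordinary virtual disk
and mounts it like a local drive.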

Guest VM image:

a. boots from the ESX host, stored in the VMFS volume
b. backed up to a SAN volume at regular intervals using
the esXpress utility

In this fashion we have divorced the VM guest from any particular ESX
host. The caveat is that we can only move guests between hosts with
identical CPUs. That said, there are conversion utilities readily
available that will let you migrate a VM image between dissimilar CPU
families in minutes. And if you manage to buy your way into a host farm
with identical hardware, you can drag-and-drop a running VM from one
host to another in real time using VMotion - a slick way to proactively
handle hardware downtime. Or, if the host crashes unrecoverably, copy
the VM image from the SAN to the host of your choice and launch it.



On Tue, 6 Nov 2007 16:13:01 -0600, "Ari Footlik" said:
> John -
>
> How does your VM guest-OS connect to the live array? Does it map a
> drive to a share, or is there a facility within ESX to allow direct
> mounting of SAN storage to the VM guest itself? (FWIW - in a past life,
> I was responsible for an EMC CLARiiON FC4700, the predecessor to the
> current CX series. That company is still using that now-obsolete FC
> array to host their VM guests.)
>
> I was going to do my upgrade from 6.04 into 8.03 using VMs, but our
> budget doesn't have room for any new storage, and the NAS we have now is
> just too slow to use for holding the VM guest image-files. I figure
> that hosting the VM images on the NAS would buy me some resiliency if I
> were to lose the VM host, since the age of the server itself is one of
> the driving factors behind my looking at VM in the first place. In
> other words, if I can't totally abstract the VM guest from the hardware
> itself, I don't really gain the "security blanket" I need.
>
> Thanks.
> --Ari
> ------------------------------------------------------------------------
I am considering EqualLogic for an iSCSI SAN solution. I really like the
idea of moving all my servers to a VMware virtual environment and using the
SAN to host the storage. Anyone operating like this? I am pretty familiar
with the pros. Any cons?



Thank you



Bruce Butler, IT Consultant

120B Basehill Road

Swanzey, NH 03446

603.562.5370

bbutler@...

www.tangiblesolutions.biz



Bruce,

We're currently running the majority of our server farm in this manner.
2 of our 3 domain controllers are virtualized, as well as 1 of 2 primary
file servers, our email server, SQL server, Web server and our Terminal
server. We'll probably move the Progress server and the remaining DC
and file server over the holidays.

Our infrastructure is VMware ESX 3.0.2, VMotion and Virtual Center. Our
SAN is an EMC CX300. We mirror the SAN throughout the day to a
home-brew, high-performance SATA NAS using RoboCopy in an rsync-like
fashion.
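As an illustration, an rsync-like one-way mirror with RoboCopy looks
something like the following; the share names and log path are made up
for the example:

```
:: Mirror a SAN volume to the NAS. /MIR makes the destination an exact
:: copy (including deletions), /Z restarts interrupted copies.
:: \\SANHOST\vol1 and \\NASHOST\mirror1 are placeholder shares.
robocopy \\SANHOST\vol1 \\NASHOST\mirror1 /MIR /Z /R:2 /W:5 /NP /LOG+:D:\logs\mirror.log
```

Scheduled through the Windows Task Scheduler at short intervals, this
gets close to the rsync behavior described above.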

Things to consider:

1. You can't have one big storage pool and dynamically allocate it
between servers. Each server (virtual or otherwise) will require its
own volume to mount. So you'll end up carving your SAN into smaller
pieces.

2. You need a good SAN backup strategy. Ours consists of mirroring the
SAN to the NAS at short intervals and backing up the NAS to a local LTO4
drive once daily.

3. We boot ESX and the virtual images off their respective servers.
You'll need a solution for backing up your virtual images on a regular
basis.

4. ESX needs RAM, RAM and more RAM. Max your servers.

5. Make sure your HBAs are supported by your virtualization platform.
ESX needs to see the HBA in order to export it to the virtual machine,
and the virtual machine needs a driver for the exported HBA. 2Gb FC is
nice, 4Gb is better, 8Gb is on the horizon. I don't know whether iSCSI
or AoE are credible contenders.

6. VMotion only works well if the servers have identical CPUs. Keep
this in mind. You'll also need a SQL database to support VMotion.

Despite the gotchas, I'm very happy with performance. On a
well-configured host, the penalty for virtualizing is minimal. In
return you get excellent disaster recovery.

have fun,
john




On Thu, 1 Nov 2007 20:06:28 -0400, "Bruce Butler" said:

> I am considering Equallogic for an iSCSI SAN solution. I really like the
> idea of moving all my servers to a VMWare virtual environment and using
> the SAN to host the storage. Anyone operating like this? I am pretty
> familiar with the pros. Any cons?
A much appreciated insight. I truly believe this is a significant step
toward instantaneous disaster recovery.



- Bruce



From: vantage@yahoogroups.com [mailto:vantage@yahoogroups.com] On Behalf Of
John Sage
Sent: Friday, November 02, 2007 4:01 PM
To: vantage@yahoogroups.com
Subject: Re: [Vantage] SAN Solution



We have also virtualized all of our servers except for the Vantage
server - we are on 6.1 with 90-100 users. In testing we found that
the terminal server ran in the virtual machine at around 70% of the
speed of a native box; the Progress database server did worse - we
clocked MRP runs and saw a 50% drop in speed. We are on ESX 2.5. I
don't know if 3.0 is more efficient. We do copy the backup of the
database to the virtual machine for backup/archive purposes.



Bob Booth





ESX 3i utilizes a new thin hypervisor delivered as embedded firmware on
the server and is significantly more efficient than prior releases,
assuming that your servers have the latest Xeon/Opteron CPUs.



Regards,



Michael





Michael Barry
Aspacia Systems Inc
866.566.9600
312.803.0730 fax
http://www.aspacia.com/







From: vantage@yahoogroups.com [mailto:vantage@yahoogroups.com] On Behalf Of
Bob Booth
Sent: Tuesday, November 06, 2007 10:23 AM
To: vantage@yahoogroups.com
Subject: RE: [Vantage] SAN Solution



Bob,

Just in case you haven't already done so: Your data should always be on
a "live" array. Under no circumstances would I put any data on a
virtual disk where performance, integrity and recoverability are prime
considerations.

And as I noted in my first reply, RAM is king. RAM, RAM and even more
RAM. Obscene amounts of RAM. Take what you think is reasonable and
multiply it by four. That much RAM. To be clear, I usually put 12GB to
16GB in our ESX hosts. It might also be worthwhile for you to hang out
on the VMware community forums. Even while running ESX 2.x, we didn't
see the performance hit you're observing on your terminal server.
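To put rough numbers on the "multiply by four" rule, here is a trivial
sketch of the arithmetic; the baseline estimates are just examples, not
recommendations:

```python
# John's rule of thumb: take the host RAM you think is reasonable
# and multiply it by four. Purely illustrative arithmetic.
def suggested_host_ram_gb(reasonable_estimate_gb, factor=4):
    """Return a suggested ESX host RAM total, in GB."""
    return reasonable_estimate_gb * factor

print(suggested_host_ram_gb(3))  # 12 - low end of the 12-16GB range
print(suggested_host_ram_gb(4))  # 16 - high end of that range
```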




On Tue, 6 Nov 2007 13:23:15 -0500, "Bob Booth" said:
> We also have virtualized all of our servers except for the Vantage
> server - we are on 6.1 with 90-100 users - we did testing and found
> that the terminal server ran in the virtual at around 70% of the speed
> of a native box - the progress database server did worse - we clocked
> MRP runs and saw a 50% drop in speed. We are on ESX 2.5. I don't know
> if 3.0 is more efficient. We do copy the backup of the database to the
> virtual for backup/archive purposes.
>
> Bob Booth
John -

How does your VM guest-OS connect to the live array? Does it map a
drive to a share, or is there a facility within ESX to allow direct
mounting of SAN storage to the VM guest itself? (FWIW - in a past life,
I was responsible for an EMC CLARiiON FC4700, the predecessor to the
current CX series. That company is still using that now-obsolete FC
array to host their VM guests.)

I was going to do my upgrade from 6.04 into 8.03 using VMs, but our
budget doesn't have room for any new storage, and the NAS we have now is
just too slow to use for holding the VM guest image-files. I figure
that hosting the VM images on the NAS would buy me some resiliency if I
were to lose the VM host, since the age of the server itself is one of
the driving factors behind my looking at VM in the first place. In
other words, if I can't totally abstract the VM guest from the hardware
itself, I don't really gain the "security blanket" I need.

Thanks.
--Ari
------------------------------------------------------------------------
Ari Footlik
IT Manager - R. A. Zweig


________________________________

From: vantage@yahoogroups.com [mailto:vantage@yahoogroups.com] On Behalf
Of John Sage
Sent: Tuesday, November 06, 2007 3:36 PM
To: vantage@yahoogroups.com
Subject: RE: [Vantage] SAN Solution


