Upgrade to version 9.05

We just upgraded servers and took a cost hit even higher than you describe. After about two weeks, I can say it seems more stable and faster, but still not speedy.

We do run RAID 10, though we have 50+ users and 40+ MES stations.

We moved from Vantage 8.0 to 9.05 and were testing on a 4-year-old server. We reluctantly followed the recommendations, and it seems to be working better.

My advice is to follow the recommendations.

Brad


--- In vantage@yahoogroups.com, "Caster" <casterconcepts@...> wrote:
>
> I need a little input here. We are looking to move from Vista 8.03 to 9.05 soon. We consulted Epicor, and their recommendations for hardware would cost about 20 grand. We are on Progress now and want to move to SQL. Our SQL server is on a 4-drive RAID 5 array. Epicor says not to do this; they won't support it. They say we need a RAID 1 or 0. I won't put a RAID 0 in a production environment because of the absence of fault tolerance. This leaves a RAID 1. They recommend we span (for our 21 office and 5 MES users) over 6 drives. This means 12 drives and a very expensive server or SAN to make this happen. Is all this really necessary? What are others out there running? Is anyone out there running on RAID 5 or something less than recommended? I can't believe that a system for 26 users should require such high hardware requirements. Any input here would be welcome.
>
> Rob
>

We built a pretty meaty SQL server for about $11k. With SQL Server, you need to put some thought into the layout of your disk subsystem; it's typically the biggest bottleneck. Don't skimp on your hardware or you will certainly hear it from your users. As a general rule of thumb, any operation that makes a user wait more than 4 seconds draws complaints, and Epicor 9 is a bit of a beast.

Use two drives in RAID 1 for the OS. Use RAID 10 (RAID 1+0) for your .MDF data file. RAID 10 gives you the highest Input/Output Operations Per Second (IOPS) with fault tolerance. The downside is that you need a minimum of 4 drives, and you take a 50% capacity hit. If possible, put your transaction log (.LDF) files on a separate RAID 10 array; even better if you can put them on a separate controller on the PCIe bus. The more drives (spindles) you add, the better the performance: a 6-drive RAID 10 array beats a 4-drive array, and so on.
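
On the SQL Server side, you point the data and log files at the separate arrays when you create (or move) the database. Something along these lines; the database name and drive letters below are just placeholders for illustration:

    CREATE DATABASE Epicor905
    ON PRIMARY (NAME = Epicor905_data, FILENAME = 'D:\SQLData\Epicor905.mdf')
    LOG ON     (NAME = Epicor905_log,  FILENAME = 'L:\SQLLogs\Epicor905.ldf');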

I wouldn't use RAID 5 at all anymore. RAID 5 can only survive a single drive failure, and you are vulnerable during the rebuild: with current drives in the terabyte range, you run a real risk of hitting an Unrecoverable Read Error (URE) while the array rebuilds, at which point the whole array fails and people are really unhappy with you.

RAID 6 is an option, but I wouldn't use it with anything less than 9 drives. You can suffer two drive failures, but the same vulnerability as RAID 5 exists. RAID 10 is really the only way to go in my opinion.

We couldn't afford a FusionIO card, so we built a budget version. We used 6 Intel 32 GB X25-E solid state drives in a RAID 10 array for the .MDF data file (our Epicor database is just over 10 GB). A typical database I/O pattern is roughly 75% random reads and 25% writes, and SSDs make sense here because these drives are unholy when it comes to random reads. We placed the .LDF log files on a 6-drive RAID 10 array of Western Digital RE4 2TB SATA II drives; these are actually snappier than the SSDs for this job, as transaction logs are written sequentially. I also moved the SQL Server tempdb database to a 4-drive RAID 0 array. The risk is that if one of those drives fails, SQL Server will shut down; however, no data is actually lost, as tempdb is recreated each time SQL Server starts. Worst case, we're down until the drive is replaced or the RAID 0 array is recreated without the failed drive. You could also use RAID 10 here, but I think RAID 0 is worth the performance boost. Just make sure to ONLY put tempdb files there. As always, a hot spare drive is a great idea.
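
Moving tempdb is just two ALTER DATABASE statements plus a service restart. The logical names below are the SQL Server defaults; the T: path is just an example:

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
    -- the files move to the new location on the next service restart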

If you can afford it, go for 15k rpm SAS drives. We chose to save a little money and used higher-end SATA drives. You'll find that with the larger-capacity drives, the increased platter areal density of a 2TB 7200 rpm SATA drive lets it keep pace with a 300 GB 15k rpm SAS drive. But like I said, SAS is preferred.

SQL Server is very greedy and will attempt to use all available system memory. Ideally, you want enough RAM to cache your entire database in memory. With your 26 users, you are right at the edge of where Epicor wants you to split your SQL server and application server onto separate physical machines. We're about the same with 22 users and get by with a single server doing both roles. It's never a bad idea to overcompensate with RAM here.
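
If the box is doing double duty like ours, it's worth capping SQL Server's memory so the application server and OS keep some headroom. The 20 GB cap below is just an example figure for a 24 GB box; size it to your own hardware:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 20480;
    RECONFIGURE;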

Also, SQL Server 2008 reads and writes in 64 KB extents (eight 8 KB pages), so you will get the best performance using a 64 KB allocation unit size when formatting the partition for your .MDF array. During testing, I got the best performance with a stripe size of 256 KB, but your mileage may vary depending on your RAID controller.
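
You set the allocation unit size when you format the volume; from an elevated command prompt it looks like this (drive letter and label are placeholders):

    format F: /FS:NTFS /A:64K /V:SQLDATA /Q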

So here's a rough breakdown of what we spent, and we used eBay pretty heavily for components.

$800 - Supermicro chassis and motherboard
$800 - Adaptec 51645 Unified SAS RAID controller
$400 - Adaptec 5805 Unified SAS RAID controller
$2,600 - 2 x Intel X5570 2.93 GHz Quad Core Xeon Nehalem processor
$2,400 - 6 x Intel X25-E SSDs
$2,700 - 11 x Western Digital RE4 2TB SATA II drives
$500 - 2 x Western Digital 300 GB Velociraptor drives
$800 - 24 GB DDR3 RAM

$11,000 total (ballpark figure; some rounding involved)

If you start talking SANs, they usually start at $5k, so we chose the direct attached storage route.

So yeah, I could see them quoting you $20k for a really high-end server. What I just listed would easily cost over that if purchased complete from HP or Dell, but if you do some research on which components work well together (and it's hard to find ones that don't if you stick with late-model, brand-name components), it's not really difficult to build your own. Assuming you are handy with a Phillips head screwdriver, of course!

I wrote a lengthy post here a while ago on using SSDs in the enterprise with SQL Server, but I think it's too old for the group search to find. I still have a copy in my email, so contact me offline if you want to check it out.

Hope that helps. Feel free to contact me offline if you want any clarification.
Jared
-----------------------
Jared Allmond
jallmond@...
Wright Coating Technologies

It's not generally RAID 1 or 0 they recommend; it's usually 1 plus 0 (RAID 10), which is a stripe of mirrored pairs.

Without knowing all the modules you run, it's tough to say what you need. However, based on your user load of 21+5, I don't think you need anywhere close to 12 drives. Eight drives would probably be the best bang for the buck: two in RAID 1 for your OS, two in RAID 1 for your logs, backups, etc., and four in RAID 10 for the database and programs.

I am curious to know why you are interested in going with SQL, given what sounds like a small company and, I assume, a small IT staff. SQL adds a considerable amount of administration overhead, as well as the need to learn a whole bunch of new technology, unless you already have that knowledge in house or are going to hire it.

Yes, it does have some benefits; however, many of those benefits can also be had by replicating the Progress DB to SQL. Performance can be better with Progress on less hardware, and thus less money invested on that side.

And just from my point of view, the SQL versions are definitely buggier than the strictly Progress versions.

I'll just echo Ned's comments about Progress performance being higher on the same hardware; that matches my experience. Progress also tends to be more fire-and-forget. I've talked to companies that don't really have an IT department, where a "super user" acts as a stand-in sysadmin. That approach is more feasible with the Progress backend than with SQL Server.

Ned, you're bang on that RAID 1+0 (RAID 10) is the way to go for any DB.

With respect to SQL overhead and skill sets, it's not that bad. You really just have to know enough T-SQL to perform backups, or buy something like Symantec Backup Exec with the SQL Server agent for a GUI-based backup approach. A decent sysadmin can find all the T-SQL they need in the MSDN library, and once a SQL Agent job for backups is established, you can more or less forget about it.
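
To give you an idea of the scale of it, a basic full backup is about three lines of T-SQL (the database name and path here are made up, so substitute your own):

    BACKUP DATABASE Epicor905
    TO DISK = 'E:\Backups\Epicor905_full.bak'
    WITH INIT, CHECKSUM;

Schedule that as a SQL Agent job and you're done.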

The only real babysitting SQL Server needs (relative to Progress) is the transaction logs. In the Full or Bulk-Logged recovery models, those can grow out of control if you're not performing regular log backups. A full backup alone isn't enough: a log backup is the only thing that truncates the transaction log and allows the space to be reused. If you don't need point-in-time recovery, use the Simple recovery model, and your transaction logs become a non-issue. Ideally, you would take a daily full backup and a transaction log backup every 15 or 30 minutes, or hourly; it all depends on what sort of pain you want to expose yourself to. In our organization, we use the Simple recovery model and perform daily backups. Worst case, we lose one day of data, but that isn't too hard for us to recreate.
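
Both moves are one-liners; again, the database name and backup path are placeholders:

    -- Simple recovery: no log-backup babysitting needed
    ALTER DATABASE Epicor905 SET RECOVERY SIMPLE;

    -- Or stay in Full recovery and back the log up on a schedule
    BACKUP LOG Epicor905
    TO DISK = 'E:\Backups\Epicor905_log.trn';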

As far as bugginess goes, I'm not sure you can pin that on the database backend. Each version of Epicor has its issues, regardless of the DB backend.

The true benefit of going the SQL Server route is the reporting flexibility. At your disposal you have Crystal Reports, SQL Server Reporting Services (SSRS, which is 9.05-specific), and OLE DB (or ODBC) connections to things like Microsoft Excel or Access. That's pretty powerful. With SQL Server you can build custom views and query them from Excel or an ASP.NET website. You can't do that with Progress, though admittedly you do have ODBC access. SQL Server also does some wonderful caching on those views, and performance far exceeds the internal BAQ approach: I've seen BAQs that take minutes to run, while the same data comes back from a SQL view in seconds.
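
As a rough sketch of the kind of view I mean (I'm writing the Epicor table and column names from memory, so verify them against your data dictionary first):

    CREATE VIEW dbo.vOpenOrders AS
    SELECT OrderNum, CustNum, OrderDate
    FROM dbo.OrderHed       -- sales order header table
    WHERE OpenOrder = 1;    -- open orders only

Point Excel or an ASP.NET page at dbo.vOpenOrders and you skip the BAQ entirely.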

Agreed, you do need someone with the technical skills to create those custom reports, but if you're ready to spend the money on a midrange ERP package like Epicor, you should also cowboy up and spend the money on skilled support personnel. Skilled SQL Server DBAs and C# developers are readily available; I would argue the pool of skilled Progress 4GL developers is relatively smaller. The ability to generate true business intelligence with a variety of reporting options is worth the SQL Server investment, in my opinion of course.

Jared
-----------------------
Jared Allmond
jallmond@...
Wright Coating Technologies

I can't believe Epicor told you RAID 1 or 0. Normally they would tell you to use RAID 10, which at the moment is the most efficient option and gives you the speed. RAID 10 is actually RAID 1+0, i.e. a stripe of mirrored sets. It does require more drives, as only 50% of the drive space is usable; however, the performance outweighs the cost, especially now that the cost per GB has come down drastically in the last couple of years. Even 15K SCSI drives are now cost-effective. Vantage is very disk-intensive, so don't skimp.


Charles Carden
IT Manager
Manitex, Inc.
Georgetown, Texas