You may remember the Home Virtualization Project from last year. In that project, I converted my existing server, based on a Shuttle XPC (SP35P2 Pro, to be more precise), from a Linux server running VMware Server 2.0 to a VMware ESXi 3.5 server. It worked well, but left a few things to be desired, such as:
- No RAID
- Onboard NIC required significant fiddling to get working under ESXi 3.5u4
- No onboard video, so I needed a video card plus a network card just to get going, leaving no slot for a RAID controller (the real root cause of #1 above).
- A bit loud. The system wasn’t terribly loud, but for something that runs full-time in the background in my office, it could be distracting at times.
So here we are in a brand-new year, and the big project was an upgrade, inspired by some requirements I ran into on a project at work. In the end, the old server was converted into a workstation and now has a happy home. So what’s the current system? Another Shuttle XPC. This time, it’s the SG45H7, a slightly smaller chassis than the already small SP35P2 Pro. The SP line has space for 2 hard drives up top, above the optical drive; the SG line lacks that bay, resulting in a shorter case. The SG45H7 is targeted as an HTPC, and includes onboard video with both SVGA and HDMI outputs. Further, it includes 2 expansion slots, one PCIe x16 and one PCI.
System preparation was pretty straightforward. I followed the basic Shuttle directions for installing the Intel Core 2 Quad Q9550 CPU and the 8GB of DDR2-800 RAM (the RAM carried over from the SP35P2 Pro). With the latest BIOS upgrade (which I applied first), the SG45H7 can handle 4x 4GB DIMMs, for a total of 16GB of RAM. Not too shabby!
The system came with 3 internal SATA-II ports, only 2 of which were pre-cabled (there was 1 extra cable in the box, though), and an IDE cable for use with an IDE optical drive. I removed the IDE cable and one of the 2 SATA cables, which gave me room to run a SAS SFF-8087 to 4x SATA break-out cable through the wiring channel along with the remaining system SATA cable. That SATA cable went to a DVD-RW drive (I picked the cheapest Lite-On DVD-RW drive that Newegg was selling at the time), and 2 of the 4 SATA connectors on the break-out cable went to the 2x 1TB Samsung HDDs I installed.

Starring as the RAID controller was an LSI 8344ELP PCIe x4 card. I got a new card that was actually an HP OEM version, but it’s HP-only as far as the sticker goes. It still runs the LSI firmware, and happily accepted the latest firmware from the LSI website. I’ve used this card a number of times now in other system builds, and it works very well, in addition to being low-cost (around $150). That’s right: $150 for a PCIe x4 RAID card that’s real, honest-to-goodness hardware RAID with 8 ports, supports SAS expanders to accommodate up to 32 total drives, and supports RAID 0 / 1 / 5 / 10 / 50. Yes, I see the word “obsolete” in the URL on LSI’s website, but I’m not terribly bothered by that, since the card is still well supported. Driver support is excellent, and there was a firmware update in November 2009. Doesn’t seem all that obsolete to me. 😎
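As a quick back-of-the-envelope aside (my own illustration, not anything from LSI’s documentation), here’s roughly what those RAID levels mean for usable capacity when all the drives are identical:

```python
# Rough usable-capacity math for the RAID levels the card supports.
# Illustration only: real controllers reserve a bit of space for metadata.

def usable_tb(level: str, drives: int, size_tb: float = 1.0) -> float:
    """Approximate usable capacity for N identical drives of a given size."""
    if level == "0":      # striping: all capacity, no redundancy
        return drives * size_tb
    if level == "1":      # mirroring: a 2-drive mirror yields one drive's worth
        return size_tb
    if level == "5":      # striping with one drive's worth of parity
        return (drives - 1) * size_tb
    if level == "10":     # striped mirrors: half the drives hold copies
        return (drives // 2) * size_tb
    if level == "50":     # two RAID-5 spans striped together: two parity drives
        return (drives - 2) * size_tb
    raise ValueError(f"unsupported level: {level}")

# The build in this post: 2x 1TB Samsung drives in RAID 1 -> 1TB usable.
print(usable_tb("1", 2))   # 1.0
# All 8 ports populated with 1TB drives in RAID 5 -> 7TB usable.
print(usable_tb("5", 8))   # 7.0
```

For a home server, RAID 1 with two drives is the sweet spot: half the raw capacity, but either drive can die without taking the datastore with it.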
Since the onboard NIC was also not supported by ESXi 4.0 update 1, I opted for an Intel PRO/1000 GT PCI NIC. This was my only quibble with the build: if this system had both onboard video and 2 PCIe slots, I could have used a multi-port NIC from Intel. Now that this build is complete, Shuttle will probably trot out a new version of this model that includes 2 PCIe x16 slots while keeping the onboard video. I’ll save that for the 3.0 version of the project, maybe in 12-18 months. 😎
After I completed the hardware build, I formatted a USB stick to be bootable, using Method 2 as shown on bootdisk.com. Onto that bootable USB stick, I dropped the Shuttle BIOS upgrade files as well as the RAID controller firmware update. Then I booted from the stick and did the needed upgrades, which took a couple of reboots and probably 10 minutes to complete. After that, I built the RAID array from the LSI card’s BIOS, which only took a few minutes, since I was simply creating a blank RAID-1 volume from a couple of empty drives.
Lastly, I installed VMware ESXi 4.0 update 1, and frankly, it was the most uneventful part of the process. Why? It took all of 5 minutes and almost no interaction to complete. The final migration step was to move the VMs over, update the VMware Tools, and upgrade the Virtual Hardware.