Dell XC730-16G

I recently acquired two Dell XC730-16G servers. According to the Dell website, they are hyper-converged appliances:

The Dell XC730-16G system is a web-scale converged appliance based on the Dell PowerEdge R730 that supports two Intel Xeon E5-2600 v3 processors, up to 24 DIMMs, and 16 hard drives or solid-state drives (SSDs).

Dell

These servers are fairly old now and are limited in the type of CPU they can use, so they can’t be upgraded to a modern Xeon, but they are a considerable improvement over the Dell 710, 310, and 510 servers I currently use.

Documentation

Dell's documentation for the servers can be found here: https://www.dell.com/support/manuals/en-uk/dell-xc730/xc730-16g_om_pub-v1/system-memory?guid=guid-5ed1adab-204d-4f18-b178-dbb4e470b42e&lang=en-us

Hardware Configuration

Each server has two E5-2680 v3 CPUs running at 2.50GHz. They aren't the fastest, but they draw 120W at full tilt, making them more economical to run than many others in the range. The QPI bus speed is 9.6 GT/s, much faster than on my Dell x10-range machines, so that should give a much-needed boost.

Each server can have 16 SSDs/HDDs fitted in the front. They came with a few HDDs and SSDs, but I upgraded both machines to 10 x 1TB SSDs for VMs, 4 x 2TB HDDs for data/backups, and a 120GB SSD for Proxmox. The idea is to run these machines as compute nodes, with the disks used as temporary storage. The VMs will be backed up every day and, if needed, can be moved to another host at the click of a button (or with a short script, as sketched below). This will allow me to update and patch the OS, do hardware maintenance, etc., while the services keep running somewhere else.
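
Proxmox exposes that migration through its API as well as the web UI, so it can be scripted. Below is a minimal sketch using the proxmoxer Python library; the node names, VM ID, and credentials are placeholders, and because the VM disks live on local storage the migration has to copy them across with the with-local-disks option.

    # Minimal sketch: live-migrate a VM between two Proxmox nodes with proxmoxer.
    # Node names, VM ID, and credentials below are placeholders.
    from proxmoxer import ProxmoxAPI

    proxmox = ProxmoxAPI("xc730-a.lan", user="root@pam",
                         password="secret", verify_ssl=False)

    # POST /nodes/{node}/qemu/{vmid}/migrate; local disks are copied to the
    # target node, so this works even though the storage is not shared.
    proxmox.nodes("xc730-a").qemu(101).migrate.post(
        **{"target": "xc730-b", "online": 1, "with-local-disks": 1}
    )

This is the same call the Migrate button in the web UI makes, which makes it handy for draining a host before patching.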

RAM

With my previous servers, I rarely hit the CPU limits, but I repeatedly hit memory and disk speed limits. Like my previous servers, the new ones came with 128 GB of RAM, and as there were plenty of free slots, I looked to eBay to improve the situation. Eventually, I found a listing for 768 GB of ECC RAM. It was the first time I had 'stalked' a listing and put in my final bid at the last minute. I won! I had 768 GB of RAM for about £350. I thought this was too good to be true, and within a few minutes, I received a message from the seller saying they had made a mistake in the listing: two of the sticks weren't the same as the others. For me, this didn't really matter, so I replied to say that it was fine. The seller then said he would send me an extra couple of sticks for free, so I received nearly 800 GB for £350 – a bargain!

When the RAM arrived, I found that it was LRDIMM and not RDIMM, so it would not work alongside the existing RAM. However, I had so much RAM that I just swapped it all out so that both servers had 512 GB of LRDIMM RAM. I have 256 GB of RDIMM RAM as a spare should I need to troubleshoot.
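
If you are ever unsure what is actually fitted, the DIMM types are easy to check from the OS. Here is a rough sketch that parses dmidecode output (run as root); the exact field wording can vary a little between firmware versions.

    # List each populated DIMM slot with its size and type detail, so an
    # accidental RDIMM/LRDIMM mix shows up before the server refuses to boot.
    import subprocess

    out = subprocess.run(["dmidecode", "--type", "memory"],
                         capture_output=True, text=True, check=True).stdout

    for block in out.split("\n\n"):
        if "Memory Device" not in block:
            continue
        fields = dict(line.strip().split(": ", 1)
                      for line in block.splitlines() if ": " in line)
        if fields.get("Size", "No Module Installed") == "No Module Installed":
            continue  # empty slot
        print(fields.get("Locator"), fields.get("Size"),
              fields.get("Type"), fields.get("Type Detail"))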

Disk Performance

Improving disk performance was a little more involved, as I couldn't just throw additional hardware at the problem. Each server has 10 x 1TB SSDs installed. These are pretty fast but aren't nearly as fast as NVMe disks. The XC730 has a dedicated hardware RAID controller, the PERC H730 Mini.

I experimented with a few different RAID configurations but eventually settled on RAID10. I did consider RAID0 for a short period but concluded that the extra speed did not outweigh the hassle of restoring all of the VMs should there be a failure. RAID10 gives good redundancy while still providing plenty of speed for the workloads I put on these machines.
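
fio is the right tool for proper benchmarking, but for quickly comparing array layouts a crude sequential write test is enough to see the difference (bearing in mind the H730's write-back cache will flatter short runs). A rough Python sketch, with the mount point as a placeholder for wherever the array is mounted:

    # Crude sequential write test: stream 4 GiB of 1 MiB blocks to the array
    # and report throughput. Not a substitute for fio, but fine for comparing
    # RAID layouts on the same hardware.
    import os
    import time

    TEST_FILE = "/mnt/raid10/throughput.tmp"   # placeholder mount point
    BLOCK = 1024 * 1024                        # 1 MiB per write
    COUNT = 4 * 1024                           # 4 GiB in total

    buf = os.urandom(BLOCK)
    start = time.monotonic()
    with open(TEST_FILE, "wb", buffering=0) as f:
        for _ in range(COUNT):
            f.write(buf)
        os.fsync(f.fileno())                   # make sure it actually hit the disks
    elapsed = time.monotonic() - start

    print(f"sequential write: {COUNT / elapsed:.0f} MiB/s")
    os.remove(TEST_FILE)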

Networking

The XC730s came with an Ethernet 10G 4P X520/I350 rNDC network card installed. This has 2 x 10Gb SFP+ ports, which I connect to my 10Gb switch using DAC cables. The card also has 2 x 1Gb Ethernet ports, but I don't use those.
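
To confirm the links actually deliver close to 10Gb, I can run iperf3 between two of the hosts. A small sketch that parses its JSON output; it assumes iperf3 is installed and that 'iperf3 -s' is already running on the target (the address here is a placeholder):

    # Run iperf3 against another 10Gb host and report the received throughput.
    import json
    import subprocess

    TARGET = "192.168.1.21"   # placeholder: the other XC730's 10Gb address

    result = subprocess.run(["iperf3", "-c", TARGET, "-J"],
                            capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)

    bps = report["end"]["sum_received"]["bits_per_second"]
    print(f"throughput: {bps / 1e9:.2f} Gbit/s")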

iDRAC

As with all decent enterprise servers, the XC730s came with Enterprise-licensed iDRAC cards so that the systems can be managed 'out-of-band'. All my servers have the iDRAC card connected to the network using a thin CAT6 cable.
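
Out-of-band means I can query and control the servers even when the OS is down. The iDRAC8 in these machines also speaks Redfish on reasonably recent firmware, so simple checks can be scripted. A minimal sketch, with the address and credentials as placeholders:

    # Read the server's power state and model over the iDRAC's Redfish API.
    # Address and credentials are placeholders; iDRAC certificates are
    # usually self-signed, hence verify=False.
    import requests
    import urllib3

    urllib3.disable_warnings()                 # silence the self-signed cert warning

    IDRAC = "https://192.168.1.120"            # placeholder iDRAC address
    AUTH = ("root", "calvin")                  # Dell's default login - change it!

    resp = requests.get(f"{IDRAC}/redfish/v1/Systems/System.Embedded.1",
                        auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    system = resp.json()
    print(system["PowerState"], system["Model"], system.get("BiosVersion"))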

GPUs

The XC730-16G has something that my other servers have never had: the ability to have high-power graphics cards installed. Most servers lack the power leads/sockets needed for modern graphics cards. This hasn't been a problem for me previously, but systems like Plex and Jellyfin need more horsepower when transcoding video streams. I would like to run every application I have in Kubernetes, though, so I will be looking for a way to pass graphics cards through to containers, much as you would with Docker (one sketch of how that could look is below).
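
For Kubernetes, the usual route is a device plugin (for example NVIDIA's) on the GPU node, after which a pod simply requests an 'nvidia.com/gpu' resource and gets scheduled onto that node. A sketch using the Kubernetes Python client; the pod name and image are illustrative only:

    # Sketch: request one GPU for a transcoding container. Assumes a GPU
    # device plugin (e.g. NVIDIA's) is already installed on the cluster;
    # the pod name and image are illustrative only.
    from kubernetes import client, config

    config.load_kube_config()   # or load_incluster_config() when running in-cluster

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="jellyfin-gpu-test"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[client.V1Container(
                name="jellyfin",
                image="jellyfin/jellyfin:latest",
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}   # one whole GPU for transcoding
                ),
            )],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)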

Stephen

Hi, my name is Stephen Finchett. I have been a software engineer for over 30 years and have worked on complex, business-critical, multi-user systems for all of my career. For the last 15 years, I have been concentrating on web-based solutions using the Microsoft stack, including ASP.NET, C#, TypeScript, and SQL Server, running everything at scale within Kubernetes.