1st build advice - Dell R720

R1c4ard06

Dabbler
Joined
Apr 28, 2022
Messages
11
Hi,

I have a Dell R720 that I'm going to set up as a home lab. The server has 32GB of RAM (I've just purchased another 96GB) and 4 x 4TB SAS drives. I think it has a PERC H310 Mini, though I might be wrong; whatever it was, I flashed it to IT mode a few weeks ago.

I've just purchased a cheap PCIe to dual NVMe adapter for boot drives, as I have loads of 128GB NVMe drives lying about. I've not used these adapters before, but I didn't want to take up another couple of HDD slots. Thoughts? Is anyone using these for boot?

I'm thinking Proxmox or VMware; I've not used Proxmox before. Are there any advantages to one over the other, or is it best to stick with bare metal?

I like the idea of SCALE, as I use Core at work, but I'm still waiting for the roadmap to progress a little further before thinking of moving over just yet.

Ideally I'm looking to run a few VMs and a few popular open source apps, as well as SCALE.

Any recommended RAID config for the 4 drives I have at the moment, and/or additional drives to help with speed if required?

Sorry for so many questions! Thanks in advance for any replies.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I can't answer some of your questions... here are some answers.

Some cheap PCIe dual NVMe adapters require the PCIe slot & CPU to support bifurcation. Check that before trying to install and use one.

For people who run TrueNAS as a VM, there is less collective experience with Proxmox than with VMware. Be sure to pass the disk controller through to TrueNAS. There are Resources for improving the reliability of running TrueNAS as a VM.

I would not categorize a change from Core to SCALE as an upgrade.

With only 4 disks, there are really only a few choices: two 2-way mirrors, or a 4-disk RAID-Z2. Both use half the storage for redundancy, with different characteristics. In theory you could use RAID-Z1, but that is not recommended for large disks.

If the intent is to run VMs and apps, then two 2-way mirrors are suggested; RAID-Zx is better for bulk and shared storage. That would also suggest using TrueNAS SCALE on bare metal instead of as a VM itself.
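As a rough illustration of that trade-off, here is a back-of-the-envelope sketch in Python (numbers assume 4TB drives and ignore ZFS metadata and padding overhead; they are illustrative only):

    # Back-of-the-envelope comparison of the two 4-disk layouts (illustrative only).
    disk_tb = 4
    disks = 4

    # Two 2-way mirrors: two vdevs, one data disk per mirror.
    mirrors_usable = (disks // 2) * disk_tb      # ~8 TB usable
    # One 4-disk RAID-Z2 vdev: two disks' worth of parity.
    raidz2_usable = (disks - 2) * disk_tb        # ~8 TB usable

    print(f"2 x 2-way mirrors: ~{mirrors_usable} TB usable, 2 vdevs, "
          f"survives 1 failure per mirror")
    print(f"4-disk RAID-Z2:    ~{raidz2_usable} TB usable, 1 vdev, "
          f"survives any 2 failures")

Capacity comes out the same either way; the practical difference is vdev count (which drives random I/O) and exactly which two-disk failures the pool can survive.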


Don't forget that you can play with TrueNAS in a simple VM on your desktop. That would get you more familiar with the GUI and command line.
 

R1c4ard06

Dabbler
Joined
Apr 28, 2022
Messages
11
Thanks Arwen,

Some cheap PCIe dual NVMe adapters require the PCIe slot & CPU to support bifurcation. Check that before trying to install and use one.

Thank you, bifurcation is a new one to me. It doesn't seem that the BIOS supports it. I will look at other alternatives for boot, as I'll probably go bare metal as suggested. I'll see what other RAID cards I have lying about; maybe I'll be able to float a couple of SSDs inside, loose.

I would not categorize a change from Core to SCALE as an upgrade.

Yes :) bad choice of words on my part. I am looking to move over to SCALE in production eventually.

With only 4 disks, there are really only a few choices: two 2-way mirrors, or a 4-disk RAID-Z2. Both use half the storage for redundancy, with different characteristics. In theory you could use RAID-Z1, but that is not recommended for large disks.

Is it worth getting all the disks first, or just a couple more? I do want to max out all available bays eventually; I'm just not sure how easy it is to add more disks once the RAID is set up.
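For context on the "adding more disks later" question, a minimal Python sketch of how ZFS pools traditionally grow, by adding whole vdevs rather than single disks (hypothetical numbers):

    # Sketch: a ZFS pool grows in whole-vdev increments (illustrative numbers).
    disk_tb = 4
    vdevs = ["mirror-0", "mirror-1"]             # two 2-way mirrors -> ~8 TB usable
    usable = len(vdevs) * disk_tb
    print(f"Starting pool: {len(vdevs)} vdevs, ~{usable} TB usable")

    # Expansion means adding another complete vdev (e.g. two more matched disks).
    vdevs.append("mirror-2")
    usable = len(vdevs) * disk_tb
    print(f"After adding a mirror vdev: {len(vdevs)} vdevs, ~{usable} TB usable")
    # A single extra disk traditionally cannot be absorbed into an existing
    # RAID-Z vdev; with mirrors you add disks in matched pairs.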
 
Joined
Jun 15, 2022
Messages
674
I would not categorize a change from Core to SCALE as an upgrade.
I would. Its installation base is growing rapidly, to the point that functioning CORE installations are migrating to SCALE. There have to be good reasons for such a situation to happen.
 
Joined
Jun 15, 2022
Messages
674
Is it worth getting all the disks first, or just a couple more? I do want to max out all available bays eventually; I'm just not sure how easy it is to add more disks once the RAID is set up.
It depends on your goals; ZFS doesn't grow as easily as LVM does. This video should go a long way toward explaining ZFS and how it works, hopefully greatly reducing the time you invest both now and especially later:

 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I would. Its installation base is growing rapidly, to the point that functioning CORE installations are migrating to SCALE. There have to be good reasons for such a situation to happen.
Perhaps on the free side. But iXsystems would have to comment on the Enterprise side (which would make more of a difference to code development than the free side).

Remember, for the storage side, Core is likely more stable than SCALE.

Some people have even said Core is faster than SCALE on the same hardware (just the storage side). This might make sense because ZFS is better integrated into FreeBSD. It may also help that FreeBSD can use more than 50% of memory for the ZFS ARC, where Linux is limited to 50% by default.
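To put that last point in numbers, a small Python sketch of the default ARC ceilings being compared, using the 128GB of RAM mentioned earlier in the thread (the FreeBSD figure is the commonly cited "RAM minus 1GB" default; both limits are tunable, so treat this as a sketch of defaults only):

    # Rough comparison of default ZFS ARC ceilings (illustrative, tunable defaults).
    ram_gb = 128

    linux_arc_max_gb = ram_gb * 0.5       # OpenZFS on Linux: ~half of RAM by default
    freebsd_arc_max_gb = ram_gb - 1       # FreeBSD: roughly all RAM minus 1GB by default

    print(f"Linux default ARC ceiling:   ~{linux_arc_max_gb:.0f} GB")
    print(f"FreeBSD default ARC ceiling: ~{freebsd_arc_max_gb} GB")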
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
Perhaps on the free side. But iXsystems would have to comment on the Enterprise side (which would make more of a difference to code development than the free side).

Remember, for the storage side, Core is likely more stable than SCALE.

Some people have even said Core is faster than SCALE on the same hardware (just the storage side). This might make sense because ZFS is better integrated into FreeBSD. It may also help that FreeBSD can use more than 50% of memory for the ZFS ARC, where Linux is limited to 50% by default.
The biggest issue with TrueNAS SCALE at an enterprise level right now is the inability to send ZFS snapshots from a SCALE cluster to anything else.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The biggest issue with TrueNAS SCALE at an enterprise level right now is the inability to send ZFS snapshots from a SCALE cluster to anything else.
By anything else, do you mean something other than TrueNAS SCALE?

I would think that if the pools on the source and destination servers were limited to common feature sets, SCALE would easily be able to replicate to Core, or to generic FreeBSD or Linux with ZFS.

If you think pool feature sets on SCALE are the problem, well, that problem has existed since ZFS was created. It has always been required that the destination have the same features as the source, or more. There are some slight exceptions when a feature on the source is not active.

The common way to solve that particular problem is to only activate features common to all your ZFS servers, meaning that if a feature is not available on one server but is available on another, you still don't activate it. I routinely don't upgrade the pool features on my home Linux computers (desktop, media server, 2 x laptops, each with 2 pools: one OS and the other just storage). Plus, I use ZFS on my backup disks so I can detect corruption. That is another 14 pools.
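A minimal sketch of that "lowest common denominator" approach, with hypothetical server names and a hand-picked subset of feature-flag names (the real lists would come from inspecting each pool):

    # Sketch: only enable pool features that every replication target supports.
    # Server names and feature sets here are hypothetical examples.
    features_by_server = {
        "scale_box":  {"lz4_compress", "large_blocks", "zstd_compress", "draid"},
        "core_box":   {"lz4_compress", "large_blocks", "zstd_compress"},
        "old_backup": {"lz4_compress", "large_blocks"},
    }

    safe_to_enable = set.intersection(*features_by_server.values())
    print("Enable everywhere:", sorted(safe_to_enable))
    # Features outside this set stay un-activated on the source pool so that
    # replication streams remain receivable on every destination.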
 

Penobscot

Dabbler
Joined
Nov 22, 2021
Messages
15
I can't help with most of your questions but I have run Core on an R720 since 2020. The first thing I would do is get all the machine's firmware up-to-date. You will probably need to install Windows Server temporarily to do that and then pop the service tag into Dell's site. An iDRAC will also be helpful if the machine doesn't have one; note the iDRAC can no longer see the drives once you flash the controller but TN takes care of that anyway.

Anyhow, all I wanted from my setup was to replace a failed Seagate Black Armor NAS. I had two former ESXi hosts so I combined their drives (main storage was previously a SAN) and ended up with 7 1TB SATA Constellations and a 128GB SSD from a Dell workstation. Total cost, not including my time, was about $7 for tray and 2.5 to 3.5 adapter. That got me about 4TB of Z2 storage along with 128GB RAM and two Xeon CPUs.

Main use is still storage (and replication to the cloud), but after I discovered BHYVE we put a BDC and assorted Debian VMs on it. Works great.
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
Hi,

I have a Dell R720 that I'm going to set up as a home lab. The server has 32GB of RAM (I've just purchased another 96GB) and 4 x 4TB SAS drives. I think it has a PERC H310 Mini, though I might be wrong; whatever it was, I flashed it to IT mode a few weeks ago.

I've just purchased a cheap PCIe to dual NVMe adapter for boot drives, as I have loads of 128GB NVMe drives lying about. I've not used these adapters before, but I didn't want to take up another couple of HDD slots. Thoughts? Is anyone using these for boot?

I'm thinking Proxmox or VMware; I've not used Proxmox before. Are there any advantages to one over the other, or is it best to stick with bare metal?

I like the idea of SCALE, as I use Core at work, but I'm still waiting for the roadmap to progress a little further before thinking of moving over just yet.

Ideally I'm looking to run a few VMs and a few popular open source apps, as well as SCALE.

Any recommended RAID config for the 4 drives I have at the moment, and/or additional drives to help with speed if required?

Sorry for so many questions! Thanks in advance for any replies.
We're replacing several R720xd servers that have faithfully run FreeNAS and TrueNAS core for the better part of a decade now. While we've not run Scale on them, I suspect our lessons learned apply to Scale as well. These servers were run with PowerVault MD3060e shelves attached to them.
  1. Do not use the onboard PERC H310 with original Dell firmware. While the system will work with the drives in "passthrough" or "non-raid" mode, they won't be hot swappable. Dell has been rebranding LSI\Broadcom hardware for years and the PERC H310 can be flashed to IT mode. Either use a PCIe HBA from the TrueNAS hardware list or flash the PERC H310 to IT mode if hot swappable drives in the chassis matter to you.
  2. In the server's BIOS, disable the OS watchdog. We do this on all our Dell servers that run TrueNAS, not just the R720xd. If you don't do this, at some point, Dell's watchdog will incorrectly detect an OS hang in TrueNAS and forcibly reboot a perfectly good and working server, causing an outage. We don't know why, but it's been a consistent issue across server models and FreeNAS\TrueNAS versions so we disable watchdog on our TrueNAS servers.
  3. Depending on your server's configuration, use Intel network cards if you can. We've consistently had issues with the older Broadcom network cards that could come in some configurations of the R720xd. I can't remember the model numbers as we've long replaced them with Intel cards but we've not had any issues since we did.
  4. Remember that the backplane of the R720xd is only 6Gb SAS. This is probably fine for most home users but you're talking about SATA speeds. Don't expect too much from it.
  5. I don't know if the NVMe to PCIe adapters will boot on the R720xd. Dell doesn't officially support that, but they also don't support it on the R730xd and we've proven that the R730xd can be booted from a PCIe card. Our R720xd servers are due to be pulled in the next couple of weeks, so I'm willing to test that one out before we recycle them, if you are willing to wait.
  6. If you want speed, especially for running VMs and with the 4 drives you mention, I'd suggest going with a pool config of two vdevs with two drives each, and mirrored drives in each vdev. That's going to be better performance than having all four drives in a RAIDZ.

Hopefully that's all useful.
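A rough way to see the reasoning behind point 6: as a common rule of thumb, random I/O in ZFS scales with the number of vdevs, and a RAID-Z vdev behaves more like a single disk for random I/O than like four. A Python sketch with made-up per-disk numbers (real performance depends on the drives, record size and workload):

    # Illustrative random-IOPS model: vdev count dominates (numbers are made up).
    per_disk_iops = 150              # ballpark for a 7.2k RPM spindle
    disks = 4

    mirror_vdevs = disks // 2        # two striped 2-way mirrors
    mirror_reads = mirror_vdevs * 2 * per_disk_iops   # both sides can serve reads
    mirror_writes = mirror_vdevs * per_disk_iops      # one vdev's worth per write

    raidz_vdevs = 1                  # all four disks in a single RAID-Z vdev
    raidz_reads = raidz_vdevs * per_disk_iops
    raidz_writes = raidz_vdevs * per_disk_iops

    print(f"Striped mirrors: ~{mirror_reads} read / ~{mirror_writes} write IOPS")
    print(f"Single RAID-Z:   ~{raidz_reads} read / ~{raidz_writes} write IOPS")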
 

R1c4ard06

Dabbler
Joined
Apr 28, 2022
Messages
11
I can't help with most of your questions but I have run Core on an R720 since 2020. The first thing I would do is get all the machine's firmware up-to-date. You will probably need to install Windows Server temporarily to do that and then pop the service tag into Dell's site. An iDRAC will also be helpful if the machine doesn't have one; note the iDRAC can no longer see the drives once you flash the controller but TN takes care of that anyway.

Anyhow, all I wanted from my setup was to replace a failed Seagate Black Armor NAS. I had two former ESXi hosts so I combined their drives (main storage was previously a SAN) and ended up with 7 1TB SATA Constellations and a 128GB SSD from a Dell workstation. Total cost, not including my time, was about $7 for tray and 2.5 to 3.5 adapter. That got me about 4TB of Z2 storage along with 128GB RAM and two Xeon CPUs.

Main use is still storage (and replication to the cloud), but after I discovered BHYVE we put a BDC and assorted Debian VMs on it. Works great.
After deliberating for a week, I have now settled on an additional PERC H710 card for a couple of small SSD boot drives. I'm going to try to power them from the GPU power connector on the riser card; failing that, I will order the CD/DVD power cable and split the power. I also looked at the BOSS card, a PCIe switch card and the dual SD card option.

I'll update all the firmware, as it's probably been a few years! Pretty sure I have iDRAC. I'll also get the Dell custom ISO, as I've decided to stick with Core on ESXi now, since I'm more familiar with it than with Proxmox and SCALE. I have purchased some additional 4TB drives, so I'm now at 6 drives.

I used to have napp-it on ESXi but broke that build down a couple of years ago, so the main purpose here is also storage, which I will back up to the cloud.

I mainly use Hyper-V; I have never used bhyve and have spent very little time on Proxmox, so hopefully once I get my lab up I can play with various VMs.

Apart from the additional RAM, some drives, an H710 and a cheap PCIe NVMe card that will end up in a box, it's a fairly cheap lab on the whole.
 

R1c4ard06

Dabbler
Joined
Apr 28, 2022
Messages
11
We're replacing several R720xd servers that have faithfully run FreeNAS and TrueNAS core for the better part of a decade now. While we've not run Scale on them, I suspect our lessons learned apply to Scale as well. These servers were run with PowerVault MD3060e shelves attached to them.
  1. Do not use the onboard PERC H310 with original Dell firmware. While the system will work with the drives in "passthrough" or "non-raid" mode, they won't be hot swappable. Dell has been rebranding LSI\Broadcom hardware for years and the PERC H310 can be flashed to IT mode. Either use a PCIe HBA from the TrueNAS hardware list or flash the PERC H310 to IT mode if hot swappable drives in the chassis matter to you.
  2. In the server's BIOS, disable the OS watchdog. We do this on all our Dell servers that run TrueNAS, not just the R720xd. If you don't do this, at some point, Dell's watchdog will incorrectly detect an OS hang in TrueNAS and forcibly reboot a perfectly good and working server, causing an outage. We don't know why, but it's been a consistent issue across server models and FreeNAS\TrueNAS versions so we disable watchdog on our TrueNAS servers.
  3. Depending on your server's configuration, use Intel network cards if you can. We've consistently had issues with the older Broadcom network cards that could come in some configurations of the R720xd. I can't remember the model numbers as we've long replaced them with Intel cards but we've not had any issues since we did.
  4. Remember that the backplane of the R720xd is only 6Gb SAS. This is probably fine for most home users but you're talking about SATA speeds. Don't expect too much from it.
  5. I don't know if the NVMe to PCIe adapters will boot on the R720xd. Dell doesn't officially support that, but they also don't support it on the R730xd and we've proven that the R730xd can be booted from a PCIe card. Our R720xd servers are due to be pulled in the next couple of weeks, so I'm willing to test that one out before we recycle them, if you are willing to wait.
  6. If you want speed, especially for running VMs and with the 4 drives you mention, I'd suggest going with a pool config of two vdevs with two drives each, and mirrored drives in each vdev. That's going to be better performance than having all four drives in a RAIDZ.

Hopefully that's all useful.
Yes very useful, thank you.

1. Yes, I flashed to IT mode :)
2. I'm running Core on TrueNAS R20s; they are not a quiet unit, for anyone out there wondering. I've not come across this before on Core. Thanks for the heads up, will do.
3. I have an additional 4-port Intel NIC installed.
4. Yes, just a home lab that fits my requirements.
5. I think the only way is to use a PCIe switch card; bifurcation isn't supported on the R720 (please correct me if I am wrong). The cheap card I bought has now gone into the box of things that might come in useful one day.
6. I've now gone to 6 drives and would like to lean towards tolerating a 2-drive failure.
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
Yes very useful, thank you.

1. Yes, I flashed to IT mode :)
2. I'm running Core on TrueNAS R20s; they are not a quiet unit, for anyone out there wondering. I've not come across this before on Core. Thanks for the heads up, will do.
3. I have an additional 4-port Intel NIC installed.
4. Yes, just a home lab that fits my requirements.
5. I think the only way is to use a PCIe switch card; bifurcation isn't supported on the R720 (please correct me if I am wrong). The cheap card I bought has now gone into the box of things that might come in useful one day.
6. I've now gone to 6 drives and would like to lean towards tolerating a 2-drive failure.
You should be able to change the fan modes in the BIOS. I can't remember exactly where it's set. I have an R620 at home that runs my lab, virtual TrueNAS, PFSense, and a Space Engineers server at times without being too noisy. One of our 720xd servers was evacuated yesterday. Let me poke it with a stick and see what I can figure out. I know there's a way to quiet the fans.
 

R1c4ard06

Dabbler
Joined
Apr 28, 2022
Messages
11
You should be able to change the fan modes in the BIOS. I can't remember exactly where it's set. I have an R620 at home that runs my lab, virtual TrueNAS, PFSense, and a Space Engineers server at times without being too noisy. One of our 720xd servers was evacuated yesterday. Let me poke it with a stick and see what I can figure out. I know there's a way to quiet the fans.
Sorry, I'm running Core at work on iXsystems TrueNAS R20s; they are a loud unit. The Dell R720 is quiet in comparison.

I did have a couple of Dell R900s for a lab a while ago; they make the TrueNAS R20s seem quiet! I did swap out their fans very quickly, which caused a constant alert, but they were then nearly silent.
 

R1c4ard06

Dabbler
Joined
Apr 28, 2022
Messages
11
It's been a while! I purchased an additional RAID card and added two small SSDs for boot. The project went dead after I ordered a couple more drives and the order was never fulfilled.

So, moving on: I've just ordered 4 x 4TB drives and installed SCALE last night. I installed directly onto the SSDs, as I have another server coming for playing with XCP-ng, XOA etc., but that's out of scope here.

  • 8 x 4TB drives: any recommendation on a RAID setup with RAID-Z2? Main use is SMB shares.
 
Joined
Jun 15, 2022
Messages
674
@R1c4ard06: Once you get started and find out the server is awesome and upgrades are competitively inexpensive, it's easy to keep going on building out a box.
 

R1c4ard06

Dabbler
Joined
Apr 28, 2022
Messages
11
Looking to set up my RAID-Z tonight.

Any recommendations? I'm thinking 6 x 4TB RAID-Z2 for SMB shares, and a 2 x 4TB mirror as an NFS share for shared VM storage.
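For reference, a rough capacity comparison of the layout being considered here versus one big vdev (a sketch assuming 4TB drives and ignoring ZFS overhead):

    # Approximate usable space for 8 x 4TB, before metadata and padding overhead.
    disk_tb = 4

    # Option A: 6-wide RAID-Z2 for SMB shares + separate 2-way mirror for VM/NFS storage.
    optA_z2 = (6 - 2) * disk_tb          # ~16 TB for shares
    optA_mirror = 1 * disk_tb            # ~4 TB for VMs, in its own pool/vdev
    print(f"Option A: ~{optA_z2} TB RAID-Z2 + ~{optA_mirror} TB mirror")

    # Option B: a single 8-wide RAID-Z2 pool.
    optB_z2 = (8 - 2) * disk_tb          # ~24 TB, but random I/O of a single vdev
    print(f"Option B: ~{optB_z2} TB RAID-Z2")

Option A trades some raw space for a small, mirror-backed pool that suits VM storage better, in line with the advice given earlier in the thread.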
 