This guide collects various how-tos for both simple and complex tasks, primarily using the TrueNAS web interface. Tutorials are organized parallel to the TrueNAS web interface structure and grouped by topic. Tutorials are living articles, continually updated with new content and additional in-depth tutorials that help you unlock the full potential of TrueNAS.
To display all tutorials in a linear HTML format, export it to PDF, or physically print it, please select ⎙ Download or Print.
The SCALE top navigation toolbar provides access to functional areas of the UI that you might want to reach directly while on other screens. Icon buttons provide quick access to dropdown lists of options and dropdown panels with information on system alerts or tasks, and can include access to other information or configuration screens. The toolbar also shows the name of the admin user currently logged into the system, to the left of the Settings and Power icons.
You can also collapse or expand the main function menu on the left side of the screen.
The API Keys option on the top right toolbar Settings (user icon) dropdown menu displays the API Keys screen. This screen displays a list of API keys added to your system and allows you to add, edit, or delete keys.
Click Add to display a dialog window that lets users add a new API key.
Type a descriptive name and click Add. The system displays a confirmation dialog and adds a new API key to the list.
Select the icon for any API key on the list to display options to manage that API key. Options are Edit or Delete.
Select Reset to remove the existing API key and generate a new random key. The dialog displays the new key and a Copy to Clipboard option to copy the key to the clipboard.
Always back up and secure keys. The key string displays only one time, at creation!
To delete a key, select Confirm on the delete dialog to activate the Delete button.
Click API Docs to access API documentation that is built into the system.
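As an illustration of how a saved key is used, the sketch below composes a request against the SCALE v2.0 REST API using Bearer authentication. The hostname and key are placeholders, and the curl call itself is left as a comment so the snippet is safe to run anywhere:

```shell
NAS_HOST="truenas.local"                  # placeholder; use your system's address
API_KEY="PASTE-KEY-SHOWN-AT-CREATION"     # the key string displays only once
# An API key authenticates REST requests via a Bearer token header, e.g.:
#   curl -sk -H "Authorization: Bearer ${API_KEY}" \
#        "https://${NAS_HOST}/api/v2.0/system/info"
echo "https://${NAS_HOST}/api/v2.0/system/info"
```

See the built-in API documentation for the full list of available endpoints.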
TrueNAS Enterprise
This procedure applies to SCALE Enterprise High Availability (HA) systems only.
If you need to power down your SCALE Enterprise system with HA enabled, this is the procedure:
While logged into the SCALE Web UI using the virtual IP (VIP), click the power button in the top right corner of the screen.
Select Shut Down from the dropdown list.
This shuts down the active controller.
The system fails over to the standby controller.
When the SCALE Web UI login screen displays, log back in to the system. This logs you in to the standby controller.
Click the power button in the top right corner of the screen.
Select Shut Down from the dropdown list.
This shuts down the standby controller.
This section contains tutorials involving the SCALE Dashboard.
TrueNAS SCALE allows users to synchronize SCALE and system server time when they fall out of sync. This function cannot correct time differences greater than 30 days.
The System Information widget on the Dashboard displays a message and provides an icon button that executes the time-synchronization operation only when SCALE detects a discrepancy between SCALE and system server time.
Click the Synchronize Time icon button to initiate the time-synchronization operation.

The SCALE Storage section has controls for pools, snapshots, and disk management. This section also provides access to datasets, zvols, quotas, and permissions.
Use the Import Pool button to reconnect pools exported/disconnected from the current system or created on another system. This also reconnects pools after users reinstall or upgrade the TrueNAS system.
Use the Disks button to manage, wipe, and import storage disks that TrueNAS uses for ZFS data storage.
Use Create Pool to create ZFS data storage “pools” from physical disks. Pools efficiently store and protect data.
The Storage screen displays all the pools added to the system. Each pool shows statistics and status, along with buttons to manage the different elements of the pool.
The articles in this section offer specific guidance for the different storage management options.
ZFS pool importing works for pools exported or disconnected from the current system, those created on another system, and for pools you reconnect after reinstalling or upgrading the TrueNAS system.
The import procedure only applies to disks with a ZFS storage pool.
To import a pool, go to the Storage Dashboard and click Import Pool at the top of the screen.
TrueNAS detects the pools that are present but not connected and adds them to the Pools dropdown list.
Select a pool from the Pool dropdown list, then click Import.
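For reference, the UI import corresponds to ZFS pool import at the command line. The sketch below prints the commands rather than executing them, and the pool name is a placeholder:

```shell
POOL="tank"   # placeholder pool name
# 'zpool import' with no arguments lists pools that are present but not
# connected; naming a pool imports it.
echo "zpool import"            # list importable pools
echo "zpool import ${POOL}"    # import the selected pool
```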
To manage disks, go to Storage and click Disks on the top right of the screen to display the Storage Disks screen.
Select the disk on the list, then select Edit.
The Disks page lets users edit disks, perform manual tests, and view S.M.A.R.T. test results. Users may also delete obsolete data off an unused disk.
Select the disk(s) you want to perform a S.M.A.R.T. test on and click Manual Test.
Click Start to begin the test. Depending on the test type you choose, the test can take some time to complete. TrueNAS generates alerts when tests discover issues.
For information on automated S.M.A.R.T. testing, see the S.M.A.R.T. tests article.
To review test results, expand the disk and click S.M.A.R.T. Test Results.
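For comparison, the smartmontools equivalents of a manual test and a results review look like this. The device path is a placeholder, and the commands are printed rather than executed:

```shell
DISK="/dev/da0"   # placeholder device path
echo "smartctl -t short ${DISK}"   # start a short self-test
echo "smartctl -a ${DISK}"         # view S.M.A.R.T. attributes and the test log
```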
Hard drives and solid-state drives (SSDs) have a finite lifetime and can fail unexpectedly. When a disk fails in a stripe (RAID0) pool, you must recreate the entire pool and restore all data from backup. We always recommend creating storage pools with disk redundancy rather than stripe pools.
To prevent further redundancy loss or eventual data loss, always replace a failed disk as soon as possible! TrueNAS integrates new disks into a pool to restore it to full functionality.
TrueNAS requires that the replacement disk has the same or greater capacity than the failed disk. The disk must be installed in the TrueNAS system and must not be part of an existing storage pool. TrueNAS wipes the data on the replacement disk as part of the process.
Disk replacement automatically triggers a pool resilver.
If you configure the main SCALE Dashboard to include individual Pool widgets or the Storage widget, these widgets show whether system pools are online or offline, degraded, or in an error condition.
The Storage Dashboard pool widgets also show the status of each of your pools.
From the main Dashboard, you can click either the Pool or Storage widget to go to the Storage Dashboard screen, or click Storage on the main navigation menu to open the Storage Dashboard and locate the pool in the degraded state.
To replace a failed disk:
Locate the failed drive.
a. Go to the Storage Dashboard and click Manage Devices on the Topology widget for the degraded pool to open the Devices screen for that pool.
b. Click anywhere on the VDEV to expand it and look for the drive with the Offline status.
Take the disk offline.
Click Offline on the ZFS Info widget to take the disk offline. The button toggles to Online.
Pull the disk from your system and replace it with a disk of the same or greater capacity than the failed disk.
a. Click Replace on the Disk Info widget on the Devices screen for the disk you off-lined.
b. Select the new drive from the Member Disk dropdown list on the Replacing disk diskname dialog.
Add the new disk to the existing VDEV. Click Replace Disk to add the new disk to the VDEV and bring it online.
Disk replacement fails when the selected disk has partitions or data present. To destroy any data on the replacement disk and allow the replacement to continue, select the Force option.
When the disk wipe completes, TrueNAS starts replacing the failed disk. TrueNAS resilvers the pool during the replacement process. For pools with large amounts of data, this can take a long time. When the resilver process completes, the pool status returns to Online status on the Devices screen.
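The steps above can be sketched at the zpool level. The pool and disk names below are placeholders, and the commands are printed rather than executed:

```shell
POOL="tank"     # placeholder pool name
FAILED="da5"    # placeholder failed disk
NEW="da9"       # placeholder replacement disk
# Offline the failed disk, replace it, then watch the automatic resilver.
echo "zpool offline ${POOL} ${FAILED}"
echo "zpool replace ${POOL} ${FAILED} ${NEW}"
echo "zpool status ${POOL}"    # shows resilver progress
```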
We recommend taking a disk offline before starting the physical disk replacement. Offlining a disk removes the device from the pool and can prevent swap issues.
Click on Manage Devices to open the Devices screen, click anywhere on the VDEV to expand VDEV and show the drives in the VDEV.
Click Offline on the ZFS Info widget. A confirmation dialog displays. Click Confirm and then Offline. The system begins the process to take the disk offline. When complete, the disk status displays as Offline and the button toggles to Online.
Use Replace to bring the new disk online in the same VDEV.
After a disk fails, the hot spare takes over. To restore the hot spare to waiting status after replacing the failed drive, remove the hot spare from the pool, then re-add it to the pool as a new hot spare.
The disk wipe option deletes obsolete data from an unused disk.
Wipe is a destructive action and results in permanent data loss! Back up any critical data before wiping a disk.
TrueNAS only shows the Wipe option for unused disks.
Ensure you have backed-up all data and are no longer using the disk. Triple check that you have selected the correct disk for the wipe. Recovering data from a wiped disk is usually impossible.
Click Wipe to open a dialog with additional options:
After selecting the appropriate method, click Wipe and confirm the action. A Confirmation dialog opens.
Verify the name to ensure you have chosen the correct disk. When satisfied you can wipe the disk, set Confirm and click Continue.
Continue starts the disk wipe process and opens a progress dialog with the Abort button.
Abort stops the disk wipe process. At the end of the disk wipe process a success dialog displays. Close closes the dialog and returns you to the Disks screen.
TrueNAS Enterprise
Over-provisioning an SSD distributes the total number of writes and erases across more flash blocks on the drive. Seagate provides a thoughtful investigation into over-provisioning SSDs here: https://www.seagate.com/blog/ssd-over-provisioning-benefits-master-ti/.
For more general information on SLOG disks, see SLOG Devices.
Because this is a potentially disruptive procedure, contact iXsystems Support to review your overprovisioning needs and schedule a maintenance window.
Customers who purchase iXsystems hardware or who want additional support must have a support contract to use iXsystems Support Services. The TrueNAS Community forums provide free support for users without an iXsystems Support contract.
iXsystems Customer Support:
- Support Portal: https://support.ixsystems.com
- Email: support@ixsystems.com
- Telephone and Other Resources: https://www.ixsystems.com/support/
Pyrite Version 1 SEDs do not have PSID support and can become unusable if the password is lost.
See this Trusted Computing Group and NVM Express® joint white paper for more details about these specifications.
TrueNAS implements the security capabilities of camcontrol for legacy devices and sedutil-cli for TCG devices.
When managing a SED from the command line, it is recommended to use the sedhelper wrapper script for sedutil-cli to ease SED administration and unlock the full capabilities of the device. Examples of using these commands to identify and deploy SEDs are provided below.
You can configure a SED before or after assigning the device to a pool.
By default, SEDs are not locked until the administrator takes ownership of them. Ownership is taken by explicitly configuring a global or per-device password in the web interface and adding the password to the SEDs. Adding SED passwords in the web interface also allows TrueNAS to automatically unlock SEDs.
A password-protected SED protects the data stored on the device when the device is physically removed from the system. This allows secure disposal of the device without having to first wipe the contents. Repurposing a SED on another system requires the SED password.
For TrueNAS High Availability (HA) systems, SED drives only unlock on the active controller!
Enter the command sedutil-cli --scan in the Shell to detect and list devices. The second column of the results identifies the drive type:
| Character | Standard |
|-----------|----------|
| no | non-SED device |
| 1 | Opal V1 |
| 2 | Opal V2 |
| E | Enterprise |
| L | Opalite |
| p | Pyrite V1 |
| P | Pyrite V2 |
| r | Ruby |
Example:
root@truenas1:~ # sedutil-cli --scan
Scanning for Opal compliant disks
/dev/ada0 No 32GB SATA Flash Drive SFDK003L
/dev/ada1 No 32GB SATA Flash Drive SFDK003L
/dev/da0 No HGST HUS726020AL4210 A7J0
/dev/da1 No HGST HUS726020AL4210 A7J0
/dev/da10 E WDC WUSTR1519ASS201 B925
/dev/da11 E WDC WUSTR1519ASS201 B925
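A quick way to reduce that scan output to only the SED-capable drives is to keep the /dev/* lines whose second column is not "No". The sample output is embedded below so the pipeline runs anywhere; on a live system, pipe the real command instead, e.g. `sedutil-cli --scan | awk '$1 ~ /^\/dev\// && $2 != "No"'`:

```shell
# Keep device lines whose type column indicates SED support.
awk '$1 ~ /^\/dev\// && $2 != "No"' <<'EOF'
/dev/ada0 No 32GB SATA Flash Drive SFDK003L
/dev/da0  No HGST HUS726020AL4210 A7J0
/dev/da10 E  WDC WUSTR1519ASS201 B925
/dev/da11 E  WDC WUSTR1519ASS201 B925
EOF
```

Only the da10 and da11 lines (type E, Enterprise) pass the filter.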
TrueNAS supports setting a global password for all detected SEDs or setting individual passwords for each SED. Using a global password for all SEDs is strongly recommended to simplify deployment and avoid maintaining separate passwords for each SED.
Go to System Settings > Advanced > Self-Encrypting Drive and click Configure. A warning displays stating Changing Advanced settings can be dangerous when done incorrectly. Please use caution before saving. Click Close to display the settings form. Enter the password in SED Password and Confirm SED Password and click Save.
Record this password and store it in a safe place!

Now configure the SEDs with this password. Go to the Shell and enter the command sedhelper setup <password>, where <password> is the global password entered in System Settings > Advanced > SED Password. sedhelper ensures that all detected SEDs are properly configured to use the provided password:
root@truenas1:~ # sedhelper setup abcd1234
da9 [OK]
da10 [OK]
da11 [OK]
Rerun the command sedhelper setup <password> every time a new SED is placed in the system to apply the global password to the new SED.
Go to Storage, click the Disks dropdown in the top right of the screen, and select Disks. From the Disks screen, expand the row for the confirmed SED, then click Edit. Enter and confirm the password in the SED Password fields to override the global SED password.

You must configure the SED to use the new password. Go to the Shell and enter the command sedhelper setup --disk <da1> <password>, where <da1> is the SED to configure and <password> is the created password from Storage > Disks > Edit Disks > SED Password.
Repeat this process for each SED and any SEDs added to the system in the future.
Remember SED passwords! If you lose the SED password, you cannot unlock SEDs or access their data. After configuring or modifying SED passwords, always record and store them in a secure place!
When SED devices are detected during system boot, TrueNAS checks for configured global and device-specific passwords.
Unlocking SEDs allows a pool to contain a mix of SED and non-SED devices. Devices with individual passwords are unlocked with their password. Devices without a device-specific password are unlocked using the global password.
To verify SED locking is working correctly, go to the Shell and enter the command sedutil-cli --listLockingRange 0 <password> <dev/da1>, where <dev/da1> is the SED and <password> is the global or individual password for that SED. The command returns ReadLockEnabled: 1, WriteLockEnabled: 1, and LockOnReset: 1 for drives with locking enabled:
root@truenas1:~ # sedutil-cli --listLockingRange 0 abcd1234 /dev/da9
Band[0]:
Name: Global_Range
CommonName: Locking
RangeStart: 0
RangeLength: 0
ReadLockEnabled: 1
WriteLockEnabled:1
ReadLocked: 0
WriteLocked: 0
LockOnReset: 1
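A quick check of that output is to count how many of the three locking flags are set to 1. The sample output is embedded below so the pipeline runs anywhere; on a live system, pipe the real sedutil-cli command instead. Note that sedutil may omit the space after the colon (WriteLockEnabled:1), which the field separator here tolerates:

```shell
# Count locking flags set to 1; a fully enabled drive reports 3 of 3.
awk -F': *' '/ReadLockEnabled|WriteLockEnabled|LockOnReset/ && $2 == 1 {n++} END {print n " of 3 locking flags enabled"}' <<'EOF'
ReadLockEnabled: 1
WriteLockEnabled:1
ReadLocked: 0
WriteLocked: 0
LockOnReset: 1
EOF
```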
This section contains command line instructions to manage SED passwords and data. The command used is sedutil-cli(8). Most SEDs are TCG-E (Enterprise) or TCG-Opal (Opal v2.0). Commands are different for the different drive types, so the first step is to identify the type in use.
These commands can be destructive to data and passwords. Keep backups and use the commands with caution.
Check SED version on a single drive, /dev/da0 in this example:
root@truenas:~ # sedutil-cli --isValidSED /dev/da0
/dev/da0 SED --E--- Micron_5N/A U402
To check all connected disks at once:
root@truenas:~ # sedutil-cli --scan
Scanning for Opal compliant disks
/dev/ada0 No 32GB SATA Flash Drive SFDK003L
/dev/ada1 No 32GB SATA Flash Drive SFDK003L
/dev/da0 E Micron_5N/A U402
/dev/da1 E Micron_5N/A U402
/dev/da12 E SEAGATE XS3840TE70014 0103
/dev/da13 E SEAGATE XS3840TE70014 0103
/dev/da14 E SEAGATE XS3840TE70014 0103
/dev/da2 E Micron_5N/A U402
/dev/da3 E Micron_5N/A U402
/dev/da4 E Micron_5N/A U402
/dev/da5 E Micron_5N/A U402
/dev/da6 E Micron_5N/A U402
/dev/da9 E Micron_5N/A U402
No more disks present ending scan
root@truenas:~ #
TrueNAS uses ZFS data storage pools to efficiently store and protect data.
We strongly recommend that you review your available system resources and plan your storage use case before creating a storage pool. Consider the following:
Security requirements can mean the pool must be created with ZFS encryption.
RAIDZ pool layouts are well-suited for general use cases, especially smaller (<10 disk) data VDEVs or storage scenarios that involve storing multitudes of small data blocks.
dRAID pool layouts are useful in specific situations where large disk count (>100) arrays need improved resilver times due to increased disk failure rates and the array is intended to store large data blocks.
TrueNAS recommends defaulting to a RAIDZ layout generally, and whenever a dRAID vdev would have fewer than 10 data storage devices.
Determining your specific storage requirements is a critical step before creating a pool. The ZFS and dRAID primers provide a starting point to learn about the strengths and costs of different storage pool layouts. You can also use the ZFS Capacity Calculator and ZFS Capacity Graph to compare configuration options.
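As a rough planning aid, a single RAIDZ vdev's raw usable space is approximately (disks − parity) × disk size. This back-of-the-envelope figure ignores padding, metadata overhead, and the recommended free-space margin, so treat the ZFS Capacity Calculator as authoritative:

```shell
# Example: 8 x 10 TB drives in RAIDZ2 (parity = 2).
disks=8; parity=2; disk_tb=10
echo "approx usable: $(( (disks - parity) * disk_tb )) TB"   # prints 60 TB
```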
Click Create Pool to open the Pool Creation Wizard.
Enter a name of up to 50 lowercase alpha-numeric characters. Use only the permitted special characters that conform to ZFS naming conventions. The pool name contributes to the maximum character length for datasets, so it is limited to 50 characters.
You cannot change the pool name after creation.
Create the required data VDEV.
Select the layout from the Layout dropdown list, then either use the Automated Disk Selection fields to select and add the disks, or click Manual Disk Selection to add specific disks to the chosen Layout.
dRAID layouts do not show the Manual Disk Selection button but do show additional Automated Disk Selection fields. When configuring a dRAID data VDEV, first choose a Disk Size then select a Data Devices number. The remaining fields update based on the Data Devices and dRAID layout selections.
Click Save And Go To Review if you do not want to add other VDEV types to the pool, or click Next to move to the next wizard screens.
Add any other optional VDEVs as determined by your specific storage redundancy and performance requirements.
Click Create Pool on the Review wizard screen to add the pool.
Fusion Pools are also known as ZFS allocation classes, ZFS special vdevs, and metadata vdevs (the Metadata vdev type on the Pool Manager screen).
Go to the Storage Dashboard and click Create Pool.
A pool must always have one normal (non-dedup/special) VDEV before you assign other devices to the special class.
Enter a name for the pool using up to 50 lowercase alpha-numeric and permitted special characters that conform to ZFS naming conventions. The pool name contributes to the maximum character length for datasets, so it is limited to 50 characters.
Click ADD VDEV and select Metadata to add the VDEV to the pool layout.
Add disks to the primary Data VDevs, then to the Metadata VDEV.
Add SSDs to the new Metadata VDev and select the same layout as the Data VDevs.
Metadata VDEVs are critical for pool operation and data integrity. Protect them with redundancy measures such as mirroring, and optionally hot spare(s) for additional fault tolerance. It is suggested to use an equal or greater level of failure tolerance in each of your metadata VDEVs; for example, if your data VDEVs are configured as RAIDZ2, consider the use of 3-way mirrors for your metadata VDEVs.
Using special VDEVs identical to the data VDEVs (so they can use the same hot spares) is recommended, but for performance reasons, you can make a different type of VDEV (like a mirror of SSDs). In that case, you must provide hot spare(s) for that drive type as well. Otherwise, if the special VDEV fails and there is no redundancy, the pool becomes corrupted and prevents access to stored data.
While the metadata VDEV can be adjusted after its addition by attaching or detaching drives, the entire metadata VDEV itself can only be removed from the pool when the pool data VDEVs are mirrors. If the pool uses RAIDZ data VDEVs, a metadata VDEV is a permanent addition to the pool and cannot be removed.
When more than one metadata VDEV is created, allocations are load-balanced between all these devices. If the special class becomes full, allocations spill back into the normal class. Deduplication table data is placed on a dedicated Dedup VDEV if one exists, otherwise on a Metadata VDEV, and on the data VDEVs if neither exists.
After creating a fusion pool, the pool Status shows a Special section with the metadata SSDs.
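Conceptually, the resulting layout pairs regular data vdevs with a special vdev at the zpool level. The sketch below prints (rather than runs) an equivalent zpool create invocation with placeholder pool and device names, not the exact command the UI issues:

```shell
# Placeholder names: six HDDs for a RAIDZ2 data vdev, two SSDs mirrored
# for the special (metadata) vdev.
echo "zpool create tank raidz2 da0 da1 da2 da3 da4 da5 special mirror nvd0 nvd1"
```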
The Storage Dashboard widgets provide access to pool management options to keep the pool and disks healthy, upgrade pools and VDEVs, open datasets, snapshots, data protection screens, and manage S.M.A.R.T. tests. This article provides instructions on pool management functions available in the SCALE UI.
Select Storage on the main navigation panel to open the Storage Dashboard. Locate the ZFS Health widget for the pool, then click Edit Auto TRIM. The Pool Options for poolname dialog opens.
Select Auto TRIM.
Click Save.
With Auto TRIM selected and active, TrueNAS periodically checks the pool disks for storage blocks it can reclaim. Auto TRIM can impact pool performance, so the default setting is disabled.
For more details about TRIM in ZFS, see the autotrim property description in zpool.8.
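The UI toggle corresponds to the pool's autotrim property. The commands below are printed rather than executed, with a placeholder pool name:

```shell
POOL="tank"   # placeholder pool name
echo "zpool set autotrim=on ${POOL}"   # enable; 'off' is the default
echo "zpool get autotrim ${POOL}"      # confirm the current value
```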
Use the Export/Disconnect button to disconnect a pool and transfer drives to a new system where you can import the pool. It also lets you completely delete the pool and any data stored on it.
Click on Export/Disconnect on the Storage Dashboard.
A dialog displays showing any system services affected by exporting the pool, and options based on services configured on the system.
To delete the pool and erase all the data on the pool, select Destroy data on this pool. Enter the pool name in the field shown at the bottom of the window. Do not select this option if only exporting the pool.
Select Delete saved configurations from TrueNAS? to delete shares and saved configurations on this pool.
Select Confirm Export/Disconnect.
Click Export/Disconnect. A confirmation dialog displays when the export/disconnect completes.
ZFS supports adding VDEVs to an existing ZFS pool to increase the capacity or performance of the pool.
You cannot change the original encryption or data VDEV configuration.
To add a VDEV to a pool: Click Manage Devices on the Topology widget to open the Devices screen. Click Add VDEV on the Devices screen to open the Add Vdevs to Pool screen.
Adding a vdev to an existing pool follows the same process as documented in Create Pool. Click on the type of vdev you want to add, for example, to add a spare, click on Spare to show the vdev spare options.
To use the automated option, select the disk size from the Automated Disk Selection > Disk Size dropdown list, then select the number of vdevs to add from the Width dropdown. To add the vdev manually, click Manual Disk Selection to open the Manual Selection screen.
Click Add to show the vdev options available for the vdev type. The example image shows adding a stripe vdev for the spare. Vdev options are limited by the number of available disks in your system and the configuration of any existing vdevs of that type in the pool. Drag the disk icon to the stripe vdev, then click Save Selection.
The Manual Selection screen closes and returns to the Add Vdev to Pool wizard screen (in this case, the Spare option).
You can accept the change, click Edit Manual Disk Selection to change the disk added to the stripe vdev for the spare, or click Reset Step to clear the stripe vdev from the spare completely. Click either Next or a numbered item to add another type of vdev to this pool.
Repeat the same process above for each type of vdev to add.
Click Save and Go to Review to go to the Review screen when ready to save your changes.
To make changes, click either Back or the vdev option (i.e., Log, Cache, etc.) to return to the settings for that vdev. To clear all changes, click Start Over. Select Confirm then click Start Over to clear all changes.
To save changes click Update Pool.
You cannot add more drives to an existing data VDEV but you can stripe a new VDEV of the same type to increase the overall pool size.
To extend a pool, you must add a data VDEV of the same type as existing VDEVs. For example, create another mirror, then stripe the new mirror VDEV to the existing mirror VDEV. While on the Devices screen, click on the data vdev, then click Extend.
You can always remove the L2ARC (cache) and SLOG (log) VDEVs from an existing pool, regardless of topology or VDEV type. Removing these devices does not impact data integrity, but can significantly impact performance for reads and writes.
In addition, you can remove a data VDEV from an existing pool under specific circumstances. This process preserves data integrity but has multiple requirements:
- The pool must support the device_removal feature flag. The system shows the Upgrade button after upgrading SCALE when new ZFS feature flags are available.
- All data VDEVs in the pool must use the same sector size (ashift).
- When a RAIDZ data VDEV is present, it is generally not possible to remove a device.
To remove a VDEV from a pool:
The VDEV removal process status shows in the Task Manager (or alternatively with the zpool status command).
Avoid physically removing or attempting to wipe the disks until the removal operation completes.
Use Scrub on the ZFS Health pool widget to start a pool data integrity check.
Click Scrub to open the Scrub Pool dialog. Select Confirm, then click Start Scrub.
If TrueNAS detects problems during the scrub operation, it either corrects them or generates an alert in the web interface.
By default, TrueNAS automatically checks every pool on a recurring scrub schedule.
The ZFS Health widget displays the status of the latest scrub and the disks in the pool. To view scheduled scrub tasks, click View all Scrub Tasks on the ZFS Health widget.
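The UI Scrub button corresponds to the zpool scrub command. The commands below are printed rather than executed, with a placeholder pool name:

```shell
POOL="tank"   # placeholder pool name
echo "zpool scrub ${POOL}"    # start a data integrity check
echo "zpool status ${POOL}"   # shows scrub progress and any repaired errors
```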
The Storage Dashboard screen Disks button and the Manage Disks button on the Disk Health widget both open the Disks screen.
Manage Devices on the Topology widget opens the Devices screen. To manage disks in a pool, click on the VDEV to expand it and show the disks in that VDEV.
Click on a disk to see the devices widgets for that disk. Use the options on the disk widgets to take a disk offline, detach it, replace it, manage the SED encryption password, or perform other disk management tasks.
See Replacing Disks for more information on the Offline, Replace and Online options.
Click Expand on the Storage Dashboard to increase the pool size to match all available disk space. For example, expand a pool after resizing its virtual disks outside of TrueNAS.
Storage pool upgrades are typically not required unless new OpenZFS feature flags are needed for required functionality or improved system operation.
Do not do a pool-wide ZFS upgrade until you are ready to commit to this SCALE major version and lose the ability to roll back to an earlier major version!
The Upgrade button displays on the Storage Dashboard for existing pools after an upgrade to a new TrueNAS major version that includes new OpenZFS feature flags. Newly created pools are always up to date with the OpenZFS feature flags available in the installed TrueNAS version.
The upgrade itself only takes a few seconds and is non-disruptive. It is not necessary to stop any sharing services to upgrade the pool. However, the best practice is to upgrade when the pool is not in heavy use. The upgrade process suspends I/O for a short period but is nearly instantaneous on a quiet pool.
This section has several tutorials about dataset configuration and management.
A TrueNAS dataset is a file system within a data storage pool. Datasets can contain files, directories, and child datasets, and have individual permissions or flags.
Datasets can also be encrypted. TrueNAS automatically encrypts datasets created in encrypted pools, but you can change the encryption type from key to passphrase. You can create an encrypted dataset if the pool is not encrypted and set the type as either key or passphrase.
We recommend organizing your pool with datasets before configuring data sharing, as this allows for more fine-tuning of access permissions and using different sharing protocols.
To create a basic dataset, go to Datasets. Default settings include those inherited from the parent dataset.
Select a dataset (root, parent, or child), then click Add Dataset.
Enter a value in Name.
Select the Dataset Preset option you want to use. Options are:
Generic sets ACL permissions equivalent to Unix permissions 755, granting the owner full control and the group and other users read and execute privileges.
SMB, Apps, and Multiprotocol inherit ACL permissions based on the parent dataset. If there is no ACL to inherit, one is calculated granting full control to the owner@, group@, members of the builtin_administrators group, and domain administrators. Modify control is granted to other members of the builtin_users group and directory services domain users.
Apps includes an additional entry granting modify control to group 568 (Apps).
If creating an SMB or multiprotocol (SMB and NFS) share, the dataset name auto-populates the share name field.
If you plan to deploy container applications, the system automatically creates the ix-applications dataset, but this dataset is not used for application data storage. If you want to store data by application, create the dataset(s) first, then deploy your application. When creating a dataset for an application, select Apps as the Dataset Preset. This optimizes the dataset for use by an application.
If you want to configure advanced setting options, click Advanced Options. For the Sync option, we recommend production systems with critical data use the default Standard choice or increase to Always. Choosing Disabled is only suitable in situations where data loss from system crashes or power loss is acceptable.
Select either Sensitive or Insensitive from the Case Sensitivity dropdown. The Case Sensitivity setting is found under Advanced Options and is not editable after saving the dataset.
Click Save.
Review the Dataset Preset and Case Sensitivity under Advanced Options on the Add Dataset screen before clicking Save. You cannot change these or the Name setting after clicking Save.
Compression encodes information in less space than the original data occupies. We recommend choosing a compression algorithm that balances disk performance with the amount of saved space.
Select the compression algorithm that best suits your needs from the Compression dropdown list of options.
LZ4 maximizes performance and dynamically identifies the best files to compress. LZ4 provides lightning-fast compression/decompression speeds and comes coupled with a high-speed decoder. This makes it one of the best Linux compression tools for enterprise customers.
ZSTD offers highly configurable compression speeds, with a very fast decoder.
Gzip is a standard UNIX compression tool widely used on Linux. It is compatible with all GNU software, which makes it a good tool for remote engineers and seasoned Linux users. It offers the maximum compression with the greatest performance impact. The higher the compression level, the greater the impact on CPU usage. Use with caution, especially at higher levels.
ZLE, or Zero Length Encoding, leaves normal data alone and compresses only continuous runs of zeros.
LZJB compresses crash dumps and data in ZFS. LZJB is optimized for performance while providing decent compression. LZ4 compresses roughly 50% faster than LZJB on compressible data and is more than three times faster on incompressible data. LZJB was the original algorithm used by ZFS, but it is now deprecated.
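The level-versus-CPU trade-off described above can be sketched with Python's zlib module, which implements the same DEFLATE algorithm that gzip uses. This is only an illustration of the general trade-off, not how ZFS invokes its compressors:

```python
import zlib

# Highly compressible sample data: a repeated log-style line.
data = b"TrueNAS stores this log line over and over again. " * 2000

# Level 1 favors speed; level 9 favors compression ratio at a
# higher CPU cost, mirroring the gzip level trade-off.
fast = zlib.compress(data, level=1)
best = zlib.compress(data, level=9)

print(f"original: {len(data)} bytes")
print(f"level 1:  {len(fast)} bytes")
print(f"level 9:  {len(best)} bytes")

# Compression is lossless at every level.
assert zlib.decompress(fast) == data
assert zlib.decompress(best) == data
```

On repetitive data like this, both levels shrink the input dramatically; the difference in output size between levels grows with less regular data, while the CPU cost of level 9 grows regardless.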
You can set dataset quotas while adding datasets using the quota management options in the Add Dataset screen under Advanced Options. You can also add or edit quotas for an existing dataset by clicking Edit on the Dataset Space Management widget to open the Capacity Settings screen.
Setting a quota defines the maximum allowed space for the dataset. You can also reserve a defined amount of pool space to prevent automatically generated data like system logs from consuming all of the dataset space. You can configure quotas for only the new dataset or both the new dataset and any child datasets of the new dataset.
Define the maximum allowed space for the dataset in either the Quota for this dataset or Quota for this dataset and all children field. Enter 0 to disable quotas.
Dataset quota alerts are based on the percentage of storage used. To set up a quota warning alert, enter a percentage value in Quota warning alert at, %. TrueNAS sends the alert when consumed space reaches the defined percentage.
To set up the quota critical level alerts, enter the percentage value in Quota critical alert at, %.
When setting quotas or changing the alert percentages for both the parent dataset and all child datasets, use the fields under This Dataset and Child Datasets.
Enter a value in Reserved space for this dataset to set aside additional space for datasets that contain logs, which could eventually take all available free space. Enter 0 for unlimited.
For more information on quotas, see Managing User or Group Quotas.
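The percentage-based alert behavior described above can be sketched as a small Python function. The function, its names, and the 80/95 threshold values are hypothetical illustrations of how warning and critical levels relate to a quota, not TrueNAS code:

```python
def quota_alert_level(used_bytes, quota_bytes, warning_pct=80, critical_pct=95):
    """Return the alert level for a dataset based on percent of quota used.
    The 80/95 defaults are example thresholds, not TrueNAS defaults."""
    if quota_bytes == 0:        # a quota of 0 disables quota enforcement
        return "none"
    pct_used = used_bytes / quota_bytes * 100
    if pct_used >= critical_pct:
        return "critical"
    if pct_used >= warning_pct:
        return "warning"
    return "none"

GiB = 2**30
print(quota_alert_level(850 * GiB, 1024 * GiB))   # about 83% used
print(quota_alert_level(999 * GiB, 1024 * GiB))   # about 98% used
```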
By default, many dataset options inherit their values from the parent dataset. When a setting on the Advanced Options screen is set to Inherit, the dataset uses the setting from the parent dataset, for example, the Encryption or ACL Type settings.
To change any setting that datasets inherit from the parent, select an available option other than Inherit.
For information on ACL settings see Setting Up Permissions.
First, add the pool with a Metadata VDEV.
Select the root dataset of the pool (with the metadata VDEV), then click Add Dataset to add the dataset. Click Advanced Options. Enter the dataset name, select the Dataset Preset, then scroll down to the Metadata (Special) Small Block Size setting to set a threshold block size for including small file blocks in the special allocation class (fusion pools).
Blocks smaller than or equal to this value are assigned to the special allocation class, while larger blocks are assigned to the regular class. Valid values are zero or a power of two from 512B up to 1M. The default size 0 means no small file blocks are allocated in the special class.
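The valid-value rule (zero, or a power of two from 512B to 1M) can be expressed as a short helper. This is an illustration of the rule, not TrueNAS validation code:

```python
def valid_small_block_size(size_bytes):
    """Check a Metadata (Special) Small Block Size value: zero (disabled)
    or a power of two between 512 bytes and 1 MiB inclusive."""
    if size_bytes == 0:
        return True
    is_power_of_two = (size_bytes & (size_bytes - 1)) == 0
    return is_power_of_two and 512 <= size_bytes <= 1024 * 1024

# All power-of-two thresholds the rule accepts, 512 through 1048576:
print([2**n for n in range(9, 21)])
print(valid_small_block_size(131072))   # 128K threshold is valid
print(valid_small_block_size(1000))     # not a power of two
```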
After creating a dataset, users can manage additional options from the Datasets screen. Select the dataset, then click Edit on the dataset widget for the function you want to manage. The Datasets Screen article describes each option in detail.
Select the dataset on the tree table, then click Edit on the Dataset Details widget to open the Edit Dataset screen and change the dataset configuration settings. You can change all settings except Name, Case Sensitivity, and Dataset Preset.
To edit the dataset ACL permissions, click Edit on the Permissions widget. If the ACL type is NFSv4, the Permissions widget shows ACE entries for the dataset. Each entry opens a checklist of flag options you can select or deselect without opening the Edit ACL screen. To modify ownership or to add or change ACL entries, click Edit to open the ACL Editor screen.
To edit a POSIX ACL type, click Edit on the Permissions widget to open the Unix Permissions Editor screen. To access the Edit ACL screen for POSIX ACLs, select Create a custom ACL on the Select a preset ACL window.
For more information, see the Setting Up Permissions article.
Select the dataset on the tree table, then click Delete on the Dataset Details widget. This opens a delete window where you enter the dataset path (root/parent/child) and select Confirm to delete the dataset, all stored data, and any snapshots from TrueNAS.
To delete a root dataset, use the Export/Disconnect option on the Storage Dashboard screen to delete the pool.
Deleting datasets can result in unrecoverable data loss! Move any critical data stored on the dataset to a backup copy, or verify the data is obsolete, before performing the delete operation.
A ZFS Volume (zvol) is a dataset that represents a block device or virtual disk drive. TrueNAS requires a zvol when configuring iSCSI Shares. Adding a virtual machine also creates a zvol to use for storage.
Storage space you allocate to a zvol is used only by that volume; if it goes unused, it is not reallocated back to the total storage capacity of the pool or dataset where you create the zvol. Plan your anticipated storage needs before you create the zvol to avoid allocating more capacity than the volume requires. Do not assign capacity that exceeds what is required for SCALE to operate properly. For more information, see the SCALE Hardware Guide for CPU, memory, and storage capacity information.
To create a zvol, go to Datasets. Select the root or non-root parent dataset where you want to add the zvol, and then click Add Zvol.
To create a basic zvol with default options, enter a name and a value in Size for the zvol, then click Save.
Options to manage a zvol are on the zvol widgets shown on the Dataset screen when you select the zvol on the dataset tree table.
Delete Zvol removes the zvol from TrueNAS. Deleting a zvol also deletes all snapshots of that zvol. Click Delete on the Zvol Details widget.
Deleting zvols can result in unrecoverable data loss! Remove critical data from the zvol or verify it is obsolete before deleting a zvol.
Edit on the Zvol Details widget opens the Edit Zvol screen where you can change settings. Name is read-only and you cannot change it.
To create a snapshot, click Create Snapshot on the Data Protection widget.
To clone a zvol from an existing snapshot, select the zvol on the Datasets tree table, then click Manage Snapshots on the Data Protection widget to open the Snapshots screen. You can also access the Snapshots screen from the Periodic Snapshot Tasks widget on the Data Protection screen. Click Snapshots to open the Snapshots screen.
Click on the snapshot you want to clone and click Clone to New Dataset. Enter a name for the new dataset or accept the one provided, then click Clone.
The cloned zvol displays on the Datasets screen.
TrueNAS allows setting data or object quotas for user accounts and groups cached on or connected to the system. You can use the quota settings on the Add Dataset or Edit Dataset configuration screens in the Advanced Options settings to set up alarms and set aside more space in a dataset. See Adding and Managing Datasets for more information.
To manage the dataset overall capacity, use Edit on the Dataset Space Management widget to open the Capacity Settings screen.
To view and edit user quotas, go to Datasets and click Manage User Quotas on the Dataset Space Management widget to open the User Quotas screen.
Click Add to open the Add User Quota screen.
Click in the field to view a list of system users including any users from a directory server that is properly connected to TrueNAS. Begin typing a user name to filter all users on the system to find the desired user, then click on the user to add the name. Add additional users by repeating the same process. A warning dialog displays if there are no matches found.
To edit individual user quotas, click anywhere on a user row to open the Edit User Quota screen where you can edit the User Data Quota and User Object Quota values.
User Data Quota is the amount of disk space that selected users can use. User Object Quota is the number of objects selected users can own.
Click Add to open the Add Group Quota screen.
Click in the Group field to view a list of groups on the system. Begin typing a name to filter all groups on the system to find the desired group, then click on the group to add the name. Add additional groups by repeating the same process. A warning dialog displays if there are no matches found.
To edit individual group quotas, click anywhere on a group name to open the Edit Group Quota screen where you can edit the Group Data Quota and Group Object Quota values.
Group Data Quota is the amount of disk space that the selected group can use. Group Object Quota is the number of objects the selected group can own.
Snapshots are one of the most powerful features of ZFS. A snapshot provides a read-only point-in-time copy of a file system or volume. This copy does not initially consume extra space in the ZFS pool; the snapshot records only the block-level differences that accumulate as the data is modified.
Taking snapshots requires that the system already have the pools, datasets, and zvols configured.
Consider making a Periodic Snapshot Task to save time and create regular, fresh snapshots.
There are two ways to access snapshot creation:
To access the Snapshots screen, go to Data Protection > Periodic Snapshot Tasks and click the Snapshots button in the lower right corner of the widget.
Existing snapshots display as a list.
From the Datasets screen select the dataset to snapshot, then click Create Snapshot on the Data Protection widget.
If you click Create Snapshot the Snapshots screen opens filtered for the selected dataset. Clear the dataset from the search field to see all snapshots.
You can also click the Manage Snapshots link on the Data Protection widget to open the Snapshots screen.
Click Add at the top right of the screen to open the Add Snapshot screen.
Select a dataset or zvol from the Dataset dropdown list.
Accept the name suggested by the TrueNAS software in the Name field or enter any custom string to override the suggested name.
(Optional) Select an option from the Naming Schema dropdown list that the TrueNAS software populated with existing periodic snapshot task schemas. If you select an option, TrueNAS generates a name for the snapshot using that naming schema from the selected periodic snapshot and replicates that snapshot.
You cannot enter values in both Naming Schema and Name, as selecting a Naming Schema populates the Name field.
(Optional) Select Recursive to include child datasets with the snapshot.
Click Save to create the snapshot.
File Explorer limits the number of snapshots Windows presents to users. If TrueNAS responds with more than the File Explorer limit, File Explorer shows no available snapshots. TrueNAS displays a dialog stating the dataset snapshot count has more snapshots than recommended and states performance or functionality might degrade.
There are two ways to view the list of snapshots:
The Snapshots screen displays a list of snapshots on the system. Use the search bar at the top to narrow the selection. Clear the search bar to list all snapshots.
Click the expand icon to view snapshot options. Use the Clone to New Dataset button to create a clone of the snapshot. The clone appears directly beneath the parent dataset in the dataset tree table on the Datasets screen. Click Clone to New Dataset to open a clone confirmation dialog.
Click Clone to confirm.
The Go to Datasets button opens the Datasets screen.
Click on the clone name in the dataset listing to populate the Dataset Details widget and display the Promote button.
After clicking the Promote button, the dataset clone is promoted and this button no longer appears.
Promote now displays on the Dataset Details widget when you select the demoted parent dataset.
See zfs-promote.8 for more information.
The Delete option destroys the snapshot. You must delete child clones before you can delete their parent snapshot. While creating a snapshot is instantaneous, deleting one is I/O intensive and can take a long time, especially when deduplication is enabled.
Click the Delete button. A confirmation dialog displays. Select Confirm to activate the Delete button.
To delete multiple snapshots, select the left column checkbox for each snapshot to include, then click the Delete button that displays.
To search through the snapshot list by name, type matching criteria into the Filter Snapshots search field. The list then displays only the snapshot names that match the filter text.
Confirm activates the Delete button. If the snapshot has the Hold option selected, an error displays to prevent you from deleting that snapshot.
The Rollback option reverts the dataset to the point in time saved by the snapshot.
Rollback is a dangerous operation that causes any configured replication tasks to fail. Replications use the existing snapshot when doing an incremental backup, and rolling back can put the snapshots out of order.
A less disruptive method to restore data from a point in time is to clone a specific snapshot as a new dataset:
- Clone the desired snapshot.
- Share the clone with the share type or service running on the TrueNAS system.
- Allow users to recover their needed data.
- Delete the clone from Datasets.
This approach does not destroy any on-disk data or disrupt automated replication tasks.
TrueNAS asks for confirmation before rolling back to the chosen snapshot state. Select the radio button for how you want the rollback to operate.
Click Confirm to activate the Rollback button.
All dataset snapshots are accessible as an ordinary hierarchical file system, accessed from a hidden .zfs directory located at the root of the dataset.
A snapshot and any files it contains are not accessible or searchable if the snapshot mount path is longer than 88 characters. The data within the snapshot is safe but to make the snapshot accessible again shorten the mount path.
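The 88-character limit can be checked ahead of time by building the hidden snapshot path yourself. A sketch, with a hypothetical mount point and snapshot name:

```python
def snapshot_browse_path(dataset_mountpoint, snapshot_name):
    """Build the hidden .zfs snapshot path and report whether it is
    short enough (88 characters or fewer) to remain browsable."""
    path = f"{dataset_mountpoint}/.zfs/snapshot/{snapshot_name}"
    return path, len(path) <= 88

# Hypothetical example values:
path, browsable = snapshot_browse_path("/mnt/tank/projects",
                                       "auto-2023-10-01_00-00")
print(path, browsable)
```

If the check fails, shortening the dataset mount path (for example, by renaming deeply nested parent datasets) makes the snapshot browsable again.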
A user with permission to access the dataset contents can view the list of snapshots by going to the dataset .zfs/snapshot directory.
When creating a snapshot, permissions or ACLs set on files within that snapshot might limit access to the files. Snapshots are read-only, so users do not have permission to modify a snapshot or its files, even if they had write permissions when creating the snapshot.
From the Datasets screen, select the dataset and click Edit on the Dataset Details widget. Click Advanced Options and set Snapshot Directory to Visible.
To access snapshots:
Using a share, configure the client system to view hidden files.
For example, in a Windows SMB share, enable Show hidden files, folders, and drives in Folder Options.
From the dataset root folder, open the hidden .zfs directory. Using the TrueNAS SCALE CLI, enter storage filesystem listdir path="/PATH/TO/DATASET/.zfs/PATH/TO/SNAPSHOT" to view snapshot contents.
TrueNAS SCALE offers ZFS encryption for your sensitive data in pools and datasets or Zvols.
Users are responsible for backing up and securing encryption keys and passphrases! Losing the ability to decrypt data is similar to a catastrophic data loss.
Data-at-rest encryption is available.
The local TrueNAS system manages keys for data-at-rest. Users are responsible for storing and securing their keys. TrueNAS SCALE includes the Key Management Interoperability Protocol (KMIP).
Encryption is for users storing sensitive data. Pool-level encryption does not apply to the storage pool or the disks in the pool. It only applies to the root dataset that shares the same name as the pool. Child datasets or zvols inherit encryption from the parent dataset.
TrueNAS automatically generates a root dataset when you create a pool. This root dataset inherits the encryption state of the pool through the Encryption option on the Pool Creation Wizard screen when you create the pool. Because encryption is inherited from the parent, all data within that pool is encrypted. Selecting the Encryption option for the pool (root dataset) forces encryption for all datasets and zvols created within the root dataset.
You cannot create an unencrypted dataset within an encrypted pool or dataset. This change does not affect existing datasets created in earlier releases of SCALE but does affect new datasets created in 22.12.3 and later releases.
Leave the Encryption option on the Pool Creation Wizard screen cleared to create an unencrypted pool. You can create both unencrypted and encrypted datasets within an unencrypted pool (root dataset). If you create an encrypted dataset within an unencrypted dataset, all datasets or zvols created within that encrypted dataset are automatically encrypted.
If you have only one pool on your system, do not select the Encryption option for this pool.
If your system loses power or you reboot the system, the datasets, zvols, and all data in an encrypted pool automatically lock to protect the data in that encrypted pool.
SCALE uses lock icons to indicate the encryption state of a root, parent, or child dataset in the tree table on the Datasets screen. Each icon shows a text label with the state of the dataset when you hover the mouse over the icon.
The Datasets tree table includes lock icons and descriptions that indicate the encryption state of datasets.
Icon | State | Description |
---|---|---|
![]() | Locked | Displays for locked encrypted root, non-root parent and child datasets. |
![]() | Unlocked | Displays for unlocked encrypted root, non-root parent and child datasets. |
![]() | Locked by ancestor | Displays for locked datasets that inherit encryption properties from the parent. |
![]() | Unlocked by ancestor | Displays for unlocked datasets that inherit encryption properties from the parent. |
A dataset that inherits encryption shows the mouse hover-over label Locked by ancestor or Unlocked by ancestor.
Select an encrypted dataset to see the ZFS Encryption widget on the Datasets screen.
The dataset encryption state is unlocked until you lock it using the Lock button on the ZFS Encryption widget. After locking the dataset, the icon on the tree table changes to locked, and the Unlock button appears on the ZFS Encryption widget.
Before creating a pool with encryption decide if you want to encrypt all datasets, zvols, and data stored on the pool.
If your system does not have enough disks to allow you to create a second storage pool, we recommend that you not use encryption at the pool level. Instead, apply encryption at the dataset level to non-root parent or child datasets. You can mix encrypted and unencrypted datasets on an unencrypted pool. You cannot change a pool from encrypted to non-encrypted. You can only change the dataset encryption type (key or passphrase) for the encrypted pool.
All pool-level encryption is key-based encryption. When prompted, download the encryption key and keep it stored in a safe place where you can back up the file. You cannot use passphrase encryption at the pool level.
Go to Storage and click Create Pool on the Storage Dashboard screen. You can also click Add to Pool on the Unassigned Disks widget and select the Add to New option to open the Pool Creation Wizard.
Enter a name for the pool, select Encryption next to Name, then select the layout for the data VDEV and add the disks. A warning dialog displays after selecting Encryption.
Read the warning, select Confirm, and then click I UNDERSTAND.
A second dialog opens where you click Download Encryption Key for the pool encryption key.
Click Done to close the window. Move the encryption key to a safe location where you can back up the file.
Add any other VDEVs you want to include in the pool, then click Save to create the pool with encryption.
To add an encrypted dataset, go to Datasets.
Select the dataset on the tree table where you want to add a new dataset. The default dataset selected when you open the Datasets screen is the root dataset of the first pool on the tree table list. If you have more than one pool and want to create a dataset in a pool other than the default, select the root dataset for that pool or any dataset under the root where you want to add the new dataset.
Click Add Dataset to open the Add Dataset screen, then click Advanced Options.
Enter a value in Name.
Select the Dataset Preset option you want to use. Options are:
Generic sets ACL permissions equivalent to Unix permissions 755, granting the owner full control and the group and other users read and execute privileges.
SMB, Apps, and Multiprotocol inherit ACL permissions based on the parent dataset. If there is no ACL to inherit, one is calculated granting full control to the owner@, group@, members of the builtin_administrators group, and domain administrators. Modify control is granted to other members of the builtin_users group and directory services domain users.
Apps includes an additional entry granting modify control to group 568 (Apps).
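For reference, the Unix 755 mode that the Generic preset is equivalent to can be decoded with Python's stat module. This is only an illustration of the mode bits; TrueNAS expresses the permissions as an ACL rather than applying a chmod:

```python
import stat

# Mode 755: owner read/write/execute, group and other read/execute.
mode = 0o755
print(stat.filemode(stat.S_IFDIR | mode))   # symbolic form for a directory

owner = (mode >> 6) & 0o7   # 7 = read + write + execute
group = (mode >> 3) & 0o7   # 5 = read + execute
other = mode & 0o7          # 5 = read + execute
print(owner, group, other)
```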
To add encryption to a dataset, scroll down to Encryption Options and clear the Inherit checkbox. If the parent dataset is unencrypted and you want to encrypt the new dataset, clearing the checkbox shows the Encryption option. If the parent dataset is encrypted and you want to change the encryption type, clearing the checkbox shows the other encryption options. To keep the dataset encryption settings from the parent, leave Inherit selected.
Decide if you want to use the default key type encryption and if you want to let the system generate the encryption key. To use key encryption and your own key, clear the Generate key checkbox to display the Key field. Enter your key in this field.
To change to passphrase encryption, click the down arrow and select Passphrase from the Encryption Type dropdown.
You can select the encryption algorithm to use from the Encryption Standard dropdown list of options or use the recommended default.
Leave the default selection if you do not have a particular encryption standard you want to use.
The passphrase must be between eight and 512 characters long.
Keep encryption keys and/or passphrases safeguarded in a secure and protected place. Losing encryption keys or passphrases can result in permanent data loss!
You cannot add encryption to an existing dataset. You can change the encryption type for an already encrypted dataset using the Edit option on the ZFS Encryption widget for the dataset.
Save any change to the encryption key or passphrase, and update your saved passcodes and keys file, and then back up that file.
To change the encryption type, go to Datasets:
Select the encrypted dataset on the tree table, then click Edit on the ZFS Encryption widget. The Edit Encryption Options dialog for the selected dataset displays.
You must unlock a locked encrypted dataset before you can make changes.
If the dataset inherits encryption settings from a parent dataset, clear the Inherit encryption properties from parent checkbox to display the encryption setting options.
If the encryption type is set to passphrase, you can change the passphrase, or change Encryption Type to key. You cannot change a dataset created with a key as the encryption type to passphrase.
For key encryption, leave Generate Key selected to have the system create a new key, or clear it to display the Key field and enter your own key.
To change the passphrase for passphrase-encryption, enter a new passphrase in Passphrase and Confirm Passphrase.
Use a complex passphrase that is not easy to guess. Store it in a secure location that is regularly backed up.
Leave the other settings at default, then click Confirm to activate Save.
Click Save to close the window and update the ZFS Encryption widget to reflect the changes made.
You can only lock and unlock an encrypted dataset if it is secured with a passphrase instead of a key file. Before locking a dataset, verify that it is not currently in use.
Select the encrypted dataset on the tree table, then click Lock on the ZFS Encryption widget to open the Lock Dataset dialog with the dataset full path name.
Use the Force unmount option only if you are certain no one is currently accessing the dataset. Force unmount disconnects anyone using the dataset (for example, someone accessing a share) so you can lock it. Click Confirm to activate Lock, then click Lock.
You cannot use locked datasets.
To unlock a dataset, go to Datasets then select the locked dataset on the tree table. Click Unlock on the ZFS Encryption widget to open the Unlock Dataset screen.
If the dataset is key-encrypted, enter the key; if passphrase-encrypted, enter the passphrase in Dataset Passphrase. Click Save.
Select Unlock Child Encrypted Roots to unlock all locked child datasets if they use the same passphrase.
Select Force if the dataset mount path exists but is not empty. When this happens, the unlock operation fails. Using Force allows the system to rename the existing directory and file where the dataset should mount. This prevents the mount operation from failing. A confirmation dialog displays.
Click CONTINUE to confirm you want to unlock the datasets. Click CLOSE to exit and keep the datasets locked. A second confirmation dialog opens confirming the datasets unlocked. Click CLOSE. TrueNAS displays the dataset with the unlocked icon.
Encryption is for securing sensitive data.
You can only encrypt a Zvol if you create the Zvol from a dataset with encryption.
Users are responsible for backing up and securing encryption keys and passphrases! Losing the ability to decrypt data is similar to a catastrophic data loss.
Zvols inherit encryption settings from the parent dataset.
To encrypt a Zvol, select a dataset configured with encryption and then create a new Zvol.
Next, go to Datasets and click on the Zvol.
If you do not see the ZFS Encryption widget, you created the Zvol from an unencrypted dataset. Delete the Zvol and start over.
The Zvol is encrypted with settings inherited from the parent dataset.
To change inherited encryption properties from passphrase to key, or enter a new key or passphrase, select the zvol, then click Edit on the ZFS Encryption widget.
If Encryption Type is set to Key, type an encryption key into the Key field or select Generate Key. If using Passphrase, enter a passphrase of eight to 512 characters. Use a complex passphrase that is not easy to guess. After making any changes, select Confirm, and then click Save.
Save any change to the encryption key or passphrase, update your saved passcodes and keys file, and back up the file.
There are two ways to manage the encryption credentials, with a key file or passphrase. Creating a new encrypted pool automatically generates a new key file and prompts users to download it.
Always back up the key file to a safe and secure location.
To manually back up a root dataset key file, click Export Key on the ZFS Encryption widget.
See Changing Dataset-Level Encryption for more information on changing encryption settings.
A passphrase is a user-defined string of eight to 512 characters that is required to decrypt the dataset.
pbkdf2iters is the number of password-based key derivation function 2 (PBKDF2) iterations used to derive the encryption key from the passphrase; more iterations increase the CPU cost of each guess and so reduce vulnerability to brute-force attacks. Enter a number greater than 100000.
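The effect of the iteration count can be demonstrated with Python's hashlib implementation of PBKDF2. This is a generic illustration of the primitive, not ZFS's internal key-wrapping code; the digest, salt size, passphrase, and iteration count shown are assumptions for the example:

```python
import hashlib
import os

passphrase = b"correct horse battery staple"   # example passphrase only
salt = os.urandom(16)                          # random per-dataset salt

# Each additional iteration adds CPU work to every decryption attempt,
# which is what slows down brute-force guessing.
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 350000, dklen=32)
print(len(key), key.hex()[:16])
```

The same passphrase, salt, and iteration count always derive the same key, which is why the salt and iteration count are stored alongside the encrypted data while the passphrase is not.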
TrueNAS SCALE users should either replicate the dataset/Zvol without properties to disable encryption at the remote end or construct a special JSON manifest to unlock each child dataset/zvol with a unique key.
Replicate every encrypted dataset you want to replicate with properties.
Export the key for every child dataset that has a unique key.
For each child dataset, construct a JSON file that maps the pool name/dataset name on the destination system to the key from the source system, like this:
{"tank/share01": "57112db4be777d93fa7b76138a68b790d46d6858569bf9d13e32eb9fda72146b"}
Save this file with the .json extension.
On the remote system, unlock the dataset(s) using the properly constructed JSON file(s).
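The manifest construction above can be sketched in Python. The pool/dataset name and key reuse the sample values from the example; substitute your own destination names and exported keys:

```python
import json

# Map each destination pool/dataset name to the hex key exported from
# the source system. The entry below is a sample value, not a real key.
manifest = {
    "tank/share01": "57112db4be777d93fa7b76138a68b790d46d6858569bf9d13e32eb9fda72146b",
}

# Write the manifest with a .json extension for upload on the unlock screen.
with open("unlock-keys.json", "w") as f:
    json.dump(manifest, f, indent=2)

with open("unlock-keys.json") as f:
    print(json.load(f))
```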
Uncheck properties when replicating so that the destination dataset is not encrypted on the remote side and does not require a key to unlock.
Go to Data Protection and click ADD in the Replication Tasks window.
Click Advanced Replication Creation.
Fill out the form as needed and make sure Include Dataset Properties is NOT checked.
Click Save.
Go to Datasets on the system you are replicating from. Select the dataset encrypted with a key, then click Export Key on the ZFS Encryption widget to export the key for the dataset.
Apply the JSON key file or key code to the dataset on the system you replicated the dataset to.
Option 1: Download the key file and open it in a text editor. Change the pool name/dataset part of the string to the pool name/dataset for the receiving system. For example, when replicating from tank1/dataset1 on the source system to tank2/dataset2 on the receiving system, change tank1/dataset1 in the key file to tank2/dataset2.
Option 2: Copy the key code provided in the Key for dataset window.
On the system receiving the replicated pool/dataset, select the receiving dataset and click Unlock.
Unlock the dataset: either clear the Unlock with Key file checkbox and paste the key code into the Dataset Key field (if there is a space character at the end of the key, delete it), or select the downloaded key file that you edited.
Click Save.
Click Continue.
TrueNAS SCALE provides basic permissions settings and an access control list (ACL) editor to define dataset permissions. ACL permissions control the actions users can perform on dataset contents and shares.
An Access Control List (ACL) is a set of account permissions associated with a dataset that applies to directories or files within that dataset. TrueNAS uses ACLs to manage user interactions with shared datasets and creates them when users add a dataset to a pool.
TrueNAS SCALE offers two ACL types: POSIX and NFSv4. For a more in-depth explanation of ACLs and configurations in TrueNAS SCALE, see our ACL Primer.
The Dataset Preset setting on the Add Dataset screen determines the type of ACL for the dataset. To see the ACL type, click Edit on the Dataset Details widget to open the Edit Dataset screen, then click Advanced Options and scroll down to the ACL Type field. Preset options are:
Generic sets ACL permissions equivalent to Unix permissions 755, granting the owner full control and the group and other users read and execute privileges.
SMB, Apps, and Multiprotocol inherit ACL permissions based on the parent dataset. If there is no ACL to inherit, one is calculated granting full control to the owner@, group@, members of the builtin_administrators group, and domain administrators. Modify control is granted to other members of the builtin_users group and directory services domain users.
Apps includes an additional entry granting modify control to group 568 (Apps).
The POSIX and NFSv4 ACL types show different options on the ACL Editor screen. Both the POSIX and NFSv4 ACL Editor screens allow you to define the owner user and group, and to add ACL entries (ACEs) for individual user accounts or groups to customize the permissions for the selected dataset.
The owner user and group should remain set to either root or the admin account with full privileges.
Add ACE items for other users, groups, directories, or other options to grant access permissions to the dataset. Click in the Who field and select an item (like User or Group) to display the User or Group field, where you select the user or group account.
Basic ACL permissions are viewable and configurable from the Datasets screen. Select a dataset, then scroll down to the Permissions widget to view owner and individual ACL entry permissions.
To view the Edit ACL screen, either select the dataset and click Edit on the Permissions widget or go to Sharing and click on the share widget header to open the list of shares. Select the share, then click the options icon and select Edit Filesystem ACL.
You can view permissions for any dataset, but the edit option only displays on the Permissions widget for non-root datasets.
Configuring advanced permissions overrides basic permissions configured on the add and edit dataset screens.
Select a non-root dataset, scroll down to the Permissions widget, then click Edit to open the Unix Permissions Editor screen.
If the dataset has an NFSv4 ACL, the Edit ACL screen opens.
Enter or select the Owner user from the User dropdown list, then set the read/write/execute permissions, and select Apply User to confirm changes. User options include users created manually or imported from a directory service. Repeat for the Group field. Select the group name from the dropdown list, set the read/write/execute permissions, and then select Apply Group to confirm the changes.
To prevent errors, TrueNAS only submits changes after the apply option is selected.
A common misconfiguration is not adding or removing the Execute permission from a dataset that is a parent to other child datasets. Removing this permission results in lost access to the path.
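As a quick shell illustration (in a temporary directory, not a TrueNAS dataset), the Execute bit on a directory is what permits traversal into child paths, and removing it changes the octal mode accordingly:

```shell
# Illustration only, in a temporary directory (not a TrueNAS dataset):
# the Execute (x) bit on a directory allows traversal into child paths
# such as parent/child/file.txt. Compare the octal modes.
tmp=$(mktemp -d)
mkdir -p "$tmp/parent/child"

chmod 755 "$tmp/parent"                 # rwxr-xr-x: traversal allowed
with_x=$(stat -c '%a' "$tmp/parent")

chmod 644 "$tmp/parent"                 # rw-r--r--: traversal denied
without_x=$(stat -c '%a' "$tmp/parent")

echo "with execute:    $with_x"
echo "without execute: $without_x"

chmod 755 "$tmp/parent"                 # restore so cleanup succeeds
rm -rf "$tmp"
```

The same principle applies to a parent dataset: clients lose access to every child path beneath it when Execute is removed.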
To apply ACL settings to all child datasets, select Apply permissions recursively. Change the default settings to your preferred primary account and group and select Apply permissions recursively before saving any changes.
Click Save now if you do not want to use an ACL preset.
See Edit ACL Screen for information on the ACL editor screens and setting options.
From the Unix Permissions Editor screen:
Click Set ACL. The Select a preset ACL dialog opens.
Select Select a preset ACL to use a pre-configured set of permissions. Select the preset to use from the Default ACL Options dropdown list, or click Create a custom ACL to configure your own set of permissions. Click Continue.
Each default preset loads different permissions to the Edit ACL screen. The Create a custom preset option opens the Edit ACL screen with no default permission settings. Enter the ACL owner user and group, and add new ACEs for the users, groups, and other entities you want to grant access permissions to for this dataset.
Select or enter the administrative user name in Owner, then click Apply Owner. The owner controls which TrueNAS user and group has full control of the dataset. You can leave this set to root but we recommend changing this to the admin user with the Full Control role.
Repeat for the Owner Group, then click Apply Group.
Select the ACE entry on the Access Control List on the left of the screen, just below Owner and Owner Group. If adding a new entry, click Add Item.
Click on Who and select the value from the dropdown list. Whatever is selected in Who highlights the Access Control List entry on the left side of the screen.
If selecting User, the User field displays below the Who field. Same for Group.
Select a name from the dropdown list of options in the User (or Group) field or begin typing the name to see a narrowed list of options to select from.
Select the Read, Modify, and/or Execute permissions.
(Optional) Select Apply permissions recursively, below the list of access control entries, to apply this preset to all child datasets.
(Optional) Click Use Preset to display the ACL presets window and select a predefined set of permissions from the list of presets. See Using Preset ACL Entries (POSIX ACL) for the list of presets.
Click Save as Preset to add this to the list of ACL presets. Click Save Access Control List to save the changes made to the ACL.
An NFS4 ACL preset loads pre-configured permissions to match general permissions situations.
Changing the ACL type affects how TrueNAS writes and reads on-disk ZFS ACL.
When the ACL type changes from POSIX to NFSv4, internal ZFS ACLs do not migrate by default, and access ACLs encoded in posix1e extended attributes convert to native ZFS ACLs.
When the ACL type changes from NFSv4 to POSIX, native ZFS ACLs do not convert to posix1e extended attributes, but ZFS uses the native ACL for access checks.
To prevent unexpected permissions behavior, you must manually set new dataset ACLs recursively after changing the ACL type. Setting new ACLs recursively is destructive. We suggest creating a ZFS snapshot of the dataset before changing the ACL type or modifying permissions.
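The snapshot can be taken from the TrueNAS shell before touching the ACL type; a minimal sketch, assuming a hypothetical dataset named tank/projects, prints the commands to run:

```shell
# Sketch with a hypothetical dataset name; substitute your own.
DATASET="tank/projects"
SNAP="${DATASET}@pre-acl-change"

# Recursive snapshot of the dataset and its children before the change:
echo "zfs snapshot -r ${SNAP}"
# If permissions go wrong, roll the top-level dataset back to the
# snapshot (child datasets must be rolled back individually):
echo "zfs rollback ${SNAP}"
```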
To change NFSv4 ACL permissions:
Go to Datasets, select the dataset, scroll down to the Permissions widget, and click Edit. The Edit ACL screen opens.
Select or enter the administrative user name in Owner, then click Apply Owner. The owner controls which TrueNAS user and group has full control of the dataset. You can leave this set to root but we recommend changing the owner user and group to the admin user with the Full Control role.
Select or enter the group name in Owner Group, then click Apply Group.
Select the ACE entry on the Access Control List on the left of the screen below Owner and Owner Group. If adding a new entry, click Add Item.
Click on Who and select the value from the dropdown list. If selecting User, the User field displays below the Who field. Same for Group. Select a name from the dropdown list of options or begin typing the name to see a narrowed list of options to select from. The selection in Who highlights the Access Control List entry on the left side of the screen.
Select permission type from the Permissions dropdown list. If Basic is selected, the list displays four options: Read, Modify, Traverse and Full Control. Basic flags enable or disable ACE inheritance.
Select Advanced to select more granular controls from the options displayed. Advanced flags allow further control of how the ACE applies to files and directories in the dataset.
(Optional) Select Apply permissions recursively, below the list of access control entries, to apply this preset to all child datasets. This is not generally recommended as recursive changes often cause permissions issues (see the warning at the top of this section).
(Optional) Click Use Preset to display the ACL presets window to select a predefined set of permissions from the list of presets. See Using Preset ACL Entries (NFS ACL).
(Optional) Click Save as Preset to add this to the list of ACL presets.
Click Save Access Control List to save the changes for the user or group selected.
To rewrite the current ACL with a standardized preset, follow the steps above in Configuring an ACL to step 6 where you click Use Preset, and then select an option:
Click Save Access Control List to add this ACE entry to the Access Control List.
If the file system uses a POSIX ACL, the first option presented is to select an existing preset or the option to create a custom preset.
To rewrite the current ACL with a standardized preset, click Use Preset and then select an option:
If creating a custom preset, a POSIX-based Edit ACL screen opens. Follow the steps in Adding a New Preset (POSIX ACL) to set the owner and owner group, then the ACL entries (user, group) and permissions from the options shown.
File sharing is one of the primary benefits of a NAS. TrueNAS helps foster collaboration between users through network shares.
TrueNAS SCALE allows users to create and configure Windows SMB shares, Unix (NFS) shares, and block (iSCSI) share targets.
When creating zvols for shares, avoid giving them names with capital letters or spaces since they can cause problems and failures with iSCSI and NFS shares.
When creating a share, do not attempt to set up the root or pool-level dataset for the share. Instead, create a new dataset under the pool-level dataset for the share. Setting up a share using the root dataset leads to storage configuration issues.
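As a sketch with hypothetical names, the layout this note asks for looks like the following; the share always points at a child dataset, never at the pool root:

```shell
# Hypothetical pool and dataset names; substitute your own.
POOL="tank"
SHARE_DATASET="${POOL}/media"      # share this child dataset...
ROOT_DATASET="${POOL}"             # ...never the pool-level root dataset

# Equivalent shell command for creating the child dataset (the web UI
# Datasets > Add Dataset screen is the recommended method):
echo "zfs create ${SHARE_DATASET}"
```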
Since the Apple Filing Protocol (AFP) for shares is deprecated and no longer receives updates, it is not in TrueNAS SCALE.
However, users can sidegrade a TrueNAS CORE configuration into SCALE, so TrueNAS SCALE migrates previously-saved AFP configurations into SMB configurations.
To prevent data corruption that could result from the sidegrade operation, in TrueNAS SCALE, go to Windows (SMB) Shares, select the options icon for the share, then select Edit to open the Edit SMB screen. Click Advanced Options and scroll down to the Other Options section. Select Legacy AFP Compatibility to enable compatibility for AFP shares migrated to SMB shares. Do not select this option if you want a pure SMB share with no AFP relation.
The Netatalk service is no longer in SCALE as of version 21.06. AFP shares automatically migrate to SMB shares with the Legacy AFP Compatibility option enabled. Do not clear the Legacy AFP Compatibility checkbox, as it impacts how data is written to and read from shares. Any other shares created to access these paths after the migration must also have Legacy AFP Compatibility selected.
Once you have sidegraded from CORE to SCALE, you can find your migrated AFP configuration in Shares > Windows Shares (SMB) with the prefix AFP_. To make the migrated AFP share accessible, start the SMB service.
Since AFP shares migrate to SMB in SCALE, you must use SMB syntax to mount them.
On your Apple system, press Command + K or go to Go > Connect to Server….
Enter smb://ipaddress/mnt/pool/dataset, where ipaddress is the TrueNAS system IP address (or hostname) and /mnt/pool/dataset is the path to the migrated share dataset.
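On a Linux client, the same migrated share can be reached over SMB with mount.cifs; a sketch with placeholder address, share name, and user (note that SMB mounts by share name, and migrated AFP shares carry the AFP_ prefix):

```shell
# Placeholder values; substitute your TrueNAS address, share name,
# and SMB user. Requires the cifs-utils package on most distributions.
SERVER="192.168.1.50"
SHARE="AFP_media"                  # migrated AFP shares use the AFP_ prefix
MOUNTPOINT="/mnt/media"

CMD="sudo mount -t cifs //${SERVER}/${SHARE} ${MOUNTPOINT} -o username=smbuser"
echo "$CMD"
```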
Internet Small Computer Systems Interface (iSCSI) represents standards for using Internet-based protocols for linking binary data storage device aggregations. IBM and Cisco submitted the draft standards in March 2000. Since then, iSCSI has seen widespread adoption into enterprise IT environments.
iSCSI functions through encapsulation. The Open Systems Interconnection Model (OSI) encapsulates SCSI commands and storage data within the session stack. The OSI further encapsulates the session stack within the transport stack, the transport stack within the network stack, and the network stack within the data link stack. Transmitting data this way permits block-level access to storage devices over LANs, WANs, and even the Internet itself (although performance may suffer if your data traffic is traversing the Internet).
The table below shows where iSCSI sits in the OSI network stack:
OSI Layer Number | OSI Layer Name | Activity as it relates to iSCSI |
---|---|---|
7 | Application | An application tells the CPU that it needs to write data to non-volatile storage. |
6 | Presentation | OSI creates a SCSI command, SCSI response, or SCSI data payload to hold the application data and communicate it to non-volatile storage. |
5 | Session | Communication between the source and the destination devices begins. This communication establishes when the conversation starts, what it covers, and when the conversation ends. This entire dialogue represents the session. OSI encapsulates the SCSI command, SCSI response, or SCSI data payload containing the application data within an iSCSI Protocol Data Unit (PDU). |
4 | Transport | OSI encapsulates the iSCSI PDU within a TCP segment. |
3 | Network | OSI encapsulates the TCP segment within an IP packet. |
2 | Data Link | OSI encapsulates the IP packet within the Ethernet frame. |
1 | Physical | The Ethernet frame transmits as bits (zeros and ones). |
Unlike other sharing protocols on TrueNAS, an iSCSI share allows block sharing and file sharing. Block sharing provides the benefit of block-level access to data on the TrueNAS. iSCSI exports disk devices (zvols on TrueNAS) over a network that other iSCSI clients (initiators) can attach and mount.
There are a few different approaches for configuring and managing iSCSI-shared data:
TrueNAS Enterprise
TrueNAS Enterprise customers that use vCenter to manage their systems can use the TrueNAS vCenter Plugin to connect their TrueNAS systems to vCenter and create and share iSCSI datastores. This is all managed through the vCenter web interface.
TrueNAS CORE web interface: the TrueNAS web interface is fully capable of configuring iSCSI shares. This requires creating and populating zvol block devices with data, then setting up the iSCSI Share. TrueNAS Enterprise licensed customers also have additional options to configure the share with Fibre Channel.
TrueNAS SCALE web interface: TrueNAS SCALE offers a similar experience to TrueNAS CORE for managing data with iSCSI; create and populate the block storage, then configure the iSCSI share.
To get started with iSCSI shares, make sure you have already created a zvol or a dataset with at least one file to share.
Go to Shares and click Configure in the Block (iSCSI) Shares Targets window. You can either use the creation wizard or set one up manually.
SCALE has implemented administrator roles to further comply with FIPS security hardening standards. The Sharing Admin role allows the user to create new shares and datasets, modify the dataset ACL permissions, and to start/restart the sharing service, but does not permit the user to modify users to grant the sharing administrator role to new or existing users.
Full Admin users retain full access control over shares and creating/modifying user accounts.
TrueNAS SCALE offers two methods to add an iSCSI block share: the setup wizard or the manual steps using the screen tabs. Both methods cover the same basic steps but have some differences.
The setup wizard requires you to enter some settings before you can move on to the next screen or step in the setup process. It is designed to ensure you configure the iSCSI share completely, so it can be used immediately.
The manual process has more configuration screens than the wizard and allows you to configure the block share in any order. Use this process to customize your share for special use cases. It is designed to give you additional flexibility to build or tune a share to your exact requirements.
Have the following ready before you begin adding your iSCSI block share:
This section walks you through the setup process using the wizard screens.
This procedure walks you through adding each configuration setting on the seven configuration tab screens. While the procedure places each tab screen in order, you can select the tab screen to add settings in any order.
TrueNAS SCALE allows users to add iSCSI targets without having to set up another share.
When adding an iSCSI share, the system prompts you to start or restart the service. You can also do this by clicking the options icon on the Block (iSCSI) Shares Targets widget and selecting Turn On Service, or by going to System Settings > Services, locating iSCSI on the list, and clicking the Running toggle to start the service.
To set iSCSI to start when TrueNAS boots, go to System Settings > Services, locate iSCSI on the list, and select Start Automatically.
Clicking the edit icon returns you to the options in Shares > Block (iSCSI) Shares Targets.
Connecting to and using an iSCSI share can differ between operating systems.
This article provides instructions on setting up a Linux and Windows system to use the TrueNAS iSCSI block share.
In this section, you start the iSCSI service, log in to the share, and obtain the configured basename and target. You also partition the iSCSI disk, make a file system for the share, mount it, and share data.
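The Linux side of that workflow can be sketched with the open-iscsi tools; all addresses and names below are placeholders, and the device name the LUN receives (/dev/sdb here) depends on your system:

```shell
# Dry-run sketch: prints the open-iscsi commands for the workflow
# described above. All values are placeholders for your environment.
PORTAL="10.0.0.10:3260"                              # TrueNAS portal IP:port
TARGET="iqn.2005-10.org.freenas.ctl:testtarget"      # basename:target name

echo "sudo iscsiadm -m discovery -t sendtargets -p ${PORTAL}"   # find targets
echo "sudo iscsiadm -m node -T ${TARGET} -p ${PORTAL} --login"  # attach LUN

# After login, the LUN appears as a new block device (assumed /dev/sdb):
echo "sudo parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%"
echo "sudo mkfs.ext4 /dev/sdb1"
echo "sudo mount /dev/sdb1 /mnt/iscsi"
```

The iqn.2005-10.org.freenas.ctl basename shown is the TrueNAS default; substitute the basename and target configured on your system.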
This section provides instructions on setting up Windows iSCSI Initiator Client to work with TrueNAS iSCSI shares.
TrueNAS lets users expand Zvol and file-based LUNs to increase the available storage in an iSCSI share.
To expand a Zvol LUN, go to Datasets and click the Zvol LUN name. The Zvol Details widget displays. Click the Edit button.
Enter a new size in Size for this zvol, then click Save.
TrueNAS prevents data loss by not allowing users to reduce the Zvol size. TrueNAS also does not allow users to increase the Zvol size past 80% of the pool size.
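For example, with a hypothetical 10 TiB pool, the largest size the UI accepts for a zvol works out as:

```shell
# Hypothetical 10 TiB pool; a zvol cannot grow past 80% of pool size.
POOL_BYTES=$(( 10 * 1024 * 1024 * 1024 * 1024 ))   # 10 TiB in bytes
MAX_ZVOL=$(( POOL_BYTES * 80 / 100 ))

echo "pool:     ${POOL_BYTES} bytes"
echo "max zvol: ${MAX_ZVOL} bytes"
```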
Go to Shares and click Configure in the Block (iSCSI) Shares Targets screen, then select the Extents tab.
Click the options icon next to the file-based LUN and select Edit.
Enter a new size in Filesize. Enter the new value as an integer that is one or more multiples of the logical block size (default 512) larger than the current file size. Click Save.
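Rounding a requested size up to the next valid multiple of the logical block size is simple integer arithmetic; a sketch with example numbers:

```shell
# Round a requested file size up to the next multiple of the logical
# block size (default 512) so the value is valid for Filesize.
BLOCK=512
REQUESTED=1000000001                        # example requested size in bytes

NEWSIZE=$(( ( (REQUESTED + BLOCK - 1) / BLOCK ) * BLOCK ))
echo "$NEWSIZE"                             # next 512-byte multiple
```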
When creating a share, do not attempt to set up the root or pool-level dataset for the share. Instead, create a new dataset under the pool-level dataset for the share. Setting up a share using the root dataset leads to storage configuration issues.
Creating a Network File System (NFS) share on TrueNAS makes a lot of data available for anyone with share access. Depending on the share configuration, it can restrict users to read or write privileges.
NFS treats each dataset as its own file system. When creating the NFS share on the server, the specified dataset is the location that client accesses. If you choose a parent dataset as the NFS file share location, the client cannot access any nested or child datasets beneath the parent.
If you need to create shares that include child datasets, SMB sharing is an option. Note that Windows NFS Client versions currently support only NFSv2 and NFSv3.
The UDP protocol is deprecated and not supported with NFS. It is disabled by default in the Linux kernel. Using UDP over NFS on modern networks (1Gb+) can lead to data corruption caused by fragmentation during high loads.
SCALE has implemented administrator roles to further comply with FIPS security hardening standards. The Sharing Admin role allows the user to create new shares and datasets, modify the dataset ACL permissions, and to start/restart the sharing service, but does not permit the user to modify users to grant the sharing administrator role to new or existing users.
Full Admin users retain full access control over shares and creating/modifying user accounts.
It is best practice to use a dataset instead of a full pool for SMB and/or NFS shares. Sharing an entire pool makes it more difficult to later restrict access if needed.
If creating a dataset and share from the Add Dataset screen, we recommend creating a new dataset with the Dataset Preset set to Generic for the new NFS share. Or you can set it to Multiprotocol and select only the NFS share type.
To create the share and dataset from the Add NFS Share screen:
Go to Shares > Unix (NFS) Shares and click Add to open the Add NFS Share configuration screen.
Enter the path or use the arrow icon to the left of /mnt to locate the dataset and populate the path.
Click Create Dataset, enter a name for the dataset, and click Create. The system creates the dataset optimized for an NFS share, populates the share Name, and updates the Path with the dataset name. The dataset name is the share name.
Enter text to help identify the share in Description.
If needed, enter allowed networks and hosts.
If needed, adjust access permissions.
Click Save to create the share.
After adding the first NFS share, the system opens an enable service dialog.
Enable Service turns the NFS service on and changes the toolbar status to Running. If you wish to create the share without immediately enabling it, select Cancel.
If you want to enter allowed networks, click Add to the right of Networks. Enter an IP address in Network and select the mask CIDR notation. Click Add for each network address and CIDR you want to define as an authorized network. Defining an authorized network restricts access to all other networks. Leave empty to allow all networks.
If you want to enter allowed systems, click Add to the right of Hosts. Enter a host name or IP address to allow that system access to the NFS share. Click Add for each allowed system you want to define. Defining authorized systems restricts access to all other systems. Press the X to delete the field and allow all systems access to the share.
If you want to tune the NFS share access permissions or define authorized networks, click Advanced Options.
Select Read-Only to prohibit writing to the share.
To map user permissions to the root user, enter a string or select the user from the Maproot User dropdown list. To map the user permissions to all clients, enter a string or select the user from the Mapall User dropdown list.
To map group permissions to the root user, enter a string or select the group from the Maproot Group dropdown list. To map the group permissions to all clients, enter a string or select the group from the Mapall Group dropdown list.
Select an option from the Security dropdown. If you select KRB5 security, you can use a Kerberos ticket. Otherwise, everything is based on IDs.
To edit an existing NFS share, go to Shares > Unix Shares (NFS) and click the share you want to edit. The Edit NFS screen settings are identical to the share creation options, but you cannot create a new dataset.
To begin sharing, click the options icon on the toolbar and select Turn On Service. Turn Off Service displays if NFS is on; Turn On Service displays if NFS is off.
Or you can go to System Settings > Services, locate NFS, and click the toggle to running. Select Start Automatically if you want NFS to activate when TrueNAS boots.
The NFS service does not automatically start on boot if all NFS shares are encrypted and locked.
You can configure the NFS service from either the System Settings > Services or the Shares > Unix Shares (NFS) widget.
To configure NFS service settings from the Services screen, click the edit icon for NFS on the System Settings > Services screen to open the NFS service screen.
To configure NFS service settings from the Shares > Unix Shares (NFS) widget, select Config Service from the options dropdown menu on the widget header to open the NFS service screen. Unless you need specific settings, we recommend using the default NFS settings.
When TrueNAS is already connected to Active Directory, setting NFSv4 and Require Kerberos for NFSv4 also requires a Kerberos Keytab.
Although you can connect to an NFS share with various operating systems, we recommend using a Linux/Unix OS.
First, install the nfs-common kernel module. You can do this using the installed distribution package manager. For example, on Ubuntu/Debian, enter the command sudo apt-get install nfs-common in the terminal.
After installing the module, connect to an NFS share by entering sudo mount -t nfs {IPaddressOfTrueNASsystem}:{path/to/nfsShare} {localMountPoint}, where {IPaddressOfTrueNASsystem} is the remote TrueNAS system IP address that contains the NFS share, {path/to/nfsShare} is the path to the NFS share on the TrueNAS system, and {localMountPoint} is a local directory on the host system configured for the mounted NFS share. For example, sudo mount -t nfs 10.239.15.110:/mnt/Pool1/NFS_Share /mnt mounts the NFS share NFS_Share to the local directory /mnt.
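If the mount fails, it can help to confirm the export is visible from the client first; showmount ships with the same nfs-common tooling (note that it queries the NFSv3 mount protocol, so it may return nothing on NFSv4-only servers). A sketch with the example address from above:

```shell
# Sketch: list the exports a TrueNAS server offers before mounting.
SERVER="10.239.15.110"                      # example TrueNAS address
CMD="showmount -e ${SERVER}"
echo "$CMD"    # prints exported paths and the networks allowed to mount them
```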
You can also use the Linux nconnect function to let your NFS mount support multiple TCP connections. To enable nconnect, enter sudo mount -t nfs -o rw,nconnect=16 {IPaddressOfTrueNASsystem}:{path/to/nfsShare} {localMountPoint}, where {IPaddressOfTrueNASsystem}, {path/to/nfsShare}, and {localMountPoint} are the same ones you used when connecting to the share. For example, sudo mount -t nfs -o rw,nconnect=16 10.239.15.110:/mnt/Pool1/NFS_Share /mnt.
By default, anyone that connects to the NFS share only has read permission. To change the default permissions, edit the share, open the Advanced Options, and change the Access settings.
You must have ESXi 6.7 or later for read/write functionality with NFSv4 shares.
When creating a share, do not attempt to set up the root or pool-level dataset for the share. Instead, create a new dataset under the pool-level dataset for the share. Setting up a share using the root dataset leads to storage configuration issues.
A multiprotocol or mixed-mode NFS and SMB share supports both NFS and SMB protocols for sharing data. Multiprotocol shares allow clients to use either protocol to access the same data. This can be useful in environments with a mix of Windows systems and Unix-like systems, especially if some clients lack an SMB client.
Carefully consider your environment and access requirements before configuring a multiprotocol share. For many applications, a single protocol SMB share provides a better user experience and ease of administration. Linux clients can access SMB shares using mount.cifs.
It is important to properly configure permissions and access controls to ensure security and data integrity when using mixed-mode sharing. To maximize security on the NFS side of the multiprotocol share, we recommend using NFSv4 and Active Directory (AD) for Kerberos authentication. It is also important that NFS clients preserve extended attributes when copying files, or SMB metadata could be discarded in the copy.
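One way to satisfy the extended-attribute requirement when copying from an NFS client is rsync's -X flag; a sketch with placeholder paths:

```shell
# Placeholder paths; -a preserves ownership, permissions, and times,
# and -X additionally preserves extended attributes so SMB metadata
# is not discarded in the copy.
SRC="/home/user/project/"
DEST="/mnt/nfs/project/"

CMD="rsync -aX ${SRC} ${DEST}"
echo "$CMD"
```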
Before adding a multiprotocol SMB and NFS share to your system:
Configure and start the SMB and NFS services. Configure the NFS service to require Kerberos authentication.
Join the TrueNAS server to an existing Active Directory domain. Configure a container, Kerberos admin, and user accounts in AD.
Create the dataset and share with Dataset Preset set to Multiprotocol.
Before joining AD and creating a dataset for the share to use, start both the SMB and NFS services and configure the NFS service for Kerberos authentication. Configure the NFS service before joining AD for simpler Kerberos credential creation.
You can either use the Shares screen Configure Service option on both the Windows (SMB) Share and UNIX (NFS) Shares widgets, or go to System Settings > Services and select the Edit option on the SMB and NFS services.
Unless you need a specific setting or are configuring a unique network environment, we recommend using the default SMB service settings.
After configuring the share services, start the services.
From the Sharing screen, click the options icon on the Windows (SMB) Shares widget header to display the service options: Turn Off Service if the service is running, or Turn On Service if it is not.
After adding a share, use the toggle to enable or disable the service for that share.
To enable the service from the System Settings > Services screen, click the toggle for the service and set Start Automatically if you want the service to activate when TrueNAS boots.
Open the NFS service screen, then select only NFSv4 on the Enabled Protocols dropdown list. For security hardening, we recommend disabling the NFSv3 protocol.
Select Require Kerberos for NFSv4 to enable using a Kerberos ticket.
If Active Directory is already joined to the TrueNAS server, click Save and then reopen the NFS service screen. Click Add SPN to open the Add Kerberos SPN Entry dialog.
Click Yes when prompted to add a Service Principal Name (SPN) entry. Enter the AD domain administrator user name and password in Name and Password.
TrueNAS SCALE automatically applies SPN credentials if the NFS service is enabled with Require Kerberos for NFSv4 selected before joining Active Directory.
Click Save again, then start the NFS service.
From the Sharing screen, click the options icon on the Unix Shares (NFS) widget header to display the service options: Turn Off Service if the service is running, or Turn On Service if it is not. Each NFS share on the list also has a toggle to enable or disable the service for that share.
To enable the service from the System Settings > Services screen, click the toggle for the service and set Start Automatically if you want the service to activate when TrueNAS boots.
The NFS service does not automatically start on boot if all NFS shares are encrypted and locked.
Mixed-mode SMB and NFS shares greatly simplify data access for clients running a range of operating systems. They also require careful attention to security complexities not present in standard SMB shares. NFS shares do not respect permissions set in the SMB Share ACL. Protect the NFS export with proper authentication and authorization controls to prevent unauthorized access by NFS clients.
We recommend using Active Directory to enable Kerberos security for the NFS share. Configure a container (group or organizational unit), Kerberos admin, and user accounts in AD.
You can create the dataset and add a multiprotocol (SMB and NFS) share using the Add Dataset screen.
It is best practice to use a dataset instead of a full pool for SMB and/or NFS shares. Sharing an entire pool makes it more difficult to later restrict access if needed.
Select the dataset you want to be the parent of the multimode dataset, then click Add Dataset.
Enter a name for the dataset. The dataset name populates the SMB Name field and becomes the name of the SMB and NFS shares.
Select Multiprotocol from the Dataset Preset dropdown. The share configuration options display with Create NFS Share and Create SMB Share preselected.
(Optional) Click Advanced Options to customize other dataset settings such as quotas, compression level, encryption, and case sensitivity. See Creating Datasets for more information on adding and customizing datasets.
Click Save. TrueNAS creates the dataset and the SMB and NFS shares. Next edit both shares. After editing the shares, edit the dataset ACL.
After creating the multimode share on the Add Dataset screen, go to Shares and edit the SMB share.
Select the share on the Windows Shares (SMB) widget and then click Edit. The Edit SMB screen opens showing the Basic Options settings.
Select Multi-protocol (NFSv4/SMB) shares from the Purpose dropdown list to apply pre-determined Advanced Options settings for the share.
(Optional) Enter a Description to help explain the share purpose.
Click Save.
Restart the service when prompted.
After creating the multimode share on the Add Dataset screen, go to Shares and edit the NFS share.
Select the new share listed on Unix (NFS) Shares widget and then click Edit. The Edit NFS screen opens showing the Basic Options settings.
Enable Kerberos security: click Advanced Options, then select KRB5 from the Security dropdown to enable the Kerberos ticket generated when you joined Active Directory.
If needed, select Read-Only to prohibit writing to the share.
Click Save.
Restart the service when prompted.
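From a Linux client, using the share then involves obtaining a domain ticket before mounting with Kerberos security; a sketch with a hypothetical realm, server name, and dataset path:

```shell
# Hypothetical AD realm, server, and export path; substitute your own.
REALM="CORP.EXAMPLE.COM"
SERVER="truenas.corp.example.com"
EXPORT="/mnt/tank/multishare"

echo "kinit aduser@${REALM}"     # obtain a Kerberos ticket from AD
echo "klist"                     # verify the ticket cache
echo "sudo mount -t nfs4 -o sec=krb5 ${SERVER}:${EXPORT} /mnt/share"
```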
After joining AD, creating a multimode dataset and the SMB and NFS shares, adjust the dataset/file system ACL to match the container and users configured in AD.
You can modify dataset permissions from the Shares screen using the Edit ACL screen for each share (SMB and NFS). Using this method, you select the share on the Windows (SMB) Share widget, then click the Edit Filesystem ACL icon to open the Edit ACL screen and edit the dataset properties for the SMB share, but you must repeat this for the NFS share.
Or you can go to Datasets, select the name of the dataset created for the multiprotocol share, and scroll down to the Permissions widget for the dataset. Click Edit to open the Edit ACL screen.
Check the Access Control List to see if the AD group you created is on the list and has the correct permissions. If not, add this Access Control Entry (ACE) item on the Edit ACL screen for the multimode dataset (or each share).
Enter Group in the Who field or use the dropdown list to select Group.
Type or select the appropriate group in the Group field.
Verify Full Control displays in Permissions. If not, select it from the dropdown list.
Click Save Access Control List to add the ACE item or save changes.
See Permissions for more information on editing dataset permissions.
After setting the dataset permission, connect to the share.
After creating and configuring the shares, connect to the multi-protocol share using either SMB or NFS protocols from a variety of client operating systems, including Windows, Apple, FreeBSD, and Linux/Unix systems.
For more information on accessing shares, see Mounting the SMB Share and Connecting to the NFS Share.
When creating a share, do not attempt to set up the root or pool-level dataset for the share. Instead, create a new dataset under the pool-level dataset for the share. Setting up a share using the root dataset leads to storage configuration issues.
SMB (also known as CIFS) is the native file-sharing system in Windows. SMB shares can connect to most operating systems, including Windows, MacOS, and Linux. TrueNAS can use SMB to share files among single or multiple users or devices.
SMB supports a wide range of permissions, security settings, and advanced permissions (ACLs) on Windows and other systems, as well as Windows Alternate Streams and Extended Metadata. SMB is suitable for managing and administering large or small pools of data.
TrueNAS uses Samba to provide SMB services. The SMB protocol has multiple versions. An SMB client typically negotiates the highest supported SMB protocol during SMB session negotiation. Industry-wide, SMB1 protocol (sometimes referred to as NT1) usage is deprecated for security reasons.
As of SCALE 22.12 (Bluefin) and later, TrueNAS does not support SMB client operating systems that are labeled by their vendor as End of Life or End of Support. This means MS-DOS (including Windows 98) clients, among others, cannot connect to TrueNAS SCALE SMB servers.
The upstream Samba project that TrueNAS uses for SMB features notes in the 4.11 release that the SMB1 protocol is deprecated and warns portions of the protocol might be further removed in future releases. Administrators should work to phase out any clients using the SMB1 protocol from their environments.
However, most SMB clients support the SMB 2 or 3 protocols, even when these are not their default.
Legacy SMB clients rely on NetBIOS name resolution to discover SMB servers on a network. TrueNAS disables the NetBIOS Name Server (nmbd) by default. Enable it on the Network > Global Settings screen if you require this functionality.
MacOS clients use mDNS to discover SMB servers present on the network. TrueNAS enables the mDNS server (avahi) by default.
Windows clients use WS-Discovery to discover the presence of SMB servers, but network discovery can be disabled by default depending on the Windows client version.
Discoverability through broadcast protocols is a convenience feature and is not required to access an SMB server.
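For example, a Linux client with the Samba client tools installed can enumerate a server's shares by addressing it directly, with no discovery protocol involved. The IP address and account name below are placeholders:

```shell
# List the shares exported by the server at 192.0.2.10, authenticating
# as smbuser (the client prompts for the password):
smbclient -L //192.0.2.10 -U smbuser
```

Windows clients can similarly reach an undiscovered server by entering `\\192.0.2.10` directly in File Explorer.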
SCALE has implemented administrator roles to further comply with FIPS security hardening standards. The Sharing Admin role allows the user to create new shares and datasets, modify the dataset ACL permissions, and to start/restart the sharing service, but does not permit the user to modify users to grant the sharing administrator role to new or existing users.
Full Admin users retain full access control over shares and creating/modifying user accounts.
Verify Active Directory connections are working and error free before adding an SMB share. If configured but not working or in an error state, AD cannot bind and prevents starting the SMB service.
Creating an SMB share to your system involves several steps to add the share and get it working.
Create the SMB share user account. You can also use directory services like Active Directory or LDAP to provide additional user accounts. If setting up an external SMB share, we recommend using Active Directory or LDAP, or at a minimum synchronizing the user accounts between systems.
Create the SMB share and dataset. You can create a basic SMB share, or for more specific share types or feature requirements, use the Advanced Options instructions before saving the share.
TrueNAS allows creating the dataset and share at the same time from either the Add Dataset screen or the Add SMB share screen. Use either option to create a basic SMB share, but when customizing share presets use the Add SMB screen to create the share and dataset. The procedure in this article provides the instructions to add the dataset while adding the share using the Add SMB screen.
Modify the share permissions. After adding or modifying the user account for the share, edit the dataset permissions.
After adding the share, start the service and mount it to your other system.
TrueNAS must be joined to Active Directory or have at least one local SMB user before creating an SMB share. When creating an SMB user, ensure that Samba Authentication is enabled. You cannot access SMB shares using the root user, TrueNAS built-in user accounts, or those without Samba Authentication selected.
To add or edit users, go to Credentials > Users. Click Add to create as many new user accounts as needed. If joined to Active Directory, Active Directory can create the TrueNAS accounts.
Enter the values in each required field, verify Samba Authentication is selected, then click Save. For more information on the fields and adding users, see Creating User Accounts.
By default, all new local users are members of a built-in group called builtin_users. You can use a group to grant access to all local users on the server or add more groups to fine-tune permissions for large numbers of users.
You can create an SMB share while creating a dataset on the Add Dataset screen or create the dataset while creating the share on the Add SMB Share screen. This article covers adding the dataset on the Add SMB Share screen.
It is best practice to use a dataset instead of a full pool for SMB and/or NFS shares. Sharing an entire pool makes it more difficult to later restrict access if needed.
To create a basic Windows SMB share and a dataset, go to Shares, then click Add on the Windows Shares (SMB) widget to open the Add Share screen.
Enter or browse to select the SMB share mount path (the parent dataset where you want to add a dataset for this share) to populate the Path field. The Path is the directory tree on the local file system that TrueNAS exports over the SMB protocol.
Click Create Dataset. Enter the name for the share dataset in the Create Dataset dialog, then click Create. The system creates the new dataset.
Name becomes the dataset name entered and is the SMB share name. This forms part of the share pathname when SMB clients perform an SMB tree connect. Because of how the SMB protocol uses the name, it must be less than or equal to 80 characters. Do not use invalid characters as specified in Microsoft documentation MS-FSCC section 2.1.6.
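As a hedged sketch, the constraints above can be checked locally before creating the share. The function name is illustrative; the forbidden character set is the one given in MS-FSCC section 2.1.6:

```shell
# check_share_name: succeeds (exit 0) if the proposed SMB share name is
# 80 characters or fewer and contains none of the characters forbidden
# by MS-FSCC 2.1.6 (" \ / [ ] : | < > + = ; , * ? or control characters).
check_share_name() {
  name=$1
  [ "${#name}" -le 80 ] || return 1
  # Bracket class members: ] [ \ " : | < > + = ; , * ? / plus [:cntrl:]
  printf '%s' "$name" | grep -q '[][\":|<>+=;,*?/[:cntrl:]]' && return 1
  return 0
}

check_share_name "projects2024" && echo "projects2024: ok"
check_share_name "bad:name" || echo "bad:name: rejected"
```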
If you change the name, follow the SMB share naming conventions.
If creating an external SMB share, enter the hostname or IP address of the system hosting the SMB share and the name of the share on that system. Enter as EXTERNAL:ip address\sharename in Path, then change Name to EXTERNAL with no special characters.
(Optional) Select a preset from the Purpose dropdown list to apply. The preset selected locks or unlocks pre-determined Advanced Options settings for the share. To retain control over all the share Advanced Options settings, select No presets or Default share parameters. To create an alternative to Home Shares, select Private SMB Datasets and Shares. See Setting Up SMB Home Shares for more information on replacing this legacy feature with private SMB shares and datasets.
(Optional) Enter a Description to help explain the share purpose.
Select Enabled to allow sharing of this path when the SMB service is activated. Leave it cleared to disable the share without deleting the configuration.
(Optional) Click Advanced Options to configure audit logging or other advanced configuration settings such as changing Case Sensitivity.
Click Save to create the share and add it to the Shares > Windows (SMB) Shares list.
Enable the SMB service when prompted.
For a basic SMB share, using the Advanced Options settings is not required, but if you set Purpose to No Presets, click Advanced Options to finish customizing the SMB share for your use case.
The following are possible use cases. See SMB Shares Screens for all settings and other possible use cases.
To add ACL support to the share, select Enable ACL under Advanced Options on either the Add SMB or Edit SMB screens. See Managing SMB Shares for more on configuring permissions for the share and the file system.
You can set SMB share permissions at two levels: for the share itself or for the dataset associated with the share. See Managing SMB Shares for more information on these options.
See Permissions for more information on dataset permissions.
You cannot access SMB shares with the root user. Change the SMB dataset ownership to the admin user (Full Admin user).
Using the Edit Share ACL option configures the permissions for just the share, but not the dataset the share uses. The permissions apply at the SMB share level for the selected share. They do not apply to other file sharing protocol clients, other SMB shares that export the same share path (i.e., /poolname/shares specified in Path), or to the dataset the share uses.
After creating the share and dataset, modify the share permissions to grant user or group access.
Click the Edit Share ACL icon to open the Edit Share ACL screen if you want to modify permissions at the share level.
Select User in Who, then the user name in User, and set the permission level using Permissions and Type.
(Optional) Click Add then select Group, the group name, and then set the group permissions.
Click Save.
See Permissions for more information on setting user and group settings.
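To confirm the result from a client, the Samba smbcacls utility can display the ACL the server applies to a path on the share. This is a hedged example; the server, share, file, and account names are all placeholders:

```shell
# Show the security descriptor (owner, group, and ACEs) that the SMB
# server reports for report.txt on the projects share:
smbcacls //truenas.example.com/projects report.txt -U smbuser
```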
You cannot access SMB shares with the root user. Change the SMB dataset ownership to the admin user (Full Admin user).
To configure share owner, user, and group permissions for the dataset Access Control List (ACL), use the Edit Filesystem ACL option. This modifies the ACL for the SMB share path (defined in Path) at the dataset level. To customize permissions, add Access Control Entries (ACEs) for users or groups.
To access the dataset (filesystem) permissions, click the Edit Filesystem ACL icon on the share row to open the Edit ACL screen for the dataset the share uses. You can also go to Datasets, select the dataset the share uses (same name as the share), then click Edit on the Permissions widget to open the Edit ACL screen.
Samba Authentication is selected by default when SMB share users are created or added to TrueNAS SCALE manually or through a directory service, and these users are automatically added to the builtin_users group. Users in this group can add or modify files and directories in the share.
The share dataset ACL includes an ACE for the builtin_users group, and the @owner and @group are set to root by default. Change the @owner and @group values to the admin (Full Admin) user and click Apply under each.
To restrict or grant additional file permissions for some or all share users, do not modify the builtin_users group entry. Best practice is to create a new group for the share users that need different permissions, reassign these users to the new group, and remove them from the builtin_users group. Next, edit the ACL by adding a new ACE entry for the new group, and then modify the permissions of that group.
Home users can modify the builtin_users group ACE entry to grant FULL_CONTROL permissions. If you need to restrict or increase permissions for some share users, create a new group and add an ACE entry with the modified permissions.
To change permissions for the builtin_users group, go to Datasets, select the share dataset, and scroll down to the Permissions widget.
Click Edit to open the Edit ACL screen. Locate the ACE entry for the builtin_users group and click on it.
Check the Access Control List area to see if the permissions are correct.
Enter or select Group in the Who field.
Begin typing builtin_users in the Group field until it displays, then click on it to populate the field.
Select Basic in the Permissions area, then select the level of access you want to assign in the Permissions field. For more granular control, select Advanced, then select each permission option to include.
Click Save Access Control List to add the ACE item or save changes.
To change the permission level for some share users, add a new group, reassign the user(s) to the new group, then modify the share dataset ACL to include this new group and the desired permissions.
Go to Local Groups, click Add and create the new group.
Go to Local Users, select a user, click Edit, remove the builtin_users entry from Auxiliary Groups, and add the new group. Click Save. Repeat this step for each user, or change the group assignment in the directory server to the new group.
Edit the filesystem (dataset) permissions. Use one of the methods to access the Edit ACL screen for the share dataset.
Add a new ACE entry for the new group. Click Add Item.
Select Group in the Who field, type the name into the Group field, then set the permission level.
Select Basic in the Permissions area, then select the level of access you want to assign in the Permissions field. For more granular control, select Advanced, then select each permission option to include.
Click Save Access Control List.
If restricting this group to read-only access and the share dataset is nested under parent datasets, go to each parent dataset and edit the ACL. Add an ACE entry for the new group and select Traverse. Keep the parent dataset permissions set to either FULL_CONTROL or MODIFY, but select Traverse for the new group.
If a share dataset is nested under other datasets (parents), you must add the ACL Traverse permission at the parent dataset level(s) to allow read-only users to move through directories within an SMB share.
After adding the group and assigning it to the user(s), modify the dataset ACLs for each dataset in the path (the parent datasets and the share dataset).
Add the new group to the share ACL. Use one of the methods to access the Edit ACL screen for the share dataset.
Add a new ACE entry for the new group. Click Add Item to create an ACE for the new group.
Select Group in the Who field, type the name into the Group field, then set the permission level.
Click Save Access Control List.
Return to the Datasets screen, locate the parent dataset for the share dataset, use one of the methods to access the Edit ACL screen for the parent dataset.
Add a new ACE entry for the new group. Click Add Item to create an ACE for the new group.
Select Group in the Who field, type the name into the Group field, then select Traverse.
Click Save Access Control List.
Repeat for each parent dataset in the path. This allows the restricted share group to navigate through the directories in the path to the share dataset.
To connect to an SMB share, start the SMB service.
After adding a new share, TrueNAS prompts you to either start or restart the SMB service.
You can also start the service from the Windows (SMB) Share widget or on the System Settings > Services screen from the SMB service row.
From the Sharing screen, click the options menu on the Windows (SMB) Shares widget to display the service options: Turn Off Service if the service is running, or Turn On Service if it is not.
Each SMB share on the list also has a toggle to enable or disable the service for that share.
To make the SMB share available on the network, go to System Settings > Services and click the toggle for SMB. Set Start Automatically if you want the service to activate when TrueNAS boots.
Configure the SMB service by clicking Config Service from the dropdown menu on the Windows (SMB) Shares widget header or by clicking edit on the Services screen. Unless you need a specific setting or are configuring a unique network environment, we recommend using the default settings.
The instructions in this section cover mounting the SMB share on a system with the following operating systems.
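For a Linux client, a hedged example of a persistent mount follows. The host name truenas.example.com, the backups share, and the smbuser account are placeholders; the cifs-utils package is required:

```shell
# Keep the password out of the command line and shell history by using
# a root-owned credentials file:
sudo sh -c 'umask 077; cat > /root/.smbcred' <<'EOF'
username=smbuser
password=changeme
EOF

sudo mkdir -p /mnt/backups
sudo mount -t cifs //truenas.example.com/backups /mnt/backups \
    -o credentials=/root/.smbcred,vers=3.0
```

To mount at boot, an equivalent /etc/fstab line would be `//truenas.example.com/backups /mnt/backups cifs credentials=/root/.smbcred,vers=3.0 0 0`. On Windows, the same share is reachable as `\\truenas.example.com\backups` through File Explorer or `net use`.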
External SMB shares are essentially redirects to shares on other systems. Administrators might want to use this when managing multiple TrueNAS systems with SMB shares and if they do not want to keep track of which shares live on which boxes for clients. This feature allows admins to connect to any of the TrueNAS systems with external shares set up, and to see them all.
Create the SMB share on another SCALE server (for example, system1), as described in Adding an SMB Share above.
We recommend using Active Directory or LDAP when creating user accounts, but at a minimum synchronize user accounts between the system with the share (system1) and on the TrueNAS SCALE system where you set up the external share (for example, system2).
On system2, enter the host name or IP address of the system hosting the SMB share (system1) and the name given the share on that system as EXTERNAL:ip address\sharename in Path, then change Name to EXTERNAL with no special characters.
Leave Purpose set to Default share parameters, leave Enabled selected, then click Save to add the share redirect.
Repeat the system2 instructions above to add an external redirect (share) on system1 to see the SMB shares of each system.
Repeat for each TrueNAS system with SMB shares to add as an external redirect. Change the auto-populated name to EXTERNAL2 or something to distinguish it from the SMB shares on the local system (system1 in this case) and any other external shares added.
These tutorials describe creating and managing various specific configurations of SMB shares.
When creating a share, do not attempt to set up the root or pool-level dataset for the share. Instead, create a new dataset under the pool-level dataset for the share. Setting up a share using the root dataset leads to storage configuration issues.
To access SMB share management options, go to the Shares screen with the Windows (SMB) Shares widget. The widget lists SMB shares configured on the system but is not the full list. Each share listed includes four icons that open other screens or dialogs that provide access to share settings. To see the full list of shares, click on Windows (SMB) Shares to open the Sharing > SMB screen. Each share row on this screen provides access to the other screens or dialogs with share settings.
SCALE has implemented administrator roles to further comply with FIPS security hardening standards. The Sharing Admin role allows the user to create new shares and datasets, modify the dataset ACL permissions, and to start/restart the sharing service, but does not permit the user to modify users to grant the sharing administrator role to new or existing users.
Full Admin users retain full access control over shares and creating/modifying user accounts.
To manage an SMB share, click the icons on the widget or use the options on the Sharing > SMB details screen for the share you want to manage. Options are:
Edit opens the Edit SMB screen where you can change settings for the share.
Edit Share ACL opens the Share ACL screen where you can add or edit ACL entries.
Edit Filesystem ACL opens the Edit ACL screen where you can edit the dataset permissions for the share. The Dataset Preset option determines the ACL type and therefore the ACL Editor screen that opens.
Delete opens a delete confirmation dialog. Use this to delete the share and remove it from the system. Delete does not affect shared data.
You have two options that modify ACL permissions for SMB shares:
Edit Share ACL where you modify ACL permissions applying to the entire SMB share.
Edit Filesystem ACL where you modify ACL permissions at the shared dataset level.
See the ACL Primer for general information on Access Control Lists (ACLs) in general, the Permissions article for more details on configuring ACLs, and Edit ACL Screen for more information on the dataset ACL editor screens and setting options.
You cannot access SMB shares with the root user. Change the SMB dataset ownership to the admin user (Full Admin user).
Using the Edit Share ACL option configures the permissions for just the share, but not the dataset the share uses. The permissions apply at the SMB share level for the selected share. They do not apply to other file sharing protocol clients, other SMB shares that export the same share path (i.e., /poolname/shares specified in Path), or to the dataset the share uses.
After creating the share and dataset, modify the share permissions to grant user or group access.
Click the Edit Share ACL icon to open the Edit Share ACL screen if you want to modify permissions at the share level.
Select User in Who, then the user name in User, and set the permission level using Permissions and Type.
(Optional) Click Add then select Group, the group name, and then set the group permissions.
Click Save.
See Permissions for more information on setting user and group settings.
You cannot access SMB shares with the root user. Change the SMB dataset ownership to the admin user (Full Admin user).
To configure share owner, user, and group permissions for the dataset Access Control List (ACL), use the Edit Filesystem ACL option. This modifies the ACL for the SMB share path (defined in Path) at the dataset level. To customize permissions, add Access Control Entries (ACEs) for users or groups.
To access the dataset (filesystem) permissions, click the Edit Filesystem ACL icon on the share row to open the Edit ACL screen for the dataset the share uses. You can also go to Datasets, select the dataset the share uses (same name as the share), then click Edit on the Permissions widget to open the Edit ACL screen.
Samba Authentication is selected by default when SMB share users are created or added to TrueNAS SCALE manually or through a directory service, and these users are automatically added to the builtin_users group. Users in this group can add or modify files and directories in the share.
The share dataset ACL includes an ACE for the builtin_users group, and the @owner and @group are set to root by default. Change the @owner and @group values to the admin (Full Admin) user and click Apply under each.
To restrict or grant additional file permissions for some or all share users, do not modify the builtin_users group entry. Best practice is to create a new group for the share users that need different permissions, reassign these users to the new group, and remove them from the builtin_users group. Next, edit the ACL by adding a new ACE entry for the new group, and then modify the permissions of that group.
Home users can modify the builtin_users group ACE entry to grant FULL_CONTROL permissions. If you need to restrict or increase permissions for some share users, create a new group and add an ACE entry with the modified permissions.
To change permissions for the builtin_users group, go to Datasets, select the share dataset, and scroll down to the Permissions widget.
Click Edit to open the Edit ACL screen. Locate the ACE entry for the builtin_users group and click on it.
Check the Access Control List area to see if the permissions are correct.
Enter or select Group in the Who field.
Begin typing builtin_users in the Group field until it displays, then click on it to populate the field.
Select Basic in the Permissions area, then select the level of access you want to assign in the Permissions field. For more granular control, select Advanced, then select each permission option to include.
Click Save Access Control List to add the ACE item or save changes.
To change the permission level for some share users, add a new group, reassign the user(s) to the new group, then modify the share dataset ACL to include this new group and the desired permissions.
Go to Local Groups, click Add and create the new group.
Go to Local Users, select a user, click Edit, remove the builtin_users entry from Auxiliary Groups, and add the new group. Click Save. Repeat this step for each user, or change the group assignment in the directory server to the new group.
Edit the filesystem (dataset) permissions. Use one of the methods to access the Edit ACL screen for the share dataset.
Add a new ACE entry for the new group. Click Add Item.
Select Group in the Who field, type the name into the Group field, then set the permission level.
Select Basic in the Permissions area, then select the level of access you want to assign in the Permissions field. For more granular control, select Advanced, then select each permission option to include.
Click Save Access Control List.
If restricting this group to read-only access and the share dataset is nested under parent datasets, go to each parent dataset and edit the ACL. Add an ACE entry for the new group and select Traverse. Keep the parent dataset permissions set to either FULL_CONTROL or MODIFY, but select Traverse for the new group.
If a share dataset is nested under other datasets (parents), you must add the ACL Traverse permission at the parent dataset level(s) to allow read-only users to move through directories within an SMB share.
After adding the group and assigning it to the user(s), modify the dataset ACLs for each dataset in the path (the parent datasets and the share dataset).
Add the new group to the share ACL. Use one of the methods to access the Edit ACL screen for the share dataset.
Add a new ACE entry for the new group. Click Add Item to create an ACE for the new group.
Select Group in the Who field, type the name into the Group field, then set the permission level.
Click Save Access Control List.
Return to the Datasets screen, locate the parent dataset for the share dataset, use one of the methods to access the Edit ACL screen for the parent dataset.
Add a new ACE entry for the new group. Click Add Item to create an ACE for the new group.
Select Group in the Who field, type the name into the Group field, then select Traverse.
Click Save Access Control List.
Repeat for each parent dataset in the path. This allows the restricted share group to navigate through the directories in the path to the share dataset.
When creating a share, do not attempt to set up the root or pool-level dataset for the share. Instead, create a new dataset under the pool-level dataset for the share. Setting up a share using the root dataset leads to storage configuration issues.
SCALE uses predefined setting options to establish an SMB share that fits a predefined purpose, such as a basic time machine share.
To set up a basic time machine share:
Create the user(s) for this SMB share. Go to Credentials > Local User and click Add.
Create the share and dataset with Purpose set to Basic time machine share.
After creating the share, enable the SMB service.
You can either create both the dataset and the share from the Add Dataset screen, or create the dataset while adding the share on the Add SMB screen. If you want to customize the dataset, use the Add Dataset screen.
To create a basic dataset, go to Datasets. Default settings include those inherited from the parent dataset.
Select a dataset (root, parent, or child), then click Add Dataset.
Enter a value in Name.
Select the Dataset Preset option you want to use. Options are:
Generic sets ACL permissions equivalent to Unix permissions 755, granting the owner full control and the group and other users read and execute privileges.
SMB, Apps, and Multiprotocol inherit ACL permissions based on the parent dataset. If there is no ACL to inherit, one is calculated granting full control to the owner@, group@, members of the builtin_administrators group, and domain administrators. Modify control is granted to other members of the builtin_users group and directory services domain users.
Apps includes an additional entry granting modify control to group 568 (Apps).
If creating an SMB or multi-protocol (SMB and NFS) share, the dataset name auto-populates the share name field.
If you plan to deploy container applications, the system automatically creates the ix-applications dataset, but this dataset is not used for application data storage. If you want to store data by application, create the dataset(s) first, then deploy your application. When creating a dataset for an application, select Apps as the Dataset Preset. This optimizes the dataset for use by an application.
If you want to configure advanced setting options, click Advanced Options. For the Sync option, we recommend production systems with critical data use the default Standard choice or increase to Always. Choosing Disabled is only suitable in situations where data loss from system crashes or power loss is acceptable.
Select either Sensitive or Insensitive from the Case Sensitivity dropdown. The Case Sensitivity setting is found under Advanced Options and is not editable after saving the dataset.
Click Save.
Review the Dataset Preset and Case Sensitivity under Advanced Options on the Add Dataset screen before clicking Save. You cannot change these or the Name setting after clicking Save.
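As a hedged illustration, you can confirm the case sensitivity setting of an existing dataset from the TrueNAS shell with the zfs utility; the dataset name below is a placeholder:

```shell
# Show the ZFS casesensitivity property for the share dataset; this
# property is fixed at dataset creation time.
zfs get casesensitivity tank/shares/timemachine
```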
To use the Add SMB screen, click Add on the Windows (SMB) Shares widget to open the screen.
Set the Path to the existing dataset created for the share, or to where you want to add the dataset, then click Create Dataset.
Enter a name for the dataset and click Create Dataset. The dataset name populates the share Name field and updates the Path automatically. The dataset name becomes the share name. Leave this as the default.
If you change the name follow the naming conventions for:
Set the Purpose to Basic time machine share.
Select Enabled to allow sharing of this path when the SMB service is activated. Leave it cleared if you want to disable the share without deleting the configuration.
Finish customizing the share, then click Save.
Do not start the SMB service when prompted; start it after configuring the SMB service.
Click the options menu on the Windows (SMB) Share widget, then click Configure Service to open the SMB Service screen.
You can also go to System Settings > Services and scroll down to SMB. If using the Services screen, click the toggle to turn off the SMB service if it is running, then click edit Configure to open the SMB Service settings screen.
Click Advanced Settings.
Verify or select Enable Apple SMB2/3 Protocol Extension to enable it, then click Save.
Restart the SMB service.
When creating a share, do not attempt to set up the root or pool-level dataset for the share. Instead, create a new dataset under the pool-level dataset for the share. Setting up a share using the root dataset leads to storage configuration issues.
Enable Shadow Copies exports ZFS snapshots as Shadow Copies for Microsoft Volume Shadow Copy Service (VSS) clients.
Shadow Copies, also known as the Volume Shadow Copy Service (VSS) or Previous Versions, is a Microsoft service for creating volume snapshots. You can use shadow copies to restore previous versions of files from within Windows Explorer.
By default, all ZFS snapshots for a dataset underlying an SMB share path are presented to SMB clients through the volume shadow copy service or are accessible directly with SMB when the hidden ZFS snapshot directory is within the SMB share path.
Before you activate Shadow Copies in TrueNAS, there are a few caveats:
Shadow Copies might not work if you have not updated the Windows system to the latest service pack. If previous versions of files to restore are not visible, use Windows Update to ensure the system is fully up-to-date.
Shadow Copies support only works for ZFS pools or datasets.
You must configure SMB share dataset or pool permissions appropriately.
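As a hedged illustration, the snapshots that become Shadow Copies can be reviewed from the TrueNAS shell with the zfs utility. The pool, dataset, and mountpoint names below are placeholders:

```shell
# List the ZFS snapshots of the share dataset; these appear to Windows
# clients as Previous Versions when shadow copies are enabled:
zfs list -t snapshot -r tank/shares/projects

# The same snapshots are reachable read-only through the hidden .zfs
# directory at the dataset mountpoint:
ls /mnt/tank/shares/projects/.zfs/snapshot
```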
To enable shadow copies, go to Shares > Windows (SMB) Shares and locate the share.
If listed on the widget, select the Edit option for the share.
If not listed, click Windows (SMB) Shares to open the Sharing > SMB list-view screen. Select the share, click the options menu for the share, then click Edit to open the Edit SMB screen.
Click Advanced Options, scroll down to Other Options, and then select Enable Shadow Copies.
Click Save.
Users with an SMB client cannot delete Shadow copies. Instead, the administrator uses the TrueNAS web interface to remove snapshots.
Disable shadow copies for an SMB share by clearing the Enable Shadow Copies checkbox in the Other Options section of the Advanced Options on the Edit SMB screen for the SMB share.
Disabling does not prevent access to the hidden ZFS snapshot directory within the share path.
When creating a share, do not attempt to set up the root or pool-level dataset for the share. Instead, create a new dataset under the pool-level dataset for the share. Setting up a share using the root dataset leads to storage configuration issues.
SMB Home Shares are a legacy feature for organizations looking to maintain existing SMB configurations. They are not recommended for new deployments.
Future TrueNAS SCALE releases can introduce instability or require configuration changes affecting this legacy feature.
TrueNAS does not recommend setting up home shares with the Use as Home Share option, found in the Add SMB and Edit SMB screen Advanced Options settings, in the Other Options section. This option is for organizations still using the legacy home shares option of adding a single SMB share to provide a personal directory for every user account.
Users wanting to create the equivalent of home shares should use the instructions in the Adding Private SMB Datasets and Shares section below for the recommended method for creating private shares and datasets.
The legacy home shares provide each user a personal home directory when connecting to the share. These home directories are not accessible by other users. You can use only one share as the home share, but you can create as many non-home shares as you need or want.
Other options for configuring individual user directories include:
Creating an SMB home share requires configuring the system storage and provisioning local users or joining Active Directory.
This option allows creating private shares and datasets for users that require the equivalent of the legacy home share. It is not intended for every user on the system. Setting up private SMB shares and datasets prevents the system from showing them to all users with access to the root level of the share. Examples of private SMB shares are those for backups, system configuration, and users or departments that need to keep information private from other users.
Before setting up SMB shares, check system alerts to verify there are no errors related to Active Directory connections. Resolve any Active Directory issues before proceeding. If Active Directory cannot bind with TrueNAS, you cannot start the SMB service after making changes.
To add private shares and datasets for users that require home directories:
Create the share using the Private SMB Datasets and Shares preset.
Configure the share dataset ACL to use the NFSv4_HOME preset.
Create users either manually or through Active Directory.
TrueNAS must be joined to Active Directory or have at least one local SMB user before creating an SMB share. When creating an SMB user, ensure that Samba Authentication is enabled. You cannot access SMB shares using the root user, TrueNAS built-in user accounts, or those without Samba Authentication selected.
You can use an existing dataset for the share or create a new one. You can either add the share when you create the dataset on the Add Dataset screen, or create the dataset when you add the share on the Add SMB screen. For a simple SMB share and dataset, use either method. To customize the dataset, use the Add Dataset screen to access advanced dataset settings; to customize the share, use the Add SMB screen to access advanced share settings. This procedure covers creating the share and dataset from the Add SMB screen.
To create an alternative to the legacy SMB home share:
Go to Shares, click Add on the Windows (SMB) Shares widget to open the Add SMB screen.
If you already created the dataset, add the share with the correct share preset on this screen. If not, you can create the share and dataset together on this screen using the same preset.
Browse to or enter the location of an existing dataset or path to where you want to create the dataset to populate the Path for the share. To add a dataset, click Create Dataset, enter a name for the dataset, then click Create Dataset. For example, creating a share and dataset named private.
Follow naming conventions for:
By default, the dataset name populates the share Name field and becomes the share name. The share and dataset must have the same name. It also updates the Path automatically.
Set Purpose to the Private SMB Datasets and Shares preset and click Advanced Options to show the additional settings. Configure the options you want to use.
(Optional) Select Enable for audit logging.
Scroll down to Other Options and select Export Recycle Bin to move files deleted in the share to a recycle bin in that dataset instead of deleting them immediately.
Deleted files are renamed to a per-user subdirectory within the recycle bin directory for the dataset.
Click Save.
Enable or restart the SMB service when prompted and make the share available on your network.
After saving the dataset and if not already set for the dataset, set the ACL permissions.
After creating the share and dataset, edit ACL permissions. You can access the Edit ACL screen either from the Datasets or the Shares screens.
If starting on the Datasets screen, select the dataset row, then click Edit on the Permissions widget to open the Edit ACL screen. See Setting Up Permissions for more information on editing dataset permissions.
If starting on the Shares screen, select the share on the Windows (SMB) Share widget, then click Edit Filesystem ACL to open the Edit ACL screen. Select the option to edit the file system ACL not the share permissions. See SMB Shares for detailed information on editing the share dataset permissions.
To set the permissions for the private dataset and share (the home share alternative scenario), select the HOME (for a POSIX ACL) or NFS4_HOME (for an NFSv4 ACL) preset option to correctly configure dataset permissions.
Click the Owner dropdown and select the administration user with full control, then repeat for Group. You can set the owning group to your Active Directory domain admins. Click Apply Owner and Apply Group.
Next, click Use Preset and choose NFS4_HOME. If the dataset has a POSIX ACL the preset is HOME. Click Continue, then click Save Access Control List.
Next, add the users that need a private dataset and share.
As of SCALE 22.12 (Bluefin) and later, TrueNAS does not support SMB client operating systems that are labeled by their vendor as End of Life or End of Support. This means MS-DOS (including Windows 98) clients, among others, cannot connect to TrueNAS SCALE SMB servers.
The upstream Samba project that TrueNAS uses for SMB features notes in the 4.11 release that the SMB1 protocol is deprecated and warns portions of the protocol might be further removed in future releases. Administrators should work to phase out any clients using the SMB1 protocol from their environments.
Go to Credentials > Users and click Add. Create a new user name and password. For home directories, make the username all lowercase.
Add the user the private share is for and configure permissions to allow them to log in to the share and see a folder matching their username.
By default, the user Home Directory is set to /var/empty. You must change this to the path for the new parent dataset created for home directories. Select the path /mnt/poolname/datasetname/username where poolname is the name of the pool where you added the share dataset, datasetname is the name of the dataset associated with the share, and username is the username (all lowercase) and is also the name of the home directory for that username. Select Create Home Directory.
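The home directory path convention described above can be sketched in Python; the pool, dataset, and user names here are hypothetical placeholders, not values from a real system:

```python
# Sketch of the /mnt/poolname/datasetname/username convention described above.
def home_directory(pool: str, dataset: str, username: str) -> str:
    # Home-share usernames are conventionally all lowercase.
    return f"/mnt/{pool}/{dataset}/{username.lower()}"

# "tank" and "homes" are hypothetical pool and dataset names.
print(home_directory("tank", "homes", "JSmith"))  # /mnt/tank/homes/jsmith
```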
Click Save. TrueNAS adds the user and creates the home directory for the user.
If existing users require access to a home share, go to Credentials > Users, select the user, click Edit and add the home directory as described above.
SCALE 24.04 changes the default user home directory location from /nonexistent to /var/empty. This new directory is an immutable directory shared by service accounts and accounts that should not have a full home directory.
The 24.04.01 maintenance release introduces automated migration to force home directories of existing SMB users from /nonexistent to /var/empty.
You can use Active Directory or LDAP to create share users.
If not already created, add a pool, then join Active Directory.
Go to Storage and create a pool.
Next, set up the Active Directory that you want to share resources with over your network.
When creating the share for this dataset, use the SMB preset for the dataset but do not add the share from the Add Dataset screen.
Do not share the root directory!
Go to Shares and follow the instructions listed above using the Private SMB Dataset and Share preset, and then modifying the file system permissions of the dataset to use the NFSv4_HOME ACL preset.
There are two normalization forms for a Unicode character with diacritical marks: decomposed (NFD) and pre-composed (NFC).
Take, for example, the character ä (a + umlaut) and the encoding differences between NFC (b'\xc3\xa4') and NFD (b'a\xcc\x88').
The macOS SMB client has historically forced, and still forces, normalization of Unicode strings to NFC before generating network traffic to the remote SMB server.
The practical impact is that a file name containing NFD diacritics on a remote SMB server (TrueNAS, Windows, etc.) might be visible in the directory listing in the macOS SMB client, and thereby in Finder, but any operations on the file (edits, deletions, etc.) have undefined behavior because a file with the NFC name does not exist on the remote server.
>>> os.listdir(".")
['220118_M_HAN_MGK_X_4_Entwässerung.pdf']
>>> os.unlink('220118_M_HAN_MGK_X_4_Entwässerung.pdf')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
FileNotFoundError: [Errno 2] No such file or directory: '220118_M_HAN_MGK_X_4_Entwässerung.pdf'
>>> os.listdir(".")
['220118_M_HAN_MGK_X_4_Entwässerung.pdf']
Above is a short example of a macOS SMB client attempting to delete a file with NFD normalization on a remote Windows server. The name returned by the directory listing (NFD) and the name passed to the delete operation (NFC) render identically but are different byte sequences, so the delete fails.
Short of Apple providing a fix for this, the only strategy for an administrator to address these issues is to rename the files with pre-composed (NFC) form. Unfortunately, normalization is not guaranteed to be lossless.
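The encodings in the example above can be reproduced with Python's standard-library unicodedata module:

```python
import unicodedata

composed = "\u00e4"     # ä as a single pre-composed code point (NFC)
decomposed = "a\u0308"  # "a" followed by a combining umlaut (NFD)

# The two strings render identically but are different byte sequences.
assert composed != decomposed
assert composed.encode("utf-8") == b"\xc3\xa4"
assert decomposed.encode("utf-8") == b"a\xcc\x88"

# normalize() converts between the forms; for this character it is lossless.
assert unicodedata.normalize("NFC", decomposed) == composed
assert unicodedata.normalize("NFD", composed) == decomposed
```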
For more information see Unicode Normalization Forms or Combining Diacritical Marks.
When creating a share, do not attempt to set up the root or pool-level dataset for the share. Instead, create a new dataset under the pool-level dataset for the share. Setting up a share using the root dataset leads to storage configuration issues.
SMB multichannel allows servers to use multiple network connections simultaneously by combining the bandwidth of several network interface cards (NICs) for better performance.
SMB multichannel does not function if you combine NICs into a LAGG.
If you already have clients connected to SMB shares, disconnect them before activating multichannel.
After you connect a client to their SMB share, open PowerShell as an administrator on the client and enter Get-SmbMultichannelConnection. The terminal should list multiple server IPs. You can also enter Get-SmbMultichannelConnection | ConvertTo-Json and verify that CurrentChannels is more than 1.
The Data Protection section allows users to set up multiple redundant tasks that protect or back up data in case of drive failure.
Scrub Tasks and S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) Tests can provide early disk failure alerts by identifying data integrity problems and detecting various indicators of drive reliability.
Cloud Sync, Periodic Snapshot, Rsync, and Replication Tasks provide backup storage for data and allow users to revert the system to a previous configuration or point in time.
When TrueNAS performs a scrub, ZFS scans the data on a pool. Scrubs identify data integrity problems, detect silent data corruptions caused by transient hardware issues, and provide early disk failure alerts.
TrueNAS generates a default scrub task when you create a new pool and sets it to run every Sunday at 12:00 AM.
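For illustration only, the default weekly schedule (every Sunday at 12:00 AM) resolves to a next-run time roughly as in this standard-library sketch; TrueNAS uses its own scheduler, not this code:

```python
from datetime import datetime, timedelta

def next_sunday_midnight(now: datetime) -> datetime:
    # Sunday is weekday() == 6; find the next Sunday 00:00 strictly after `now`.
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    days_ahead = (6 - now.weekday()) % 7
    candidate = midnight + timedelta(days=days_ahead)
    if candidate <= now:
        candidate += timedelta(days=7)
    return candidate

# A Wednesday afternoon resolves to the upcoming Sunday at midnight.
print(next_sunday_midnight(datetime(2024, 5, 1, 15, 30)))  # 2024-05-05 00:00:00
```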
Resilvering is the process of copying data to a replacement disk, and completing it as quickly as possible is important. Resilvering is a high-priority task that can run in the background while the system performs other functions, but this can put a higher demand on system resources. Increasing the priority of resilvers helps them finish faster, as the system runs higher-priority tasks first.
Use the Resilver Priority screen to schedule a time where a resilver task can become a higher priority for the system and when the additional I/O or CPU use does not affect normal usage.
Select Enabled, then use the Begin and End dropdown lists to define the start and end times of the priority period for the resilver. Use the Days of the Week dropdown to select the day(s) when the task can run with the higher priority.
A resilver process running during the time frame defined between the beginning and end times likely runs faster than during times when demand on system resources is higher. We advise you to avoid putting the system under any intensive activity or heavy loads (replications, SMB transfers, NFS transfers, Rsync transfers, S.M.A.R.T. tests, pool scrubs, etc.) during a resilver process.
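The Begin/End/Days-of-the-Week window amounts to a simple membership test. Here is a minimal sketch, assuming the window does not cross midnight:

```python
from datetime import datetime, time

def in_priority_window(now: datetime, begin: time, end: time, days: set[int]) -> bool:
    # days uses Python's weekday numbering: Monday == 0 ... Sunday == 6.
    # Assumes begin <= end (a window that does not cross midnight).
    return now.weekday() in days and begin <= now.time() <= end

# Example: priority window 21:00-23:59 on Saturday (5) and Sunday (6).
window = (time(21, 0), time(23, 59), {5, 6})
print(in_priority_window(datetime(2024, 5, 4, 22, 0), *window))  # True (Saturday night)
print(in_priority_window(datetime(2024, 5, 6, 22, 0), *window))  # False (a Monday)
```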
TrueNAS needs at least one data pool to create a scrub task.
To create a scrub task for a pool, go to Data Protection and click ADD in the Scrub Tasks window.
Select a preset schedule from the dropdown list or click Custom to create a new schedule for when to run a scrub task. Custom opens the Advanced Scheduler window.
To view the progress of a scrub task, check the status under the Next Run column.
To edit a scrub, go to Data Protection and click the scrub task you want to edit.
This section has tutorials for configuring and managing data backups between TrueNAS and various third-party cloud service providers. This article provides instructions on adding a cloud sync task, configuring environment variables, running an unscheduled sync task, creating a copy of a task with a reversed transfer mode, and troubleshooting common issues with some cloud storage providers.
TrueNAS can send, receive, or synchronize data with a cloud storage provider. Cloud sync tasks allow for single-time transfers or recurring transfers on a schedule. They are an effective method to back up data to a remote location.
These providers are supported for Cloud Sync tasks in TrueNAS SCALE:
Using the cloud means data can go to a third-party commercial vendor not directly affiliated with iXsystems. You should fully understand vendor pricing policies and services before using them for cloud sync tasks.
iXsystems is not responsible for any charges incurred from using third-party vendors with the cloud sync feature.
You must have:
You can create cloud storage account credentials using Credentials > Backup Credentials > Cloud Credentials before adding the sync task, or add them while configuring the cloud sync task using Add on the Data Protection > Cloud Sync Tasks widget, which opens the Cloudsync Task Wizard. See the Cloud Credentials article for instructions on adding a backup cloud credential.
To add a cloud sync task, go to Data Protection > Cloud Sync Tasks and click Add. The Cloudsync Task Wizard opens.
Select an existing backup credential from the Credential dropdown list. If not already added as a cloud credential, click Add New to open the Cloud Credentials screen to add the credential. Click Save to close the screen and return to the wizard.
Click Next to open the Where and When wizard screen.
Select the options you want to use for Direction and Transfer Mode. In the Folder field, select the remote location to push data to or pull data from.
Select the dataset location in Directory/Files. Browse to the dataset to use on SCALE for data storage. Click the arrow to the left of the name to expand it, then click on the name to select it.
If Direction is set to PUSH, click on the folder icon to add / to the Folder field.
Cloud provider settings change based on the credential you select. Select or enter the required settings that include where files are stored. If shown, select the bucket on the Bucket dropdown list.
Select the time to run the task from the Schedule options.
Click Save to add the task.
Use Dry Run to test the configuration before clicking Save or select the option on the Cloud Sync Task widget after you click Save. TrueNAS adds the task to the Cloud Sync Task widget with the Pending status until the task runs on schedule.
The option to encrypt data transferred to or from a cloud storage provider is available in the Advanced Options settings.
Select Remote Encryption to use rclone crypt encryption during pull and push transfers. With Pull selected as the Transfer Direction, the Remote Encryption decrypts files stored on the remote system before the transfer. This requires entering the same password used to encrypt data in both Encryption Password and Encryption Salt.
With Push selected as the Transfer Direction, data is encrypted before it is transferred and stored on the remote system. This also requires entering the same password used to encrypt data in both Encryption Password and Encryption Salt.
The rclone project has identified known issues with Filename Encryption in certain configurations, such as when long file names are used. See SSH_FX_BAD_MESSAGE when syncing files with long filename to encrypted sftp storage. In some cases, this can prevent backup jobs from completing or being restored.
We do not recommend enabling Filename Encryption for any cloud sync tasks that did not previously have it enabled. Users with existing cloud sync tasks that have this setting enabled must leave it enabled on those tasks to be able to restore those existing backups. Do not enable file name encryption on new cloud sync tasks!
When Filename Encryption is selected, transfers encrypt and decrypt file names with the rclone Standard file name encryption mode, and the original directory structure of the files is preserved. When disabled, encryption does not hide file names or directory structure, file names can be up to 246 characters long, and transfers can use sub-paths and copy single files. When enabled, file names are encrypted and limited to 143 characters, directory structure is visible, and files with identical names have identical uploaded names; transfers can still use sub-paths, copy single files, and use shortcuts to shorten directory recursion.
Sync keeps all the files identical between the two storage locations. If the sync encounters an error, it does not delete files in the destination.
One common error occurs when the Dropbox copyright detector flags a file as copyrighted.
Syncing to a Backblaze B2 bucket does not delete files from the bucket, even after deleting those files locally. Instead, files are tagged with a version number or moved to a hidden state. To automatically delete old or unwanted files from the bucket, adjust the Backblaze B2 Lifecycle Rules.
Directories deleted in Backblaze B2 and notated with an asterisk do not display in the SCALE UI. These are essentially empty directories, and the Backblaze API restricts them so they do not display.
Sync cannot delete files stored in Amazon S3 Glacier or S3 Glacier Deep Archive. Restore these files by another means, like the Amazon S3 console.
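For illustration, the Sync, Copy, and Move transfer modes behave roughly like this sketch, with plain dictionaries standing in for the two storage locations:

```python
def copy_mode(src: dict, dst: dict) -> None:
    # Copy duplicates source files, overwriting same-named destination files.
    dst.update(src)

def move_mode(src: dict, dst: dict) -> None:
    # Move transfers files, then deletes them from the source.
    dst.update(src)
    src.clear()

def sync_mode(src: dict, dst: dict) -> None:
    # Sync makes the destination identical to the source, which removes
    # destination files that no longer exist on the source.
    dst.clear()
    dst.update(src)

src = {"a.txt": 1, "b.txt": 2}
dst = {"b.txt": 0, "stale.txt": 9}
sync_mode(src, dst)
print(sorted(dst))  # ['a.txt', 'b.txt'] -- 'stale.txt' was removed
```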
Advanced users can write scripts that run immediately before or after the cloud sync task.
Use either the Advanced Options screen accessed from the Cloudsync Task Wizard or the Edit Cloud Sync Task screen. Scroll down to Advanced Options, then enter environment variables or scripts in the Pre-script or Post-script fields. The Post-script field only runs when the cloud sync task succeeds.
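The behavior of a post-script that runs only on success resembles this minimal sketch; the hook commands here are hypothetical examples, not TrueNAS internals:

```python
import subprocess

def run_task_with_hooks(task_cmd, pre_script=None, post_script=None):
    # The pre-script always runs before the transfer.
    if pre_script:
        subprocess.run(pre_script, shell=True, check=True)
    result = subprocess.run(task_cmd, shell=True)
    # The post-script runs only if the transfer exited successfully.
    if result.returncode == 0 and post_script:
        subprocess.run(post_script, shell=True, check=True)
    return result.returncode

# 'true' exits 0, so the post-script runs; with 'false' it would be skipped.
run_task_with_hooks("true", pre_script="echo pre", post_script="echo post")
```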
Saved tasks activate based on the schedule set for the task. Click Run Now on the Cloud Sync Task widget to run the sync task before the saved scheduled time. You can also expand the task on the Cloud Sync Tasks screen and click Run Now on the task details screen.
An in-progress cloud sync must finish before another can begin. Stopping an in-progress task cancels the file transfer and requires starting the file transfer over.
To view logs about a running task, or its most recent run, click on the State oval.
To create a new cloud sync task that uses the same options but reverses the data transfer, select the Restore option for an existing cloud sync task on the Data Protection screen. The Restore Cloud Sync Task window opens.
Enter a name in Description for this reversed task.
Select the Transfer Mode and then define the path to a storage location on TrueNAS SCALE for the transferred data.
Click Restore.
TrueNAS saves the restored cloud sync task as another entry in Data Protection > Cloud Sync Tasks.
If you set the restore destination to the source dataset, TrueNAS may alter ownership of the restored files to root. If root did not create the original files and you need them to have a different owner, you can recursively reset their ACL permissions through the GUI.
Google Drive and G Suite are widely used tools for creating and sharing documents, spreadsheets, and presentations with team members. While cloud-based tools have inherent backups and replications included by the cloud provider, certain users might require additional backup or archive capabilities. For example, companies using G Suite for important work might be required to keep records for years, potentially beyond the scope of the G Suite subscription. TrueNAS offers the ability to easily back up Google Drive by using the built-in cloud sync.
You can add Google Drive credentials using the Add Cloud Credentials screen accessed from the Credentials > Backup Credentials > Cloud Credentials screen, or you can add them when you create a cloud sync task using the Add Cloud Sync Task screen accessed from the Data Protection > Cloud Sync Tasks screen.
To set up a cloud credential, go to Credentials > Backup Credentials and click Add in the Cloud Credentials widget.
Select Google Drive on the Provider dropdown list. The Google Drive authentication settings display on the screen.
Enter the Google Drive authentication settings.
a. Click Log In To Provider. The Google Authentication window opens.
b. Click Proceed to open the Choose an Account window.
c. Select the email account to use. Google displays the Sign In window. Enter the password and click Next. Google might display a Verify it's you window; enter a phone number where Google can text a verification code, or click Try another way.
d. Click Allow on the TrueNAS wants to access your Google Account window. TrueNAS populates Access Token with the token Google provides.
Click Verify Credentials and wait for TrueNAS to display the verification dialog with verified status. Close the dialog.
Click Save. The Cloud Credentials widget displays the new credentials. These are also available for cloud sync tasks to use.
You must add the cloud credential on the Backup Credentials screen before you create the cloud sync task.
To add a cloud sync task, go to Data Protection > Cloud Sync Tasks and click Add. The Cloudsync Task Wizard opens.
Select Google Drive on the Credential dropdown list then enter your credentials.
Click Next.
Select the direction for the sync task. PULL brings files from the cloud storage provider to the location specified in Directory/Files (this is the location on TrueNAS SCALE). PUSH sends files from the location in Directory/Files to the cloud storage provider location you specify in Folder.
Select the transfer method from the Transfer Mode dropdown list. Sync keeps files identical on both TrueNAS SCALE and the remote cloud provider server. If the sync encounters an error, destination server files are not deleted. Copy duplicates files on both TrueNAS SCALE and the remote cloud provider server. Move transfers the files to the destination server and then deletes the copies on the source server. It also overwrites files with the same names on the destination.
Enter or browse to the dataset or folder directory. Click the arrow to the left of / under the Directory/Files and Folder fields to expand the tree. Select the TrueNAS SCALE dataset path in Directory/Files and the Google Drive path in Folder. If PUSH is the selected Direction, Directory/Files is the location on TrueNAS SCALE with the files you want to copy, sync, or move to the provider. If Direction is set to PULL, it is the location on TrueNAS SCALE where you want to copy, sync, or move files. Click the arrow to the left of / again to collapse the folder tree.
Select the preset from the Schedule dropdown that defines when the task runs. For a specific schedule, select Custom and use the Advanced Scheduler. Clearing the Enable checkbox makes the configuration available without allowing the specified schedule to run the task.
To manually activate a saved task, go to Data Protection > Cloud Sync Tasks and click Run Now for the cloud sync task you want to run. Click CONTINUE or CANCEL in the Run Now dialog.
(Optional) Click Advanced Options to set any advanced option you want or need for your use case or to define environment variables. Scroll down to and enter the variables or scripts in either the Pre-script or Post-script fields. These fields are for advanced users.
Click Dry Run to test your settings before you click Save. TrueNAS connects to the cloud storage provider and simulates a file transfer but does not send or receive data.
The new task displays on the Cloud Sync Tasks widget with the status of PENDING until it runs. If the task completes without issue the status becomes SUCCESS.
See Using Scripting and Environment Variables for more information on environment variables.
One caveat is that Google Docs and other files created with Google tools have their own proprietary set of permissions, and their read/write characteristics are unknown to the system over a standard file share. As a result, those files are unreadable.
To allow Google-created files to become readable, allow link sharing to access the files before the backup. Doing so ensures that other users can open the files with read access, make changes, and then save them as another file if further edits are needed. Note that this is only necessary if the file was created using Google Docs, Google Sheets, or Google Slides; other files should not require modification of their share settings.
TrueNAS is perfect for storing content, including cloud-based content, for the long term. Not only is it simple to sync and backup from the cloud, but users can rest assured that their data is safe, with snapshots, copy-on-write, and built-in replication functionality.
TrueNAS can send, receive, or synchronize data with the cloud storage provider Storj. Cloud sync tasks allow for single-time transfers or recurring transfers on a schedule. They are an effective method to back up data to a remote location.
This procedure provides instructions to set up both Storj and SCALE. To take advantage of the lower-cost benefits of the Storj-TrueNAS cloud service, you must create your Storj account using the link provided on the Add Cloud Credentials screen.
You must also create and authorize the storage buckets on Storj for use by SCALE.
iXsystems is not responsible for any charges you incur using a third-party vendor with the cloud sync feature.
TrueNAS supports major providers like Amazon S3, Google Cloud, and Microsoft Azure. It also supports many other vendors. To see the full list of supported vendors, go to Credentials > Backup Credentials > Cloud Credentials click Add, and open the Provider dropdown list.
You must have all system storage (pool and datasets or zvols) configured and ready to receive or send data.
To create your cloud sync task for a Storj-TrueNAS transfer you:
Create the SCALE cloud credential.
Adding the cloud credential in SCALE includes using the link to create the Storj-TrueNAS account, creating a new bucket, and obtaining the S3 authentication credentials you need to complete the process in SCALE.
Create the Storj-TrueNAS account.
You must create a new Storj-TrueNAS account to use SCALE to access a Storj account.
Add a new Storj bucket.
Create Storj S3 access for the new bucket.
Finish creating the SCALE cloud credential using the S3 access and secret keys provided by Storj.
Create the cloud sync task for one bucket.
The instructions in this section cover adding the Storj-iX account and configuring the cloud service credentials in SCALE and Storj. The process includes going to Storj to create a new Storj-iX account and returning to SCALE to enter the S3 credentials provided by Storj.
Go to Credentials > Backup Credentials and click Add on the Cloud Credentials widget. The Add Cloud Credential screen opens with Storj displayed as the default provider in the Provider field.
Enter a descriptive name to identify the credential in the Name field.
Click Signup for account to create your Storj-TrueNAS account. This opens the Storj new account screen for TrueNAS.
You must use this link to create your Storj account to take advantage of the benefits of the Storj-TrueNAS pricing!
After setting up your Storj-TrueNAS account, create your Storj bucket and the Storj S3 access for the new bucket.
Enter the authentication information provided by Storj in the Access Key ID and Secret Access Key fields.
Click Verify Credentials and wait for the system to verify the credentials.
Click Save.
After completing this configuration form, you can set up the cloud sync task.
You can create your iX-Storj cloud service account using two methods:
The Storj Create your Storj account web page opens. Enter your information in the fields, select the I agree to the Terms of Service and Privacy Policy checkbox, then click the button at the bottom of the screen. The Storj main dashboard opens.
Now you can add the storage bucket you want to use in your Storj-TrueNAS account and SCALE cloud sync task.
From the Storj main dashboard:
Click Buckets on the navigation panel on the left side of the screen to open the Buckets screen.
Click New Bucket to open the Create a bucket window.
Enter a name in Bucket Name using lowercase alphanumeric characters, with no spaces between characters, then click Continue to open the Encrypt your bucket window.
Select the encryption option you want to use. Select Generate passphrase to let Storj provide the encryption or select Enter Passphrase to enter your own. If you already have a Storj account and want to use the same passphrase for your new bucket, select Enter Passphrase.
If you select Generate a passphrase, Storj allows you to download the encryption keys. You must keep encryption keys stored in a safe place where you can back up the file. Select I understand, and I have saved the passphrase then click Download.
Click Continue to complete the process and open the Buckets screen with your new bucket.
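The bucket-name rule above (lowercase alphanumeric characters, no spaces) can be checked with a short sketch. Storj's actual validation may accept more than this pattern, so treat it as an assumption:

```python
import re

# Pattern for the naming rule stated above: lowercase letters and digits only.
# Storj's real validation may be broader (this is an assumption, not its spec).
BUCKET_NAME = re.compile(r"^[a-z0-9]+$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_NAME.match(name))

print(is_valid_bucket_name("ixstorj1"))   # True
print(is_valid_bucket_name("My Bucket"))  # False (uppercase and a space)
```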
After creating your bucket, add S3 access for the new bucket(s) you want to use in your Storj-TrueNAS account and the SCALE cloud sync task.
Click Access to open the Access Management dashboard, then click Create S3 Credentials on the middle S3 credentials widget.
The Create Access window opens with Type set to S3 Credentials.
Enter the name you want to use for this credential. Our example uses the name of the bucket we created.
Select the permissions you want to allow this access from the Permissions dropdown and select the bucket you want to have access to this credential from the dropdown list. For example, select All for Permissions, then select the one bucket we created ixstorj1.
If you want to use the SCALE option to add new buckets in SCALE, set Storj Permissions and Buckets to All.
Select Add Date (optional) if you want to set the duration or length of time you want to allow this credential to exist. This example set this to Forever. You can select a preset period or use the calendar to set the duration.
Click Encrypt My Access to open the Encryption Information dialog, then click Continue to open the Select Encryption options window.
Select the encryption option you want to use. Select Generate Passphrase to allow Storj to provide the encryption passphrase, or select Create My Own Passphrase to enter a passphrase of your choice.
Use Copy to Clipboard or Download.txt to obtain the Storj-generated passphrase. Keep this passphrase along with the access keys in a safe place where you can back up the file.
If you lose your passphrase, neither Storj nor iXsystems can help you recover your stored data!
Click Create my Access to obtain the access and secret keys. Use Download.txt to save these keys to a text file.
This completes the process of setting up your Storj buckets and S3 access. Enter these keys in the Authentication fields in TrueNAS SCALE on the Add Cloud Credential screen to complete setting up the SCALE cloud credential.
To add the Storj cloud sync task, go to Data Protection > Cloud Sync Tasks:
Click Add to open the Cloudsync Task Wizard.
Select the Storj credential on the Credential dropdown list, then click Next to show the What and When wizard screen.
Set the Direction and Transfer Mode you want to use.
Browse to the dataset or zvol you want to use on SCALE for data storage. Click the arrow to the left of the name to expand it, then click on the name to select it.
If Direction is set to PUSH, click on the folder icon to add / to the Folder field.
If you set the Storj S3 access to apply only to the new bucket created in Storj, you can only use that bucket, and selecting Add New results in an error. If you set the Storj S3 bucket access to All, you can either select the bucket you created in Storj or click Add New to create a new Storj bucket here in SCALE.
If Direction is set to PUSH, click on the folder icon for the Folder field to select the desired folder in the Storj bucket from the dropdown list if not copying/moving/syncing the entire contents of the bucket with the dataset selected in the Directory/Files field.
Set the task schedule for when to run this task.
Click Save.
TrueNAS adds the task to the Cloud Sync Task widget with the Pending status until the task runs on schedule. To test the task, click Dry Run or Run Now to start the task apart from the scheduled time.
Google Photos works best in TrueNAS using a Google Photos API key and rclone token.
On the Google API dashboard, click the dropdown menu next to the Google Cloud logo and select your project. If you do not have a project, click NEW PROJECT and enter a value in Project name, Organization, and Location.
After you select your project, click Enabled APIs & Services on the left menu, then click + ENABLE APIS AND SERVICES.
Enter photos library api in the search bar, then select Photos Library API and click ENABLE.
Next, click OAuth consent screen on the left menu, select EXTERNAL, then click CREATE.
Enter a value in App name and User support email.
Enter an email address in the Developer contact information section, then click SAVE AND CONTINUE.
Continue to the Add users section, enter your email address, then click ADD.
On the OAuth consent screen, click PUBLISH APP under Testing and push the app to production.
Click Credentials on the left menu, then click + CREATE CREDENTIALS and select OAuth client ID.
Select Desktop app in the Application type dropdown, then enter a name for the client ID and click CREATE.
Copy and save your client ID and secret, or download the JSON file.
Download rclone for your OS and open a command-line utility. The examples in this article use PowerShell on Windows.
Enter rclone config, then enter n to create a new remote.
Enter a name for the new remote, then enter the number from the list corresponding to Google Photos.
Enter the client ID and secret you saved when you created the Google Photos API credentials, then enter false to keep the Google Photos backend read-only.
Do not edit the advanced config. When prompted about automatically authenticating rclone with the remote, enter y.
A browser window opens to authorize rclone access. Click Allow.
In the command line, enter y when prompted about media item resolution to complete the configuration.
Copy and save the type, client_id, client_secret, and token, then enter y to keep the new remote.
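When the configuration completes, rclone stores a stanza similar to the following in its config file. This is a sketch with placeholder values: the remote name gphotos and every credential value shown are assumptions you replace with your own saved values.

```shell
#!/bin/sh
# Sketch of the rclone config stanza produced by the steps above.
# The remote name "gphotos" and all credential values are placeholders.
cat > gphotos-example.conf <<'EOF'
[gphotos]
type = google photos
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET
read_only = true
token = {"access_token":"...","token_type":"Bearer","refresh_token":"...","expiry":"..."}
EOF
cat gphotos-example.conf
```

The client_id, client_secret, and token values in this stanza are the values you paste into the TrueNAS Cloud Credential fields in the next steps.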
Open your TrueNAS Web UI. Go to Credentials > Backup Credentials and click Add in the Cloud Credentials widget.
Select Google Photos as the Provider and enter a name.
Do not click Log In To Provider.
Paste the Google Photos API client ID and client secret in the OAuth Client ID and OAuth Client Secret fields.
Paste your rclone token into the Token field.
Click Verify Credential to ensure you filled out the fields correctly, then click Save.
To add a cloud sync task, go to Data Protection > Cloud Sync Tasks and click Add. The Cloudsync Task Wizard opens.
Select an existing backup credential from the Credential dropdown list. If not already added as a cloud credential, click Add New to open the Cloud Credentials screen to add the credential. Click Save to close the screen and return to the wizard.
Click Next to open the Where and When wizard screen.
Select the option from Direction and in Transfer Mode. Select the location where to pull from or push data to in the Folder field.
Select the dataset location in Directory/Files. Browse to the dataset to use on SCALE for data storage. Click the arrow to the left of the name to expand it, then click on the name to select it.
If Direction is set to PUSH, click on the folder icon to add / to the Folder field.
Cloud provider settings change based on the credential you select. Select or enter the required settings that include where files are stored. If shown, select the bucket on the Bucket dropdown list.
Select the time to run the task from the Schedule options.
Click Save to add the task.
Use Dry Run to test the configuration before clicking Save or select the option on the Cloud Sync Task widget after you click Save. TrueNAS adds the task to the Cloud Sync Task widget with the Pending status until the task runs on schedule.
You often need to copy data to another system for backup or when migrating to a new system. A fast and secure way of doing this is by using rsync with SSH.
Rsync provides the ability to either push or pull data. The Push function copies data from TrueNAS to a remote system. The Pull function moves or copies data from a remote system and stores it in the defined Path on the TrueNAS host system.
There are two ways to connect to a remote system and run an rsync task: setting up an SSH connection or using an rsync module. You must have either an SSH connection to the remote server already configured or an rsync module configured on the remote rsync server. Each has different preparation requirements.
When the remote system is another TrueNAS, set the Rsync Mode to SSH, verify the SSH service is active on both systems, and ensure SSH keys are exchanged between systems. When the remote system is not TrueNAS, make sure that system has the rsync service activated and permissions configured for the user account name that TrueNAS uses to run the rsync task.
Create an SSH connection and keypair. Go to Credentials > Backup Credentials to add an SSH connection and keypair. Download the keys. Enter the admin user that should set up and have permission to perform the remote sync operation with the remote system. If using two TrueNAS systems with the admin user, enter admin. If one system only uses the root user, enter root.
When the Rsync Mode is SSH, update the admin user by adding the private key to the user in the UI, and then add the private key to the home directory for the admin user.
Start the SSH service on both systems. Go to System Settings > Services and enable SSH.
Create a dataset on the remote TrueNAS (or other system). Write down the host and path to the data on the remote system you plan to sync with.
Create a module on the remote system. On TrueNAS, install an rsync app (for example, Rsyncd) and configure the module.
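Key pairs can also be generated from a shell and installed manually instead of through the UI. The sketch below assumes a local working directory and a hypothetical remote host; adjust names and paths for your systems.

```shell
#!/bin/sh
# Generate an ed25519 key pair with no passphrase into the current directory.
ssh-keygen -t ed25519 -N "" -f ./replication_key -C "truenas-rsync"

# Install the public key on the remote system (hypothetical user and host),
# then keep the private key in the TrueNAS admin user's home directory:
# ssh-copy-id -i ./replication_key.pub admin@remote-system
```

The public key must end up in the remote user's authorized_keys file, which is what `ssh-copy-id` does for you.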
First, enable SSH and establish a connection to the remote server.
After establishing the SSH connection, add the rsync task.
Go to Data Protection and click Add on the Rsync Tasks widget to open the Add Rsync Task screen.
Choose a Direction for the rsync task as either Push or Pull and then define the task Schedule.
Select a User account that matches the SSH connection Username entry in the SSH Connections set up for this remote sync.
Provide a Description for the rsync task.
Select SSH in Rsync Mode. The SSH settings fields show.
Choose a connection method from the Connect using dropdown list.
If selecting SSH private key stored in user's home directory, enter the IP address or hostname of the remote system in Remote Host. Use the format username@remote_host if the username differs on the remote host. Enter the SSH port number for the remote system in Remote SSH Port. The TrueNAS default is port 22.
If selecting SSH connection from the keychain, select an existing SSH connection to a remote system or choose Create New to add a new SSH connection.
Enter a full path to a location on the remote server where you either copy information from or to in Remote Path. Maximum path length is 255 characters.
If the remote path location does not exist, select Validate Remote Path to create and define it in Remote Path.
Select the schedule to use and configure the remaining options according to your specific needs.
Click Save.
Before you create an rsync task on the host system, you must create a module on the remote system. You must define at least one module per rsyncd.conf(5) on the rsync server. The Rsync Daemon application is available in situations where configuring TrueNAS as an rsync server with an rsync module is necessary. If the non-TrueNAS remote server includes an rsync service, make sure it is turned on.
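A minimal rsyncd.conf module on the remote server might look like the sketch below. The module name backups, the path, and the uid/gid values are assumptions for illustration; see rsyncd.conf(5) for the full option list.

```shell
#!/bin/sh
# Sketch of a minimal rsyncd.conf module on the remote server.
# The module name "backups", path, and uid/gid are illustrative placeholders.
cat > rsyncd-example.conf <<'EOF'
[backups]
    path = /mnt/tank/backups
    comment = TrueNAS rsync target
    read only = no
    uid = rsyncuser
    gid = rsyncuser
EOF
cat rsyncd-example.conf
```

The module name in brackets is what you later enter in the Remote Module Name field of the rsync task.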
After configuring the rsync server, go to Data Protection and click Add on the Rsync Tasks widget to open the Add Rsync Task screen.
Enter or browse to the dataset or folder to sync with the remote server. Use the arrow to the left of the /mnt folder and each folder listed in the tree to expand and browse, then click on the name to populate the path field. Click in the User field, then select the user from the dropdown list. The user must have permission to run an rsync on the remote server.
Set the Direction for the rsync task. Select Pull to copy from the remote server to TrueNAS or Push to copy from TrueNAS to the remote server.
Select Module as the connection mode from the Rsync Mode dropdown.
Enter the remote host name or IP in Remote Host. Use the format username@remote_host when the username differs from the host entered into the Remote Host field.
Set the schedule for when to run this task, and any other options you want to use. If you need a custom schedule, select Custom to open the advanced scheduler window.
Select Enabled to enable the task. Leave it cleared to disable the task without deleting the configuration. You can still run a disabled rsync task by going to Data Protection and clicking the Run Now play_arrow icon for the rsync task.
Click Save.
Periodic snapshot tasks allow you to schedule creating read-only versions of pools and datasets at a given point in time. You can also access VMWare snapshot integration and TrueNAS SCALE storage snapshots from the Periodic Snapshot Tasks widget.
Create the required datasets or zvols before creating a snapshot task.
Go to Data Protection > Periodic Snapshot Tasks and click Add.
First, choose the dataset (or zvol) to schedule as a regular backup with snapshots, and how long to store the snapshots.
Next, define the task Schedule. If you need a specific schedule, choose Custom and use the Advanced Scheduler section below.
Configure the remaining options for your use case. For help with naming schema and lifetime settings refer to the sections below.
Click Save to save this task and add it to the list in Data Protection > Periodic Snapshot Tasks.
You can find any snapshots taken using this task in Storage > Snapshots.
To check the log for a saved snapshot schedule, go to Data Protection > Periodic Snapshot Tasks and click on the task. The Edit Periodic Snapshot Tasks screen displays where you can modify any settings for the task.
The Naming Schema determines how automated snapshot names generate. A valid schema requires the %Y (year), %m (month), %d (day), %H (hour), and %M (minute) time strings, but you can add more identifiers to the schema too, using any identifiers from the Python strptime function.
For Periodic Snapshot Tasks used to set up a replication task with the Replication Task function:
You can use custom naming schema for full backup replication tasks. If you are going to use the snapshot for an incremental replication task, use the default naming schema.
This uses some letters differently from POSIX (Unix) time functions. For example, including %z (time zone) ensures that snapshots do not have naming conflicts when daylight saving time starts and ends, and %S (second) adds finer time granularity.
Examples:
| Naming Schema | Snapshot Names Look Like |
|---|---|
| replicationsnaps-1wklife-%Y%m%d_%H:%M | replicationsnaps-1wklife-20210120_00:00, replicationsnaps-1wklife-20210120_06:00 |
| autosnap_%Y.%m.%d-%H.%M.%S-%z | autosnap_2021.01.20-00.00.00-EST, autosnap_2021.01.20-06.00.00-EST |
When referencing snapshots from a Windows computer, avoid using characters like colon (:) that are invalid in a Windows file path. Some applications limit filename or path length, and there might be limitations related to spaces and other characters. Always consider future uses and ensure the name given to a periodic snapshot is acceptable.
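Because the schema uses standard strftime identifiers, you can preview how a name renders from any shell with GNU date. The sketch below renders the first example schema for a fixed timestamp:

```shell
#!/bin/sh
# Render the example naming schema for a fixed UTC timestamp (GNU date assumed).
date -u -d "2021-01-20 06:00" +"replicationsnaps-1wklife-%Y%m%d_%H:%M"
# → replicationsnaps-1wklife-20210120_06:00
```

Previewing the name this way is a quick check that the schema includes all required identifiers before saving the task.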
A snapshot lifetime value defines how long the snapshot schedule ignores that snapshot when it looks for obsolete snapshots to remove. For example, defining a lifetime of two weeks on a snapshot created after a weekly snapshot schedule runs can result in that snapshot actually being deleted three weeks later. This is because the snapshot has a timestamp and defined lifetime that preserves the snapshot until the next time the scheduled snapshot task runs.
TrueNAS also preserves a snapshot when at least one periodic task still requires it. For example, you have two schedules: one takes a snapshot every hour and keeps them for a week, and the other takes a snapshot every day and keeps them for three years. After a week, the snapshots taken at 01.00 through 23.00 are deleted, but the snapshots taken at 00.00 are kept because the second periodic task still needs them. Those snapshots are destroyed at the end of the three years.
Use this procedure to create ZFS snapshots when using TrueNAS SCALE as a VMWare datastore.
You must have a paid edition of VMWare ESXi to use the TrueNAS SCALE VMWare Snapshots feature. ESXi free has a locked (read-only) API that prevents using TrueNAS SCALE VMWare Snapshots.
This tutorial uses ESXi version 8.
When creating a ZFS snapshot of the connected dataset, VMWare automatically takes a snapshot of any running virtual machines on the associated datastore. VMware snapshots can integrate VMware Tools, making it possible to quiesce VM snapshots, sync filesystems, take shadow copy snapshots, and more. Quiescing snapshots is the process of bringing VM data into a consistent state, suitable for creating automatic backups. Quiesced snapshots can be file-system consistent, where all pending data or file-system changes complete, or application consistent, where applications complete all tasks and flush buffers, prior to creating the snapshot. See Manage Snapshots from VMWare for more information.
VM snapshots are included as part of the connected Virtual Machine File System (VMFS) datastore and stored as a point-in-time within the scheduled or manual TrueNAS ZFS snapshot of the data or zvol backing that VMWare datastore. The temporary VMware snapshots are automatically deleted on the VMWare side, but still exist in the ZFS snapshot and are available as stable restore points.
TrueNAS Enterprise
TrueNAS Enterprise customers with TrueNAS CORE 12.0 and newer and TrueNAS SCALE 22.12.4 (Bluefin) and newer deployed can access the iXsystems TrueNAS vCenter plugin. This activates management options for TrueNAS hardware attached to vCenter Server and enables limited management of TrueNAS systems from a single interface.
Please contact iXsystems Support to learn more and schedule a time to deploy or upgrade the plugin.
Before using TrueNAS SCALE to create VMWare snapshots, configure TrueNAS to present a VMFS datastore or NFS export to your ESXi host(s) (this tutorial uses iSCSI) and then create and start your VM(s) in ESXi. Virtual machines must be running for TrueNAS to include them in VMWare snapshots, because powered-off virtual machines are already in a consistent state.
Go to Datasets and click Add Zvol to create a dedicated zvol for VMWare.
This tutorial uses virtual/vmware/zvol-01.
Create an iSCSI share. Go to Shares and click Wizard on the Block (iSCSI) Shares Targets widget.
a. Enter a name for the share. For example, vmware.
Select Device for Extent Type and select the zvol from the Device dropdown.
Leave Sharing Platform set to VMware and Target set to Create New, then click Next.
b. Set Portal to Create New. You can leave Discovery Authentication Method set to NONE, or select CHAP or Mutual CHAP and enter a Discovery Authentication Group ID. Click Add next to IP Address and select either 0.0.0.0 for IPv4 or :: for IPv6 to listen on all ports.
c. Leave Initiators blank and click Save.
In the VMWare ESXi Host Client, go to Storage, select Adapters, and then click Software iSCSI to configure the iSCSI connection.
a. Configure CHAP authentication if needed or leave set to Do not use CHAP.
b. Click Add dynamic target, enter the IP address for the TrueNAS SCALE system, and click Save Configuration to return to the Adapters screen.
c. Click Rescan to discover the iSCSI initiator. ESXi automatically adds static targets for discovered initiators. Click Software iSCSI again to confirm.
d. Go to Devices and click Rescan to discover the shared storage. ESXi adds the TrueNAS iSCSI disk to the list of devices.
Go to Datastores and click New Datastore to create a new VMFS datastore using the TrueNAS device. Then go to Virtual Machines and create your new virtual machine(s), using the new datastore for storage.
To configure TrueNAS SCALE to create VMWare snapshots, go to Data Protection and click the VMware Snapshot Integration button in the Periodic Snapshot Tasks widget to open the VMWare Snapshots screen.
Click the Add button to configure the VMWare Snapshot Task.
You must follow the exact sequence to add the VMware snapshot, or the ZFS Filesystem and Datastore fields do not populate with options available on your system. If you click in ZFS Filesystem or Datastores before you click Fetch Datastores, the creation process fails, the two fields do not populate with information from the VMWare host, and you must exit the add form or click Cancel and start again.
Enter the IP address or host name for your VMWare system in Hostname.
Enter credentials for a user on the VMware host with Create Snapshot and Remove Snapshot permissions in VMware. See Virtual Machine Snapshot Management Privileges from VMware for more information.
Click Fetch Datastores. This connects TrueNAS SCALE to the VMWare host and populates the ZFS Filesystem and Datastore dropdown fields. Make sure the correct TrueNAS ZFS dataset or zvol matching the VMware datastore is populated.
Select the TrueNAS SCALE dataset from the ZFS Filesystem dropdown list of options.
Select the VMFS datastore from the Datastore dropdown list of options.
Click Save. The saved snapshot configuration appears on the VMware Snapshots screen.
State indicates the current status of the VMware connection as PENDING, SUCCESS, or ERROR.
Create a new periodic snapshot task for the zvol or a parent dataset. If there is an existing snapshot task for the zvol or a parent dataset, VMWare snapshots are automatically integrated with any snapshots created after the VMWare snapshot is configured.
Expand the configured task on the Periodic Snapshot Tasks screen and ensure that VMware Sync is true.
To revert a VM using a ZFS snapshot, first clone the snapshot as a new dataset in TrueNAS SCALE, present the cloned dataset to ESXi as a new LUN, resignature the snapshot to create a new datastore, then stop the old VM and re-register the existing machine from the new datastore.
Clone the snapshot to a new dataset.
a. Go to Data Protection and click Snapshots on the Periodic Snapshot Tasks widget and locate the snapshot you want to recover and click on that row to expand details.
b. Click Clone to New Dataset. Enter a name for the new dataset or accept the one provided then click Clone.
The cloned zvol appears on the Datasets screen.
Share the cloned zvol to VMWare using NFS or iSCSI (this tutorial uses iSCSI).
a. Go to Shares and click Block (iSCSI) Shares Targets to access the iSCSI screen.
b. Click Extents and then click Add to open the Add Extent screen.
c. Enter a name for the new extent, select Device from the Extent Type dropdown, and select the cloned zvol from the Device dropdown. Edit other settings according to your use case and then click Save.
d. Click Associated Targets and then click Add to open the Add Associated Target screen.
e. Select the existing VMWare target from the Target dropdown. Enter a new LUN ID number or leave it blank to automatically assign the next available number. Select the new extent from the Extent dropdown and then click Save.
Go to Storage > Adapters and click Rescan to discover the new LUN. Then go to the Devices tab and click Rescan again to discover VMFS filesystems on the LUN. At this point, ESXi discovers the cloned device snapshot, but is unable to mount it because the original device is still online.
Resignature the snapshot so that it can be mounted.
a. Access the ESXi host shell using SSH or a local console connection to resignature the snapshot.
b. Enter the command esxcli storage vmfs snapshot list to view the unmounted snapshot. Note the VMFS UUID value.
c. Enter the command esxcli storage vmfs snapshot resignature -u VMFS-UUID, where VMFS-UUID is the ID of the snapshot according to the previous command output. ESXi resignatures the snapshot and automatically mounts the device.
d. Go back to Storage > Devices in the ESXi Host Client UI and click Refresh. The mounted snapshot appears in the list of devices.
e. Go to the Datastores tab. You might need to click Refresh again. A new datastore for the mounted snapshot appears in the list of datastores.
Stop the old virtual machine(s) you want to revert and use the snapshot datastore to register an existing VM from the snapshot.
a. Go to Virtual Machines in ESXi, select the existing VM(s) to revert, and click Power Off.
b. Click Create / Register VM to open the New virtual machine screen.
c. Select Register an existing virtual machine and then click Next.
d. Click Select and use the Datastore Browser to select the snapshot datastore.
Select the VM(s) you want to revert and click Next.
e. Review selections on the Ready to complete screen. If correct, click Finish.
Start the new VM(s) and verify functionality, then delete or archive the previous VM(s). Copy or migrate the VMware virtual machine to the original, non-snapshot datastore.
S.M.A.R.T. or Self-Monitoring, Analysis and Reporting Technology is a standard for disk monitoring and testing. You can monitor disks for problems using different kinds of self-tests. TrueNAS can adjust when it issues S.M.A.R.T. alerts. When S.M.A.R.T. monitoring reports a disk issue, we recommend you replace that disk. Most modern ATA, IDE, and SCSI-3 hard drives support S.M.A.R.T. Refer to your respective drive documentation for confirmation.
TrueNAS runs S.M.A.R.T. tests on disks. Running tests can reduce drive performance, so we recommend scheduling tests when the system is in a low-usage state. Avoid scheduling disk-intensive tests at the same time! For example, do not schedule S.M.A.R.T. tests on the same day as a disk scrub or other data protection task.
To test one or more disks for errors, go to Storage and click the Disks button.
Select the disks you want to test using the checkboxes to the left of the disk names. Selecting multiple disks displays the Batch Operations options.
Click Manual Test. The Manual S.M.A.R.T. Test dialog displays.
Manual S.M.A.R.T. tests on NVMe devices are currently not supported.
Next, select the test type from the Type dropdown and then click Start.
Test types differ based on the drive connection, ATA or SCSI. Test duration varies based on the test type you chose. TrueNAS generates alerts when tests discover issues.
The ATA drive connection test type options are:
For more information, refer to smartctl(8).
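From a shell, the equivalent smartctl invocations look like the sketch below. The device names sda and sdb are assumptions for illustration; the commands are echoed rather than executed here, since running them requires real disks and root privileges.

```shell
#!/bin/sh
# Build the smartctl commands for each disk (hypothetical device names).
for disk in sda sdb; do
  echo "smartctl -t short /dev/$disk"   # start a short self-test
  echo "smartctl -a /dev/$disk"         # review results and SMART attributes
done > smart-commands.txt
cat smart-commands.txt
```

To actually run a test, drop the echo and run the commands as root; `smartctl -a` then shows the self-test log and attribute table for the drive.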
To schedule recurring S.M.A.R.T. tests, go to Data Protection and click ADD in the S.M.A.R.T. Tests widget.
Select the disks to test from the Disks dropdown list, and then select the test type to run from the Type dropdown list.
Next select a preset from the Schedule dropdown. To create a custom schedule select Custom to open the advanced scheduler window where you can define the schedule parameters you want to use.
Saved schedules appear in the S.M.A.R.T. Tests window.
S.M.A.R.T. tests can offline disks! Avoid scheduling S.M.A.R.T. tests simultaneously with scrub or other data protection tasks.
Start the S.M.A.R.T. service. Go to System Settings > Services and scroll down to the S.M.A.R.T. service. If not running, click the toggle to turn the service on. Select Start Automatically to have this service start after the system reboots.
If you have not configured the S.M.A.R.T. service yet, while the service is stopped, click edit to open the service configuration form. See Services S.M.A.R.T. Screen for more information on service settings. Click Save to save settings and return to the Services screen.
TrueNAS SCALE replication allows users to create one-time or regularly scheduled snapshots of data stored in pools, datasets or zvols on their SCALE system as a way to back up stored data. When properly configured and scheduled, replication takes regular snapshots of storage pools or datasets and saves them in the destination location either on the same system or a different system.
Local replication occurs on the same TrueNAS SCALE system using different pools or datasets. Remote replication can occur between your TrueNAS SCALE system and another TrueNAS system, or with some other remote server you want to use to store your replicated data. Local and remote replication can involve encrypted pools or datasets.
This section provides a simple overview of setting up a replication task regardless of the type of replication, local or remote. It also covers the related steps to take prior to configuring a replication task.
Before setting up a replication task, you must configure the admin user with the Home Directory set to something other than /var/empty and Auxiliary Groups set to include the builtin_administrators group.
Allow all sudo commands with no password must be selected to enable SSH+NETCAT remote replication.
Remote replication requires setting up an SSH connection in TrueNAS before creating a remote replication task.
Verify the SSH service settings to ensure you have Root with Password, Log in as Admin with Password, and Allow Password Authentication selected to enable these capabilities. Incorrect SSH service settings can impact the admin user ability to establish an SSH session during replication and require you to obtain and paste a public SSH key into the admin user settings.
Replication tasks typically require a configured and active periodic snapshot task.
Set up the data storage for where you want to save replicated snapshots.
Make sure the admin user is correctly configured.
Create a Periodic Snapshot task of the storage locations to be backed up.
Create an SSH connection between the local SCALE system and the remote system for remote replication tasks. Local replication does not require an SSH connection. You can do this from either Credentials > Backup Credentials > SSH Connection and clicking Add or from the Replication Task Wizard using the Generate New option in the settings for the remote system.
Go to Data Protection > Replication Tasks and click Add to open the Replication Task Wizard where you specify the settings for the replication task.
Setting options change based on the source selections. Replicating to or from a local source does not require an SSH connection.
A local replication creates a ZFS snapshot and saves it to another location on the same TrueNAS SCALE system, using a different pool, dataset, or zvol. This allows users with only one system to take quick backups of their data. In this scenario, create a dataset on the same pool to store the replication snapshots. You can also create and use a zvol for this purpose. If configuring local replication on a system with more than one pool, create a dataset for the replicated snapshots on one of those pools.
While we recommend regularly scheduled replications to a remote location as the optimal backup scenario, this is useful when no remote backup locations are available, or when a disk is in immediate danger of failure.
Storage space you allocate to a zvol is used only by that volume; if it goes unused, it does not get reallocated back to the total storage capacity of the pool or dataset where you create the zvol. Plan your anticipated storage needs before you create the zvol to avoid allocating more space than this volume requires, and do not assign capacity that exceeds what SCALE needs to operate properly. For more information, see the SCALE Hardware Guide for CPU, memory, and storage capacity information.
With the implementation of the Local Administrator user and role-based permissions, setting up replication tasks as an admin user has a few differences over setting up replication tasks when logged in as root.
The first snapshot taken for a task creates a full file system snapshot, and all subsequent snapshots taken for that task are incremental to capture differences occurring between the full and subsequent incremental snapshots.
Scheduling options allow users to run replication tasks daily, weekly, monthly, or on a custom schedule. Users also have the option to run a scheduled job on demand.
The replication wizard allows users to create and copy ZFS snapshots to another location on the same system.
If you have an existing replication task, you can select it on the Load Previous Replication Task dropdown list to load the configuration settings for that task into the wizard, and then make changes such as assigning a different destination, schedule, or retention lifetime. Saving changes to the configuration creates a new replication task without altering the task you loaded into the wizard.
Before you begin configuring the replication task, verify that the destination dataset you want to use to store the replicated snapshots is free of existing snapshots, or back up any snapshots containing critical data before you create the task.
To create a replication task:
Create the destination dataset or storage location you want to use to store the replication snapshots. If using another TrueNAS SCALE system, create a dataset in one of your pools.
Verify the admin user home directory, auxiliary groups, and sudo setting on both the local and remote destination systems. Local replication does not require an SSH connection, so this only applies to replication to another system.
If using a TrueNAS CORE system as the remote server, the remote user is always root.
If using a TrueNAS SCALE system on an earlier release like Angelfish, the remote user is always root.
If using an earlier TrueNAS SCALE Bluefin system (22.12.1) or you installed SCALE as the root user then created the admin user after initial installation, you must verify the admin user is correctly configured.
Go to Data Protection and click Add on the Replication Tasks widget to open the Replication Task Wizard. Configure the following settings:
a. Select On this System on the Source Location dropdown list. Browse to the location of the pool or dataset you want to replicate and select it to populate Source with the path. Selecting Recursive replicates all snapshots of child datasets contained within the selected source dataset.
b. Select On this System on the Destination Location dropdown list. Browse to the location of the pool or dataset you want to use to store replicated snapshots and select it to populate Destination with the path.
c. (Optional) Enter a name for the snapshot in Task Name. SCALE populates this field with a default name built from the source and destination paths separated by a hyphen, but this default can make locating the snapshot in the destination dataset a challenge. To make the snapshot easier to find, give the task a name that is easy for you to identify. For example, a replication task named dailyfull for a full file system snapshot taken daily.
Click Next to display the scheduling options.
Select the schedule and snapshot retention life time.
a. Select the Replication Schedule radio button you want to use. Select Run Once to set up a replication task you run one time. Select Run On a Schedule, then select the schedule from the Schedule dropdown list.
b. Select the Destination Snapshot Lifetime radio button option you want to use. This specifies how long SCALE stores copied snapshots in the destination dataset before deleting them. Same as Source is selected by default. Select Never Delete to keep all snapshots until you delete them manually. Select Custom to show two additional settings, then enter a number and select a unit of time from the dropdown list. For example, 2 Weeks.
Click START REPLICATION. A dialog displays if this is the first snapshot taken using the destination dataset. If SCALE does not find a replicated snapshot in the destination dataset to use as the basis for an incremental snapshot, it deletes any existing snapshots found and creates a full copy of the current snapshot to use as the basis for future scheduled incremental snapshots for this task. This operation can delete important data, so ensure any existing snapshots are safe to delete or are backed up in another location.
Click Confirm, then Continue to add the task to the Replication Task widget. The newly added task shows the status as PENDING until it runs on the schedule you set.
Select Run Now if you want to run the task immediately.
To see a log for a task, click the task State to open a dialog with the log for that replication task.
To see the replication snapshots, go to Datasets, select the destination dataset on the tree table, then select Manage Snapshots on the Data Protection widget to see the list of snapshots in that dataset. Click Show extra columns to add more information columns to the table, such as the date created, which can help you locate a specific snapshot, or enter all or part of the name in the search field to narrow the list of snapshots.
TrueNAS SCALE replication allows users to create one-time or regularly scheduled snapshots of data stored in pools, datasets, or zvols on their SCALE system as a way to back up stored data. When properly configured and scheduled, remote replication takes regular snapshots of storage pools or datasets and saves them in the destination location on another system.
Remote replication can occur between your TrueNAS SCALE system and another TrueNAS system (SCALE or CORE) that you want to use to store your replicated snapshots.
With the implementation of the Local Administrator user and role-based permissions, setting up replication tasks as an admin user differs slightly from setting them up when logged in as root. Setting up remote replication while logged in as the admin user requires selecting Use Sudo For ZFS Commands.
The first snapshot taken for a task creates a full file system snapshot, and all subsequent snapshots taken for that task are incremental to capture differences occurring between the full and subsequent incremental snapshots.
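As a rough illustration of that behavior, the hypothetical Python sketch below (not TrueNAS code) models a dataset as a mapping of files to contents and computes what a full versus an incremental snapshot must carry:

```python
# Conceptual sketch (not TrueNAS code): model a dataset as a dict of
# file path -> contents, and compute what each replication run sends.

def incremental_diff(previous, current):
    """Return only the entries added or changed since the previous snapshot."""
    return {path: data for path, data in current.items()
            if previous.get(path) != data}

# First run: there is no prior snapshot, so the full file system is sent.
state_day1 = {"docs/a.txt": "v1", "docs/b.txt": "v1"}
full_send = incremental_diff({}, state_day1)          # everything

# Later runs: only the differences since the last snapshot are sent.
state_day2 = {"docs/a.txt": "v2", "docs/b.txt": "v1", "docs/c.txt": "v1"}
incr_send = incremental_diff(state_day1, state_day2)  # only a.txt and c.txt
```

Actual ZFS incremental streams operate on blocks rather than whole files, but the principle is the same: after the initial full copy, each scheduled run only transfers what changed.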
Scheduling options allow users to run replication tasks daily, weekly, monthly, or on a custom schedule. Users also have the option to run a scheduled job on demand.
Remote replication requires setting up an SSH connection in TrueNAS before creating a remote replication task.
This section provides a simple overview of setting up a replication task regardless of the type of replication, local or remote. It also covers the related steps you should take prior to configuring a replication task.
Before setting up a replication task, you must configure the admin user with the Home Directory set to something other than /var/empty and Auxiliary Groups set to include the builtin_administrators group.
To streamline creating simple replication tasks, use the Replication Task Wizard to create and copy ZFS snapshots to another system. The wizard assists with creating a new SSH connection and automatically creates a periodic snapshot task for sources that have no existing snapshots.
If you have an existing replication task, you can select it on the Load Previous Replication Task dropdown list to load its configuration settings into the wizard, and then make changes such as assigning a different destination, schedule, or retention lifetime. Saving changes to the configuration creates a new replication task without altering the task you loaded into the wizard. This saves some time when creating multiple replication tasks between the same two systems.
Before you begin configuring the replication task, verify that the destination dataset you want to use to store the replicated snapshots is free of existing snapshots, or back up any snapshots containing critical data before you create the task.
To create a replication task:
Create the destination dataset or storage location you want to use to store the replication snapshots. If using another TrueNAS SCALE system, create a dataset in one of your pools.
Verify the admin user home directory, auxiliary groups, and sudo setting on both the local and remote destination systems. Local replication does not require an SSH connection, so this only applies to replication to another system.
If using a TrueNAS CORE system as the remote server, the remote user is always root.
If using a TrueNAS SCALE system on an earlier release like Angelfish, the remote user is always root.
If using an earlier TrueNAS SCALE Bluefin system (22.12.1) or you installed SCALE as the root user then created the admin user after initial installation, you must verify the admin user is correctly configured.
Go to Data Protection and click Add on the Replication Tasks widget to open the Replication Task Wizard. Configure the following settings:
a. Select either On this System or On a Different System on the Source Location dropdown list. If your source is a remote system, select On a Different System. The Destination Location automatically changes to On this System. If your source is the local TrueNAS SCALE system, you must select On a Different System from the Destination Location dropdown list to do remote replication.
TrueNAS shows the number of snapshots available for replication.
b. Select an existing SSH connection to the remote system, or select Create New to open the New SSH Connection configuration screen.
c. Browse to the source pool/dataset(s), then click on the dataset(s) to populate Source with the path. You can select multiple sources or manually type the names into the Source field. Selecting Recursive replicates all snapshots of child datasets contained within the selected source dataset.
d. Repeat to populate the Destination field. You cannot use zvols as a remote replication destination. Add a name to the end of the path to create a new dataset in that location.
e. Select Use Sudo for ZFS Commands. This option only displays when logged in as the admin user (or the name of your admin user account). It removes the need to issue the zfs allow command in Shell on the remote system. When the dialog displays, click Use Sudo for ZFS Commands. If you close this dialog, select the option on the Add Replication Task wizard screen.
f. Select Replicate Custom Snapshots, then leave the default value in Naming Schema or, if you know the schema you want, enter it in Naming Schema. Remote sources require entering a snapshot naming schema to identify the snapshots to replicate. A naming schema is a pattern for naming the custom snapshots you want to replicate. Enter the name and strftime(3) %Y, %m, %d, %H, and %M strings that match the snapshots to include in the replication. Separate entries by pressing Enter. The number of snapshots matching the patterns displays.
g. (Optional) Enter a name for the snapshot in Task Name. SCALE populates this field with a default name built from the source and destination paths separated by a hyphen, but this default can make locating the snapshot in the destination dataset a challenge. To make the snapshot easier to find, give the task a name that is easy for you to identify. For example, a replication task named dailyfull for a full file system snapshot taken daily.
Click Next to display the scheduling options.
Select the schedule and snapshot retention life time.
a. Select the Replication Schedule radio button you want to use. Select Run Once to set up a replication task you run one time. Select Run On a Schedule, then select the schedule from the Schedule dropdown list.
b. Select the Destination Snapshot Lifetime radio button option you want to use. This specifies how long SCALE stores copied snapshots in the destination dataset before deleting them. Same as Source is selected by default. Select Never Delete to keep all snapshots until you delete them manually. Select Custom to show two additional settings, then enter a number and select a unit of time from the dropdown list. For example, 2 Weeks.
Click START REPLICATION. A dialog displays if this is the first snapshot taken using the destination dataset. If SCALE does not find a replicated snapshot in the destination dataset to use as the basis for an incremental snapshot, it deletes any existing snapshots found and creates a full copy of the current snapshot to use as the basis for future scheduled incremental snapshots for this task. This operation can delete important data, so ensure any existing snapshots are safe to delete or are backed up in another location.
Click Confirm, then Continue to add the task to the Replication Task widget. The newly added task shows the status as PENDING until it runs on the schedule you set.
Select Run Now if you want to run the task immediately.
To see a log for a task, click the task State to open a dialog with the log for that replication task.
To see the replication snapshots, go to Datasets, select the destination dataset on the tree table, then select Manage Snapshots on the Data Protection widget to see the list of snapshots in that dataset. Click Show extra columns to add more information columns to the table, such as the date created, which can help you locate a specific snapshot, or enter all or part of the name in the search field to narrow the list of snapshots.
For information on replicating encrypted pools or datasets, see Setting Up an Encrypted Replication Task.
When using a TrueNAS system on a different release, like CORE or SCALE Angelfish, the remote or destination system user is always root.
To configure a new SSH connection from the Replication Task Wizard:
Select Create New on the SSH Connection dropdown list to open the New SSH Connection configuration screen.
Enter a name for the connection.
Select the Setup Method from the dropdown list. If connecting to a TrueNAS system, select Semi-Automatic.
Enter the URL to the remote TrueNAS in TrueNAS URL.
Enter the administration user (i.e., root or admin) that logs into the remote system with the web UI in Admin Username. Enter the password in Admin Password.
Enter the administration user (i.e., root or admin) for the remote system SSH session. If you clear root as the user and type any other name, the Enable passwordless sudo for ZFS commands option displays. This option does nothing, so leave it cleared.
Select Generate New from the Private Key dropdown list.
(Optional) Select a cipher from the dropdown list, or enter a new value in seconds for the Connection Timeout if you want to change the defaults.
Click Save to create a new SSH connection and populate the SSH Connection field in the Replication Task Wizard.
Using encryption for SSH transfer security is always recommended.
In situations where you use two systems within an absolutely secure network for replication, disabling encryption speeds up the transfer. However, the data is completely unprotected from eavesdropping.
Choosing No Encryption for the task is less secure but faster. This method uses common port settings but you can override these by switching to the Advanced Replication Creation options or by editing the task after creation.
TrueNAS SCALE advanced replication allows users to create one-time or regularly scheduled snapshots of data stored in pools, datasets or zvols on their SCALE system as a way to back up stored data. When properly configured and scheduled, local or remote replication using the Advanced Replication Creation option takes regular snapshots of storage pools or datasets and saves them in the destination location on the same or another system.
Local replication occurs on the same TrueNAS SCALE system using different pools or datasets. Remote replication can occur between your TrueNAS SCALE system and another TrueNAS system, or with some other remote server you want to use to store your replicated data. Local and remote replication can involve encrypted pools or datasets.
The Advanced Replication Creation option opens the Add Replication Task screen. This screen provides access to the same settings found in the replication wizard but includes additional options not available in the wizard.
With the implementation of the local administrator user to replace the root login, setting up replication tasks as an admin user differs slightly from setting them up when logged in as root. Setting up remote replication while logged in as the admin user requires selecting Use Sudo For ZFS Commands.
The first snapshot taken for a task creates a full file system snapshot, and all subsequent snapshots taken for that task are incremental to capture differences occurring between the full and subsequent incremental snapshots.
Scheduling options allow users to run replication tasks daily, weekly, monthly, or on a custom schedule. Users also have the option to run a scheduled job on demand.
Configure your SSH connection before you begin configuring the replication task through the Add Replication Task screen. If you have an existing SSH connection with the remote system, the option displays on the SSH Connection dropdown list.
Turn on SSH service. Go to System Settings > Services screen, verify the SSH service configuration, then enable it.
To access advanced replication settings, click Advanced Replication Creation at the bottom of the first screen of the Replication Task Wizard. The Add Replication Task configuration screen opens.
Before you begin configuring the replication task, verify that the destination dataset you want to use to store the replicated snapshots is free of existing snapshots, or back up any snapshots containing critical data before you create the task.
To create a replication task:
Create the destination dataset or storage location you want to use to store the replication snapshots. If using another TrueNAS SCALE system, create a dataset in one of your pools.
Verify the admin user home directory, auxiliary groups, and sudo setting on both the local and remote destination systems. Local replication does not require an SSH connection, so this only applies to replication to another system.
If using a TrueNAS CORE system as the remote server, the remote user is always root.
If using a TrueNAS SCALE system on an earlier release like Angelfish, the remote user is always root.
If using an earlier TrueNAS SCALE Bluefin system (22.12.1) or you installed SCALE as the root user then created the admin user after initial installation, you must verify the admin user is correctly configured.
Give the task a name and set the direction of the task. Unlike the wizard, the Name does not automatically populate with the source/destination task name after you set the source and destination for the task. Each task name must be unique, and we recommend you name it in a way that makes it easy to remember what the task is doing.
Select the direction of the task. Pull replicates data from a remote system to the local system. Push sends data from the local system to the remote.
Select the method of transfer for this replication from the Transport dropdown list. Select LOCAL to replicate data to another location on the same system. SSH is the standard option for sending or receiving data from a remote system; select the existing SSH Connection from the dropdown list. SSH+Netcat is available as a faster option for replications that take place within completely secure networks, but it requires defining netcat ports and addresses to use for the Netcat connection.
With SSH-based replications, select the SSH Connection to the remote system that sends or receives snapshots. To create a new connection to use for replication from a destination to this local system, select newpullssh.
Select Use Sudo for ZFS Commands to control whether the user used for SSH/SSH+NETCAT replication has passwordless sudo enabled to execute zfs commands on the remote host. If not selected, you must enter the zfs allow command on the remote system to grant the non-root user permissions to perform ZFS tasks.
Specify the source and destination paths. Adding /name to the end of the path creates a new dataset in that location. Click the arrow to the left of each folder or dataset name to expand the options and browse to the dataset, then click on the dataset to populate the Source. Choose a preconfigured periodic snapshot task as the source of snapshots to replicate. Pulling snapshots from a remote source requires a valid SSH Connection before the file browser can show any directories.
A remote destination requires you to specify an SSH connection before you can enter or select the path. If the file browser shows a connection error after selecting the correct SSH Connection, you might need to log in to the remote system and configure it to allow SSH connections. Define how long to keep snapshots in the destination.
Remote sources require defining a snapshot naming schema to identify the snapshots to replicate. Local sources are replicated by snapshots that were generated from a periodic snapshot task and/or from a defined naming schema that matches manually created snapshots.
DO NOT use zvols as remote destinations.
Select a previously configured periodic snapshot task for this replication task in Periodic Snapshot Tasks. The replication task must use the same values for Recursive and Exclude Child Datasets as the chosen periodic snapshot task. Selecting a periodic snapshot schedule removes the Schedule field.
If a periodic snapshot task does not exist, exit the advanced replication task configuration, configure a periodic snapshot task, then return to the Advanced Replication screen to configure the replication task. Select Replicate Specific Snapshots to define specific snapshots from the periodic task to use for the replication. This displays the schedule options for the snapshot task. Enter the schedule. The only periodically generated snapshots included in the replication task are those that match your defined schedule.
Select the naming schema or regular expression option to use for this snapshot.
A naming schema is a collection of strftime time and date strings and any identifiers that a user might have added to the snapshot name.
For example, entering the naming schema custom-%Y-%m-%d_%H-%M finds and replicates snapshots like custom-2020-03-25_09-15.
Enter multiple schemas by pressing Enter to separate each schema.
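Outside of TrueNAS, you can reason about which names a schema matches using any strftime-compatible parser. The hypothetical Python sketch below (not TrueNAS code) uses strptime, which understands the same %Y, %m, %d, %H, and %M placeholders, to check snapshot names against a schema:

```python
from datetime import datetime

def matches_schema(snapshot_name, schema):
    """Return True if the snapshot name fits the strftime naming schema."""
    try:
        datetime.strptime(snapshot_name, schema)
        return True
    except ValueError:
        return False

schema = "custom-%Y-%m-%d_%H-%M"
matches_schema("custom-2020-03-25_09-15", schema)  # matches: would replicate
matches_schema("auto-2020-03-25_09-15", schema)    # different prefix: skipped
```

Snapshots whose names do not fit any entered schema are simply left out of the replication, which is why the wizard reports how many snapshots match the patterns you enter.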
Set the replication schedule to use and define when the replication task runs. Leave Run Automatically selected to use the specified snapshot task and start the replication immediately after the related periodic snapshot task completes. Select Schedule to display scheduling options for this replication task and to automate the task according to its own schedule.
Selecting Schedule allows scheduling the replication to run at a separate time. Choose a time frame that gives the replication task enough time to finish and is during a time of day when network traffic for both source and destination systems is minimal. Use the custom scheduler (recommended) when you need to fine-tune an exact time or day for the replication.
Click Save.
Options for compressing data, adding a bandwidth limit, or other data stream customizations are available. Stream Compression options are only available when using SSH. Before enabling Compressed WRITE Records, verify that the destination system also supports compressed write records.
Allow Blocks Larger than 128KB is a one-way toggle. Replication tasks using large block replication only continue to work as long as this option remains enabled.
By default, the replication task uses snapshots to quickly transfer data to the receiving system. Selecting Full Filesystem Replication means the task completely replicates the chosen Source, including all dataset properties, snapshots, child datasets, and clones. When using this option, we recommend allocating additional time for the replication task to run.
Leave Full Filesystem Replication unselected and select Include Dataset Properties to include just the dataset properties in the snapshots to replicate. Leave this option unselected on an encrypted dataset to replicate the data to another unencrypted dataset.
Select Recursive to recursively replicate child dataset snapshots or exclude specific child datasets or properties from the replication.
Enter newly defined properties in Properties Override to replace existing dataset properties with the newly defined properties in the replicated files.
List any existing dataset properties to remove from the replicated files in Properties Exclude.
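Conceptually, these two fields act as a simple transformation on the dataset property list. The hypothetical Python sketch below (not TrueNAS code; the property names are only examples) models overrides replacing values and exclusions dropping keys before the properties reach the destination:

```python
def apply_property_rules(props, overrides, excludes):
    """Model Properties Override / Properties Exclude on a dataset's properties."""
    result = {k: v for k, v in props.items() if k not in excludes}
    result.update(overrides)
    return result

source_props = {"compression": "lz4", "recordsize": "128K", "atime": "on"}
dest_props = apply_property_rules(
    source_props,
    overrides={"compression": "zstd"},  # replace this value on the destination
    excludes={"atime"},                 # never carry this property over
)
# dest_props -> {"compression": "zstd", "recordsize": "128K"}
```

Exclusions are applied to the source properties and overrides are then layered on top, which mirrors the described behavior: excluded properties never reach the replicated files, and overridden ones arrive with the new values.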
When a replication task is having difficulty completing, it is a good idea to select Save Pending Snapshots. This prevents the source TrueNAS from automatically deleting any snapshots that failed to replicate to the destination system.
By default, the destination dataset is set to be read-only after the replication completes. You can change the Destination Dataset Read-only Policy to only start replication when the destination is read-only (set to REQUIRE) or to disable it by setting it to IGNORE.
The Encryption option adds another layer of security to replicated data by encrypting the data before transfer and decrypting it on the destination system. Selecting Encryption adds the additional setting options HEX key or PASSPHRASE. You can store the encryption key either in the TrueNAS system database or in a custom-defined location.
Synchronizing Destination Snapshots With Source destroys any snapshots in the destination that do not match the source snapshots. TrueNAS also does a full replication of the source snapshots as if the replication task never ran, which can lead to excessive bandwidth consumption.
This can be a very destructive option. Make sure that any snapshots deleted from the destination are obsolete or otherwise backed up in a different location.
Defining the Snapshot Retention Policy is generally recommended to prevent cluttering the system with obsolete snapshots. Choosing Same as Source keeps the snapshots on the destination system for the same amount of time as the defined Snapshot Lifetime from the source system periodic snapshot task.
You can use Custom to define your own lifetime for snapshots on the destination system.
Selecting Only Replicate Snapshots Matching Schedule restricts the replication to only those snapshots created at the same time as the replication schedule.
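Retention under a Custom lifetime amounts to simple date arithmetic. The hypothetical Python sketch below (not TrueNAS code) shows when a destination snapshot with a 2 Weeks lifetime becomes eligible for deletion:

```python
from datetime import datetime, timedelta

def is_expired(created, lifetime, now):
    """True once a destination snapshot has outlived its retention lifetime."""
    return now >= created + lifetime

lifetime = timedelta(weeks=2)                 # Custom lifetime: 2 Weeks
created = datetime(2020, 3, 25, 9, 15)        # snapshot creation time

is_expired(created, lifetime, datetime(2020, 4, 1))   # one week old: kept
is_expired(created, lifetime, datetime(2020, 4, 10))  # past two weeks: removable
```

Same as Source applies the source task's Snapshot Lifetime in place of a locally defined value, but the expiry check works the same way.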
TrueNAS SCALE replication allows users to create replicated snapshots of data stored in encrypted pools, datasets, or zvols on their SCALE system as a way to back up stored data to a remote system. You can also use encrypted datasets in a local replication.
You can set up a replication task for a dataset encrypted with a passphrase or a hex encryption key, but you must unlock the dataset before the task runs or the task fails.
With the implementation of the Local Administrator user and role-based permissions, setting up remote replication tasks while logged in as an admin user requires selecting Use Sudo For ZFS Commands.
The first snapshot taken for a task creates a full file system snapshot, and all subsequent snapshots taken for that task are incremental to capture differences occurring between the full and subsequent incremental snapshots.
Scheduling options allow users to run replication tasks daily, weekly, monthly, or on a custom schedule. Users also have the option to run a scheduled job on demand.
Remote replication with datasets also requires an SSH connection in TrueNAS. You can use an existing SSH connection if it has the same user credentials you want to use for the new replication task.
This section provides a simple overview of setting up a remote replication task for an encrypted dataset. It also covers the related steps you should take prior to configuring the replication task.
To streamline creating simple replication tasks, use the Replication Task Wizard to create and copy ZFS snapshots to another system. The wizard assists with creating a new SSH connection and automatically creates a periodic snapshot task for sources that have no existing snapshots.
If you have an existing replication task, you can select it on the Load Previous Replication Task dropdown list to load its configuration settings into the wizard, and then make changes such as assigning a different destination, selecting encryption options, or changing the schedule or retention lifetime. Saving changes to the configuration creates a new replication task without altering the task you loaded into the wizard. This saves some time when creating multiple replication tasks between the same two systems.
Before you begin configuring the replication task, verify that the destination dataset you want to use to store the replicated snapshots is free of existing snapshots, or back up any snapshots containing critical data before you create the task.
To create a replication task:
Create the destination dataset or storage location you want to use to store the replication snapshots. If using another TrueNAS SCALE system, create a dataset in one of your pools.
Verify the admin user home directory, auxiliary groups, and sudo setting on both the local and remote destination systems. Local replication does not require an SSH connection, so this only applies to replication to another system.
If using a TrueNAS CORE system as the remote server, the remote user is always root.
If using a TrueNAS SCALE system on an earlier release like Angelfish, the remote user is always root.
If using an earlier TrueNAS SCALE Bluefin system (22.12.1) or you installed SCALE as the root user then created the admin user after initial installation, you must verify the admin user is correctly configured.
Unlock the source dataset and export the encryption key. Go to Datasets, select the source dataset, locate the ZFS Encryption widget, and unlock the dataset if locked. Export the key and paste it into a text editor such as Notepad. If you set up encryption to use a passphrase, you do not need to export a key.
Go to Data Protection and click Add on the Replication Tasks widget to open the Replication Task Wizard. Configure the following settings:
a. Select On this System on the Source Location dropdown list. If your source is the local TrueNAS SCALE system, you must select On a Different System from the Destination Location dropdown list to do remote replication.
If your source is a remote system, create the replication task as the root user and select On a Different System. The Destination Location automatically changes to On this System.
TrueNAS shows the number of snapshots available for replication.
b. Select an existing SSH connection to the remote system or create a new connection. Select Create New to open the New SSH Connection configuration screen.
c. Browse to the source pool/dataset(s), then click on the dataset(s) to populate the Source with the path. You can select multiple sources or manually type the names into the Source field. Separate multiple entries with commas. Selecting Recursive replicates all snapshots contained within the selected source dataset snapshots.
d. Repeat to populate the Destination field. You cannot use zvols as a remote replication destination. Add a /datasetname to the end of the destination path to create a new dataset in that location.
e. (Optional) Select Encryption to add a second layer of encryption over the already encrypted dataset.
f. Select Use Sudo for ZFS Commands. This option only displays when logged in as the admin user (or the name given to the admin user). Selecting it removes the need to issue the zfs allow command in Shell on the remote system; if left cleared, you must issue that command on the remote system. If a dialog displays, click Use Sudo for ZFS Commands in the dialog. If you close the dialog, select the option on the Replication Task Wizard screen instead.
g. Select Replicate Custom Snapshots, then accept the default value in Naming Schema. Remote sources require a snapshot naming schema to identify the snapshots to replicate. A naming schema is a pattern for naming the custom snapshots you want to replicate. To change the default schema, enter a name plus the strftime(3) %Y, %m, %d, %H, and %M strings that match the snapshots to include in the replication. Separate entries by pressing Enter. The number of snapshots matching the pattern displays.
h. (Optional) Enter a name for the task in Task Name. SCALE populates this field with a default name using the source and destination paths separated by a hyphen, but this default can make locating the snapshot in the destination dataset a challenge. To make it easier to find the snapshot, give the task a name that is easy for you to identify. For example, name a replication task dailyfull for a full file system snapshot taken daily.
Click Next to display the scheduling options.
Select the schedule and snapshot retention life time.
a. Select the Replication Schedule radio button you want to use. Select Run Once to set up a replication task you run one time. Select Run On a Schedule then select when from the Schedule dropdown list.
b. Select the Destination Snapshot Lifetime radio button option you want to use. This specifies how long SCALE stores copied snapshots in the destination dataset before deleting them. Same as Source is selected by default. Select Never Delete to keep all snapshots until you delete them manually. Select Custom to show two additional settings, then enter a number and select a unit of time from the dropdown list. For example, 2 Weeks.
Click START REPLICATION. A dialog displays if this is the first snapshot taken using the destination dataset. If SCALE does not find a replicated snapshot in the destination dataset to use for an incremental snapshot, it deletes any existing snapshots found and creates a full copy of the current snapshot to use as a basis for future scheduled incremental snapshots for this task. This operation can delete important data, so ensure you can delete any existing snapshots or back them up in another location.
Click Confirm, then Continue to add the task to the Replication Task widget. The newly added task shows the status as PENDING until it runs on the schedule you set.
Select Run Now if you want to run the task immediately.
To see a log for a task, click the task State to open a dialog with the log for that replication task.
To see the replication snapshots, go to Datasets, select the destination dataset on the tree table, then select Manage Snapshots on the Data Protection widget to see the list of snapshots in that dataset. Click Show extra columns to add more information columns to the table, such as the date created, which can help you locate a specific snapshot, or enter part of or the full name in the search field to narrow the list of snapshots.
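The naming schema entered in the Replication Task Wizard uses strftime(3) fields, so you can sanity-check whether an existing snapshot name matches a given schema with a short script. This is an illustrative sketch; the auto-%Y-%m-%d_%H-%M pattern shown is an assumption based on a common SCALE default, not necessarily your system's schema.

```python
from datetime import datetime

# Assumed schema; substitute the pattern configured in your wizard.
schema = "auto-%Y-%m-%d_%H-%M"

def matches_schema(name, schema):
    """Return True if a snapshot name parses against the schema."""
    try:
        datetime.strptime(name, schema)
        return True
    except ValueError:
        return False

print(matches_schema("auto-2024-05-01_12-30", schema))  # True
print(matches_schema("manual-backup", schema))          # False
```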
When using a TrueNAS system on a different release, like CORE or SCALE Angelfish, the remote or destination system user is always root.
To configure a new SSH connection from the Replication Task Wizard:
Select Create New on the SSH Connection dropdown list to open the New SSH Connection configuration screen.
Enter a name for the connection.
Select the Setup Method from the dropdown list. If connecting to a TrueNAS system, select Semi-Automatic.
Enter the URL to the remote TrueNAS in TrueNAS URL.
Enter the administration user (i.e., root or admin) that logs into the remote system with the web UI in Admin Username. Enter the password in Admin Password.
Enter the administration user (i.e., root or admin) for the remote system SSH session. If you clear root as the user and type any other name, the Enable passwordless sudo for ZFS commands option displays. This option does nothing, so leave it cleared.
Select Generate New from the Private Key dropdown list.
(Optional) Select a cipher from the dropdown list, or enter a new value in seconds for the Connection Timeout if you want to change the defaults.
Click Save to create a new SSH connection and populate the SSH Connection field in the Replication Task Wizard.
Using encryption for SSH transfer security is always recommended.
In situations where you use two systems within an absolutely secure network for replication, disabling encryption speeds up the transfer. However, the data is completely unprotected from eavesdropping.
Choosing No Encryption for the task is less secure but faster. This method uses common port settings but you can override these by switching to the Advanced Replication Creation options or by editing the task after creation.
After the replication task runs and creates the snapshot on the destination, you must unlock it to access the data. Use the replication task options to download a key file that unlocks the destination dataset.
TrueNAS does not support preserving encrypted dataset properties when trying to re-encrypt an already encrypted source dataset.
To replicate an encrypted dataset to an unencrypted dataset on the remote destination system, follow the instructions above to configure the task, then clear the dataset properties for the replication task:
Select the task on the Replication Task widget. The Edit Replication Task screen opens.
Scroll down to Include Dataset Properties and select it to clear the checkbox.
This replicates the unlocked encrypted source dataset to an unencrypted destination dataset.
When you replicate an encrypted pool or dataset you have one level of encryption applied at the data storage level. Use the passphrase or key created or exported from the dataset or pool to unlock the dataset on the destination server.
To add a second layer of encryption at the replication task level, select Encryption on the Replication Task Wizard, then select the type of encryption you want to apply.
Select either Hex (base-16 numeral format) or Passphrase (alphanumeric format) from the Encryption Key Format dropdown list to open settings for that type of encryption.
Selecting Hex displays Generate Encryption Key preselected. Select the checkbox to clear it and display the Encryption Key field where you can import a custom hex key.
Selecting Passphrase displays the Passphrase field where you enter your alphanumeric passphrase.
Select Store Encryption key in Sending TrueNAS database to store the encryption key in the sending TrueNAS database or leave unselected to choose a temporary location for the encryption key that decrypts replicated data.
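For the Hex key format, a custom key is raw random data in base-16. Assuming the ZFS default aes-256 cipher, that is 32 random bytes, or 64 hexadecimal characters. A minimal sketch for generating one:

```python
import secrets

def generate_hex_key():
    # 32 random bytes rendered as 64 lowercase hex characters,
    # assuming the default aes-256 cipher.
    return secrets.token_hex(32)

key = generate_hex_key()
print(len(key))  # 64
```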
TrueNAS SCALE users should either replicate the dataset/Zvol without properties to disable encryption at the remote end or construct a special JSON manifest to unlock each child dataset/zvol with a unique key.
Replicate every encrypted dataset you want to replicate with properties.
Export key for every child dataset that has a unique key.
For each child dataset construct a proper json with poolname/datasetname of the destination system and key from the source system like this:
{"tank/share01": "57112db4be777d93fa7b76138a68b790d46d6858569bf9d13e32eb9fda72146b"}
Save this file with the .json extension.
On the remote system, unlock the dataset(s) using the properly constructed JSON file.
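The JSON manifest above can be produced with a few lines of script. The pool/dataset path and key below are the example values from this article, and the unlock_keys.json file name is hypothetical; use any name saved with a .json extension:

```python
import json

# Example destination dataset and source-system key from this article;
# substitute your own pool/dataset path and exported key.
manifest = {
    "tank/share01": "57112db4be777d93fa7b76138a68b790d46d6858569bf9d13e32eb9fda72146b",
}

# Hypothetical file name; any name with a .json extension works.
with open("unlock_keys.json", "w") as f:
    json.dump(manifest, f, indent=2)
```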
Uncheck properties when replicating so that the destination dataset is not encrypted on the remote side and does not require a key to unlock.
Go to Data Protection and click ADD in the Replication Tasks window.
Click Advanced Replication Creation.
Fill out the form as needed and make sure Include Dataset Properties is NOT checked.
Click Save.
Go to Datasets on the system you are replicating from. Select the dataset encrypted with a key, then click Export Key on the ZFS Encryption widget to export the key for the dataset.
Apply the JSON key file or key code to the dataset on the system you replicated the dataset to.
Option 1: Download the key file and open it in a text editor. Change the pool name/dataset part of the string to the pool name/dataset for the receiving system. For example, replicating from tank1/dataset1 on the replicate-from system to tank2/dataset2 on the replicate-to system.
Option 2: Copy the key code provided in the Key for dataset window.
On the system receiving the replicated pool/dataset, select the receiving dataset and click Unlock.
Unlock the dataset. Either clear the Unlock with Key file checkbox and paste the key code into the Dataset Key field (if there is a space character at the end of the key, delete it), or select the downloaded key file that you edited.
Click Save.
Click Continue.
The Network menu option has several screens for configuring network interfaces and general system-level network settings. The tutorials in this section guide you through the various screens and configuration forms within this menu item.
TrueNAS SCALE supports configuring different types of network interfaces as part of the various backup, sharing, and virtualization features in the software. The tutorials in this section guide you through each of these types of configurations.
The Network screen allows you to add new or edit existing network interfaces, and configure static and alias IP addresses.
Prepare your system for interface changes by stopping and/or removing apps, VM NIC devices, and services that can cause conflicts:
If you encounter issues with testing network changes, you might need to stop any services, including Kubernetes and sharing services such as SMB, using the current IP address.
You can use DHCP to provide the IP address for only one network interface and this is most likely for your primary network interface configured during the installation process.
To add another network interface, click Add on the Interfaces widget to display the Add Interface panel. Leave the DHCP checkbox clear. Click Add to the right of Aliases, near the bottom of the Add Interface screen and enter a static IP address for the interface.
You must specify the type of interface you want to create. Select the type of interface from the Type dropdown options: Bridge, Link Aggregation or LAGG, and VLAN or virtual LAN. You cannot edit the interface type after you click Save.
Each interface type displays new fields on the Add Interface panel. Links with more information on adding these specific types of interfaces are at the bottom of this article.
Click an existing interface on the Interfaces widget, then click the edit icon to open the Edit Interface screen. The Edit Interface and Add Interface settings are identical except for Type and Name, neither of which you can edit after you click Save. Name shows on the Edit Interface screen but cannot be changed; Type only shows on the Add Interface screen. If you make a mistake with either field, you can only delete the interface and create a new one with the desired type.
If you want to change from DHCP to a static IP, you must also add the new default gateway and DNS nameservers that work with the new IP address. See Setting Up a Static IP for more information.
If you delete the primary network interface you can lose your TrueNAS connection and the ability to communicate with the TrueNAS through the web interface!
You might need command line knowledge or physical access to the TrueNAS system to fix misconfigured network settings.
Click the delete icon for the interface. A delete interface confirmation dialog opens.
Do not delete the primary network interface!
To configure alias IPs to provide access to internal portions of the network, go to the Network screen:
Click on the Edit icon for the interface to open the Edit Interface screen for the selected interface.
Clear the DHCP checkbox to show Aliases. Click Add for each alias you want to add to this interface.
Enter the IP address and CIDR values for the alias(es).
Select DHCP to control the primary IP for the interface.
Click Save.
In general, a bridge refers to various methods of combining (aggregating) multiple network connections into a single aggregate network.
TrueNAS uses the kernel bridge driver. bridge(8), from the iproute2 package, is the Linux command for configuring bridges; it and ip(8) replace the deprecated brctl(8) from the bridge-utilities package, which older examples may reference. Refer to the FAQ section that covers bridging topics more generally.
Network bridging does not inherently aggregate bandwidth like link aggregation (LAGG). Bridging is often used for scenarios where you need to extend a network segment or combine different types of network traffic. Bridging can be used to integrate different types of networks (e.g., wireless and wired networks) or to segment traffic within the same network. A bridge can also be used to allow a VM configured on TrueNAS to communicate with the host system. See Accessing NAS From a VM for more information.
Prepare your system for interface changes by stopping and/or removing apps, VM NIC devices, and services that can cause conflicts:
If you encounter issues with testing network changes, you might need to stop any services, including Kubernetes and sharing services such as SMB, using the current IP address.
To set up a bridge interface, go to Network, click Add on the Interfaces widget to open the Add Interface screen, then:
Select Bridge from the Type dropdown list. You cannot change the Type field value after you click Save.
Enter a name for the interface. Use the format bondX, vlanX, or brX where X is a number representing a non-parent interface. You cannot change the Name of the interface after you click Save.
(Optional but recommended) Enter any notes or reminders about this particular bridge in Description.
Select the interfaces on the Bridge Members dropdown list.
Click Add to the right of Aliases to show the IP address fields, and enter the IP address for this bridge interface. Click Add again to show additional IP address fields for each additional IP address you want to add.
Click Save when finished. The created bridge shows in Interfaces with its associated IP address information.
Click Test Changes to determine if network changes are successful.
After TrueNAS finishes testing the interface, click Save Changes to keep the changes. Click Revert Changes to discard the changes and return to the previous configuration.
In general, a link aggregation (LAGG) is a method of combining (aggregating) multiple network connections in parallel to provide additional bandwidth or redundancy for critical networking situations. TrueNAS uses lagg(4) to manage LAGGs.
Prepare your system for interface changes by stopping and/or removing apps, VM NIC devices, and services that can cause conflicts:
If you encounter issues with testing network changes, you might need to stop any services, including Kubernetes and sharing services such as SMB, using the current IP address.
To set up a LAGG, go to Network, click Add on the Interfaces widget to open the Add Interface screen, then:
Select Link Aggregation from the Type dropdown list. You cannot change the Type field value after you click Save.
Enter a name for the interface using the format bondX, where X is a number representing a non-parent interface. You cannot change the Name of the interface after clicking Apply.
(Optional, but recommended) Enter any notes or reminders about this particular LAGG interface in Description.
Select the protocol from the Link Aggregation Protocol dropdown. Options are LACP, FAILOVER, or LOADBALANCE. Each option displays additional settings.
Select the interfaces to use in the aggregation from the Link Aggregation Interface dropdown list.
(Optional) Click Add to the right of Aliases to show additional IP address fields for each additional IP address to add to this LAGG interface.
Click Save when finished.
A virtual LAN (VLAN) is a partitioned and isolated domain in a computer network at the data link layer (OSI layer 2). Click here for more information on VLANs. TrueNAS uses vlan(4) to manage VLANs.
Before you begin, make sure you have an Ethernet card connected to a switch port already configured for your VLAN, and that you have preconfigured the VLAN tag in the switched network.
To set up a VLAN interface, go to Network, click Add on the Interfaces widget to open the Add Interface screen, then:
Select VLAN from the Type dropdown list. You cannot change the Type field value after you click Apply.
Enter a name for the interface using the format vlanX where X is a number representing a non-parent interface. You cannot change the Name of the interface after clicking Save.
(Optional, but recommended) Enter any notes or reminders about this particular VLAN in Description.
Select the interface in the Parent Interface dropdown list. This is typically an Ethernet card connected to a switch port already configured for the VLAN.
Enter the numeric tag for the interface in the VLAN Tag field. This is typically preconfigured in the switched network.
Select the VLAN Class of Service from the Priority Code Point dropdown list.
(Optional) Click Add to the right of Aliases to show additional IP address fields for each additional IP address to add to this VLAN interface.
Click Save.
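The bridge, LAGG, and VLAN procedures above all use the same interface naming convention: brX, bondX, or vlanX, where X is a number. A quick way to sanity-check a candidate name before entering it, sketched in Python:

```python
import re

# Name format from the procedures above: brX, bondX, or vlanX,
# where X is a number representing a non-parent interface.
NAME_RE = re.compile(r"^(br|bond|vlan)\d+$")

def valid_interface_name(name):
    return bool(NAME_RE.match(name))

print(valid_interface_name("br0"))   # True
print(valid_interface_name("eth0"))  # False
```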
This article describes setting up a network interface with a static IP address or changing the main interface from a DHCP-assigned to a manually-entered static IP address. You must know the DNS name server and default gateway addresses for your IP address.
Disruptive Change!
You can lose your TrueNAS connection if you change the network interface that the web interface uses!
Command line knowledge and physical access to the TrueNAS system are often required to fix misconfigured network settings.
By default, during installation, TrueNAS SCALE configures the primary network interface for Dynamic Host Configuration Protocol (DHCP) IP address management. However, some administrators might choose to assign a static IP address to the primary network interface. This choice may be made if TrueNAS is deployed on a system that does not allow DHCP for security, stability, or other reasons.
In all deployments, only one interface can be set up for DHCP, which is typically the primary network interface configured during the installation process. Any additional interfaces must be manually configured with one or more static IP addresses.
Have the DNS name server addresses, the default gateway for the new IP address, and any static IP addresses on hand to prevent lost communication with the server while making and testing network changes. You have only 60 seconds to change and test these network settings before they revert back to the current settings, for example back to DHCP assigned if moving from DHCP to a static IP.
Back up your system to preserve your data and system settings. Save the system configuration file and a system debug.
As a precaution, grab a screenshot of your current settings in the Global Configuration widget.
If your network changes result in lost communication with the network and you need to return to the DHCP configuration, you can refer to this information to restore communication with your server. Lost communication might require reconfiguring your network settings using the Console Setup menu.
To view a demonstration of this procedure see the tutorial video in the Managing Global Configuration article.
To change an interface from using DHCP to a static IP address:
Click on the Edit icon for the interface on the Interfaces widget to open the Edit Interface screen, then clear the DHCP checkbox.
Click Add to the right of Aliases to add IP address fields, then enter the new static IP. Select the CIDR number from the dropdown list.
Multiple interfaces cannot be members of the same subnet.
If an error displays or the Save button is inactive when setting the IP addresses on multiple interfaces, check the subnet and ensure the CIDR numbers differ.
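You can check two candidate addresses for a subnet conflict before entering them, for example with Python's ipaddress module:

```python
import ipaddress

def same_subnet(a, b):
    """True if two CIDR-form interface addresses fall in overlapping subnets."""
    return ipaddress.ip_interface(a).network.overlaps(
        ipaddress.ip_interface(b).network)

print(same_subnet("192.168.1.10/24", "192.168.1.20/24"))  # True: conflict
print(same_subnet("192.168.1.10/24", "192.168.2.10/24"))  # False: allowed
```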
Click Save. A dialog opens where you can select to either Test Changes or Revert Changes. If you have only one active network interface the system protects your connection to the interface by displaying the Test Changes dialog.
You have 60 seconds to test and save the change before the system discards the change and reverts back to the DHCP-configured IP address.
Check the name servers and default router information in the Global Information widget. If the current settings are not on the same network, click Settings and modify each setting as needed to allow the static IP to communicate over the network.
Add the IP addresses for the DNS name servers in the Nameserver 1, Nameserver 2, and Nameserver 3 fields.
For home users, use 8.8.8.8 for a DNS name server address so you can communicate with external networks.
Add the IP address for the default gateway in the appropriate field. If the static network is IPv4, enter the gateway in IPv4 Default Gateway; if the static network is IPv6, use IPv6 Default Gateway.
Click Save.
Test the network changes. Select Confirm to activate the Test Changes button, then click Test Changes.
The system attempts to connect to the new static IP address. If successful the Save Changes dialog displays.
Click Save Changes to make the change to the static IP address permanent or click Revert Changes to discard changes and return to previous settings. The Save Changes confirmation dialog displays. Click SAVE. The system displays a final confirmation that the change is in effect.
Only one interface can use DHCP to assign the IP address, and that is likely the primary network interface. If you do not have an existing network interface set to use DHCP, you can convert an interface from a static IP to DHCP.
To switch/return to using DHCP:
Click Settings on the Global Configuration widget.
Clear the name server fields and the default gateway, and then click Save.
Click on the Edit icon for the interface to display the Edit Interface screen.
Select DHCP.
Remove the static IP address from the IP Address field.
Click Apply.
Click Settings to display the Global Configuration screen, then enter the name server and default gateway addresses for the new DHCP-provided IP address.
Home users can enter 8.8.8.8 in the Nameserver 1 field.
Click Test Changes. If the network settings are correct, the screen displays the Save Changes widget. Click Save Changes.
If the test network operation fails or the system times out, your system returns to the network settings before you attempted the change. Verify the name server and default gateway information to try again.
Use the Global Configuration Settings screen to add general network settings like the default gateway and DNS name servers to allow external communication.
To add new or change existing network interfaces see Managing Interfaces.
Disruptive Change
You can lose your TrueNAS connection if you change the network interface that the web interface uses! You might need command line knowledge or physical access to the TrueNAS system to fix misconfigured network settings.
Go to Network and click Settings on the Global Configuration widget to open the Edit Global Configuration screen, then:
Enter the host name for your TrueNAS in Hostname. For example, host.
Enter the system domain name in Domain. For example, example.com.
Enter the IP addresses for your DNS name servers in the Nameserver 1, Nameserver 2, and/or Nameserver 3 fields. For home users, enter 8.8.8.8 in the Nameserver 1 field so your TrueNAS SCALE can communicate externally with the Internet.
Enter the IP address for your default gateway into the IPv4 Default Gateway if you are using IPv4 IP addresses. Enter the IP address in the IPv6 Default Gateway if you are using IPv6 addresses.
Select the Outbound Network radio button for outbound service capability.
Select Allow All to permit external communication for all TrueNAS SCALE services that need it, or select Deny All to prevent it. Select Allow Specific and then use the dropdown list to pick the services you want to allow to communicate externally.
Select as many services as needed. Unselected services cannot communicate externally.
Click Save. The Global Configuration widget on the Network screen updates to show the new settings.
Use the Global Configuration Settings screen to manage existing general network settings like the default gateway, DNS servers, set DHCP to assign the IP address or to set a static IP address, add IP address aliases, and set up services to allow external communication.
Disruptive Change
You can lose your TrueNAS connection if you change the network interface that the web interface uses!
You might need command line knowledge or physical access to the TrueNAS system to fix misconfigured network settings.
Use the Global Configuration Outbound Network radio buttons to set up services to have external communication capability.
These services use external communication:
Select Allow All to permit all the above services to communicate externally. This is the default setting.
Select Deny All to prevent all the above services from communicating externally.
Select Allow Specific to permit external communication for the services you select. Allow Specific displays a dropdown list of the services you can select. Click all that apply. A checkmark displays next to each selected service, and selected services display in the field separated by commas.
Click Save when finished.
Use Netwait to delay starting network services until the network is ready. Netwait pings each of the IP addresses you specify until one responds; after receiving a response, services can start.
To set up Netwait, from the Network screen:
Click on Settings in the Global Configuration widget to open the Global Configuration screen.
Select Enable Netwait Feature. The Netwait IP List field displays.
Enter your list of IP addresses to ping. Press Enter after entering each IP address.
Click Save when finished.
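The wait-until-reachable behavior Netwait provides can be illustrated with a short script. Netwait itself uses ICMP ping; because raw ICMP requires root privileges, this sketch substitutes a TCP reachability check, so treat it as an approximation of the idea rather than what TrueNAS runs internally.

```python
import socket
import time

def wait_for_any(addresses, port=22, timeout=2.0, interval=1.0, max_tries=30):
    """Return the first address that accepts a TCP connection.

    Netwait pings each address in its list until one responds; this
    sketch polls a TCP port instead, since ICMP needs root privileges.
    """
    for _ in range(max_tries):
        for addr in addresses:
            try:
                with socket.create_connection((addr, port), timeout=timeout):
                    return addr
            except OSError:
                pass  # not reachable yet; try the next address
        time.sleep(interval)
    raise TimeoutError("no address in the list responded")
```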
TrueNAS Enterprise
The instructions in this article only apply to SCALE Enterprise (HA) systems.
SCALE Enterprise (HA) systems use three static IP addresses for access to the UI:
Have the list of network addresses, name server and default gateway IP addresses, and host and domain names ready so you can complete the network configuration without disruption or system timeouts.
SCALE safeguards allow a default of 60 seconds to test and save changes to a network interface before reverting changes. This is to prevent users from breaking their network connection in SCALE.
Both controllers must be powered on and ready before you configure network settings.
You must disable the failover service before you can configure network settings!
Only configure network settings on controller 1! When ready to sync to peer, SCALE applies settings to controller 2 at that time.
To configure network settings on controller 1:
Disable the failover service. Go to System Settings > Services, locate the Failover service, and click the edit icon. Select Disable Failover and click Save.
Edit the global network settings to add any missing network settings or make any changes.
Edit the primary network interface to add failover settings. Go to Network and click on the primary interface eno1 to open the Edit Interface screen for this interface.
a. Turn DHCP off if it is on. Select DHCP to clear the checkbox.
b. Add the failover settings. Select Critical, and then select 1 on the Failover Group dropdown list.
c. Add the virtual IP (VIP) and controller 2 IP. Click Add for Aliases to display the additional IP address fields.
First, enter the IP address for controller 1 into IP Address (This Controller) and select the netmask (CIDR) number from the dropdown list.
Next, enter the controller 2 IP address into IP Address (TrueNAS Controller 2).
Finally, enter the VIP address into Virtual IP Address (Failover Address).
Click Save.
Click Test Changes after editing the interface settings. You have 60 seconds to test and then save changes before they revert. If this occurs, edit the interface again.
Turn failover back on. Go to System Settings > Failover and select Disable Failover to clear the checkmark and turn failover back on, then click Save.
The system might reboot. Monitor the status of controller 2 and wait until the controller is back up and running, then click Sync To Peer. Select Reboot standby TrueNAS controller and Confirm, then click Proceed to start the sync operation. The controller reboots, and SCALE syncs controller 2 with controller 1, which adds the network settings and pool to controller 2.
TrueNAS does not have defined static routes by default but TrueNAS administrators can use the Static Routes widget on the Network screen to manually enter routes so a router can send packets to a destination network.
If you have a monitor and keyboard connected to the system, you can use the Console Setup menu to configure static routes during the installation process, but we recommend using the web UI for all configuration tasks.
If you need a static route to reach portions of the network, from the Network screen:
Click Add in the Static Routes widget to open the Add Static Route screen.
Enter a value in Destination. Enter the destination IP address and CIDR mask in the format A.B.C.D/E where E is the CIDR mask.
Enter the gateway IP address for the destination address in Gateway.
(Optional) Enter a brief description for this static route, such as the part of the network it reaches.
Click Save.
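The Destination and Gateway formats described above can be checked with Python's standard ipaddress module. This is an illustrative sketch, not TrueNAS code; the function name parse_static_route is hypothetical:

```python
import ipaddress

def parse_static_route(destination: str, gateway: str):
    """Validate static route fields in the form the UI expects.

    destination: network in A.B.C.D/E form, where E is the CIDR mask
    gateway:     plain IP address of the gateway for that destination
    Raises ValueError if either value is malformed.
    """
    network = ipaddress.ip_network(destination, strict=False)
    gw = ipaddress.ip_address(gateway)
    return network, gw

# Example: a route to 192.168.10.0/24 via gateway 10.0.0.1
net, gw = parse_static_route("192.168.10.0/24", "10.0.0.1")
```

A malformed mask such as 192.168.10.0/33 raises ValueError, which mirrors the validation the UI performs on the Destination field.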
IPMI requires compatible hardware! Refer to your hardware documentation to determine if the TrueNAS web interface has IPMI options.
Many TrueNAS Storage Arrays have a built-in out-of-band management port that provides side-band management should the system become unavailable through the web interface.
Intelligent Platform Management Interface (IPMI) allows users to check the log, access the BIOS setup, and boot the system without physical access. IPMI also enables users to remotely access the system to assist with configuration or troubleshooting issues.
Some IPMI implementations require updates to work with newer versions of Java. See here for more information.
IPMI is configured in Network > IPMI. The IPMI configuration screen provides a shortcut to the most basic IPMI configuration.
We recommend setting a strong IPMI password. IPMI passwords must include at least one upper case letter, one lower case letter, one digit, and one special character (punctuation, e.g. ! # $ %, etc.). It must also be 8-16 characters long. Document your password in a secure way!
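The stated password policy can be expressed as a small check before you commit a password to your documentation. A hypothetical Python sketch (valid_ipmi_password is illustrative, not a TrueNAS function):

```python
import string

def valid_ipmi_password(pw: str) -> bool:
    """Check the IPMI password policy stated above: 8-16 characters
    with at least one uppercase letter, one lowercase letter,
    one digit, and one special (punctuation) character."""
    return (
        8 <= len(pw) <= 16
        and any(c.isupper() for c in pw)
        and any(c.islower() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in string.punctuation for c in pw)
    )
```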
After saving the configuration, users can access the IPMI interface using a web browser and the IP address specified in Network > IPMI. The management interface prompts for login credentials. Refer to your IPMI device documentation to learn the default administrator account credentials.
After logging in to the management interface, users can change the default administrative user name and create additional IPMI users. IPMI utility appearance and available functions vary by hardware.
SCALE Credential options are collected in this section of the UI and organized into a few different screens:
Local Users allows those with permissions to add, configure, and delete users on the system. There are options to search for keywords in usernames, display or hide user characteristics, and toggle whether the system shows built-in users.
Local Groups allows those with permissions to add, configure, and delete user groups on the system. There are options to search for keywords in group names, display or hide group characteristics, and toggle whether the system shows built-in groups.
Directory Services contains options to edit directory domain and account settings, set up Idmapping, and configure access and authentication protocols. Specific options include configuring Kerberos realms and key tables (keytab), as well as setting up LDAP validation.
Backup Credentials stores credentials for cloud backup services, SSH Connections, and SSH Keypairs. Users can set up backup credentials with cloud and SSH clients to back up data in case of drive failure.
Certificates contains all the information for certificates, certificate signing requests, certificate authorities, and DNS-authenticators. TrueNAS comes equipped with an internal, self-signed certificate that enables encrypted access to the web interface, but users can make custom certificates for authentication and validation while sharing data.
2FA allows users to set up Two-Factor Authentication for their system. Users can set up 2FA, then link the system to an authenticator app (such as Google Authenticator, LastPass Authenticator, etc.) on a mobile device.
The initial implementation of the TrueNAS SCALE administrator login permitted users to continue using the root user but encouraged users to create a local administrator account when first installing SCALE.
Starting with SCALE Bluefin 22.12.0, root account logins are deprecated for security hardening and to comply with Federal Information Processing Standards (FIPS). All TrueNAS users should create a local administrator account with all required permissions and begin using it to access TrueNAS. When the root user password is disabled, only an administrative user account can log in to the TrueNAS web interface.
TrueNAS SCALE plans to permanently disable root account access in a future release.
SCALE has implemented administrator roles and privileges that allow greater control over access to functions in SCALE and to further comply with FIPS security hardening standards. SCALE includes three predefined admin user account levels:
Full Admin - This is the local administrator account created by the system when doing a clean install using an iso file.
Sharing Admin - This is assigned to users responsible for only managing shares (SMB, NFS, iSCSI). This user can create shares and the datasets for shares, start/restart the share service, and modify the ACL for the share dataset.
Read-only Admin - This is assigned to users that can monitor the system but not make changes to settings.
At present, SCALE has both the root and local administrator user logins and passwords.
Root is the default system administration account for CORE, SCALE Angelfish, and early Bluefin releases.
Users migrating from CORE to SCALE or from pre-22.12.3 releases must manually create an admin user account. Only fresh installations using an iso file provide the option to create the admin user during the installation process. SCALE systems with only the root user account can log in to the TrueNAS web interface as the root user.
System administrators should thereafter create and begin using the admin login, and then disable the root user password.
SCALE 24.04 (Dragonfish) introduces administrator privileges and role-based administrator accounts. The root or local administrator user can create new administrators with limited privileges based on their needs. Predefined administrator roles are read-only, share admin, and the default full-access local administrator account.
As part of security hardening and to comply with Federal Information Processing standards (FIPS), iXsystems plans to completely disable root login in a future release.
All systems should create the local administrator account and use this account for web interface access. When properly set up, the local administrator (full admin) account performs the same functions and has the same access as the root user.
Some UI screens and settings still refer to the root account, but these references are being updated to the administrator account in future releases of SCALE.
To improve system security after the local administrator account is created, disable the root account password to restrict root access to the system.
For more information on the different administrator scenarios users can encounter, read Logging Into SCALE the First Time.
As a security measure, the root user is no longer the default account and the password is disabled when you create the admin user during installation.
Do not disable the admin account and root passwords at the same time. If both root and admin account passwords become disabled at the same time and the web interface session times out, a one-time sign-in screen allows access to the system.
Enter and confirm a password to gain access to the UI. After logging in, immediately go to Credentials > Local Users to enable the root or admin password before the session times out again. This temporary password is not saved as a new password and it does not enable the admin or root passwords, it only provides one-time access to the UI.
When disabling a password for UI login, it is also disabled for SSH access.
To enable SSH to access the system as the admin user (or for root):
Configure the SSH service.
a. Go to System Settings > Services, then select Configure for the SSH service.
b. Select Log in as Root with Password to enable the root user to sign in as root.
Select Log in as Admin with Password and Allow Password Authentication to enable the admin user to sign in as admin. Select both options.
c. Click Save and restart the SSH service.
Configure or verify the user configuration options to allow SSH access.
If you want to SSH into the system as the root user, you must enable a password for the root user. If the root password is disabled in the UI, you cannot use it to gain SSH access to the system.
To allow the admin user to issue commands in an SSH session, edit the admin user and select which sudo options are allowed. Select SSH password login enabled to allow authenticating and logging in to an SSH session. Disable this after completing the SSH session to return to a security-hardened system.
Select Allow all sudo commands with no password. You see a prompt in the SSH session to enter a password the first time you enter a sudo command, but you do not see this password prompt again in the same session.
To use two-factor authentication with the administrator account (root or admin user), first configure and enable SSH service to allow SSH access, then configure two-factor authentication. If you have the root user configured with a password and enable it, you can SSH into the system with the root user. Security best practice is to disable the root user password and only use the local administrator account.
At present, administrator logins work with TrueCommand but you need to set up the TrueNAS connection using an API key.
In TrueNAS, user accounts allow flexibility for accessing shared data. Typically, administrators create users and assign them to groups. Doing so makes tuning permissions for large numbers of users more efficient.
When the network uses a directory service, import the existing account information using the instructions in Directory Services.
Using Active Directory requires setting Windows user passwords in Windows.
To see user accounts, go to Credentials > Local Users.
TrueNAS hides all built-in users (except root) by default. Click the toggle Show Built-In Users to see all built-in users.
All CORE systems migrating to SCALE, and all Angelfish and early Bluefin releases of SCALE upgrading to 22.12.3+ or to later SCALE major versions should create and begin using an admin user instead of the root user. After migrating or upgrading from CORE or a pre-SCALE 22.12.3 release to a later SCALE release, use this procedure to create the Local Administrator user.
Go to Credentials > Local Users and click Add.
Enter the name to use for the administrator account, for example, admin. You can create multiple admin users with any name and assign different administration privileges to each.
Enter and confirm the admin user password.
Select builtin_administrators on the Auxiliary Group dropdown list.
Add the home directory for the new admin user. Enter or browse to select the location where SCALE creates the home directory. For example, /mnt/tank. If you created a dataset to use for home directories, select that dataset. Select the Read, Write, and Execute permissions for User, Group, and Other this user should have, then select Create Home Directory.
Select the shell for this admin user from the Shell dropdown list. Options are nologin, TrueNAS CLI, TrueNAS Console, sh, bash, rbash, dash, tmux, and zsh.
Select the sudo authorization permissions for this admin user. Some applications, such as Nextcloud, require sudo permissions for the administrator account. For administrator accounts generated during the initial installation process, TrueNAS SCALE sets authorization to Allow all sudo commands.
Click Save. The system adds the user to the builtin-users group after clicking Save.
Log out of the TrueNAS system and then log back in using the admin user credentials to verify that the admin user credentials work properly with your network configuration.
After adding the admin user account, disable the root user password:
Go to Credentials > Local Users, click on the root user, and select Edit. Click the Disable Password toggle to disable the password, then click Save.
When creating a user, you must configure the required settings described in the steps below. All other settings are optional. Click Save after configuring the user settings to add the user.
To create a new user, click Add.
Enter a personal name or description in Full Name, for example, John Doe or Share Anonymous User, then either allow TrueNAS to suggest a simplified name derived from the Full Name or enter a name in Username.
Enter and confirm a password for the user.
Make sure the login password is enabled. Click the Disable Password toggle to enable/disable the login password.
Setting the Disable Password toggle to active (blue toggle) disables password-based functions, such as web UI and SSH logins.
Enter a user account email address in the Email field if you want this user to receive notifications.
Accept the default user ID or enter a new UID. TrueNAS suggests a user ID starting at 3000, but you can change it if you wish. We recommend using an ID of 3000 or greater for non-built-in users.
Leave the Create New Primary Group toggle enabled to allow TrueNAS to create a new primary group with the same name as the user. To add the user to a different existing primary group, disable the Create New Primary Group toggle and search for a group in the Primary Group field. To add the user to more groups use the Auxiliary Groups dropdown list.
Configure a home directory and permissions for the user. Some functions, such as replication tasks, require setting a home directory for the user configuring the task.
When creating a user, the home directory path is set to
SCALE 24.04 changes the default user home directory location from /nonexistent to /var/empty. This new directory is an immutable directory shared by service accounts and accounts that should not have a full home directory.
The 24.04.01 maintenance release introduces automated migration to force home directories of existing SMB users from /nonexistent to /var/empty.
Select Read, Write, and Execute for each role (User, Group, and Other) to set access control for the user home directory. Built-in users are read-only and cannot modify these settings.
Assign a public SSH key to a user for key-based authentication by entering or pasting the public key into the Authorized Keys field. You can click Choose File under Upload SSH Key and browse to the location of an SSH key file.
Do not paste the private key.
Always keep a backup of an SSH public key if you are using one.
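As a sanity check before pasting into Authorized Keys, you can inspect the line to confirm it is public key material rather than a private key. A hedged Python sketch (looks_like_public_key is a hypothetical helper, and the key-type list is not exhaustive):

```python
import base64
import binascii

# Common OpenSSH public key types (not exhaustive)
KEY_TYPES = ("ssh-ed25519", "ssh-rsa", "ecdsa-sha2-nistp256",
             "ecdsa-sha2-nistp384", "ecdsa-sha2-nistp521")

def looks_like_public_key(line: str) -> bool:
    """Heuristic sanity check for a pasted OpenSSH public key line."""
    line = line.strip()
    if line.startswith("-----BEGIN"):
        return False  # private key material -- never paste this
    parts = line.split()
    if len(parts) < 2 or parts[0] not in KEY_TYPES:
        return False
    try:
        blob = base64.b64decode(parts[1], validate=True)
    except (binascii.Error, ValueError):
        return False
    # The binary blob embeds the key type as a length-prefixed string
    return blob[4:4 + len(parts[0])] == parts[0].encode()
```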
As of SCALE 24.04, users assigned to the truenas_readonly_administrators group cannot access the Shell screen.
Select the shell option for the admin user from the Shell dropdown list. Options are nologin, TrueNAS CLI, TrueNAS Console, sh, bash, rbash, dash, tmux, and zsh.
To disable all password-based functionality for the account, select Lock User. Clear the checkbox to unlock the user.
Set the sudo permissions you want to assign this user. Exercise caution when allowing sudo commands, especially without password prompts. We recommend limiting this privilege to trusted users and specific commands to minimize security risks.
Allowed sudo commands, Allow all sudo commands, Allowed sudo commands with no password and Allow all sudo commands with no password grant the account limited root-like permissions using the sudo command.
If selecting Allowed sudo commands or Allowed sudo commands with no password, enter the specific sudo commands allowed for this user.
Enter each command as an absolute path to the ELF (Executable and Linkable Format) executable file, for example, /usr/bin/nano.
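The absolute-path requirement can be pre-checked before saving the user. A hypothetical sketch (valid_sudo_entry is illustrative only; whether the file actually exists and is executable can only be verified on the TrueNAS host itself):

```python
import os

def valid_sudo_entry(path: str) -> bool:
    """Check that a sudo command entry is an absolute path to a file,
    e.g. /usr/bin/nano, rather than a bare command name or a directory."""
    return os.path.isabs(path) and not path.endswith(os.sep)
```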
Leave Samba Authentication selected to allow using the account credentials to access data shared with SMB.
Click Save.
To edit an existing user account, go to Credentials > Local Users. Click anywhere on the user row to expand the user entry, then click Edit to open the Edit User configuration screen. See Local User Screens for details on all settings.
TrueNAS offers groups as an efficient way to manage permissions for many similar user accounts. See Users for managing users. The interface lets you manage UNIX-style groups. If the network uses a directory service, import the existing account information using the instructions in Active Directory.
To see saved groups, go to Credentials > Local Groups.
By default, TrueNAS hides the system built-in groups. To see built-in groups, click the Show Built-In Groups toggle. The toggle turns blue and all built-in groups display. Click the Show Built-In Groups toggle again to show only non-built-in groups on the system.
To create a group, go to Credentials > Local Groups and click Add.
Enter a unique number in GID for the group ID that TrueNAS uses to identify a Unix group. Enter a number above 3000 for a group with user accounts, or enter the default port number of a system service as the GID for that service.
Enter a name for the group. The group name cannot begin with a hyphen (-) or contain a space, tab, or any of these characters: colon (:), plus (+), ampersand (&), hash (#), percent (%), caret (^), open or close parentheses ( ), exclamation mark (!), at symbol (@), tilde (~), asterisk (*), question mark (?), greater than (>), less than (<), or equal (=). You can only use the dollar sign ($) as the last character in a group name.
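These naming rules can be summarized in a small validator. This is an illustrative Python sketch (valid_group_name is a hypothetical helper, not part of TrueNAS):

```python
# Characters the group name may never contain (per the rules above)
FORBIDDEN = set(" \t:+&#%^()!@~*?<>=")

def valid_group_name(name: str) -> bool:
    """Check a group name against the rules stated above:
    no leading hyphen, no forbidden characters, and '$' only
    allowed as the final character."""
    if not name or name.startswith("-"):
        return False
    for i, ch in enumerate(name):
        if ch in FORBIDDEN:
            return False
        if ch == "$" and i != len(name) - 1:
            return False
    return True
```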
Allowed sudo commands, Allow all sudo commands, Allowed sudo commands with no password and Allow all sudo commands with no password grant members of the group limited root-like permissions using the sudo command.
Use Allowed sudo commands or Allowed sudo commands with no password to list specific sudo commands allowed for group members.
Enter each command as an absolute path to the ELF (Executable and Linkable Format) executable file, for example /usr/bin/nano.
Exercise caution when allowing sudo commands, especially without password prompts. We recommend limiting this privilege to trusted users and specific commands to minimize security risks.
To allow Samba permissions and authentication to use this group, select Samba Authentication.
To allow more than one group to have the same group ID (not recommended), select Allow Duplicate GIDs. Use only if absolutely necessary, as duplicate GIDs can lead to unexpected behavior.
Click anywhere on a row to expand that group and show the group management buttons.
Use Members to manage membership and Edit or Delete to manage the group.
To manage group membership, go to Credentials > Local Groups, expand the group entry, and click Members to open the Update Members screen.
To add a user account to the group, select the user and then click the right arrow icon.
To remove a user account from the group, select the user and then click the left arrow icon.
To select multiple users, press Ctrl and click on each entry.
Click Save.
To edit an existing group, go to Credentials > Local Groups, expand the group entry, and click Edit to open the Edit Group configuration screen. See Local Group Screens for details on all settings.
The SCALE Directory Services tutorials contain options to edit directory domain and account settings, set up ID mapping, and configure authentication and authorization services in TrueNAS SCALE.
When setting up directory services in TrueNAS, you can connect TrueNAS to either an Active Directory or an LDAP server but not both.
To view Idmap and Kerberos Services, click Show next to Advanced Settings.
The Active Directory (AD) service shares resources in a Windows network. AD provides authentication and authorization services for the users in a network, eliminating the need to recreate the user accounts on TrueNAS.
When joined to an AD domain, you can use domain users and groups in local ACLs on files and directories. You can also set up shares to act as a file server.
Joining an AD domain also configures the Pluggable Authentication Modules (PAM) to let domain users log on via SSH or authenticate to local services.
Users can configure AD services on Windows or Unix-like operating systems using Samba version 4.
To configure an AD connection, you must know the AD controller domain and the AD system account credentials.
Users can take a few steps before configuring Active Directory (AD) to ensure the connection process goes smoothly.
Obtain the AD admin account name and password.
After taking these actions, you can connect to the Active Directory domain.
To confirm that name resolution is functioning, use the Shell to issue a ping command and a command to check network SRV records and verify DNS resolution.
To use dig to verify name resolution and return DNS information, go to System Settings > Shell and type dig to check the connection to the AD domain controller.
The domain controller manages or restricts access to domain resources by authenticating user identity from one domain to the other through login credentials, and it prevents unauthorized access to these resources. The domain controller applies security policies to request-for-access domain resources.
When TrueNAS sends and receives packets without loss, the connection is verified.
Press Ctrl + C to cancel the ping command.
Also using Shell, check the network SRV records and verify DNS resolution by entering the command host -t srv <_ldap._tcp.domainname.com>, where <_ldap._tcp.domainname.com> is the domain name for the AD domain controller.
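The name-resolution half of this check can also be sketched with Python's standard library. Note that SRV lookups still require a tool such as host or dig, because the standard library resolver cannot query SRV records; the resolves helper below is hypothetical:

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the system resolver can resolve the hostname.

    A minimal stand-in for the ping/dig name-resolution check above;
    it does not test reachability or SRV records.
    """
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False
```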
Active Directory relies on the time-sensitive Kerberos protocol. TrueNAS adds the AD domain controller with the PDC Emulator FSMO Role as the preferred NTP server during the domain join process. If your environment requires something different, go to System Settings > General to add or edit a server in the NTP Servers window.
Keep the local system time sync within five (5) minutes of the AD domain controller time in a default AD environment.
Use an external time source when configuring a virtualized domain controller. TrueNAS generates alerts if the system time gets out-of-sync with the AD domain controller time.
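The five-minute tolerance amounts to a simple clock-skew comparison. A hypothetical Python sketch (within_kerberos_tolerance is illustrative; real deployments rely on NTP rather than manual checks):

```python
from datetime import datetime, timedelta, timezone

# Default Kerberos clock-skew tolerance in an AD environment
MAX_DRIFT = timedelta(minutes=5)

def within_kerberos_tolerance(local_time: datetime, dc_time: datetime) -> bool:
    """True when the local clock is within the allowed drift of the
    AD domain controller clock."""
    return abs(local_time - dc_time) <= MAX_DRIFT
```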
TrueNAS has a few options to ensure both systems are synchronized:
To connect to Active Directory, in SCALE:
Go to Credentials > Directory Services and click Configure Active Directory to open the Active Directory configuration screen.
Enter the domain name for the AD in Domain Name and the account credentials in Domain Account Name and Domain Account Password.
Select Enable to attempt to join the AD domain immediately after saving the configuration. SCALE populates the Kerberos Realm and Kerberos Principal fields on the Advanced Options settings screen.
TrueNAS offers advanced options for fine-tuning the AD configuration, but the preconfigured defaults are generally suitable.
When the import completes, AD users and groups become available while configuring basic dataset permissions or an ACL with TrueNAS cache enabled (enabled by default).
Joining AD also adds default Kerberos realms and generates a default AD_MACHINE_ACCOUNT keytab. TrueNAS automatically begins using this default keytab and removes any administrator credentials stored in the TrueNAS configuration file.
If the cache becomes out of sync or fewer users than expected are available in the permissions editors, resync it by clicking Settings in the Active Directory window and selecting Rebuild Directory Service Cache.
When creating the entry, enter the TrueNAS hostname in the name field and make sure it matches the information on the Network > Global Configuration screen in the Hostname and NetBIOS fields.
You can disable your AD server connection without deleting your configuration or leaving the AD domain. Click Settings to open the Active Directory settings screen, then select the Enable checkbox to clear it, and click Save to disable SCALE AD service. This returns you to the main Directory Services screen where you see the two main directory services configuration options.
Click Configure Active Directory to open the Active Directory screen with your existing configuration settings. Select Enable again, then click Save to reactivate your connection to your AD server.
TrueNAS SCALE requires you to cleanly leave an Active Directory domain if you want to delete the configuration. To cleanly leave AD, use the Leave Domain button on the Active Directory Advanced Settings screen to remove the AD object, then remove the computer account and associated DNS records from the Active Directory.
If the AD server moves or shuts down without you using Leave Domain, TrueNAS does not remove the AD object, and you have to clean up the Active Directory.
TrueNAS has an Open LDAP client for accessing the information on an LDAP server. An LDAP server provides directory services for finding network resources like users and their associated permissions.
You can have either Active Directory or LDAP configured on SCALE but not both.
To configure SCALE to use an LDAP directory server:
Go to Credentials > Directory Services and click Configure LDAP.
Enter your LDAP server host name. If using a cloud service LDAP server, do not include the full URL.
Enter your LDAP server base DN. This is the top level of the LDAP directory tree to use when searching for resources.
Enter the bind DN (administrative account name for the LDAP server) and the bind password.
Select Enable to activate the server configuration.
Click Save.
If you want to further modify the LDAP configuration, click Advanced Options. See the LDAP UI Reference article for details about advanced settings.
To disable LDAP but not remove the configuration, clear the Enable checkbox. The main Directory Services screen returns to the default view showing the options to configure Active Directory or LDAP. To enable LDAP again, click Configure LDAP to open the LDAP screen with your saved configuration. Select Enable again to reactivate your LDAP directory server configuration.
To remove the LDAP configuration, click Settings to open the LDAP screen. Clear all settings and click Save.
Kerberos is a computer network security protocol. It authenticates service requests between trusted hosts across an untrusted network (i.e., the Internet). Kerberos is extremely complex. Only system administrators experienced with configuring Kerberos should attempt it. Misconfiguring Kerberos settings, realms, and keytabs can have a system-wide impact beyond Active Directory or LDAP, and can result in system outages. Do not attempt to configure or make changes if you do not know what you are doing!
If you configure Active Directory in SCALE, SCALE populates the realm fields and the keytab with what it discovers in AD. You can configure LDAP to communicate with other LDAP servers using Kerberos, or NFS if it is properly configured, but SCALE does not automatically add the realm or keytab for these services.
After AD populates the Kerberos realm and keytabs, do not make changes. Consult with your IT or network services department, or those responsible for the Kerberos deployment in your network environment for help. For more information on Kerberos settings refer to the MIT Kerberos Documentation.
Kerberos uses realms and keytabs to authenticate clients and servers. A Kerberos realm is an authorized domain that a Kerberos server can use to authenticate a client. By default, TrueNAS creates a Kerberos realm for the local system. A keytab (“key table”) is a file that stores encryption keys for authentication.
TrueNAS SCALE allows users to configure general Kerberos settings, as well as realms and keytabs.
TrueNAS automatically generates a realm after you configure AD.
Users can configure Kerberos realms by navigating to Directory Services and clicking Add in the Kerberos Realms window.
Enter the realm and key distribution center (KDC) names, then define the admin and password servers for the realm.
Click Save.
TrueNAS automatically generates a keytab after you configure AD.
A Kerberos keytab replaces the administration credentials for Active Directory after initial configuration. Since TrueNAS does not save the Active Directory or LDAP administrator account password in the system database, keytabs can be a security risk in some environments.
When using a keytab, create and use a less-privileged account to perform queries. TrueNAS stores that account password in the system database.
After generating the keytab, go back to Directory Services in TrueNAS and click Add in the Kerberos Keytab window to add it to TrueNAS.
To make AD use the keytab, click Settings in the Active Directory window and select it using the Kerberos Principal dropdown list.
When using a keytab with AD, ensure the keytab username and password match the Domain Account Name and Domain Account Password.
To make LDAP use a keytab principal, click Settings in the LDAP window and select the keytab using the Kerberos Principal dropdown list.
If you do not understand Kerberos auxiliary parameters, do not attempt to configure new settings!
The Kerberos Settings screen includes two fields used to configure auxiliary parameters.
Kerberos is extremely complex. Only system administrators experienced with configuring Kerberos should attempt it. Misconfiguring Kerberos settings, realms, and keytabs can have a system-wide impact beyond Active Directory or LDAP, and can result in system outages. Do not attempt to configure or make changes if you do not know what you are doing!
Idmap settings integrate with an existing directory domain to ensure that UIDs and GIDs assigned to Active Directory users and groups have consistent values domain-wide. The correct configuration therefore relies on details that are entirely external to the TrueNAS server, for example, how the AD administrator has configured other Unix-like computers in the environment.
The default is to use an algorithmic method of generating IDs based on the RID component of the user or group SID in Active Directory.
Only administrators experienced with configuring Id mapping should attempt to add new or edit existing idmaps. Misconfiguration can lead to permissions incorrectly assigned to users or groups in the case where data is transferred to/from external servers via ZFS replication or rsync (or when access is performed via NFS or other protocols that directly access the UIDs/GIDs on files).
The Idmap directory service lets users configure and select a backend to map Windows security identifiers (SIDs) to UNIX UIDs and GIDs. Users must enable the Active Directory service to configure and use identity mapping (Idmap).
Users can click Add in the Idmap widget to configure backends or click on an already existing Idmap to edit it.
TrueNAS automatically generates an Idmap after you configure AD or LDAP.
From the Directory Services screen, click Show to the right of Advanced Settings and then click Confirm to close the warning dialog.
Click Add on the Idmap widget to open the Idmap Settings screen.
Select the type from the Name field dropdown. Screen settings change based on the selection.
Select the Idmap Backend type from the dropdown list. Screen settings change based on the backend selected.
Enter the required field values.
Click Save.
TrueNAS backup credentials store cloud backup services credentials, SSH connections, and SSH keypairs. Users can set up backup credentials with cloud and SSH clients to back up data in case of drive failure.
The Cloud Credentials widget on the Backup Credentials screen allows users to integrate TrueNAS with cloud storage providers.
These providers are supported for Cloud Sync tasks in TrueNAS SCALE:
To maximize security, TrueNAS encrypts cloud credentials when saving them. However, this means that to restore any cloud credentials from a TrueNAS configuration file, you must enable Export Password Secret Seed when generating that configuration backup. Remember to protect any downloaded TrueNAS configuration files.
Authentication methods for each provider can differ based on the provider's security requirements. You can add credentials for many of the supported cloud storage providers using the information on the Cloud Credentials Screens. This article provides instructions for the more involved providers.
We recommend opening another browser tab and logging in to the cloud storage provider account you intend to link with TrueNAS.
Some providers require additional information that they generate on the storage provider account page. For example, saving an Amazon S3 credential on TrueNAS could require logging in to the S3 account and generating an access key pair found on the Security Credentials > Access Keys page.
Have any authentication information your cloud storage provider requires on hand to make the process easier. Authentication information can include, but is not limited to, user credentials, access tokens, and access and security keys.
To set up a cloud credential, go to Credentials > Backup Credentials and click Add in the Cloud Credentials widget.
Select the cloud service from the Provider dropdown list. The authentication settings required by that provider display.
For details on each provider's authentication settings, see Cloud Credentials Screens.
Click Verify Credentials to test the entered credentials and verify they work.
Click Save.
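As a sketch of what a saved credential looks like behind the UI, the following builds an S3-style credential payload. The field names (name, provider, attributes) follow the general shape of the TrueNAS API but are assumptions here; consult the built-in API Docs for the authoritative schema.

```python
import json

def s3_credential_payload(name: str, access_key: str, secret_key: str) -> str:
    """Build a cloud credential payload. The shape shown is an assumption,
    not the authoritative TrueNAS API schema -- check the built-in API docs."""
    payload = {
        "name": name,
        "provider": "S3",
        "attributes": {
            "access_key_id": access_key,
            "secret_access_key": secret_key,
        },
    }
    # Never log secrets; redact before printing or writing to config files.
    redacted = {**payload, "attributes": {k: "***" for k in payload["attributes"]}}
    print(json.dumps(redacted))
    return json.dumps(payload)
```

The redaction step mirrors the advice above about protecting downloaded configuration files: credential secrets should never appear in plain text outside the encrypted system store.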
The process to set up a Storj-TrueNAS account, create buckets, create the S3 access, and download the credentials is documented fully in the Adding Storj Cloud Credentials section of Adding a Storj Cloud Sync Task.
If adding an Amazon S3 cloud credential, you can use the default authentication settings or use advanced settings if you want to include endpoint settings.
Cloud storage providers using OAuth as an authentication method are Box, Dropbox, Google Drive, Google Photos, pCloud, and Yandex.
BackBlaze B2 uses an application key and key ID to authenticate credentials.
Google Cloud Storage uses a service account JSON file to authenticate credentials.
OpenStack Swift authentication credentials change based on selections made in AuthVersion. All options use the user name, API key or password, and authentication URL, and can use the optional endpoint settings.
Some providers can automatically populate the required authentication strings by logging in to the account.
The SSH Connections and SSH Keypairs widgets on the Backup Credentials screen display a list of SSH connections and keypairs configured on the system. Using these widgets, users can establish Secure Shell (SSH) connections.
You must also configure and activate the SSH Service to allow SSH access.
To begin setting up an SSH connection, go to Credentials > Backup Credentials.
Click Add on the SSH Connections widget.
This procedure uses the semi-automatic setup method for creating an SSH connection with other TrueNAS or FreeNAS systems.
Follow these instructions to set up an SSH connection to a non-TrueNAS or non-FreeNAS system. To manually set up an SSH connection, you must copy a public encryption key from the local system to the remote system. A manual setup allows a secure connection without a password prompt.
This procedure covers adding a public SSH key to the admin account on the TrueNAS SCALE system and generating a new SSH Keypair to add to the remote system (TrueNAS or other).
TrueNAS generates and stores RSA-encrypted SSH public and private keypairs on the SSH Keypairs widget found on the Credentials > Backup Credentials screen. Keypairs are generally used when configuring SSH Connections or SFTP Cloud Credentials. TrueNAS does not support encrypted keypairs or keypairs with passphrases.
TrueNAS automatically generates keypairs as needed when creating new SSH Connections or Replication tasks.
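When copying a public key string between systems, a truncated or mangled paste is a common failure. This hedged sketch checks that the declared key type of an OpenSSH public-key line matches the type embedded in its base64 blob; the demo blob is minimal and non-functional, built only to exercise the check.

```python
import base64
import struct

def pubkey_type(line: str) -> str:
    """Return the key type of an OpenSSH public-key line, verifying that the
    declared type (first field) matches the type embedded in the base64 blob."""
    declared, b64 = line.split()[:2]
    blob = base64.b64decode(b64)
    # The blob starts with a 4-byte big-endian length, then the type string.
    (length,) = struct.unpack(">I", blob[:4])
    embedded = blob[4:4 + length].decode()
    if embedded != declared:
        raise ValueError(f"type mismatch: {declared!r} vs {embedded!r}")
    return declared

# Build a minimal (non-functional) blob just to demonstrate the check.
blob = struct.pack(">I", len(b"ssh-rsa")) + b"ssh-rsa"
line = "ssh-rsa " + base64.b64encode(blob).decode() + " demo@host"
print(pubkey_type(line))  # ssh-rsa
```

A check like this catches a paste that dropped characters before you save the connection and wonder why authentication fails.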
To manually create a new keypair:
Click the vertical ellipsis at the bottom of the SSH Keypairs configuration screen to download these strings as text files for later use.
Use the Certificates, Certificate Signing Requests (CSRs), Certificate Authorities (CA), and ACME DNS-Authenticators widgets on the Credentials > Certificates screen to manage certificates, CSRs, CAs, and ACME DNS-authenticators.
Each TrueNAS system comes equipped with an internal, self-signed certificate that enables encrypted access to the web interface, but users can create custom certificates for authentication and validation while sharing data.
The Certificates screen widgets display information for certificates, certificate signing requests (CSRs), certificate authorities (CAs), and ACME DNS-authenticators configured on the system, and provide the ability to add new ones.
Users can import or create additional certificates by clicking Add in the Certificates widget.
To add a new certificate:
Click Add on the Certificates widget to open the Add Certificates wizard.
First, enter a name as the certificate identifier and select the type. The Identifier and Type step lets users name the certificate and choose whether to use it for internal or local systems, or to import an existing certificate. Users can also select a predefined certificate extension from the Profiles dropdown list.
Next, specify the certificate options. Select the Key Type, as this selection changes the settings displayed. The Certificate Options step provides options for choosing the signing certificate authority (CA), the type of private key to use (as well as the number of bits in the key used by the cryptographic algorithm), the cryptographic algorithm the certificate uses, and how many days the certificate remains valid.
Now enter the certificate location and basic information. The Certificate Subject step lets users define the location, name, and email for the organization using the certificate. Users can also enter the system fully qualified domain name (FQDN) and any additional domains for multi-domain support.
Lastly, select any extension types you want to apply. Selecting Extended Key displays settings for Key Usage settings as well. Select any extra constraints you need for your scenario. The Extra Constraints step contains certificate extension options.
Review the certificate options. If you want to change something, click Back to reach the screen with the setting you want to change, then click Next to advance to the Confirm Options step.
Click Save to add the certificate.
To import a certificate, first select Import Certificate as the Type and name the certificate.
Next, if the CSR exists on your SCALE system, select CSR exists on this system and then select the CSR.
Copy and paste the certificate and private keys into their fields, and enter and confirm the passphrase for the certificate if one exists.
Review the options, and then click Save.
The Certificate Authorities widget lets users set up a certificate authority (CA) that certifies the ownership of a public key by the named subject of the certificate.
To add a new CA:
First, add the name and select the type of CA.
The Identifier and Type step lets users name the CA and choose whether to create a new CA or import an existing CA.
Users can also select a predefined certificate extension from the Profiles drop-down list.
Next, enter the certificate options. Select the key type. The Key Type selection changes the settings displayed. The Certificate Options step provides options for choosing what type of private key to use (as well as the number of bits in the key used by the cryptographic algorithm), the cryptographic algorithm the CA uses, and how many days the CA lasts.
Now enter the certificate subject information.
The Certificate Subject step lets users define the location, name, and email for the organization using the certificate.
Users can also enter the system fully qualified domain name (FQDN) and any additional domains for multi-domain support.
Lastly, enter any extra constraints you need for your scenario. The Extra Constraints step contains certificate extension options.
Review the CA options. If you want to change something, click Back to reach the screen with the setting you want to change, then click Next to advance to the Confirm Options step.
Click Save to add the CA.
The Certificate Signing Requests widget allows users to configure the messages the system sends to a registration authority of the public key infrastructure to apply for a digital identity certificate.
To add a new CSR:
First enter the name and select the CSR type.
The Identifier and Type step lets users name the certificate signing request (CSR) and choose whether to create a new CSR or import an existing CSR.
Users can also select a predefined certificate extension from the Profiles drop-down list.
Next, select the certificate options for the CSR you selected. The Certificate Options step provides options for choosing the type of private key to use, the number of bits in the key used by the cryptographic algorithm, and the cryptographic algorithm the CSR uses.
Now enter the information about the certificate.
The Certificate Subject step lets users define the location, name, and email for the organization using the certificate.
Users can also enter the system fully qualified domain name (FQDN) and any additional domains for multi-domain support.
Lastly, enter any extra constraints you need for your scenario. The Extra Constraints step contains certificate extension options.
Review the certificate options. If you want to change something, click Back to reach the screen with the setting you want to change, then click Next to advance to the Confirm Options step.
Click Save to add the CSR.
Automatic Certificate Management Environment (ACME) DNS authenticators allow users to automate certificate issuing and renewal. The user must verify ownership of the domain before TrueNAS allows certificate automation.
ACME DNS is an advanced feature intended for network administrators or AWS professionals. Misconfiguring ACME DNS can prevent you from accessing TrueNAS.
The system requires an ACME DNS Authenticator and CSR to configure ACME certificate automation.
To add an authenticator:
Click Add on the ACME DNS-Authenticator widget to open the Add DNS Authenticator screen.
Enter a name, and select the authenticator you want to configure. Options are cloudflare, Amazon route53, OVH, and shell. Authenticator selection changes the configuration fields.
If you select cloudflare as the authenticator, you must enter your Cloudflare account email address, API key, and API token.
If you select route53 as the authenticator, you must enter your Route53 Access key ID and secret access key. See AWS documentation for information on creating a long-term access key with these credentials.
If you select OVH as the authenticator, you must enter your OVH application key, application secret, consumer key, and endpoint. See OVHcloud and certbot-dns-ovh for information on retrieving these credentials and configuring access.
Click Save to add the authenticator.
The shell authenticator option is meant for advanced users. Improperly configured scripts can result in system instability or unexpected behavior.
If you select shell as the authenticator, you must enter the path to an authenticator script, the running user, a certificate timeout, and a domain propagation delay.
Advanced users can select this option to pass an authenticator script, such as acme.sh, to shell and add an external DNS authenticator. Requires an ACME authenticator script saved to the system.
TrueNAS SCALE allows users to automatically generate custom domain certificates using Let’s Encrypt.
Go to Credentials > Certificates and click ADD in the ACME DNS-Authenticators widget.
Enter the required fields depending on your provider, then click Save.
For Cloudflare, enter either your Cloudflare Email and API Key, or enter an API Token. If you create an API Token, make sure to give the token the permission Zone.DNS:Edit, as it’s required by certbot.
For Route53, enter your Access Key ID and Secret Access Key. The associated IAM user must have permission to perform the Route53 actions ListHostedZones, ChangeResourceRecordSets, and GetChange.
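A minimal IAM policy granting only those three actions might look like the following sketch. The hosted zone ID is a placeholder, and the exact resource scoping is an assumption to adapt to your AWS account.

```python
import json

# Sketch of a least-privilege IAM policy for Route53 DNS validation.
# "YOURHOSTEDZONEID" is a placeholder; scoping ChangeResourceRecordSets to a
# single hosted zone is tighter than "*" but is an assumption, not a mandate.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["route53:ListHostedZones", "route53:GetChange"],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": ["route53:ChangeResourceRecordSets"],
            "Resource": "arn:aws:route53:::hostedzone/YOURHOSTEDZONEID",
        },
    ],
}
print(json.dumps(policy, indent=2))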
For OVH, enter your OVH Application Key, OVH Application Secret, OVH Consumer Key, and OVH Endpoint.
Next, click ADD in the Certificate Signing Requests widget.
You can use default settings except for the Common Name and Subject Alternate Names fields.
Enter your primary domain name in the Common Name field, then enter additional domains you wish to secure in the Subject Alternate Names field.
For example, if your primary domain is domain1.com, entering www.domain1.com in Subject Alternate Names secures both addresses.
Click the icon next to the new CSR.
Fill out the ACME Certificate form. Under Domains, select the ACME DNS Authenticator you created for both domains, then click Save.
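The Common Name and Subject Alternate Names behavior above can be illustrated with a simplified hostname check: a certificate only covers names listed in the CN or SANs, which is why www.domain1.com must be added explicitly. Real TLS validation applies more rules than this sketch.

```python
from fnmatch import fnmatch

def hostname_matches(hostname: str, common_name: str, sans: list) -> bool:
    """Simplified check: a certificate covers a name only if it appears in the
    CN or Subject Alternate Names. Wildcard patterns like *.domain1.com are
    matched naively here; real TLS validation is stricter than fnmatch."""
    for pattern in [common_name, *sans]:
        if fnmatch(hostname, pattern):
            return True
    return False

print(hostname_matches("www.domain1.com", "domain1.com", ["www.domain1.com"]))   # True
print(hostname_matches("mail.domain1.com", "domain1.com", ["www.domain1.com"]))  # False
```

The second call shows the practical consequence: a subdomain you forgot to list in the CSR triggers certificate warnings even though the parent domain is covered.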
You can create testing and staging certificates for your domain.
TrueNAS Enterprise
KMIP is only available for TrueNAS SCALE Enterprise licensed systems. Contact the iXsystems Sales Team to inquire about purchasing TrueNAS Enterprise licenses.
The Key Management Interoperability Protocol (KMIP) is an extensible client/server communication protocol for storing and maintaining keys, certificates, and secret objects. KMIP on TrueNAS SCALE Enterprise integrates the system within an existing centralized key management infrastructure and uses a single trusted source for creating, using, and destroying SED passwords and ZFS encryption keys.
With KMIP, keys created on a single server are then retrieved by TrueNAS. KMIP supports keys wrapped within keys, symmetric keys, and asymmetric keys. KMIP enables clients to ask a server to encrypt or decrypt data without the client ever having direct access to a key. You can also use KMIP to sign certificates.
To connect TrueNAS to a KMIP server, import a certificate authority (CA) and Certificate from the KMIP server, then configure the KMIP options.
For security reasons, we strongly recommend protecting the CA and certificate values.
Go to Credentials > KMIP.
Enter the central key server host name or IP address in Server and the number of an open connection on the key server in Port. Select the certificate and certificate authority that you imported from the central key server. To ensure the certificate and CA chain is correct, click on Validate Connection. Click Save.
When the certificate chain verifies, choose the encryption values, SED passwords, or ZFS data pool encryption keys to move to the central key server. Select Enabled to begin moving the passwords and keys immediately after clicking Save.
Refresh the KMIP screen to show the current KMIP Key Status.
If you want to cancel a pending key synchronization, select Force Clear and click Save.
The Virtualization section allows users to set up Virtual Machines (VMs) to run alongside TrueNAS. Delegating processes to VMs reduces the load on the physical system, which means users can utilize additional hardware resources. Users can customize six different segments of a VM when creating one in TrueNAS SCALE.
A virtual machine (VM) is an environment on a host computer that you can use as if it is a separate, physical computer. Users can use VMs to run multiple operating systems simultaneously on a single computer. Operating systems running inside a VM see emulated virtual hardware rather than the host computer physical hardware. VMs provide more isolation than Jails but also consume more system resources.
Before creating a VM, obtain an installer image file (such as an .iso) for the operating system you intend to install.
To create a new VM, go to Virtualization and click Add to open the Create Virtual Machine configuration screen. If you have not yet added a virtual machine to your system, click Add Virtual Machines to open the same screen.
Select the operating system you want to use from the Guest Operating System dropdown list.
Compare the recommended specifications for the guest operating system with your available host system resources when allocating virtual CPUs, cores, threads, and memory size.
Change other Operating System settings per your use case.
Select UTC as the VM system time from the System Clock dropdown if you do not want to use the default Local setting.
Select Enable Display to enable a SPICE remote display connection for the VM. The Bind and Password fields display. If Enable Display is selected:
Enter a display Password.
Use the dropdown menu to change the default IP address in Bind if you want to use a specific address as the display network interface, otherwise leave it set to 0.0.0.0. The Bind menu populates any existing logical interfaces, such as static routes, configured on the system. Bind cannot be edited after VM creation.
Click Next.
Enter the CPU and memory settings for your VM.
If you selected Windows as the Guest Operating System, the Virtual CPUs field displays a default value of 2. The VM operating system might have operational or licensing restrictions on the number of CPUs.
Do not allocate too much memory to a VM. Activating a VM with all available memory allocated to it can slow the host system or prevent other VMs from starting.
Leave CPU Mode set to Custom if you want to select a CPU model.
Use Memory Size and Minimum Memory Size to specify how much RAM to dedicate to this VM. To dedicate a fixed amount of RAM, enter a value (minimum 256 MiB) in the Memory Size field and leave Minimum Memory Size empty.
To allow for memory usage flexibility (sometimes called ballooning), define a specific value in the Minimum Memory Size field and a larger value in Memory Size. The VM uses the Minimum Memory Size for normal operations but can dynamically allocate up to the defined Memory Size value in situations where the VM requires additional memory. Reviewing available memory from within the VM typically shows the Minimum Memory Size.
Click Next.
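The fixed-versus-ballooning memory rules above can be sketched as a small validation, assuming MiB units and the 256 MiB floor mentioned earlier; the function and names are illustrative, not TrueNAS code.

```python
MIN_ALLOWED_MIB = 256  # the stated minimum for the Memory Size field

def validate_memory(memory_mib, minimum_mib=None):
    """Sketch of the fixed vs. ballooning memory rules: a fixed allocation
    sets only Memory Size; ballooning sets a smaller Minimum Memory Size
    that must not exceed Memory Size."""
    if memory_mib < MIN_ALLOWED_MIB:
        raise ValueError("Memory Size must be at least 256 MiB")
    if minimum_mib is not None and minimum_mib > memory_mib:
        raise ValueError("Minimum Memory Size cannot exceed Memory Size")

validate_memory(4096)        # fixed allocation of 4 GiB
validate_memory(8192, 2048)  # balloon between 2 GiB and 8 GiB
```

In the ballooning case the guest normally sees the 2048 MiB floor and can grow toward 8192 MiB under memory pressure, matching the behavior described above.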
Configure disk settings.
Select Create new disk image to create a new zvol on an existing dataset.
Select Use existing disk image to use an existing zvol for the VM.
Select either AHCI or VirtIO from the Select Disk Type dropdown list. We recommend using AHCI for Windows VMs.
Select the location for the new zvol from the Zvol Location dropdown list.
Enter a value in Size (Examples: 500KiB, 500M, and 2TB) to indicate the amount of space to allocate for the new zvol.
Click Next.
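A sketch of parsing the human-readable Size values shown above (500KiB, 500M, 2TB) into bytes. Treating every suffix as a binary multiple (K = 1024) is a simplifying assumption made here, not a statement of how the TrueNAS UI interprets each suffix.

```python
import re

# Exponents for binary multiples; "" means plain bytes.
_UNITS = {"": 0, "K": 1, "M": 2, "G": 3, "T": 4, "P": 5}

def parse_size(text: str) -> int:
    """Parse values like '500KiB', '500M', or '2TB' into a byte count,
    assuming binary (1024-based) multiples for every suffix."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([KMGTP]?)(?:I?B)?", text.strip().upper())
    if not m:
        raise ValueError(f"unparseable size: {text!r}")
    number, unit = m.groups()
    return int(float(number) * 1024 ** _UNITS[unit])

print(parse_size("500KiB"))  # 512000
print(parse_size("2TB"))     # 2199023255552
```

Doing the conversion up front helps confirm a zvol size before committing it, since an accidental `2GB` instead of `2TB` differs by three orders of magnitude.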
Configure the network interface.
Select the network interface type from the Adapter Type dropdown list. Select Intel e82585 (e1000) as it offers a higher level of compatibility with most operating systems, or select VirtIO if the guest operating system supports para-virtualized network drivers.
Select the network interface card to use from the Attach NIC dropdown list.
Click Next.
Upload installation media for the operating system you selected.
You can create the VM without installation media and add it later. To add installation media, either type the path to the file or browse to its location and select it.
To upload an installer image file, select the upload option and browse to the file. Click Upload to begin the upload process. After the upload finishes, click Next.
Specify a GPU.
The VirtIO network interface requires a guest OS that supports VirtIO para-virtualized network drivers.
iXsystems does not have a list of approved GPUs at this time but does have drivers and basic support for the list of NVIDIA Supported Products.
Confirm your VM settings, then click Save.
After creating the VM, you can add or remove virtual devices.
Click on the VM row on the Virtual Machines screen to expand it and show the options, then click device_hub Devices.
Device notes:
See Adding and Managing VM Devices for more information.
After creating the VM and configuring devices for it, click on the VM to expand it and show the options to manage the VM.
An active VM displays options for settings_ethernet Display and keyboard_arrow_right Serial Shell connections.
When a Display device is configured, remote clients can connect to VM display sessions using a SPICE client or by installing a third-party remote desktop server inside the VM. SPICE clients are available from the SPICE Protocol site.
If the display connection screen appears distorted, try adjusting the display device resolution.
Use the State toggle or click stop Stop to follow a standard procedure to do a clean shutdown of the running VM. Click power_settings_new Power Off to halt and deactivate the VM, which is similar to unplugging a computer.
If the VM does not have a guest OS installed, the VM State toggle and stop Stop button might not function as expected. The State toggle and stop Stop buttons send an ACPI power down command to the VM operating system, but since an OS is not installed, these commands time out. Use the Power Off button instead.
After configuring the VM in TrueNAS and attaching the OS installation media, start the VM and install the operating system.
Some operating systems can require specific settings to function properly in a virtual machine. For example, vanilla Debian can require advanced partitioning when installing the OS. Refer to the documentation for your chosen operating system for tips and configuration instructions.
Configure VM network settings during or after installation of the guest OS. To communicate with a VM from other parts of your local network, use the IP address configured or assigned by DHCP within the VM.
To confirm network connectivity, send a ping to and from the VM and other nodes on your local network.
By default, VMs are unable to communicate directly with the host NAS. If you want to access your TrueNAS SCALE directories from a VM, for example to connect to a TrueNAS data share, you have multiple options.
If your system has more than one physical interface, you can assign your VMs to a NIC other than the primary one your TrueNAS server uses. This method makes communication more flexible but does not offer the potential speed of a bridge.
To create a bridge interface for the VM to use if you have only one physical interface, stop all existing apps, VMs, and services using the current interface, edit the interface and VMs, create the bridge, and add the bridge to the VM device. See Accessing NAS from VM for more information.
After creating a VM, the next step is to add virtual devices for that VM. Using the Create Virtual Machine wizard configures at least one disk, NIC, and the display as part of the process. To add devices, from the Virtual Machines screen, click anywhere on a VM entry to expand it and show the options for the VM.
Click device_hub Devices to open the Devices screen for the VM. The devices for the VM display as a list. From this screen, you can edit, add, or delete devices. Click the more_vert icon at the right of each listed device to see device options.
Device notes:
Before adding, editing, or deleting a VM device, stop the VM if it is running. Click the State toggle to stop or restart a VM, or use the Stop and Restart buttons.
Select Edit to open the Edit Device screen. You can change the type of virtual hard disk, the storage volume to use, or change the device boot order.
To edit a VM device:
Stop the VM if it is running, then click Devices to open the list of devices for the selected VM.
Click the more_vert icon at the right of the listed device you want to edit, then select Edit to open the Edit Device screen.
Select the path to the zvol created when setting up the VM from the Zvol dropdown list.
Select the type of hard disk emulation from the Mode dropdown list. Select AHCI for better software compatibility, or select VirtIO for better performance if the guest OS installed in the VM has support for VirtIO disk devices.
(Optional) Specify the disk sector size in bytes in Disk Sector Size. Leave set to Default or select either 512 or 4096 byte values from the dropdown list. If not set, the sector size uses the ZFS volume values.
Specify the boot order or priority level in Device Order to move this device up or down in the sequence. The lower the number the higher the priority in the boot sequence.
Click Save.
Restart the VM.
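The Device Order semantics above (a lower number means higher boot priority) can be sketched by sorting the device list. The device names and the 1000-series order values are illustrative examples, not fixed TrueNAS defaults.

```python
# Sketch: the hypervisor tries devices in ascending Device Order, so a lower
# number boots first. Names and order values below are example data.
devices = [
    {"device": "CDROM", "order": 1000},
    {"device": "DISK", "order": 1001},
    {"device": "NIC", "order": 1002},
]

boot_sequence = sorted(devices, key=lambda d: d["order"])
print([d["device"] for d in boot_sequence])  # ['CDROM', 'DISK', 'NIC']
```

Placing the CD-ROM ahead of the disk, as here, is the usual arrangement while installing a guest OS; once installation finishes, raising the CD-ROM order (or removing the device) lets the VM boot from disk.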
Deleting a device removes it from the list of available devices for the selected VM.
To delete a VM device:
Stop the VM if it is running, then click Devices to open the list of devices for the selected VM.
Click the more_vert icon at the right of the listed device you want to delete, then select Delete. The delete confirmation dialog opens.
Select Delete zvol device to confirm you want to delete the zvol device. Select Force Delete if you want the system to force the deletion of the zvol device, even if other devices or services are using or affiliated with it.
Click Delete Device.
Select CD-ROM as the Device Type on the Add Device screen and set a boot order.
Stop the VM if it is running, then click Devices.
Click Add and select CD-ROM from the Device Type dropdown list.
Specify the mount path. Click the arrow to the left of /mnt and at the pool and dataset levels to expand the directory tree, then select the path to the CD-ROM device.
Specify the boot sequence in Device Order.
Click Save.
Restart the VM.
Select NIC in the Device Type on the Add Device screen to add a network interface card for the VM to use.
Stop the VM if it is running, then click Devices.
Click Add and select NIC from the Device Type dropdown list.
Select the adapter type. Choose Intel e82585 (e1000) for maximum compatibility with most operating systems. If the guest OS supports VirtIO paravirtualized network drivers, choose VirtIO for better performance.
Click Generate to assign a new random MAC address to replace the random default address, or enter your own custom address.
Select the physical interface you want to use from the NIC To Attach dropdown list.
(Optional) Select Trust Guest Filters to allow the virtual server to change its MAC address and join multicast groups. This is required for the IPv6 Neighbor Discovery Protocol (NDP).
Setting this attribute has security risks. It allows the virtual server to change its MAC address and receive all frames delivered to this address. Determine your network setup needs before setting this attribute.
Click Save.
Restart the VM.
Select Disk in Device Type on the Add Device screen to configure a new disk location, drive type and disk sector size, and boot order.
Stop the VM if it is running, then click Devices.
Click Add and select Disk from the Device Type dropdown list.
Select the path to the zvol you created when setting up the VM using the Zvol dropdown list.
Select the hard disk emulation type from the Mode dropdown list. Select AHCI for better software compatibility, or VirtIO for better performance if the guest OS installed in the VM supports VirtIO disk devices.
Specify the sector size in bytes in Disk Sector Size. Leave set to Default or select either 512 or 4096 from the dropdown list to change it. If the sector size remains unset it uses the ZFS volume values.
Specify the boot sequence order for the disk device.
Click Save.
Restart the VM.
Select PCI Passthrough Device in the Device Type on the Add Device screen to configure the PCI passthrough device and boot order.
Depending upon the type of device installed in your system, you might see a warning: PCI device does not have a reset mechanism defined. You may experience inconsistent or degraded behavior when starting or stopping the VM. Decide whether you want to proceed in this instance.
Stop the VM if it is running, then click Devices.
Click Add and select PCI Passthrough Device from the Device Type dropdown list.
Enter a value in PCI Passthrough Device using the format of bus#/slot#/fcn#.
Specify the boot sequence order for the PCI passthrough device.
Click Save.
Restart the VM.
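A sketch of validating the bus#/slot#/fcn# string before entering it in the field. Whether the components are decimal or hexadecimal is an assumption to confirm against what your system actually reports (for example, via lspci).

```python
import re

def parse_pci(value: str) -> tuple:
    """Validate a PCI passthrough string in bus#/slot#/fcn# form and split it
    into components. Decimal components are assumed here for illustration."""
    m = re.fullmatch(r"(\d+)/(\d+)/(\d+)", value)
    if not m:
        raise ValueError(f"expected bus#/slot#/fcn#, got {value!r}")
    bus, slot, fcn = (int(part) for part in m.groups())
    return (bus, slot, fcn)

print(parse_pci("2/0/1"))  # (2, 0, 1)
```

Rejecting other notations, such as the colon-and-dot form lspci prints, catches the most common entry mistake before the VM fails to start.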
Select USB Passthrough Device as the Device Type on the Add Device screen to configure the USB passthrough device, and set a boot order.
Stop the VM if it is running, then click Devices.
Click Add and select USB Passthrough Device from the Device Type dropdown list.
Select the Controller Type from the dropdown list.
Select the hub controller type from the Device dropdown list. If the type is not listed, select Specify custom, then enter the Vendor ID and Product ID.
Specify the boot sequence order.
Click Save.
Restart the VM.
Select Display as Device Type on the Add Device screen to configure a new display device.
Stop the VM if it is running, then click Devices.
Click Add and select Display from the Device Type dropdown list.
Enter a fixed port number in Port, or set the value to zero (or leave the field empty) to allow TrueNAS to assign the port after restarting the VM.
Specify the display session settings: a. Select the screen resolution to use for the display from the Resolution dropdown. b. Select an IP address for the display device to use in Bind. The default is 0.0.0.0. c. Enter a unique password for the display device to securely access the VM.
Select Web Interface to allow access to the VNC web interface.
Click Save.
Restart the VM.
Display devices have a 60-second inactivity timeout. If the VM display session appears unresponsive, try refreshing the browser tab.
If you want to access your TrueNAS SCALE directories from a VM, you have multiple options:
Prepare your system for interface changes by stopping and/or removing apps, VM NIC devices, and services that can cause conflicts:
If you encounter issues with testing network changes, you might need to stop any services, including Kubernetes and sharing services such as SMB, using the current IP address.
If your system only has a single physical interface, complete these steps to create a network bridge.
Go to Virtualization, find the VM you want to use to access TrueNAS storage and toggle it off.
Go to Network > Interfaces and find the active interface you used as the VM parent interface. Note the interface IP Address and subnet mask. Click the interface to open the Edit Interface screen.
If enabled, clear the DHCP checkbox. Note the IP address and mask under Aliases. Click the X next to the listed alias to remove the IP address and mask. The Aliases field now reads No items have been added yet. Click Save.
The Interfaces widget displays the edited interface without IP information.
Add a bridge interface.
Edit the VM device configuration.
Go to Virtualization, expand the VM you want to use to access TrueNAS storage and click Devices. Click more_vert in the NIC row and select Edit. Select the new bridge interface from the NIC to Attach dropdown list, then click Save.
You can now access your TrueNAS storage from the VM. You might have to set up shares or users with home directories to access certain files.
If you have more than one NIC on your system, you can assign VM traffic to a secondary NIC. Configure the secondary interface as described in Managing Interfaces before attaching it to a VM.
If you are creating a new VM, use the Attach NIC dropdown menu under Network Interface to select the secondary NIC.
To edit the NIC attached to an existing VM:
Go to Virtualization, expand the VM you want to use to access TrueNAS storage and click Devices.
Click more_vert in the NIC row and select Edit.
Select the secondary interface from the NIC to Attach dropdown list, then click Save.
TrueNAS applications allow for quick and easy integration of third-party software with TrueNAS SCALE. Applications are available from official, Enterprise, and community-maintained trains.
TrueNAS Apps Support Timeline for 24.04 and 24.10
Summary: Applications added to the TrueNAS Apps catalog before December 24, 2024, require updates to enable host IP port binding. These updates roll out on June 1, 2025, and require TrueNAS 25.04 (or later).
Due to breaking changes involved in enabling host IP port binding, June 1, 2025 is the deadline for automatic apps migration on upgrade. Migrate from 24.04 to 24.10 before June 1, 2025, to ensure automatic app migration.
Applications installed on 24.10 do not receive updates after June 1, 2025. To update or install new applications, any users still running TrueNAS Apps on 24.10 after June 1 must update TrueNAS to 25.04 (or later).
| Timeframe | App Migration 24.04 → 24.10 | App Migration 24.10 → 25.04 |
|---|---|---|
| Before June 1, 2025 | ✅ Supported | ✅ Supported |
| After June 1, 2025 | ❌ Not Supported | ✅ Supported (no app updates or installs until upgraded to 25.04) |
Application maintenance is independent from TrueNAS SCALE version release cycles. This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes. To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker. To propose documentation changes for a separately versioned Docker-based app, first use the Product and Version dropdowns to switch to the Nightly version Apps documentation, then click Edit Page.
See Updating Content for more guidance on proposing documentation changes.
The first time you go to Apps, the Installed applications screen displays an Apps Service Not Configured status on the screen header.
After setting the pool apps uses, this changes to Apps Service Running.
The Installed applications screen displays Check Available Apps before you install the first application.
Click Check Available Apps or Discover Apps to open the Discover screen to see application widgets available in the TRUENAS catalog.
After installing an application, the Installed screen populates the Applications area with a table listing installed applications. Select an application to view its information widgets, with options to edit the application settings, open the container pod shell or logs, and access the Web Portal for the application, if applicable.
Application widgets vary by app, but all include the Application Info and Workloads widgets. Some include the History and Notes widgets.
You must choose the pool apps use before you can add applications. The first time you go to the Applications screen, click Settings > Choose Pool to choose a storage pool for Apps.
We recommend keeping the application use case in mind when choosing a pool. Select a pool with enough space for all the applications you intend to use. For stability, we also recommend using SSD storage for the applications pool.
TrueNAS creates an ix-applications dataset on the chosen pool and uses it to store all container-related data. The dataset is for internal use only. Set up a new dataset before installing your applications if you want to store your application data in a location separate from other storage on your system. For example, create the datasets for the Nextcloud application, and, if installing Plex, create the dataset(s) for Plex data storage needs.
Special consideration should be given when TrueNAS is installed in a VM, as VMs are not configured to use HTTPS. Enabling HTTPS redirect can interfere with the accessibility of some apps. To determine if HTTPS redirect is active, go to System Settings > General > GUI > Settings and locate the Web Interface HTTP -> HTTPS Redirect checkbox. To disable HTTPS redirects, clear this option and click Save, then clear the browser cache before attempting to connect to the app again.
After an apps storage pool is configured, the status changes to Apps Service Running.
To select a different pool for apps to use, click Settings > Unset Pool. This turns off the Apps service until you choose another pool for apps to use.
Official applications use the default system-level Kubernetes node IP settings.
You can change the Kubernetes node IP to assign an external interface to your apps, separate from the web UI interface, in Apps > Settings > Advanced Settings.
We recommend using the default Kubernetes node IP (0.0.0.0) to ensure apps function correctly.
The Settings dropdown includes the Manage Container Images option. Click Settings > Manage Container Images to open the Manage Container Images screen.
Update or delete images from this screen, or click Pull Image to download a specific custom image to TrueNAS.
To download a specific image, click the button and enter a valid path and tag to the image. Enter the path using the format registry/repository/image to identify the specific image. The default latest tag downloads the most recent image version.
When downloading a private image, enter user account credentials that allow access to the private registry.
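The registry/repository/image path and tag format described above can be sketched in shell. This is only an illustrative parse of the reference string; the image reference ghcr.io/linuxserver/nginx:1.27 is a hypothetical example, not a recommendation, and the parse ignores registry port numbers for simplicity:

```shell
# Split a container image reference into its path and tag components.
# Format: registry/repository/image:tag (the tag defaults to "latest").
ref="ghcr.io/linuxserver/nginx:1.27"   # hypothetical image reference

path="${ref%:*}"                       # everything before the last ':'
tag="${ref##*:}"                       # everything after the last ':'
if [ "$tag" = "$ref" ]; then           # no ':' present -> default tag
  tag="latest"
fi

echo "path=$path tag=$tag"
```

Leaving the tag off a reference therefore pulls the most recent published version, which is what the default latest tag behavior in the UI does.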
When an upgrade is available, apps display a yellow circle with an exclamation point, and the Installed applications screen banner displays an Update or Update All button. To upgrade one app to the latest version, click Update on the Application Info widget; to upgrade multiple apps, click Update All on the Installed applications banner. Both buttons display only when TrueNAS SCALE detects an available update for installed applications.
Update opens an upgrade window that includes two selectable options, Images (to be updated) and Changelog. Click on the down arrow to see the options available for each.
Click Upgrade to begin the process. A counter dialog opens showing the upgrade progress. When complete, the update badge and buttons disappear and the application Update state on the Installed screen changes from Update Available to Up to date.
To delete an application, click Stop on the application row. After the app status changes to Stopped, click Delete on the Application Info widget for the selected application to open the Delete dialog.
Click Confirm then Continue to delete the application.
The Settings dropdown list at the top of the Installed applications screen provides these options:
The Discover screen displays New & Updated Apps application widgets for the official TRUENAS catalog Chart, Community, and Enterprise trains. Non-Enterprise systems show the Chart train of apps by default. The Chart train has official applications that are pre-configured and only require a name during deployment.
Enterprise applications display automatically on Enterprise-licensed systems, but community users can add these apps using the Manage Catalogs screen. App trains display based on the Trains settings on the Edit Catalog screen.
See Using SCALE Catalogs for more information on managing catalogs.
The Discover screen includes three links:
The Custom App button opens a wizard where you can install unofficial apps or an app not included in a catalog.
Browse the widgets or use the search field to find available applications. Click an application widget to go to the application information screen.
You can refresh the charts catalog by clicking Refresh Charts on the Discover screen. You can also refresh all catalogs from the Catalogs screen. Click Manage Catalogs, then click Refresh All. Refresh the catalog after adding or editing the catalogs on your system.
To filter the app widgets shown, click the down arrow to the right of Filters.
You can filter by catalog, app category, name, catalog name, and date last updated. Click an option, then begin typing the name of the app into the search field to narrow the widgets to the filter criteria. Click in Categories and select a category to show matching apps. Click in the field again to add another category from the dropdown list and filter by multiple categories.
From the application information screen, click Install to open the installation wizard for the application.
After installing an application, the Installed applications screen shows the application in the Deploying state. It changes to Running when the application is ready to use.
The installation wizard configuration sections vary by application, with some including more configuration areas than others. Click Install to review settings ahead of time and check for required settings. Click Discover in the breadcrumb at the top of the installation wizard to exit the screen without saving until you are ready to return and configure the app settings.
All applications include these basic setting sections:
Application Name shows the default name for the application.
If deploying more than one instance of the application, you must change the default name. This section also includes the version number for the application. Do not change the version number for official apps or those included in a SCALE catalog. When a new version becomes available, the Installed applications screen banner and application row display an update alert, and the Application Info widget displays an Update button. Updating the app changes the version shown on the edit wizard for the application.
Application Configuration shows settings the app requires to deploy. This section can be named anything. For example, the MinIO app uses MinIO Configuration.
Typical settings include user credentials, environment variables, additional argument settings, name of the node, or even sizing parameters.
If not using the default user and group provided, add the new user (and group) to manage the application before using the installation wizard.
Network Configuration shows network settings the app needs to communicate with SCALE and the Internet. Settings include the default port assignment, host name, IP addresses, and other network settings.
If changing the port number to something other than the default setting, refer to Default Ports for a list of used and available port numbers.
Some network configuration settings include the option to add a certificate. Create the certificate authority and certificate before using the installation wizard if using a certificate is required for the application.
Storage Configuration shows options to configure storage for the application. Storage options include using the default ixVolume setting that adds a storage volume under the ix-applications dataset, host path where you select existing dataset(s) to use, or in some cases the SMB share option where you configure a share for the application to use. The Add button allows you to configure additional storage volumes for the application to use in addition to the main storage volume (dataset).
If the application requires specific datasets, configure these before using the installation wizard.
If setting host path storage, select Enable ACL to configure ACL entries for the selected dataset.
Browse to or select the dataset in Host Path.
Select Add next to ACL Entries to add a set of ID Type, ID, and Access fields to configure an entry. Click Add again for each additional ACL entry.
Select Force Flag under ACL Options to apply the ACL even if the path has existing data.
Resources Configuration shows CPU and memory settings for the container pod. This section can also be named Resource Limits. In most cases you can accept the default settings, or you can change these settings to limit the system resources available to the application.
After installing an app, you can modify most settings by selecting the app on the Installed applications screen and then clicking the Edit button on the Application Info widget for that app.
Refer to individual tutorials in the Community or Enterprise sections of the Documentation Hub for more details on application settings. Installation and editing wizards include tooltips to help users configure application settings.
Users with compatible hardware can allocate one or more GPU devices to an application for use in hardware acceleration. This is an advanced process that could require significant troubleshooting depending on installed GPU device(s) and application-specific criteria.
GPU devices can be available for the host operating system (OS) and applications or can be isolated for use in a Virtual Machine (VM). A single GPU cannot be shared between the OS/applications and a VM.
Allocate GPU from the Resources Configuration section of the application installation wizard screen or the Edit screen for a deployed application.
Click the GPU Resource allocation row for the type of GPU (AMD, Intel, or Nvidia) and select the number of GPU devices the application is allowed access to. It is not possible at this time to specify which available GPU device is allocated to the application and assigned devices can change on reboot.
To deploy a custom application, go to Discover and click Custom App to open the Install Custom App screen. See Using Install Custom App for more information.
Custom applications use the system-level Kubernetes Node IP settings by default.
You can assign an external interface to custom apps using one of the Networking section settings found on the Install Custom App screen.
Unless you need to run an application separately from the Web UI, we recommend using the default Kubernetes Node IP (0.0.0.0) to ensure apps function correctly.
TrueNAS SCALE has a pre-built official catalog of over 50 available iXsystems-approved applications.
Users can configure custom app catalogs if they choose, but iXsystems does not directly support non-official apps in a custom catalog.
TrueNAS uses outbound ports 80/443 to retrieve the TRUENAS catalog.
Users can manage the catalog from the Catalogs screen. Click Manage Catalogs at the top right side of the Discover screen to open the Catalogs screen.
Users can edit, refresh, delete, and view the catalog summary by clicking on a catalog to expand and show the options.
Edit opens the Edit Catalog screen, where users can change the name SCALE uses to look up a catalog or change the trains from which the UI retrieves available applications for the catalog.
Refresh pulls the catalog from its repository and refreshes it by applying any updates.
Delete allows users to remove a catalog from the system. Users cannot delete the default TRUENAS catalog.
Summary lists all apps in the catalog and sorts them by train, app, and version. Users can filter the list by Train and Status (All, Healthy, or Unhealthy).
For best stability during upgrades to future major versions of TrueNAS SCALE, use applications provided by the default TRUENAS catalog.
Third-party app catalogs available for TrueNAS are provided and maintained by individuals or organizations outside of iXsystems. iXsystems does not provide support for third-party applications, nor can we guarantee app updates and consistent functionality over time. Users who wish to deploy third-party catalogs should be prepared to self-support installed applications or rely on support services from the catalog provider.
To add a catalog, click Add Catalog at the top right of the Catalogs screen.
A warning dialog opens.
Click Continue to open the Add Catalog screen.
Fill out the Add Catalog form.
Enter a name in Catalog Name, for example, type mycatalog.
We do not recommend enabling Force Create, since it overrides safety mechanisms and adds the catalog to the system even if some trains are unhealthy.
Select a valid GitHub repository in Repository. For example, https://github.com/mycatalog/catalog.
In Preferred Trains, type the name of the train TrueNAS should use to retrieve available application information from the catalog.
Finally, enter the GitHub repository branch TrueNAS should use for the catalog in Branch. Leave this set to main unless you need to change it.
Click Save.
Go to Apps and click on Discover Apps.
Click on Manage Catalogs at the top of the Discover screen to open the Catalog screen.
Click on the TRUENAS catalog to expand it, then click Edit to open the Edit Catalog screen.
Click in the Preferred Trains field, then select enterprise to add it to the list of trains.
Click Save.
SCALE includes the ability to run third-party apps in containers (pods) using Kubernetes settings.
Generally, you can deploy any container that follows the Open Container Initiative specifications.
Always read through the documentation page for the application container you are considering installing so that you know all of the settings that you need to configure. To set up a new container image, first, determine if you want the container to use additional TrueNAS datasets. If yes, create a dataset for host volume paths before you click Custom App on the Discover application screen.
Custom Docker applications typically follow open container specifications and deploy in TrueNAS following the custom application deployment process described below.
Carefully review documentation for the app you plan to install before attempting to install a custom app. Take note of any required environment variables, optional variables you want to define, start-up commands or arguments, networking requirements, such as port numbers, and required storage configuration.
If your application requires specific directory paths, datasets, or other storage arrangements, configure these before you start the Install Custom App wizard.
You cannot save settings and exit the configuration wizard to create data storage or directories in the middle of the process. If you are unsure about any configuration settings, review the Install Custom App Screen UI reference article before creating a new container image.
To create directories in a dataset on SCALE before you begin installing the container, open the TrueNAS SCALE CLI and enter:
storage filesystem mkdir path="/PATH/TO/DIRECTORY"
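As a concrete sketch of the directory layout an app might need, the same nested directories can also be created from a standard system shell with mkdir -p. The app name myapp and the dataset location are hypothetical; on TrueNAS you would set BASE to a real dataset mount point such as /mnt/tank/apps, while here BASE falls back to a temporary directory so the snippet can run anywhere:

```shell
# Create nested config and data directories for a hypothetical app.
# On TrueNAS: BASE=/mnt/tank/apps (a dataset mount point you created).
BASE="${BASE:-$(mktemp -d)/apps}"

mkdir -p "$BASE/myapp/config" "$BASE/myapp/data"

ls "$BASE/myapp"   # lists the created directories
```

Creating these paths before launching the Install Custom App wizard avoids having to abandon the wizard mid-configuration.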
When you are ready to create a container, go to Apps, click Discover Apps, then click Custom App.
Enter a name for the container in Application Name. Accept the application train number in Version.
Enter the Docker Hub repository for the application you want to install in Image Repository, using the format maintainer/image (for example, storjlabs/storagenode) or image (such as debian) for Docker Official Images.
If the application requires it, enter the correct value in Image Tag and select the Image Pull Policy to use.
If the application requires it, enter the executables you want or need to run after starting the container in Container Entrypoint. Define any commands and arguments to use for the image. These can override any existing commands stored in the image.
Click Add for Container CMD to add a command. Click Add for Container Args to add a container argument.
Enter the Container Environment Variables to define additional environment variables for the container. Not all applications use environment variables. Check the application documentation to verify the variables that particular application requires.
Enter the networking settings. To use a unique IP address for the container, set up an external interface.
Users can create additional network interfaces for the container if needed. Users can also assign static IP addresses and routes to a new interface.
a. Click Add to display the Host Interface and IPAM Type fields required when configuring network settings. Select the interface to use. Select Use static IP to display the Static IP Addresses and Static Routes fields, or select Use DHCP.
b. Scroll down to select the DNS Policy and enter any DNS configuration settings required for your application. By default, containers use the DNS settings from the host system. You can change the DNS policy and define separate nameservers and search domains. See the Kubernetes DNS services documentation for more details.
Enter the Port Forwarding settings. You can define multiple ports to forward to the workload.
If port forwarding settings do not display, remove external networking interfaces under Networking.
Click Add for each port you need to enter. Enter the required Container Port and Node Port settings, and select the Protocol for these ports.
The node port number must be over 9000. Ensure no other containers or system services are using the same port number.
Repeat for all ports.
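Before assigning a node port, you can probe from a shell whether anything is already listening on it. This is a bash-only sketch using the shell's built-in /dev/tcp pseudo-device; the port number 59001 is just a hypothetical candidate above the 9000 minimum:

```shell
# Print "in-use" if something accepts connections on the given local
# TCP port, "free" otherwise. Requires bash (/dev/tcp redirection).
port_status() {
  local port="$1"
  # The subshell opens and immediately closes a probe connection;
  # connection errors are discarded.
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "in-use"
  else
    echo "free"
  fi
}

port_status 59001
```

A "free" result only shows nothing is listening right now; another container deployed later can still claim the port, so keep track of the node ports you assign.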
Add the Storage settings. Review the image documentation for required storage volumes. See Setting up Storage below for more information.
Click Add for each host path volume. Enter or browse to select the Host Path for the dataset on TrueNAS. Enter the Mount Path to mount the host path inside the container.
Add any memory-backed or other volumes you need or want to use. You can add more volumes to the container later, if needed.
Enter any additional settings required for your application, such as workload details or container settings.
Select the Scaling/Upgrade Policy to use. The default is Kill existing pods before creating new ones.
Use Resource Reservation to allocate GPU resources if available and required for the application.
Set any Resource Limits you want to impose on this application. Select Enable Pod resource limits to display the CPU Limit and Memory Limit fields.
Enter or select any Portal Configuration settings to use. Select Enable WebUI Portal to display UI portal settings.
Click Install to deploy the container. If you correctly configured the app, the widget displays on the Installed Applications screen.
When complete, the container becomes active. If the container does not automatically start, click Start on the widget.
Clicking the app card reveals details.
You can mount SCALE storage locations inside the container. To mount SCALE storage, define the path to the system storage and the container internal path for the system storage location to appear. You can also mount the storage as read-only to prevent using the container to change any stored data. For more details, see the Kubernetes hostPath documentation.
Users can create additional Persistent Volumes (PVs) for storage within the container. PVs consume space from the pool chosen for application management. You need to name each new dataset and define a path where that dataset appears inside the container.
To view created container datasets, go to Datasets and expand the dataset tree for the pool you use for applications.
Users developing applications should be mindful that if an application uses Persistent Volume Claims (PVC), those datasets are not mounted on the host and therefore are not accessible within a file browser. Upstream zfs-localpv uses this behavior to manage PVC(s).
To consume or have file browser access to data that is present on the host, set up your custom application to use host path volumes.
Alternatively, you can use the network to copy directories and files to and from the pod using k3s kubectl commands.
To copy from a pod in a specific container:
k3s kubectl cp <file-spec-src> <file-spec-dest> -c <specific-container>
To copy a local file to the remote pod:
k3s kubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar
To copy a remote pod file locally:
k3s kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar
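The generic forms above can be made concrete. The namespace ix-myapp, the pod name mypod, and the file names below are all hypothetical; the KUBECTL variable defaults to an echo stub so the commands can be previewed on a machine without a k3s cluster, while on TrueNAS you would run them with KUBECTL="k3s kubectl":

```shell
# Hypothetical round trip of files between the host and a pod.
# On TrueNAS: KUBECTL="k3s kubectl" ./this-script
KUBECTL="${KUBECTL:-echo k3s kubectl}"

# Push a local config file into the pod:
$KUBECTL cp /tmp/settings.json ix-myapp/mypod:/config/settings.json

# Pull a log file from the pod back to the local system:
$KUBECTL cp ix-myapp/mypod:/var/log/app.log /tmp/app.log
```

Note that kubectl cp requires the tar binary inside the container image; minimal images without tar cannot use it.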
Enhancing app security is a multifaceted challenge and there are various effective approaches. We invite community members to share insights on their methods by contributing to the documentation.
TrueNAS SCALE offers various applications, either directly provided or via the community. While applications can greatly expand TrueNAS functionality, making them accessible from outside the local network can create security risks that must be addressed.
Regardless of the VPN or reverse proxy you use, follow best practices to secure your applications.
- Update the applications regularly to fix security issues.
- Use strong passwords and 2FA, preferably TOTP, or passkeys for your accounts.
- Don’t reuse passwords, especially not for admin accounts.
- Don’t use your admin account for daily tasks.
- Create a separate admin account and password for every application you install.
The tutorials in this section aim to provide a general overview of different options to secure apps by installing an additional application client like Cloudflared or WireGuard to proxy traffic between the user and the application.
See the available guides below.
This guide shows how to create a Cloudflare tunnel and configure the Nextcloud and Cloudflared applications in TrueNAS SCALE, allowing secure access from anywhere.
Exposing applications to the internet can create security risks. Always follow best practices to secure your applications.
See additional security considerations below.
Review the Nextcloud documentation to get a better understanding of the security implications before proceeding.
Cloudflare Tunnel is a system that proxies traffic between the user and the application over the Cloudflare network. It uses a Cloudflared client that is installed on the TrueNAS SCALE system.
This allows a secure, encrypted connection without exposing ports or the private IP of the TrueNAS system to the internet.
Register or log in to a Cloudflare account. A free account is sufficient.
Follow Cloudflare documentation to register a domain and set up DNS.
This video from Lawrence Systems provides a detailed overview of setting up Cloudflare tunnels for applications. It assumes that the applications run as a docker container, but the same approach can be used to secure apps running on TrueNAS SCALE in Kubernetes.
In the Cloudflare One dashboard:
Go to Networks and select Tunnels.
Click Create Tunnel, choose type Cloudflared and click Next.
Choose a Tunnel Name and click Save tunnel.
Copy the tunnel token from the Install and run a connector screen. This is needed to configure the Cloudflared app in TrueNAS SCALE.
The operating system selection does not matter, as the same token is used for all options. For example, the command for a docker container is:
docker run cloudflare/cloudflared:latest tunnel --no-autoupdate run --token eyJhIjoiNjVlZGZlM2IxMmY0ZjEwNjYzMDg4ZTVmNjc4ZDk2ZTAiLCJ0IjoiNWYxMjMyMWEtZjE2YS00MWQwLWFhN2ItNjJiZmYxNmI4OGIwIiwicyI6IlpqQmpaRE13WXpBdFkyRmpPUzAwWVRCbUxUZ3hZVGd0TlRWbE9UQmpaakEyTlRFMCJ9
Copy the string after --token, then click Next.
Add a public hostname for accessing Nextcloud, for example: nextcloud.example.com.
Set service Type to HTTPS. Enter the local TrueNAS IP with the Nextcloud container port, for example 192.168.1.1:9001.
Go to Additional application settings, select TLS from the dropdown menu, and enable No TLS Verify.
Click Save tunnel.
The new tunnel displays on the Tunnels screen.
After creating the Cloudflare tunnel, go to Apps in the TrueNAS SCALE UI and click Discover Apps. Search or browse to select the Cloudflared app from the community train and click Install.
Accept the default Application Name and Version.
Paste the tunnel token you copied earlier from the Cloudflare dashboard into the Tunnel Token field.
Leave all other settings at their defaults.
Click Save and deploy the application.
Install the Nextcloud community application.
The first application deployment can take a while, and the app may start and stop multiple times. This is normal behavior.
The Nextcloud documentation provides information on running Nextcloud behind a reverse proxy. Depending on the reverse proxy and its configuration, these settings can vary, for example, if you use a path like example.com/nextcloud instead of a subdomain.
To access your application via a subdomain (as shown in this guide), two environment variables must be set in the Nextcloud application: overwrite.cli.url and overwritehost.
Enter the two environment variables in Name as OVERWRITECLIURL and OVERWRITEHOST.
Enter the address for the Cloudflare tunnel configured above in Value, for example, nextcloud.example.com.
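Assuming the hypothetical domain nextcloud.example.com used throughout this guide, the resulting Name/Value pairs would look like the following sketch. Note that upstream Nextcloud documents overwrite.cli.url as a full URL (including the protocol), while overwritehost is the bare hostname:

```shell
# Nextcloud environment variables for reverse-proxy access
# (hypothetical domain nextcloud.example.com):
OVERWRITECLIURL=https://nextcloud.example.com   # overwrite.cli.url: full URL
OVERWRITEHOST=nextcloud.example.com             # overwritehost: bare hostname
```
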
With the Cloudflare connector and Nextcloud installed and configured, in your Cloudflare dashboard, go to Networks and select Tunnels.
The status of the tunnel should be HEALTHY.
Nextcloud should now be reachable via the Cloudflare Tunnel address, nextcloud.example.com in this example, using an HTTPS connection.
Use strong user passwords and configure two-factor authentication for additional security.
Cloudflare offers access policies to restrict access to the application to specific users, email addresses, or authentication methods.
Go to Access, click Add an Application, and select Self-Hosted.
Add your Nextcloud application and the domain you configured in the Cloudflare tunnel.
Click Next.
Create a new policy by entering a Policy Name. Groups can be assigned to this policy or additional rules can be added.
Click Next and Save.
Note: there are additional options for policy configuration, but these are beyond the scope of this tutorial.
Application maintenance is independent from TrueNAS SCALE version release cycles. This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes. To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker. To propose documentation changes for a separately versioned Docker-based app, first use the Product and Version dropdowns to switch to the Nightly version Apps documentation, then click Edit Page.
See Updating Content for more guidance on proposing documentation changes.
The TrueNAS community creates and maintains numerous applications intended to expand system functionality far beyond what is typically expected from a NAS.
The TrueNAS catalog is loaded by default and is used to populate the Discover apps screen. To view the catalog settings, select Manage Catalogs at the top of the Discover apps screen.
Applications are provided “as-is” and can introduce system stability or security issues when installed. Some applications deploy as the root user for initial configuration before operating as a non-root user. Make sure the application is required for your specific use requirements and does not violate your security policies before installation.
The remaining tutorials in this section are for specific applications that are commonly used or replace some functionality that was previously built-in with TrueNAS.
Syncthing is a file synchronization application that provides a simple and secure environment for file sharing between different devices and locations. Use it to synchronize files between different departments, teams, or remote workers.
Syncthing is tested and validated to work in harmony with TrueNAS platforms and underlying technologies such as ZFS, offering a turnkey means of keeping data synchronized across many systems. It integrates seamlessly with TrueNAS.
Syncthing does not use or need a central server or cloud storage. All data is encrypted and synchronized directly between devices to ensure files are protected from unauthorized access.
Syncthing is easy to use and configure. You can install it on a wide range of operating systems, including Windows, macOS, Linux, FreeBSD, iOS, or Android. The Syncthing web UI provides users with easy management and configuration of the application software.
This article provides information on installing and using the TrueNAS Syncthing app.
SCALE has two versions of the Syncthing application: the community version in the charts train, and a version in the enterprise train that is tested and polished for a safe and supportable experience for enterprise customers. Community members can install either the enterprise or community version.
You can allow the app to create storage volumes or use existing datasets created in SCALE. The TrueNAS Syncthing app requires a main configuration storage volume for application information. You can also mount existing datasets as storage volumes inside the container pod.
If you want to use existing datasets for the main storage volume, create any datasets before beginning the app installation process (for example, syncthing for the configuration storage volume). If also mounting a storage volume inside the container, create a second dataset named data1. If mounting multiple storage volumes, create a dataset for each volume (for example, data2, data3, etc.).
You can have multiple Syncthing app deployments (two or more Charts, two or more Enterprise, or Charts and Enterprise trains, etc.). Each Syncthing app deployment requires a unique name that can include numbers and dashes or underscores (for example, syncthing2, syncthing-test, syncthing_1, etc.).
Use a consistent file-naming convention to avoid conflict situations where data does not or cannot synchronize because of file name conflicts. Path and file names in the Syncthing app are case sensitive. For example, a file named MyData.txt is not the same as a file named mydata.txt in Syncthing.
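The case-sensitivity point above is easy to demonstrate in any Linux shell, such as the TrueNAS shell:

```shell
# On a case-sensitive filesystem these are two distinct files,
# so Syncthing treats them as two separate items to synchronize.
dir=$(mktemp -d)
touch "$dir/MyData.txt" "$dir/mydata.txt"
ls "$dir" | wc -l    # prints 2
```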
If not already assigned, set a pool for applications to use.
Either use the default user and group IDs or create a new user with Create New Primary Group selected. Make note of the UID/GID for the new user.
Go to Apps > Discover Apps and locate the Syncthing charts app widget.
Click on the widget to open the Syncthing details screen.
Click Install to open the Install Syncthing screen.
Application configuration settings are presented in several sections, each explained below. To find specific fields, click in the Search Input Fields search field, scroll down to a particular section or click on the section heading on the navigation area in the upper-right corner.
Accept the default values in Application Name and Version.
Accept the default owner user and group ID settings. You can customize your Syncthing charts deployment by adding environment variables, but these are not required.
Add the storage volume(s). Either allow the Syncthing app to create the configuration storage volume or use an existing dataset created for this app. To use an existing dataset, select Enable Custom Host Path for Syncthing Configuration Volume, then browse to and select the dataset to populate the field. See Storage Settings for more details on adding existing datasets.
Accept the default port numbers in Networking. See Network Settings below for more information on network settings. Before changing the default port number, see Default Ports for a list of assigned port numbers. When selected, Host Network binds to the default host settings programmed for Syncthing. We recommend leaving this disabled.
Syncthing does not require advanced DNS options. If you want to add DNS options, see Advanced DNS Settings below.
Accept the default resource limit values for CPU and memory or select Enable Pod resource limits to show the CPU and memory limit fields, then enter the new values you want to use for Syncthing. See Resource Configuration Settings below for more information.
Click Install. The system opens the Installed Applications screen with the Syncthing app in the Deploying state. After installation completes the status changes to Running.
Click Web Portal on the Application Info widget to open the Syncthing web portal to begin configuring folders, devices, and other settings.
Secure Syncthing by setting up a username and password.
The following sections provide more detailed explanations of the settings found in each section of the Install Syncthing screen.
Accept the default value or enter a name in the Application Name field. In most cases, use the default name; adding a second application deployment requires a different name.
Accept the default version number in Version. When a new version becomes available, the application has an update badge. The Installed Applications screen shows the option to update applications.
Accept the defaults in the Configuration settings or enter new user and group IDs. The default value for Owner User ID and Owner Group ID is 568.
Click Add to the right of Syncthing environment to show the Name and Value fields.
For a list of Syncthing environment variables, go to the Syncthing documentation website and search for environment variables. You can add environment variables to the Syncthing app configuration after deploying the app. Click Edit on the Syncthing Application Info widget found on the Installed Application screen to open the Edit Syncthing screen.
You can allow the Syncthing app to create the configuration storage volume or you can create datasets to use for the configuration storage volume and to use for storage within the container pod.
To use existing datasets, select Enable Custom Host Path for Syncthing Configuration Volume to show the Host Path for Syncthing Configuration Volume and Extra Host Path Volumes fields. Enter the host path in Host Path for Syncthing Configuration Volume, or browse to and select an existing dataset created for the configuration storage volume.
Click Add to the right of Extra Host Path Volumes to show the Mount Path in Pod and Host Path fields.
Enter the data1 dataset in Mount Path in Pod, then enter or browse to the dataset location in Host Path. If you added extra datasets to mount inside the container pod, click Add for each extra host path you want to mount inside the container pod. Enter or browse to the dataset created for the extra storage volumes to add inside the pod.
Accept the default port numbers in Web Port for Syncthing, TCP Port for Syncthing and UDP Port for Syncthing. The SCALE Syncthing chart app listens on port 20910. The default TCP port is 20978 and the default UDP port is 20979. Before changing default ports and assigning new port numbers, refer to the TrueNAS default port list for a list of assigned port numbers. To change the port numbers, enter a number within the range 9000-65535.
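As a quick sanity check before assigning custom ports, you can confirm a candidate number falls inside the allowed range from the shell. This is a minimal sketch; it does not check whether another service already uses the port, so still consult the TrueNAS default port list:

```shell
# Verify a candidate Syncthing port is within the allowed 9000-65535 range
port=20978
if [ "$port" -ge 9000 ] && [ "$port" -le 65535 ]; then
  echo "port $port is in the allowed range"
else
  echo "port $port is outside the allowed range"
fi
```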
We recommend not selecting Host Network. This binds to the host network.
Syncthing does not require configuring advanced DNS options. Accept the default settings or click Add to the right of DNS Options to enter the option name and value.
Accept the default values in Resources Configuration or select Enable Pod resource limits to enter new CPU and memory values. By default, this application is limited to use no more than 4 CPU cores and 8 Gibibytes of available memory. The application might use considerably less system resources.
To customize the CPU and memory allocated to the container (pod) Syncthing uses, enter new CPU values as a plain integer value followed by the suffix m (milli). Default is 4000m.
Accept the default value 8Gi allocated memory or enter a new limit in bytes. Enter a plain integer followed by the measurement suffix, for example, 129M or 123Mi.
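The M and Mi suffixes are not interchangeable: M is a decimal megabyte (10^6 bytes), while Mi is a binary mebibyte (2^20 bytes). Shell arithmetic shows the difference between the two example values:

```shell
# 129M (decimal) and 123Mi (binary) are close in size but not equal
echo $((129 * 1000 * 1000))   # 129M  = 129000000 bytes
echo $((123 * 1024 * 1024))   # 123Mi = 128974848 bytes
```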
After installing and starting the Syncthing application, launch the Syncthing web portal. Go to Actions > Settings and set a user password for the web UI.
The Syncthing web portal allows administrators to monitor and manage the synchronization process, view logs, and adjust settings.
Folders list configured sync folders, details on sync status and file count, capacity, etc. To change folder configuration settings, click on the folder.
This Device displays the current system IO status including transfer/receive rate, number of listeners, total uptime, sync state, and the device ID and version.
Actions displays a dropdown list of options. Click Advanced to access GUI, LDAP, folder, device, and other settings.
You can manage directional settings for sync configurations, security, encryption, and UI server settings through the Actions options.
The SCALE Chia app installs the Chia Blockchain architecture in a Kubernetes container. Chia Blockchain is a cryptocurrency ecosystem that uses Proof of Space and Time, and allows users to work with digital money and interact with their assets and resources. Instead of using expensive hardware that consumes exorbitant amounts of electricity to mine crypto, it leverages existing empty hard disk space on your computer(s) to farm crypto with minimal resources.
Before you install the application, you have the option to create the config and plots datasets for the Chia app storage volumes, or you can allow SCALE to automatically create these storage volumes.
You also have the option to mount datasets inside the container for other Chia storage needs. You can allow SCALE to create these storage volumes, or you can create and name datasets according to your intended use or as sequentially-named datasets (i.e., volume1, volume2, etc.) for each extra volume to mount inside the container.
Create all datasets before you begin the app installation process if using existing datasets and the host path option. See Creating a Dataset for information on correctly configuring the datasets.
To install the SCALE Chia app:
Log into SCALE, go to Apps, click on Discover Apps, then either begin typing Chia into the search field or scroll down to locate the Chia application widget.
Click on the widget to open the Chia app information screen.
Click Install to open the Install Chia configuration screen.
Application configuration settings are presented in several sections, each explained in Understanding SCALE Chia App Settings below. To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click on the section heading in the navigation area in the upper-right corner.
Accept the default value or enter a name in Application Name. Accept the default value in Version.
Select the timezone for your TrueNAS system location from the Timezone dropdown list of options.
Select the service from the Chia Service Mode dropdown list. The default option is Full Node, but you can select Farmer or Harvester. Harvester displays additional settings, each described in Chia Configuration below. Refer to Chia-provided documentation for more information on these services.
You can enter the network address (IP address or host name) for a trusted peer in Full Node Peer now or after completing the app installation. This is the trusted/known or untrusted/unknown server address you want to use in sync operations to speed up the sync time of your Chia light wallet. If not already configured in Chia, you can add this address as a trusted peer in Chia after completing the app installation.
Accept the default values in Chia Port and Farmer Port. You can enter port numbers below 9000, but check the Default Ports list to verify the ports are available. Setting ports below 9000 automatically enables host networking.
By default, SCALE can create the storage volumes (datasets) for the app.
If you created datasets to use, select Host Path (Path that already exists on the system). Enter or browse to select the mount path for the config and plot datasets created in First Steps and populate the Host Path field for both Data and Plots storage volumes.
Accept the defaults in Resource Configuration or change the CPU and memory limits to suit your use case.
Click Install. The system opens the Installed Applications screen with the SCALE Chia app in the Deploying state. When the installation completes, it changes to Running.
The first time the SCALE Chia app launches, it automatically creates and sets a new private key for your Chia plotting and wallet, but you must preserve this key across container restarts.
To make sure your plots and wallet private key persists (is not lost) across container restarts, save the mnemonic seed created during the app installation and deployment.
On the Installed apps screen, click on the Chia app row, then scroll down to the Workloads widget and the Shell and Logs icons.
Click on the shell icon to open the Choose pod window.
Click Choose to open the Pod shell screen.
To show the Chia key file details and the 24-word recovery key, enter /chia-blockchain/venv/bin/chia keys show --show-mnemonic-seed.
The command should return the following information:
If you lose the key file at any time, use this information to recover your account. To copy from the SCALE Pod Shell, highlight the part of the screen you want to copy, then press Ctrl+Insert. Open a text editor like Notepad, paste the information into the file, and save it. Back up this file and keep it secured where only authorized people can access it.
Now save this mnemonic-seed phrase to one of the host volumes on TrueNAS. Enter this command at the prompt:
echo type all 24 unique secret words in this command string > /plots/keyfile
Where type all 24 unique secret words in this command string is all 24 words in the mnemonic-seed.
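Since a valid mnemonic seed is exactly 24 words, it is worth verifying the saved file before relying on it. A minimal sketch, using a temporary file and placeholder words in place of /plots/keyfile and the real seed:

```shell
keyfile=$(mktemp)                          # stands in for /plots/keyfile
printf 'word%d ' $(seq 1 24) > "$keyfile"  # placeholder for the 24 real mnemonic words
wc -w < "$keyfile"                         # prints 24 when the seed is complete
chmod 600 "$keyfile"                       # restrict access to the key file
```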
Next, edit the SCALE Chia app to add the key file.
Click Installed on the breadcrumb at the top of the Pod Shell screen to return to the Apps > Installed screen. Click on the Chia app row, then click Edit in the Application Info widget to open the Edit Chia screen.
Click on Chia Configuration on the menu on the right of the screen or scroll down to this section. Click Add to the right of Additional Environments to add the Name and Value fields.
Enter keys in Name and /plots/keyfile in Value.
Scroll down to the bottom of the screen and click Save. The container should restart automatically.
After the app returns to the Running state, you can confirm the keys by returning to the Pod shell screen and entering the /chia-blockchain/venv/bin/chia keys show --show-mnemonic-seed command again.
If the keys are not identical, edit the Chia app again and check for errors in the names or values entered.
If identical, the key file persists for this container.
You can now complete your Chia configuration using either the Chia command line (CLI) or web interface (GUI).
To complete the Chia software and client setup, go to the Chia Crash Course and Chia Getting Started guides and follow the instructions provided. The following shows the simplest option to install the Chia GUI.
Click on the link to the Chia downloads and select the option that fits your operating system environment. This example shows the Windows setup option.
After downloading the setup file and opening the Chia Setup wizard, agree to the license to show the Chia setup options.
Click Next. Choose the installation location.
Click Install to begin the installation. When complete, click Next to show the Chia Setup Installation Complete wizard window. Launch Chia is selected by default. Select the Add the Chia Command Line executable to PATH advanced option if you want to include this. Click Finish.
After the setup completes, the Chia web portal opens in a new window where you configure your Chia wallet, farming modes, and other settings to customize Chia for your use case.
Use the Chia Documentation to complete configuration of your Chia software and client.
At this point, you are ready to begin farming Chia.
The CLI process is beyond the scope of this quick how-to, but we recommend you start by reading up on their CLI reference materials, Quick Start guide and other documentation.
The following sections provide more details on the settings found in each section of the SCALE Install Chia and Edit Chia screens.
Accept the default value or enter a name in Application Name field. In most cases use the default name, but if adding a second deployment of the application you must change this name.
Accept the default version number in Version. When a new version becomes available, the application has an update badge. The Installed Applications screen shows the option to update applications.
The Chia Configuration section includes four settings: Timezone, Chia Service Node, Full Node Peer, and Additional Environments.
Select the time zone for the location of the TrueNAS server from the dropdown list.
The Chia Service Node setting has three options: Full Node, Farmer, and Harvester. The default Full Node and the Farmer option do not have extra settings.
Selecting Harvester shows the required Farmer Address and Farmer Port settings, and CA for the certificate authority for the farmer system. Refer to Chia documentation on each of these services and what to enter as the farmer address and CA.
After configuring Chia in the Chia GUI or CLI, you can edit these configuration settings. To create a second SCALE Chia app deployment as a Harvester service node, repeat the instructions above.
You can enter the network address (IP address or host name) for a trusted peer in Full Node Peer now or after completing the app installation and setting up the Chia GUI or CLI and configuring the Chia blockchain. Enter the trusted/known or untrusted/unknown server address you want to use in sync operations to speed up the sync time of your Chia light wallet. You can also edit this after the initial app installation in SCALE.
Click Add to the right of Additional Environments to add a Name and Value field. You can add environment variables here to customize Chia and to make the initial key file persist after a container restart. Click Add for each environment variable you want to add. Refer to Chia documentation for information on environment variables you might want to implement.
Accept the default port numbers in Chia Port and Farmer Port. The SCALE Chia app listens on ports 38444 and 38447.
Refer to the TrueNAS default port list for a list of assigned port numbers. To change the port numbers, enter an available number within the range 9000-65535.
You can allow SCALE to create the datasets for Chia plots and configuration storage, or you can create the datasets you want to use as storage volumes for the app or to mount in the container. If manually creating and using datasets, follow the instructions in Creating a Dataset to correctly configure the datasets. Add one dataset named config and another named plots. If also mounting datasets in the container, add and name these additional storage volumes according to your intended use, or use volume1, volume2, etc. for each additional volume.
In the SCALE Chia app Storage Configuration section, select Host Path (Path that already exists on the system) as the Type for the Data storage volume. Enter or browse to and select the location of the existing dataset to populate the Host Path field. Repeat this for the Plots storage volume.
If adding storage volumes inside the container pod, click Add to the right of Additional Volumes for each dataset or ixVolume you want to mount inside the pod.
You can edit the SCALE Chia app after installing it to add additional storage volumes.
The Resources Configuration section allows you to limit the amount of CPU and memory the application can use. By default, this application is limited to use no more than 4 CPU cores and 8 Gibibytes available memory. The application might use considerably less system resources.
Tune these limits as needed to prevent the application from over-consuming system resources and introducing performance issues.
The SCALE Apps catalog now includes Collabora from the developers of Nextcloud.
With Collabora, you can host your online office suite at home.
To integrate Collabora correctly, you must have a Nextcloud account with Collabora added.
Click on the Collabora app Install button in the Available Applications list.
Name your app and click Next. In this example, the name is collabora1.
Select a Timezone and, if you wish, enter a custom Username and Password.
You can also add extra parameters to your container as you see fit. See the LibreOffice GitHub Parameters page for more information.
After you select your container settings, choose a Certificate and click Next.
Enter Environmental Variables as needed, then click Next.
Choose a node port to use for Collabora (we recommend the default), then click Next.
Configure extra host path volumes for Collabora as you see fit, then click Next.
Confirm your Collabora container options and click Save to complete setup.
After a few minutes, the Collabora container displays as ACTIVE.
After it does, you can click Web Portal to access the admin console.
The DDNS-Updater application is a lightweight universal dynamic DNS (DDNS) updater with web UI. When installed, a container launches with root privileges in order to apply the correct permissions to the DDNS-Updater directories. Afterwards, the container runs as a non-root user.
Make sure to have account credentials ready with the chosen DNS provider before installing the application in TrueNAS.
To grant access to specific user (and group) accounts instead of using the default apps user (UID: 568), add a non-root TrueNAS administrative user from Credentials > Local Users and record the UID and GID for this user. Using a non-default user/group account forces permission changes on any defined data storage for this application.
Have the TRUENAS catalog loaded and community train enabled. To view and adjust the current application catalogs, go to Apps and click Discover Apps > Manage Catalogs.
Go to Apps, click Discover Apps, and locate the DDNS-Updater application widget by typing the first few characters of the application name in the search bar.
Click the application card to see additional details about the application and options to install it.
Click Install to open the DDNS-Updater configuration screen. Application configuration options are presented in several sections. Find specific fields or skip to a particular section with the navigation box in the upper-right corner.
Leave these fields at their default settings. Changing the application version is only recommended when a specific version is required.
Select the timezone that applies to the TrueNAS location from the Timezone dropdown list.
Click Add to the right of DNS Provider Configuration to display provider setting options. Select the DDNS provider from the Provider dropdown list. Each provider displays the settings required to establish a connection with and authenticate to that specific provider.
Enter the domain and host name split between the Domain and Host fields.
For example, for myhostname.ddns.net, enter ddns.net in Domain and myhostname in Host.
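The split follows standard DNS name structure: everything before the first dot is the host, and the remainder is the domain. In shell parameter-expansion terms (a simple sketch that assumes a single-label host name):

```shell
fqdn="myhostname.ddns.net"
host="${fqdn%%.*}"    # myhostname  -> goes in the Host field
domain="${fqdn#*.}"   # ddns.net    -> goes in the Domain field
echo "$host $domain"
```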
Define how often to check IP addresses with Update Period and Update Cooldown Period.
The application also creates
To configure notifications with the Shoutrrr service, click Add and enter the service Address under Shoutrrr Addresses.
Use the Public IP options to define which providers to use for the various DNS, IPv4, and IPv6 public addresses. The default All providers allows for quick app usability but these options can be tuned as needed.
By default, the TrueNAS apps (UID/GID 568) user and group account manages this application.
Entering an alternate UID or GID reconfigures the application to run as that account. When using a custom account for this application, make sure the account is a member of the Builtin_administrators group and that the storage location defined in Storage Configuration has permissions tuned for this account after the application is installed.
By default, this application uses TrueNAS port 30007 to access the application web interface.
Adjust the Web Port integer when a different network port is required. Select Host Network to bind to the host network, but we recommend leaving this disabled.
Select the DDNS Updater Data Storage option from the Type dropdown list. Options are the iXVolume or a predefined host path.
By default, this application is limited to use no more than 4 CPU cores and 8 Gibibytes available memory. The application might use considerably less system resources.
Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
Review the configuration settings then click Install for TrueNAS to download and initialize the application.
Immich is a self-hosted photo and video backup tool.
Immich integrates photo and video storage with a web portal and mobile app. It includes features such as libraries, automatic backup, bulk upload, partner sharing, Typesense search, facial recognition, and reverse geocoding.
TrueNAS SCALE makes installing Immich easy, but you must use the Immich web portal and mobile app to configure accounts and access libraries.
The Immich app in TrueNAS SCALE installs, completes the initial configuration, then starts the Immich web portal. When updates become available, SCALE alerts and provides easy updates.
Before installing the Immich app in SCALE, review the Immich Environment Variables documentation to see if you want to configure any during installation. You can configure environment variables at any time after deploying the application.
SCALE does not need advance preparation.
You can allow SCALE to create the datasets Immich requires automatically during app installation.
Alternatively, before beginning the app installation, create the datasets to use in the Storage Configuration section during installation.
Immich requires seven datasets: library, pgBackup, pgData, profile, thumbs, uploads, and video.
You can organize these as one parent with seven child datasets, for example a parent immich dataset containing library, pgBackup, pgData, profile, thumbs, uploads, and video children.
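As a command-line sketch of that layout, the loop below prints one zfs create command per dataset. The tank pool and immich parent names are assumptions; substitute your own, and remove the leading echo to actually create the datasets:

```shell
# Hypothetical pool and parent dataset names -- substitute your own.
POOL=tank

# Print the zfs commands for one parent and the seven required children;
# drop the "echo" prefix to execute them.
echo zfs create "${POOL}/immich"
for child in library pgBackup pgData profile thumbs uploads video; do
  echo zfs create "${POOL}/immich/${child}"
done
```

You can create the same layout from the web UI under Datasets instead.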
To install the Immich application, go to Apps, click Discover Apps, then either begin typing Immich into the search field or scroll down to locate the Immich application widget.
Click on the widget to open the Immich application details screen.
Click Install to open the Immich application configuration screen.
Application configuration settings are presented in several sections, each explained below. To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click a section heading in the navigation area in the upper-right corner of the screen.
Accept the default values in Application Name and Version.
Accept the default value in Timezone or change to match your local timezone.
Timezone is only used by the Immich exiftool microservice if the timezone cannot be determined from the image metadata.
Accept the default port in Web Port.
Immich requires seven storage datasets. You can allow SCALE to create them for you, or use the dataset(s) created in First Steps. Select the storage options you want to use for Immich Uploads Storage, Immich Library Storage, Immich Thumbs Storage, Immich Profile Storage, Immich Video Storage, Immich Postgres Data Storage, and Immich Postgres Backup Storage. Select ixVolume (dataset created automatically by the system) in Type to let SCALE create the dataset, or select Host Path to use the existing datasets created on the system.
Accept the defaults in Resources or change the CPU and memory limits to suit your use case.
Click Install. The system opens the Installed Applications screen with the Immich app in the Deploying state. When the installation completes it changes to Running.
Click Web Portal on the Application Info widget to open the Immich web interface to set up your account and begin uploading photos. See Immich Post Install Steps for more information.
Go to the Installed Applications screen and select Immich from the list of installed applications. Click Edit on the Application Info widget to open the Edit Immich screen. The settings on the edit screen are the same as on the install screen. You cannot edit Storage Configuration paths after the initial app install.
Click Update to save changes. TrueNAS automatically updates, recreates, and redeploys the Immich container with the updated environment variables.
The following sections provide more detailed explanations of the settings found in each section of the Install Immich screen.
Accept the default value or enter a name in the Application Name field. In most cases use the default name, but if adding a second deployment of the application you must change this name.
Accept the default version number in Version. When a new version becomes available, the application has an update badge. The Installed Applications screen shows the option to update applications.
You can accept the defaults in the Immich Configuration settings, or enter the settings you want to use.
Accept the default setting in Timezone or change to match your local timezone.
Timezone is only used by the Immich exiftool microservice if the timezone cannot be determined from the image metadata.
You can enter a Public Login Message to display on the login page, or leave it blank.
Accept the default port number in Web Port. The SCALE Immich app listens on port 30041.
Refer to the TrueNAS default port list for a list of assigned port numbers. To change the port numbers, enter a number within the range 9000-65535.
You can install Immich using the default setting ixVolume (dataset created automatically by the system) or use the host path option with datasets created before installing the app.
Select Host Path (Path that already exists on the system) to browse to and select the datasets.
Accept the default values in Resources Configuration or enter new CPU and memory values. By default, this application is limited to no more than 4 CPU cores and 8 gibibytes of available memory. The application might use considerably fewer system resources.
To customize the CPU and memory allocated to the container Immich uses, enter new CPU values as a plain integer value followed by the suffix m (milli). The default is 4000m, which means Immich can use up to 4 cores.
Accept the default value of 8Gi allocated memory or enter a new limit in bytes. Enter a plain integer followed by a measurement suffix, for example 4G or 123Mi (G denotes decimal gigabytes, Gi binary gibibytes).
Systems with compatible GPU(s) display devices in GPU Configuration. Use the GPU Resource dropdown menu(s) to configure device allocation.
See Allocating GPU for more information about allocating GPU devices in TrueNAS SCALE.
Jellyfin is a volunteer-built media solution that puts you in control of managing and streaming your media.
Jellyfin enables you to collect, manage, and stream media files. Official and third-party Jellyfin streaming clients are available on most popular platforms.
TrueNAS SCALE makes installing Jellyfin easy, but you must use the Jellyfin web portal to configure accounts and manage libraries.
The Jellyfin app in TrueNAS SCALE installs, completes the initial configuration, then starts the Jellyfin web portal. When updates become available, SCALE alerts and provides easy updates.
You can configure environment variables at any time after deploying the application.
SCALE does not need advance preparation.
You can allow SCALE to create the datasets Jellyfin requires automatically during app installation.
Alternatively, before beginning the app installation, create the datasets to use in the Storage Configuration section during installation.
Jellyfin requires two datasets: config and cache.
You can organize these as one parent with two child datasets, for example a parent jellyfin dataset containing config and cache children.
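As with the other apps, a command-line sketch of that layout prints one zfs create command per dataset. The tank pool and jellyfin parent names are assumptions; remove the leading echo to actually create the datasets:

```shell
# Hypothetical pool and parent dataset names -- substitute your own.
POOL=tank

# Print the zfs commands for one parent and the two required children;
# drop the "echo" prefix to execute them.
echo zfs create "${POOL}/jellyfin"
for child in config cache; do
  echo zfs create "${POOL}/jellyfin/${child}"
done
```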
If you want to run the application with a user or group other than the default apps (568) user and group, create them now.
To install the Jellyfin application, go to Apps, click Discover Apps, then either begin typing Jellyfin into the search field or scroll down to locate the Jellyfin application widget.
Click on the widget to open the Jellyfin application details screen.
Click Install to open the Jellyfin application configuration screen.
Application configuration settings are presented in several sections, each explained below. To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click a section heading in the navigation area in the upper-right corner of the screen.
Accept the default values in Application Name and Version.
Accept the defaults in Jellyfin Configuration, User and Group Configuration, and Network Configuration or change to suit your use case. You must select Host Network under Network Configuration if using DLNA.
Jellyfin requires two app storage datasets. You can allow SCALE to create them for you, or use the dataset(s) created in First Steps. Select the storage options you want to use for Jellyfin Config Storage and Jellyfin Cache Storage. Select ixVolume (dataset created automatically by the system) in Type to let SCALE create the dataset, or select Host Path to use the existing datasets created on the system.
Jellyfin also requires a dataset or emptyDir for Jellyfin Transcodes Storage. Select ixVolume (dataset created automatically by the system) in Type to let SCALE create the dataset, select Host Path to use an existing dataset created on the system, or select emptyDir to use a temporary storage volume on the disk or in memory.
Solid-state storage is recommended for the config and cache datasets. Do not place the config and cache storage on the same spinning-disk device as your media storage libraries.
Mount one or more media libraries using Additional Storage. Click Add to enter the path(s) on your system. Select Host Path (Path that already exists on the system) or SMB Share (Mounts a persistent volume claim to a SMB share) in Type. Enter a Mount Path to be used within the Jellyfin container. For example, the local Host Path /mnt/tank/video/movies could be assigned the Mount Path /media/movies. Define the Host Path or complete the SMB Share Configuration fields. See Mounting Additional Storage below for more information.
Accept the defaults in Resource Configuration or change the CPU and memory limits to suit your use case.
Click Install.
A container launches with root privileges to apply the correct permissions to the Jellyfin directories. Afterward, the Jellyfin container runs as a non-root user (default: 568). Configured storage directory ownership is changed if the parent directory does not match the configured user.
The system opens the Installed Applications screen with the Jellyfin app in the Deploying state. When the installation completes it changes to Running.
Click Web Portal on the Application Info widget to open the Jellyfin web interface initial setup wizard to set up your admin account and begin administering libraries.
Go to the Installed Applications screen and select Jellyfin from the list of installed applications. Click Edit on the Application Info widget to open the Edit Jellyfin screen. The settings on the edit screen are the same as on the install screen. You cannot edit Storage Configuration paths after the initial app install.
Click Update to save changes. TrueNAS automatically updates, recreates, and redeploys the Jellyfin container with the updated environment variables.
The following sections provide more detailed explanations of the settings found in each section of the Install Jellyfin screen.
Accept the default value or enter a name in the Application Name field. In most cases use the default name, but if adding a second deployment of the application you must change this name.
Accept the default version number in Version. When a new version becomes available, the application has an update badge. The Installed Applications screen shows the option to update applications.
You can accept the defaults in the Jellyfin Configuration settings, or enter the settings you want to use.
You can enter a Published Server URL for use in UDP autodiscovery, or leave it blank.
If needed, click Add to define Additional Environment Variables; see the Jellyfin Configuration documentation for options.
You can accept the default value of 568 (apps) in User ID and Group ID or define your own.
This user and group are used for running the Jellyfin container only and cannot be used to log in to the Jellyfin web interface. Create an admin user in the Jellyfin initial setup wizard to access the UI.
Select Host Network under Network Configuration if using DLNA, to bind network configuration to the host network settings. Otherwise, leave Host Network unselected.
Accept the default port number in Web Port. The SCALE Jellyfin app listens on port 30013.
Refer to the TrueNAS default port list for a list of assigned port numbers. To change the port numbers, enter a number within the range 9000-65535.
You can install Jellyfin using the default setting ixVolume (dataset created automatically by the system) or use the host path option with datasets created before installing the app.
Select Host Path (Path that already exists on the system) to browse to and select the datasets.
For Jellyfin Transcodes Storage, choose ixVolume, Host Path, or emptyDir (Temporary directory created on the disk or in memory). An emptyDir uses ephemeral storage either on the disk or by mounting a tmpfs (RAM-backed filesystem) directory for storing transcode files.
Click Add next to Additional Storage to add the media storage path(s) on your system.
Select Host Path (Path that already exists on the system) or SMB Share (Mounts a persistent volume claim to a SMB share) in Type. You can select ixVolume (dataset created automatically by the system) to create a new library dataset, but this is not recommended.
Mounting an SMB share allows data synchronization between the share and the app. The SMB share mount does not include ACL protections at this time. Permissions are currently limited to those of the user that mounted the share. Alternate data streams (metadata), Finder color tags, previews, resource forks, and macOS metadata are stripped from the share along with filesystem permissions, but this functionality is undergoing active development and implementation is planned for a future TrueNAS SCALE release.
For all types, enter a Mount Path to be used within the Jellyfin container. For example, the local Host Path /mnt/tank/video/movies could be assigned the Mount Path /media/movies.
Accept the default values in Resources Configuration or enter new CPU and memory values. By default, this application is limited to no more than 4 CPU cores and 8 gibibytes of available memory.
To customize the CPU and memory allocated to the container Jellyfin uses, enter new CPU values as a plain integer value followed by the suffix m (milli). The default is 4000m, which means Jellyfin can use up to 4 CPU cores.
Accept the default value 8Gi allocated memory or enter a new limit in bytes. Enter a plain integer followed by the measurement suffix, for example 4G.
Systems with compatible GPU(s) display devices in GPU Configuration. Use the GPU Resource dropdown menu(s) to configure device allocation.
See Allocating GPU for more information about allocating GPU devices in TrueNAS SCALE.
This section has tutorials for using the MinIO apps available for TrueNAS SCALE.
SCALE has two versions of the MinIO application: the community version of the S3 application, available in the charts train of the TRUENAS catalog, and the MinIO Enterprise version, a smaller version of MinIO that is tested and polished for a safe and supportable experience for TrueNAS Enterprise customers. Community members can install either the Enterprise or community version.
MinIO High Performance Object Storage, released under the Apache License v2.0, is an open source, Kubernetes-native, and Amazon S3-compatible object storage solution. For more on MinIO, see MinIO Object Storage for Kubernetes.
The MinIO applications, both the charts and enterprise train versions, allow users to build high-performance infrastructure for machine learning, analytics, and application data workloads.
MinIO supports distributed mode, which allows pooling multiple drives, even on different systems, into a single object storage server. For information on configuring a distributed-mode cluster in SCALE using MinIO, see Setting Up MinIO Clustering.
For information on installing and configuring MinIO Enterprise, see Installing MinIO Enterprise.
The instructions in this section cover the basic requirements and the steps to install and configure the community MinIO application, the charts train version. For instructions on installing the Enterprise version of the MinIO application, see Configuring Enterprise MinIO.
Before configuring MinIO, create a dataset and shared directory for the persistent MinIO data.
Go to Datasets and select the pool or dataset where you want to place the MinIO dataset. For example, /tank/apps/minio or /tank/minio. You can use either an existing pool or create a new one.
After creating the dataset, create the directory where MinIO stores information the application uses. There are two ways to do this:
In the TrueNAS SCALE CLI, use storage filesystem mkdir path="/PATH/TO/minio/data"
to create the /data directory in the MinIO dataset.
In the web UI, create a share (for example, an SMB share), then log into that share and create the directory.
MinIO uses /data but allows users to replace this with the directory of their choice.
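As an alternative to the TrueNAS CLI command or a share, you can create the directory in a shell session. The sketch below uses a temporary directory so it is self-contained; on a real system, point MINIO_DATASET at your dataset mountpoint (for example /mnt/tank/apps/minio, a hypothetical path):

```shell
# Stand-in for your dataset mountpoint, e.g. /mnt/tank/apps/minio.
MINIO_DATASET="$(mktemp -d)"

# Create the /data directory MinIO stores its objects in.
mkdir -p "${MINIO_DATASET}/data"
ls -d "${MINIO_DATASET}/data"
```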
To install the S3 MinIO (community app), go to Apps, click on Discover Apps, then either begin typing MinIO into the search field or scroll down to locate the charts version of the MinIO widget.
Click on the widget to open the MinIO application information screen.
Click Install to open the Install MinIO screen.
Accept the default values for Application Name and Version. The best practice is to keep the default Create new pods and then kill old ones in the MinIO update strategy. This implements a rolling upgrade strategy.
Next, enter the MinIO Configuration settings.
The MinIO application defaults include all the arguments you need to deploy a container for the application.
Enter a name in Root User to use as the MinIO access key. Enter a name of five to 20 characters in length, for example admin or admin1. Next enter the Root Password to use as the MinIO secret key. Enter eight to 40 random characters, for example MySecr3tPa$$w0d4Min10.
Refer to MinIO User Management for more information.
Keep all passwords and credentials secured and backed up.
MinIO containers use server port 9000. The MinIO Console communicates using port 9001.
You can configure the API and UI access node ports and the MinIO domain name if you have TLS configured for MinIO.
To store your MinIO container audit logs, select Enable Log Search API and enter the amount of storage you want to allocate to logging. The default is 5 GB.
Configure the storage volumes. Accept the default /export value in Mount Path. Click Add to the right of Extra Host Path Volumes to add a data volume for the dataset and directory you created above. Enter the /data directory in Mount Path in Pod and the dataset you created in the First Steps section in Host Path.
If you want to create volumes for postgres data and postgres backup, select Postgres Data Volume and/or Postgres Backup Volume to add the mount and host path fields for each. If not set, TrueNAS uses the default values postgres-data and postgres-backup.
Accept the defaults in Advanced DNS Settings.
If you want to limit the CPU and memory resources available to the container, select Enable Pod resource limits then enter the new values for CPU and/or memory.
Click Install when finished entering the configuration settings.
The Installed Applications screen opens, showing the MinIO application in the Deploying state. It changes to Running when the application is ready to use.
Click Web Portal to open the MinIO sign-in screen.
The following sections provide more detailed explanations of the settings found in each section of the Install MinIO configuration screen.
Accept the default value or enter a name in the Application Name field. Accept the default version number in Version.
The MinIO Workload Configuration section includes the MinIO update strategy setting that sets how application updates occur.
Select Create new pods then kill old ones to implement a rolling update strategy, where the existing container (pod) remains until the update completes and is then removed. Select Kill existing pods before creating new ones to implement a recreate update strategy, where the existing container (pod) is removed before a new one is created. The recommended option is to keep the default and use the rolling update strategy.
The MinIO Configuration section provides options to set up a cluster, add arguments, credentials, and environment variables to the deployment.
Select Enable Distributed Mode when setting up a cluster of SCALE systems in a distributed cluster.
MinIO in distributed mode allows you to pool multiple drives or TrueNAS SCALE systems (even if they are different machines) into a single object storage server for better data protection in the event of single or multiple node failures because MinIO distributes the drives across several nodes. For more information, see the Distributed MinIO Quickstart Guide.
To create a distributed cluster, click Add to show a Distributed MinIO Instance URI(s) field for the IP address or host name of each TrueNAS system (node) to include in the cluster. Use the same order across all the nodes.
The app is preconfigured with the arguments it needs to deploy a container. Do not enter the server and URL arguments that earlier versions of the app required.
Enter the name for the root user (MinIO access key) in Root User. Enter a name of five to 20 characters in length. For example admin or admin1. Next enter the root user password (MinIO secret key) in Root Password. Enter eight to 40 random characters. For example MySecr3tPa$$w0d4Min10.
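If you want a randomly generated secret key, one option (assuming openssl is available on your workstation) is to base64-encode 30 random bytes, which yields exactly 40 characters, the maximum allowed length:

```shell
# Generate a random 40-character MinIO root password (secret key).
# 30 random bytes base64-encode to exactly 40 characters.
openssl rand -base64 30
```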
You do not need to enter extra arguments or environment variables to configure the MinIO app.
Accept the default port settings in MinIO Service Configuration. Before changing ports, refer to Default Ports.
Select the optional Enable Log Search API to enable the Log Search API and configure MinIO to use this function. This deploys a postgres database to store the logs. Enabling this option displays the Disk Capacity in GB field. Use it to specify the storage in gigabytes the logs are allowed to occupy.
MinIO storage settings include the option to add mount paths and storage volumes to use inside the container (pod). There are three storage volumes: data, postgres data, and postgres backup. The data volume is the only required storage volume.
Accept the default /export value in Mount Path. Click Add to the right of Extra Host Path Volumes to add a data volume for the dataset and directory you created above. Enter the /data directory in Mount Path in Pod and the dataset you created in the First Steps section above in Host Path.
Of the three volume options, adding the data volume and directory are required. Adding postgres data volumes is optional.
To add host paths for postgres storage volumes, select Enable Host Path for Postgres Data Volume and/or Enable Host Path for Postgres Backup Volumes. SCALE default values for each of these postgres volumes are postgres-data and postgres-backup.
MinIO does not require configuring advanced DNS options. Accept the default settings or click Add to the right of DNS Options to show the Name and Value fields for a DNS option.
By default, this application is limited to no more than 4 CPU cores and 8 gibibytes of available memory. The application might use considerably fewer system resources.
To customize the CPU and memory allocated to the container (pod) the MinIO app uses, select Enable Pod resource limits. This adds the CPU Resource Limit and Memory Limit fields. Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
This article applies to the public release of the S3 MinIO community application in the charts train of the TRUENAS catalog.
MinIO fails to deploy if you update your version 2022-10-24_1.6.58 MinIO app to 2022-10-29_1.6.59 or later using the TrueNAS web UI.
Your app logs display an error similar to the following:
ERROR Unable to use the drive /export: Drive /export: found backend type fs, expected xl or xl-single: Invalid arguments specified.
If you get this error after upgrading your MinIO app, use the app Roll Back function, found on the Application Info widget on the Installed applications screen, and return to 2022-10-24_1.6.58 to make your MinIO app functional again.
You need WSL2 (Windows Subsystem for Linux) if you are using a Windows computer.
If your system has sharing (SMB, NFS, iSCSI) configured, disable the share service before adding and configuring a new MinIO deployment. After completing the installation and starting MinIO, enable the share service.
When adding a new MinIO deployment, verify your storage settings are correct in the MinIO application configuration. If not set, click Install and enter the required information.
To manually update your MinIO application:
Follow the instructions here to make a new, up-to-date MinIO deployment in TrueNAS. Make sure it is version 2022-10-29_1.6.59 or later.
Download the MinIO Client here for your OS and follow the installation instructions. The MinIO Client (mc) lets you create and manage MinIO deployments via your system command prompt.
Open a terminal or CLI.
If you are on a Windows computer, open PowerShell and enter wsl to switch to the Linux subsystem.
Change directories to the folder that contains the MinIO Client (mc or mc.exe) file.
Add your old deployment to mc by entering: ./mc alias set old-deployment-name http://IPaddress:port/ rootuser rootpassword
Add your new deployment to mc using the same command with the new alias: ./mc alias set new-deployment-name http://IPaddress:port/ rootuser rootpassword
To port your configuration from your old MinIO deployment to your new one, export your old MinIO app configurations by entering ./mc.exe admin config export old-deployment-name > config.txt
MinIO Client exports the config file to the current directory path.
Next, import the old app config file into the new app by entering: ./mc.exe admin config import new-deployment-name < config.txt
Restart the new MinIO app to apply the configuration changes.
./mc.exe admin service restart new-minio-deployment
Export the old app bucket metadata by entering ./mc.exe admin cluster bucket export old-minio-deployment
Import the metadata into the new app with ./mc.exe admin cluster bucket import new-minio-deployment cluster-metadata.zip
Export the old app IAM settings by entering ./mc.exe admin cluster iam export old-minio-deployment
Import the IAM settings into the new app with ./mc.exe admin cluster iam import new-minio-deployment alias-iam-info.zip
Create buckets in your new MinIO app to move data and objects to.
Move the objects and data from your old MinIO app to your new one using ./mc.exe mirror --preserve --watch source/bucket target/bucket
Repeat for every bucket you intend to move.
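The command sequence above can be collected into one shell session. Everything below is a placeholder sketch, not values from your system: the aliases, the 192.0.2.x documentation addresses, ports, credentials, and bucket name are all assumptions, and the MC variable echoes each command instead of running it (set MC=./mc, or MC=./mc.exe on Windows, to execute for real):

```shell
# Dry-run wrapper: prints each MinIO Client command instead of running it.
# Set MC=./mc (or MC=./mc.exe on Windows/WSL2) to execute the commands.
MC="echo ./mc"

# Placeholder deployment aliases, addresses, and credentials.
OLD=old-deployment-name
NEW=new-deployment-name

# Register both deployments with the MinIO Client.
$MC alias set "$OLD" http://192.0.2.10:9000/ rootuser rootpassword
$MC alias set "$NEW" http://192.0.2.10:9002/ rootuser rootpassword

# Port the configuration from the old deployment to the new one.
$MC admin config export "$OLD" > config.txt
$MC admin config import "$NEW" < config.txt
$MC admin service restart "$NEW"

# Carry bucket metadata and IAM settings across.
$MC admin cluster bucket export "$OLD"
$MC admin cluster bucket import "$NEW" cluster-metadata.zip
$MC admin cluster iam export "$OLD"
$MC admin cluster iam import "$NEW" "${OLD}-iam-info.zip"

# Mirror the data; repeat for every bucket you intend to move.
$MC mirror --preserve --watch "$OLD/mybucket" "$NEW/mybucket"
```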
After moving all data from the old app to the new one, return to the TrueNAS UI Apps screen and stop both MinIO apps.
Delete the old MinIO app. Edit the new one and change the API and UI Access Node Ports to match the old MinIO app.
Restart the new app to finish migrating.
When complete and the app is running, restart any share services.
This article applies to the public release of the S3 MinIO charts application in the TRUENAS catalog.
On TrueNAS SCALE 23.10 and later, users can create a MinIO S3 distributed instance to scale out and handle individual node failures. A node is a single TrueNAS storage system in a cluster.
The examples below use four TrueNAS systems to create a distributed cluster. For more information on MinIO distributed setups, refer to the MinIO documentation.
Before configuring MinIO, create a dataset and shared directory for the persistent MinIO data.
Go to Datasets and select the pool or dataset where you want to place the MinIO dataset. For example, /tank/apps/minio or /tank/minio. You can use either an existing pool or create a new one.
After creating the dataset, create the directory where MinIO stores information the application uses. There are two ways to do this:
In the TrueNAS SCALE CLI, use storage filesystem mkdir path="/PATH/TO/minio/data"
to create the /data directory in the MinIO dataset.
In the web UI, create a share (for example, an SMB share), then log into that share and create the directory.
MinIO uses /data but allows users to replace this with the directory of their choice.
For a distributed configuration, repeat this on all system nodes in advance.
Take note of the system (node) IP addresses or host names and have them ready for configuration. Also, have your S3 user name and password ready for later.
Configure the MinIO application using the full version MinIO charts widget. Go to Apps and click Discover Apps. We recommend using the Install option on the MinIO application widget.
If your system has sharing (SMB, NFS, iSCSI) configured, disable the share service before adding and configuring a new MinIO deployment. After completing the installation and starting MinIO, enable the share service.
If the dataset for the MinIO share has the same path as the MinIO application, disable host path validation before starting MinIO. To use host path validation, set up a new dataset for the application with a completely different path. For example, for the share /pool/shares/minio and for the application /pool/apps/minio.
Begin on the first node (system) in your cluster.
To install the S3 MinIO (community app), go to Apps, click on Discover Apps, then either begin typing MinIO into the search field or scroll down to locate the charts version of the MinIO widget.
Click on the widget to open the MinIO application information screen.
Click Install to open the Install MinIO screen.
Accept the default values for Application Name and Version. The best practice is to keep the default Create new pods and then kill old ones in the MinIO update strategy. This implements a rolling upgrade strategy.
Next, enter the MinIO Configuration settings.
Select Enable Distributed Mode when setting up a cluster of SCALE systems in a distributed cluster.
MinIO in distributed mode lets you pool multiple drives or TrueNAS SCALE systems (even if they are different machines) into a single object storage server. Because MinIO distributes the drives across several nodes, this provides better data protection in the event of single or multiple node failures. For more information, see the Distributed MinIO Quickstart Guide.
To create a distributed cluster, click Add to show the Distributed MinIO Instance URI(s) fields, then enter the IP address or host name of each TrueNAS system (node) to include in the cluster. Use the same order across all the nodes.
The MinIO application defaults include all the arguments you need to deploy a container for the application.
Enter a name in Root User to use as the MinIO access key. Enter a name of five to 20 characters in length, for example admin or admin1. Next enter the Root Password to use as the MinIO secret key. Enter eight to 40 random characters, for example MySecr3tPa$$w0d4Min10.
Refer to MinIO User Management for more information.
Keep all passwords and credentials secured and backed up.
For a distributed cluster, ensure the values are identical between server nodes and have the same credentials.
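As a quick sanity check of the length requirements above (the values below are examples only, not real credentials):

```shell
# MinIO requires an access key (Root User) of 5-20 characters and a
# secret key (Root Password) of 8-40 characters.
ROOT_USER='admin1'
ROOT_PASS='MySecr3tPa$$w0d4Min10'   # single-quoted so $$ stays literal

ulen=${#ROOT_USER}
plen=${#ROOT_PASS}
[ "$ulen" -ge 5 ] && [ "$ulen" -le 20 ] && echo "user name length OK ($ulen)"
[ "$plen" -ge 8 ] && [ "$plen" -le 40 ] && echo "password length OK ($plen)"
```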
MinIO containers use server port 9000. The MinIO Console communicates using port 9001.
You can configure the API and UI access node ports and the MinIO domain name if you have TLS configured for MinIO.
To store your MinIO container audit logs, select Enable Log Search API and enter the amount of storage you want to allocate to logging. The default allocation is 5 GB.
You can also configure a MinIO certificate.
Configure the storage volumes. Accept the default /export value in Mount Path. Click Add to the right of Extra Host Path Volumes to add a data volume for the dataset and directory you created above. Enter the /data directory in Mount Path in Pod and the dataset you created in the First Steps section in Host Path.
Accept the defaults in Advanced DNS Settings.
If you want to limit the CPU and memory resources available to the container, select Enable Pod resource limits then enter the new values for CPU and/or memory.
Click Install when finished entering the configuration settings.
Now that the first node is complete, configure any remaining nodes (including datasets and directories).
After installing MinIO on all systems (nodes) in the cluster, start the MinIO applications.
After the application starts, you can navigate to the TrueNAS address at port :9000 to see the MinIO UI. In a distributed setup, you can see all your TrueNAS addresses.
Log in with the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD keys you created as environment variables.
Click Web Portal to open the MinIO sign-in screen.
Application maintenance is independent from TrueNAS SCALE version release cycles. This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes. To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker. To propose documentation changes for a separately versioned Docker-based app, first use the Product and Version dropdowns to switch to the Nightly version Apps documentation, then click Edit Page.
See Updating Content for more guidance on proposing documentation changes.
The TrueNAS SCALE Netdata app provides an easy way to install and access the Netdata infrastructure monitoring solution. SCALE deploys the Netdata app in a Kubernetes container using the Helm package manager. After successfully deploying the app, you can access the Netdata web portal from SCALE. The Netdata web portal opens on the local dashboard, where you can create new dashboards and add plugins, metric databases, physical and virtual systems, containers, and other cloud deployments you want to monitor. The portal also provides access to the Netdata Cloud sign-in screen.
The SCALE Netdata app does not require advance preparation.
You can allow SCALE to automatically create storage volumes for the Netdata app or you can create specific datasets to use for configuration, cache, and library storage and extra storage volumes in the container pod. If using specific datasets, create these before beginning the app installation.
The administrator account must have sudo permissions enabled. To verify, go to Credentials > Local User. Click on the administrator user (e.g., admin), then click Edit. Scroll down to the sudo permissions. Select either Allow all sudo commands to permit changes after entering a password (not recommended in this instance) or Allow all sudo commands with no password to permit changes without requiring a password. If you upgraded from Angelfish or early releases of Bluefin that do not have an admin user account, see Creating an Admin User Account for instructions on correctly creating an administrator account with the required permissions.
You can create a Netdata account before or after installing and deploying the Netdata app.
To install the Netdata application, go to Apps, click on Discover Apps, then either scroll down to the Netdata app widget or begin typing Netdata in the search field to filter the list to find the Netdata app widget.
Click on the widget to open the Netdata application details screen.
Click Install to open the Install Netdata screen.
Application configuration settings, presented in several sections, are explained in Understanding Netdata Settings below. To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click on the section heading in the navigation area in the upper-right corner.
Accept the default values in Application Name and Version.
Accept the default settings in Netdata Configuration and the default port in Node Port to use for Netdata UI. The SCALE Netdata app uses the default port 20489 to communicate with Netdata and show the Netdata local dashboard.
Make no changes in the Storage section to allow SCALE to create the storage volumes for the app. To use datasets created for Netdata configuration storage, select Enable Host Path for Netdata to show the Host Path for Netdata Configuration settings.
Enter or browse to select the dataset created for Netdata configuration storage to populate the mount path. If using datasets created for cache and library storage, enable these options, then enter or browse to the datasets for each.
Accept the default settings in Advanced DNS Settings.
Accept the default values in Resources Limits or select Enable Pod Resource limits to show resource configuration options for CPU and memory and enter new values to suit your use case.
Click Install. The system opens the Installed Applications screen with the Netdata app in the Deploying state. When the installation completes it changes to Running.
Click Web Portal on the Application Info widget to open the Netdata web interface showing the local dashboard.
The following sections provide more detailed explanations of the settings found in each section of the Install Netdata screen.
Accept the default value or enter a name in Application Name. In most cases use the default name, but if adding a second deployment of the application you must change the name.
Accept the default version number in Version. When a new version becomes available, the application shows an update badge on the Installed Applications screen and adds Update buttons to the Application Info widget and the Installed applications screen.
You can accept the defaults in the Netdata Configuration settings or enter the settings you want to use.
Click Add to the right of Netdata image environment to display the environment variable Name and Value fields. Netdata does not require using environment variables to deploy the application but you can enter any you want to use to customize your container.
The SCALE Netdata app uses port 20489 to communicate with Netdata and open the web portal. Netdata documentation states it uses 19999 as the default port, but it recommends restricting access to this for security reasons. Refer to the TrueNAS default port list for a list of assigned port numbers. To change the port numbers, enter a number within the range 9000-65535.
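A minimal sketch of the port-range rule described above:

```shell
# SCALE app node ports must fall within 9000-65535.
valid_port() {
  [ "$1" -ge 9000 ] && [ "$1" -le 65535 ]
}

valid_port 20489 && echo "20489 accepted (SCALE Netdata app default)"
valid_port 8080  || echo "8080 rejected (below 9000)"
```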
SCALE defaults to automatically creating storage volumes for Netdata without enabling the host path options.
To create and use datasets for the Netdata configuration, cache, and library storage or extra storage volumes inside the container pod, first create these datasets. Go to Datasets and create the datasets before you begin the app installation process. See Add Datasets for more information. Select Enable Host Path for Netdata to show the volume mount path field to add the configuration storage dataset.
Enter or browse to select the dataset and populate the mount path field. To use datasets created for cache and library storage volumes, first enable each option and then enter or browse to select the datasets to populate the mount path fields for each.
If you want to add storage volumes inside the container pod for other storage, click Add to the right of Extra Host Path Volumes for each storage volume (dataset) you want to add.
You can add extra storage volumes at the time of installation or edit the application after it deploys. Stop the app before editing settings.
The default DNS Configuration is sufficient for a basic installation. To specify additional DNS options, click Add to the right of DNS Options to add the DNS Option Name and Option Value fields.
Accept the default values in Resources Limits or select Enable Pod Resource limits to show CPU and memory resource configuration options.
By default, the application is limited to no more than four CPU cores and eight gigabytes of available memory. The application might use considerably less system resources.
To customize the CPU and memory allocated to the container (pod) Netdata uses, enter new CPU values as a plain integer value followed by the suffix m (milli). Default is 4000m.
Accept the default value 8Gi allocated memory or enter a new limit in bytes. Enter a plain integer followed by the measurement suffix, for example 129M or 123Mi.
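The suffixes follow Kubernetes resource-quantity notation: m means millicores (1000m = 1 core), and Mi/Gi are binary byte multiples. A small sketch:

```shell
# Convert the default CPU limit from millicores to whole cores.
millicores=4000
cores=$((millicores / 1000))
echo "${millicores}m = ${cores} cores"

# Mi is a binary multiple: 1 Mi = 1024 * 1024 bytes.
echo "123Mi = $((123 * 1024 * 1024)) bytes"
```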
After deploying the SCALE Netdata app click on Web Portal to open the Netdata agent local dashboard. This Netdata dashboard provides a system overview of CPU usage and other vital statistics for the TrueNAS server connecting to Netdata.
The Netdata System Overview dashboard displays a limited portion of the reporting capabilities. Scroll down to see more information or click on a listed metric on the right side of the screen to show the graph and reporting on that metric. Click the other tabs at the top left of the dashboard to view other dashboards for nodes, alerts, anomalies, functions, and events. You can add your own Netdata dashboards using Netdata configuration documentation to guide you. Click on the Nodes tab to better understand the differences between the Netdata agent and Netdata Cloud service reporting. The Netdata Cloud monitors your cloud storage providers added to Netdata.
Click Sign In to open the Netdata Cloud sign-in screen.
Use the Netdata-provided documentation to customize Netdata dashboards to suit your use case and monitoring needs.
Nextcloud is a drop-in replacement for many popular cloud services, including file sharing, calendar, groupware, and more. One of its more common uses in the home environment is as a media backup, organization, and sharing service. This procedure demonstrates how to set up Nextcloud on TrueNAS SCALE and configure it to support hosting a wider variety of media file previews, including High Efficiency Image Container (HEIC), MP4, and MOV files.
Before using SCALE to install the Nextcloud application you need to create four datasets to use as storage for the Nextcloud application.
If you are creating a new user account to manage this application or using the local administrator account, enable sudo permissions for that account.
If creating a new user for Nextcloud, add the user to the dataset ACL permissions.
If you want to use a certificate for this application, create a new self-signed CA and certificate, or import the CA and create the certificate if using one already configured for Nextcloud. A certificate is not required to deploy the application.
Set up an account with Nextcloud if you don’t already have one. Enter this user account in the application configuration.
In this procedure you:
Add the storage for Nextcloud to use.
Install the Nextcloud app in SCALE.
Nextcloud needs five datasets: a primary dataset for the application (nextcloud) with four child datasets. The four child datasets are named and used as follows:
SCALE creates the ix-applications dataset in the pool you set as the application pool when you first go to the Apps screen. This dataset is internally managed, so you cannot use this as the parent when you create the required Nextcloud datasets.
To create the Nextcloud app datasets, go to Datasets, select the dataset you want to use as the parent dataset, then click Add Dataset to add a dataset. In this example, we create the Nextcloud datasets under the root parent dataset tank.
Enter nextcloud in Name, select Apps as the Dataset Preset. Click Advanced Options to make any other setting changes you want to make, and click Save. When prompted, select Return to Pool List.
Next, select the nextcloud dataset, click Add Dataset to add the first child dataset. Enter appdata in Name and select Apps as the Dataset Preset. Click Advanced Options to make any other setting changes you want to make for the dataset, and click Save.
Repeat this three more times to add the other three child datasets to the nextcloud parent dataset. When finished you should have the nextcloud parent dataset with four child datasets under it. Our example paths are:
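The example layout can be sketched by printing the mount paths, using the parent dataset tank and the child dataset names this tutorial uses:

```shell
# Mount path for a Nextcloud child dataset under the tank pool.
nc_path() {
  echo "/mnt/tank/nextcloud/$1"
}

# Child dataset names follow this tutorial's examples.
for child in appdata userdata pgpdata pgbbackup; do
  nc_path "$child"
done
```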
Go to Apps. If the pool for apps is not already set, do it when prompted.
When set, the Installed Applications screen displays Apps Service Running on the top screen banner.
Click Discover Apps and then locate the Nextcloud app. Change the Sort to App Name, then type Nextcloud in the search field to display the app widget.
Click on the widget to open the Nextcloud details screen, then click Install. If this is the first application installed, SCALE displays a dialog about configuring apps.
Click Confirm then Agree to close the dialog and open the Nextcloud details screen.
Click Install to open the Install Nextcloud wizard.
Accept the default name for the app in Application Name or enter a new name if you want to change what displays or have multiple Nextcloud app deployments on your system. This example uses the default nextcloud.
Scroll down to or click on Nextcloud Configuration to show the app configuration settings. For a basic installation you can leave the default values in all settings except Username and Password.
a. Enter the username and password created in the Before You Begin section, or the existing Nextcloud administrator user account credentials. This example uses admin as the user.
TrueNAS populates Host with the IP address for your TrueNAS server and Nextcloud data directory populates with the correct path.
b. Click Add to the right of Command to show the Command field then click in that field and select Install ffmpeg to automatically install the FFmpeg utility when the container starts.
c. (Optional) Click in the Certificate Configuration field and select the certificate for Nextcloud if you have already created one and are using a certificate.
d. Leave Cronjobs selected (enabled by default). Select the schedule you want to use for the cron job.
e. To specify an optional Environment Variable name and value, click the Add button.
Accept the port number TrueNAS populates in the Web Port field in Network Configuration.
Enter the storage settings for each of the four datasets created for the Nextcloud app.
Do not select Pre v2 Storage Structure if you are deploying Nextcloud for the first time as this slows down the installation and is not necessary. If you are upgrading where your Nextcloud deployment in SCALE was a 1.x.x release, select this option.
a. Select Host Path (Path that already exists on the system) in Type, then browse to and select the appdata dataset to populate the Host Path for the Nextcloud AppData Storage fields.
You can set the ACL permissions here by selecting Enable ACL, but it is not necessary. You can also change dataset permissions from the Datasets screen using the Edit button on the Permissions widget for the Nextcloud Data dataset.
b. Select Host Path (Path that already exists on the system) in Type, then browse to and select the userdata dataset to populate the Host Path for the Nextcloud User Data Storage fields.
c. Scroll down to the Nextcloud Postgres Data Storage option. Select Host Path (Path that already exists on the system) in Type, then browse to and select the pgpdata dataset to populate the Host Path.
d. Scroll down to Nextcloud Postgres Backup Storage, select Host Path, and then enter or browse to the path for the pgbbackup dataset. When complete, the four datasets for Nextcloud are configured.
Accept the remaining setting defaults.
Scroll up to review the configuration settings and fix any errors or click Install to begin the installation.
The Installed screen displays with the nextcloud app in the Deploying state. It changes to Running when ready to use. Click Web Portal on the Application Info widget to open the Nextcloud web portal sign-in screen.
There are known issues with Nextcloud app releases earlier than 2.0.4. Use the Upgrade option in the SCALE UI to update your Nextcloud release to 2.0.4.
For information on Nextcloud fixes involving TN Charts, see PR 2447 nextcloud:fixes
If the app does not deploy, add the www-data user and group to the /nextcloud dataset but do not set recursive. Stop the app before editing the ACL permissions for the datasets.
Next, try adding the www-data user and group to the /nextcloud/data dataset. You can set this to recursive, but it is not necessary. To do this:
SCALE includes the ability to run Docker containers using Kubernetes.
Always read through the Docker Hub page for the container you are considering installing so that you know all of the settings that you need to configure. To set up a Docker image, first determine if you want the container to use its own dataset. If yes, create a dataset for host volume paths before you click Launch Docker Image.
If you want to create a dataset for Pi-hole data storage, you must do this before beginning the Pi-hole application install.
When you are ready to create a container, click Apps to open the Applications screen, then click on Available Applications. Locate the pihole widget and click Install on the widget.
Fill in the Application Name and click Version to verify the default is the most current version.
Enter the password to use for the administrative user in Admin password in the Container Environment Variables section. The password cannot be edited after you click Save. Adjust the Configure timezone setting if it does not match where your TrueNAS system is located.
To add the WEBPASSWORD environment variable, click Add for Pihole Environment to add a block of environment variable settings. Enter WEBPASSWORD in Name, then enter a secure password, such as the example used here, s3curep4$$word, in Value.
Scroll down to the Storage settings. Select Enable Custom Host Path for Pihole Configuration Volume to add the Host Path for Pihole Configuration Volume field and dataset browse option. Click the arrow to the left of /mnt and at each dataset to expand the tree, then browse to the dataset and directory paths you created before beginning the container deployment. Pi-hole uses volumes to store your data between container upgrades. You need to create these directories in a dataset on SCALE before you begin installing this container. To create a directory, open the TrueNAS SCALE CLI and enter storage filesystem mkdir path="/PATH/TO/DIRECTORY".
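As a local sketch of the directory layout (the base path below is a stand-in for your real dataset mount point, such as /mnt/tank/pihole, and the directory names are examples):

```shell
# Demo base path; point this at your Pi-hole dataset in a real setup.
BASE="${PIHOLE_BASE:-/tmp/pihole-demo}"

# Create one directory for Pi-hole configuration and one for
# dnsmasq configuration, then list them.
mkdir -p "$BASE/config" "$BASE/dnsmasq.d"
ls "$BASE"
```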
Click Add to display setting options to add extra host path volumes to the container if you need them.
Enter any Networking settings you want to use or customize. TrueNAS adds the port assignments Pi-hole requires in the Web Port for pihole, DNS TCP Port for pihole, and DNS UDP Port for pihole fields. TrueNAS SCALE requires setting all Node Ports above 9000. Select Enable Host Network to add host network settings.
Click Add for DNS Options if you want to configure DNS options for your pod. Select Enable Pod resource limits if you want to limit the CPU and memory for your Pi-hole application.
Click Save. TrueNAS SCALE deploys the container. If correctly configured, the Pi-Hole widget displays on the Installed Applications screen.
When the deployment is completed the container becomes active. If the container does not automatically start, click Start on the widget. Clicking on the App card reveals details on the app.
With Pi-hole as our example we navigate to the IP of our TrueNAS system with the port and directory address :9080/admin/.
Prometheus is a monitoring platform that collects metrics from targets it monitors. Targets are system HTTP endpoints configured in the Prometheus web UI. Prometheus is itself an HTTP endpoint so it can monitor itself.
Prometheus collects and stores metrics as time series data. Stored information is time-stamped at the point when it is recorded. Prometheus uses key-value pairs called labels to differentiate characteristics of what is measured.
Use the Prometheus application to record numeric time series. Also use it to diagnose problems with the monitored endpoints when there is a system outage.
TrueNAS SCALE makes installing Prometheus easy, but you must use the Prometheus web portal to configure targets, labels, alerts, and queries.
The Prometheus app in SCALE installs, completes the initial configuration, then starts the Prometheus Rule Manager. When updates become available, SCALE alerts you and provides an easy update process.
Before installing the Prometheus app in SCALE, review their Configuration documentation and list of feature flags and environment variables to see if you want to include any during installation. You can configure environment variables at any time after deploying the application.
SCALE does not need advance preparation.
If not using the default user and group to manage the application, create a new user (and group) and take note of the IDs.
You can allow SCALE to create the two datasets Prometheus requires automatically during app installation. Or before beginning app installation, create the datasets named data and config to use in the Storage Configuration section during installation.
To install the Prometheus application, go to Apps, click Discover Apps, either begin typing Prometheus into the search field or scroll down to locate the Prometheus application widget.
Click on the widget to open the Prometheus application details screen.
Click Install to open the Prometheus application configuration screen.
Application configuration settings are presented in several sections, each explained below. To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click on the section heading in the navigation area in the upper-right corner.
Accept the default values in Application Name and Version.
Accept the default value in Retention Time or change to suit your needs. Enter values in days (d), weeks (w), months (m), or years (y). For example, 15d, 2w, 3m, 1y.
Enter the amount of storage space to allocate for the application in Retention Size. Valid entries include integer and suffix, for example: 100MB, 10GB, etc.
You can add arguments or environment variables to customize your installation but these are not required. To show the Argument entry field or the environment variable Name and Value fields, click Add for whichever type you want to add. Click again to add another argument or environment variable.
Accept the default port in API Port. Select Host Network to bind to the host network, but we recommend leaving this disabled.
Prometheus requires two storage datasets. You can allow SCALE to create these for you, or use the datasets named data and config created before in First Steps. Select the storage option you want to use for both Prometheus Data Storage and Prometheus Config Storage. Select ixVolume in Type to let SCALE create the dataset or select Host Path to use the existing datasets created on the system.
Accept the defaults in Resources or change the CPU and memory limits to suit your use case.
Click Install. The system opens the Installed Applications screen with the Prometheus app in the Deploying state. When the installation completes it changes to Running.
Click Web Portal on the Application Info widget to open the Prometheus web interface to begin configuring targets, alerts, rules and other parameters.
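Target scrape jobs ultimately live in Prometheus's configuration file. For example, a minimal scrape configuration that lets Prometheus monitor itself on the SCALE app's default API port (adjust the port if you changed it):

```yaml
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:30002']   # SCALE app default API port
```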
The following sections provide more detailed explanations of the settings found in each section of the Install Prometheus screen.
Accept the default value or enter a name in Application Name field. In most cases use the default name, but if adding a second deployment of the application you must change this name.
Accept the default version number in Version. When a new version becomes available, the application has an update badge. The Installed Applications screen shows the option to update applications.
You can accept the defaults in the Prometheus Configuration settings, or enter the settings you want to use.
Accept the default in Retention Time or change to any value that suits your needs. Enter values in days (d), weeks (w), months (m), or years (y). For example, 15d, 2w, 3m, 1y.
Retention Size is not required to install the application. To limit the space allocated to retain data, add a value such as 100MB, 10GB, etc.
Select WAL Compression to enable compressing the write-ahead log.
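These retention and WAL settings correspond to Prometheus server flags; the values below are examples:

```
--storage.tsdb.retention.time=15d
--storage.tsdb.retention.size=10GB
--storage.tsdb.wal-compression
```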
Add Prometheus environment variables in SCALE using the Additional Environment Variables option. Click Add for each variable you want to add. Enter the Prometheus flag in Name and desired value in Value. For a complete list see Prometheus documentation on Feature Flags.
Accept the default port numbers in API Port. The SCALE Prometheus app listens on port 30002.
Refer to the TrueNAS default port list for a list of assigned port numbers. To change the port numbers, enter a number within the range 9000-65535.
We recommend not selecting Host Network. This binds to the host network.
You can install Prometheus using the default setting ixVolume (dataset created automatically by the system) or use the host path option with the two datasets created before installing the app.
Select Host Path (Path that already exists on the system) to browse to and select the data and config datasets. Set Prometheus Data Storage to the data dataset path, and Prometheus Config Storage to the config dataset path.
Accept the default values in Resources Configuration or enter new CPU and memory values. By default, this application is limited to no more than 4 CPU cores and 8 gigabytes of available memory. The application might use considerably less system resources.
To customize the CPU and memory allocated to the container (pod) Prometheus uses, enter new CPU values as a plain integer value followed by the suffix m (milli). Default is 4000m.
Accept the default value 8Gi allocated memory or enter a new limit in bytes. Enter a plain integer followed by the measurement suffix, for example 129M or 123Mi.
This application is not needed when rsync is configured externally with SSH or with the TrueNAS built-in rsync task in SSH mode. We recommend always using rsync with SSH as a security best practice.
You do not need this application to schedule or run rsync tasks from the Data Protection screen using the Rsync Task widget.
This application is an open source server that provides fast incremental file transfers. When installed, the Rsync Daemon application provides the server function to rsync clients given the server information and ability to connect.
Before installing the Rsync Daemon application (rsyncd), add a dataset the application can use for storage.
To install this application, go to Apps, click on Discover Apps, then either begin typing rsync into the search field or scroll down to locate the Rsync Daemon application widget.
Click on the widget to open the application Rsync Daemon information screen.
Click Install to open the Install Rsync Daemon configuration screen.
Accept the default value or enter a name in Application Name.
Accept the Network Configuration default port number the Rsync app listens on.
Add and configure at least one module. A module creates an alias for a connection (path) to use rsync with. Click Add to display the Module Configuration fields. Enter a name and specify the path to the dataset this module uses for the rsync server storage. Leave Enable Module selected. Select the type of access from the Access Mode dropdown list. Accept the rest of the module setting defaults. To limit clients that connect, enter IP addresses in Hosts Allow and Hosts Deny.
Accept the default for the rest of the settings.
Accept the default values in Resources Configuration or enter the CPU and memory values for the destination system.
Click Save.
The Installed applications screen displays the app in the Deploying state until the installation completes, then the status changes to Running.
The following sections provide more detailed explanations of the settings found in each section of the Install Rsync Daemon configuration screen.
The Application Name section includes only the Application Name setting. Accept the default rsyncd or enter a new name to show on the Installed applications screen in the list and on the Application Info widget.
The Rsync Configuration section Auxiliary Parameters settings allow you to customize the rsync server deployment. Enter rsync global or module parameters using the Auxiliary Parameters fields.
Click Add to the right of Auxiliary Parameters for each parameter you want to add. Enter the name of the parameter in Parameter and the value for that parameter in Value.
The Network Configuration section includes the Host Network and Rsync Port settings.
Accept the default port number 30026, which is the port the Rsync app listens on. Before changing the port number, refer to Default Ports to verify the port is not already assigned. Enter a new port number in Rsync Port.
We recommend that you leave Host Network unselected.
The Module Configuration section includes settings to add and customize a module for the rsync server and to configure the clients allowed or denied access to it. Click Add for each module to add.
There are seven required settings to add a module and four optional settings.
Module Name is whatever name you want to give the module and is an alias for access to the server storage path. A name can include upper and lowercase letters, numbers, and the special characters underscore (_), hyphen (-) and dot (.). Do not begin or end the name with a special character.
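The naming rule above can be sketched as a shell check. The pattern here is an illustration of the stated rule, not taken from the application itself:

```shell
# Returns success when the name matches the module naming rule:
# starts and ends with a letter or number; may contain _, -, and . in between.
valid_module_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?$'
}

valid_module_name "backups_2024" && echo "backups_2024: valid"
valid_module_name ".hidden" || echo ".hidden: invalid"
valid_module_name "data-" || echo "data-: invalid"
```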
Enable Module, selected by default, allows the listed client IP addresses to connect to the server after the app is installed and started.
Use optional Comment to enter a description that displays next to the module name when clients obtain a list of available modules. Default is to leave this field blank.
Enter or browse to the location of the dataset to use for storage for this module on the rsync server in Host Path.
Select the access permission for this storage path from the Access Mode dropdown list. Options are Read Only, Read Write, and Write Only.
Enter a number in Max Connections for the number of client connections to allow. The default, 0, allows unlimited connections to the rsync server.
Accept the UID (user ID) and GID (group ID) default 568. If you create an administration user and group to use for this module in this application, enter that UID/GID number in these fields.
Use Hosts Allow and Hosts Deny to specify IP addresses for client systems that can connect to the rsync server through this module. Enter multiple IP addresses separated by a comma and a space between entries in the field. Leave both fields blank to allow all hosts to connect.
Use the Auxiliary Parameters to enter parameters and their values to further customize the module. Do not enter parameters already available as the settings included in this section. You can specify rsync global or module parameters using the module Auxiliary Parameters fields.
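For reference, the module settings above correspond to stanzas in a standard rsyncd.conf file. The app generates its configuration from the UI fields, but an equivalent hand-written module might look like this (all values are illustrative):

```text
[backups]
    comment = Nightly backup target
    path = /data
    read only = false
    max connections = 0
    uid = 568
    gid = 568
    hosts allow = 192.168.1.10, 192.168.1.11
```

Any auxiliary parameter you add in the UI becomes another `name = value` line in the module stanza.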
By default, the rsync daemon allows access to everything within the dataset specified for each module, without authentication. To set up password authentication, you need to add two auxiliary parameters for the module:
Parameter: “auth users” Value: a comma-separated list of usernames
Parameter: “secrets file” Value: the path to the rsyncd.secrets file
You have to place the file inside your module dataset and use the value:
“/data/
The file must be chmod 600 and owned by root:root for the rsync daemon to accept it for authentication.
The file must contain a list of username:password entries in plaintext, one user per line:
admin:password1234
user:password5678
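The steps above can be sketched from a shell. This example writes to a temporary directory so it can run anywhere; on a real system, set MODULE_PATH to the module's Host Path dataset mount point:

```shell
# Placeholder location: replace with the module's Host Path dataset mount point.
MODULE_PATH="${MODULE_PATH:-$(mktemp -d)}"

# One username:password pair per line, in plaintext.
cat > "${MODULE_PATH}/rsyncd.secrets" <<'EOF'
admin:password1234
user:password5678
EOF

# The daemon only accepts the file when it is mode 600 and owned by root.
chmod 600 "${MODULE_PATH}/rsyncd.secrets"
# chown root:root "${MODULE_PATH}/rsyncd.secrets"   # requires root; run on the NAS
```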
The Resources Configuration section allows you to limit the amount of CPU and memory the application can use. By default, this application is limited to use no more than 4 CPU cores and 8 Gibibytes available memory. The application might use considerably less system resources.
Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
Storj is an open-source decentralized cloud storage (DCS) platform. The Storj software lets you configure a computer running it as a storage node and rent unused storage capacity and bandwidth on your system to other users.
Before you can configure your system to act as a Storj node:
Review the Storj node hardware and bandwidth considerations at Storj Node.
Update TrueNAS SCALE to the latest public release.
Create a wallet address.
Generate a Storj authentication token.
Configure your router and firewall. Open ports on your router and configure port forwarding. Configure firewall rules to allow access for these ports.
Alternatively, use a dynamic DNS (DDNS) service such as NoIP to create a host name if you do not have a static IP address for the system nodes.
Create a publicly-available domain name to access the Storj application. Point this to your router public IP address.
Create a Storj identity and authorize it for every node. Every node must have a unique identifier on the network. Use NFS/SMB shares or a file transfer service such as FTP to upload the generated credentials. If the identity is not present in the storage directory, the application generates and authorizes one automatically. This can take a long time and consume significant system resources.
Install the Storj application in SCALE.
Storj provides a Quickstart Node Setup Guide with step-by-step instructions to help users create a Storj node.
Use the Google Chrome MetaMask extension to create a wallet address, or use an existing wallet if you already have one. See Storj Wallet Configuration.
Special considerations regarding how to protect and manage a wallet are outside the scope of this article.
Open a browser window and go to Storj Host a Node. Enter an email address to associate with the account, select the I’m not a robot checkbox, then click Continue.
Copy the auth token to use later in this procedure. Keep this token in a secure location.
To allow the Storj application to communicate with Storj and the nodes, configure your router with port forwarding and the firewall to allow these ports to communicate externally:
With the TrueNAS system up and running, check your open port using a tool such as https://www.yougetsignal.com/tools/open-ports/. If port forwarding is working, port 20988 shows as open.
This enables QUIC, which is a protocol based on UDP that provides more efficient usage of the Internet connection with both parallel uploads and downloads.
Create a DDNS host name that points to your router WAN IP address to provide a domain name for accessing the Storj application. You can use a dynamic DNS service such as NoIP to create a domain name (for example, name.ddns.net) and point it at the WAN IP address of your router.
Use nslookup name.ddns.net to verify it works.
Create three new datasets, one a parent to two child datasets nested under it.
Enter a name for the first dataset in Name. For example, storj-node, and click Save.
Select the new dataset storj-node, click Add Dataset again to create a new child dataset. For example, config.
Click Save.
Select the storj-node dataset again, click Add Dataset and create the second child dataset. For example, identity.
Click Save.
TrueNAS displays two nested datasets config and identity underneath the storj-node dataset.
Go to Apps, click on Available Applications, then scroll down to the Storj application, and click Install to open the Storj configuration wizard.
Accept the default name or enter a new name for your Storj application.
You can enter a name for the Storj app using lowercase alphanumeric characters that begins and ends with an alphanumeric character. Do not use a hyphen as the first or last character. For example, storjnode or storj-node, but not -storjnode or storjnode-.
Enter the authentication token copied from Storj in Configure Auth token for Storj Node. Enter the email address associated with the token in Configure Email for Storj.
Enter the storage domain (i.e., the public network DNS name) added for Storj in Add Your Storage Domain for Storj. If using Dynamic DNS (DDNS), enter that name here as well. For example, name.ddns.net.
Accept the default values in Owner User ID and Owner Group ID.
Configure the storage size (in GB) you want to share. Enter the value in Configure Storage Size You Want to Share in GB’s.
Enter the host paths for the new datasets created for the Storj application. Select Enable Custom Host Path for Storj Configuration Volume and browse to the newly created dataset (config). Next, select Configure Identity Volume for Storage Node and browse to the second newly created dataset (identity).
Enter the web port 28967 in Web Port for Storj, and 20988 in Node Port for Storj.
The time required to install the Storj App varies depending on your hardware and network configuration. When complete, the Installed Applications screen displays the Storj app with the status of active.
Environment variables are optional. If you want to include additional variables, see Storj Environment Variables for a list. Click Add for each variable you want to add.
Click the Web Portal button to view additional details about the application.
The Storj Node dashboard displays stats for the storage node. These could include bandwidth utilization, total disk space, and disk space used for the month. Payout information is also provided.
The new TFTP Server application provides Trivial File Transfer Protocol (TFTP) server functions. The TFTP Server application is a lightweight TFTP-server container in TrueNAS SCALE. It is not intended for use as a standalone container.
Every application start launches a container with root privileges. The container checks the parent directory permissions and ownership and, if it finds a mismatch, applies the correct permissions to the TFTP directories: if Allow Create is selected, the container chmods the TFTP directories to 757, otherwise to 555. The container then drops privileges to the tftp (9069) user for the TFTP service.
Configure your DHCP server for network boot to work.
To grant access to a specific user (and group) different from defaults, add the new non-root administrative user and note the UID and GID for this user.
To use a specific dataset or volume for files, create this in the Storage screen first.
You can install the application using all default settings, or you can customize settings to suit your use case.
To install the TFTP Server app, go to Apps, click Discover Apps. Either begin typing TFTP into the search field or scroll down to locate the TFTP Server application widget.
Click on the widget to open the TFTP Server information screen.
Click Install to open the TFTP Server configuration screen.
Application configuration settings are presented in several sections. To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click the section heading in the navigation area in the upper-right corner.
After accepting or changing the default settings explained in the sections below, click Install to start the installation process. The TFTP Server application displays on the Installed applications screen when the installation completes.
Accept the default values or enter a name in Application Name. Accept the default Version.
Select the location of the TrueNAS server in Timezone.
Select Allow Create to allow creating new files. This sets CREATE to 1 and MAPFILE to "". This changes the permissions of the tftpboot directory to 757; otherwise, the tftpboot directory permissions are 555.
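The two permission modes can be illustrated with a shell sketch, using a temporary directory as a stand-in for the tftpboot dataset (in the app, the container performs this step itself):

```shell
# Stand-in for the tftpboot dataset; the app's container does this automatically.
d="$(mktemp -d)/tftpboot"
mkdir -p "$d"

chmod 757 "$d"    # Allow Create selected: owner rwx, group r-x, others rwx
echo "mode with Allow Create: $(stat -c %a "$d")"

chmod 555 "$d"    # Allow Create cleared: read and traverse only, no writes
echo "mode without Allow Create: $(stat -c %a "$d")"
```

Mode 757 lets TFTP clients upload new files, while 555 makes the directory read-only for everyone.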
Click Add to the right of Additional Environmental Variables to display the Name and Value fields. Enter the name as shown in the environment variables table below. Do not enter variables that have setting fields or the system displays an error.
When selected, Host Network sets the app to use the default port 69, otherwise the default port is 30031.
To change the default port number, clear the Host Network checkmark to display the TFTP Port field. Enter a new port number in TFTP Port within the range 9000-65535. Refer to the TrueNAS default port list for a list of assigned port numbers.
Storage sets the path to store TFTP boot files. The default storage type is ixVolume (Dataset created automatically by the system) where the system automatically creates a dataset named tftpboot. Select Host Path (Path that already exists on the system) to show the Host Path field. Enter or browse to select a dataset you created on the system for the application to use.
The WebDAV application is a set of extensions to the HTTP protocol that allows users to collaboratively edit and manage files on remote web servers. It serves as the replacement for the built-in TrueNAS SCALE WebDAV feature.
When installed and configured with at least one share, a container launches with temporary root privileges to configure the shares and activate the service.
To grant access to a specific user (and group) other than the default for the webdav user and group (666), add a new non-root administrative user and take note of the UID and GID for this user.
If you want to create a dataset to use for the WebDAV application share(s), create it before you install the application.
To install the application, you can accept the default values or customize the deployment to suit your use case. You create the WebDAV share as part of the application installation.
To install the WebDAV application, go to Apps, click Discover Apps, then either begin typing WebDAV into the search field or scroll down to locate the WebDAV application widget.
Click on the widget to open the WebDAV information screen.
Click Install to open the Install WebDAV configuration screen.
Application configuration settings are presented in several sections. To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click the section heading in the navigation area in the upper-right corner.
Accept the defaults in Application Name and Version.
Accept the defaults or customize the settings in WebDAV Configuration. Accept the default authentication setting No Authentication, or select Basic Authentication to require entering authentication credentials. The application includes all the setting fields you need to install and deploy the container, but you can add additional environment variables to further configure the container.
The default network protocol is HTTP, which uses port 30034. To use HTTPS and add encryption to the web traffic, clear the checkmark in Enable HTTP and select Enable HTTPS. HTTPS uses port 30035 and adds the Certificate field. The default certificate is 0.
We recommend not selecting Host Network as this binds to the host network.
Create at least one share in Storage Configuration. Click Add to display the share settings. Enable the share is selected by default and enables the share when the app starts. Enter a name using upper- or lowercase letters and/or numbers. Names can include the underscore (_) or dash (-).
Accept the default Resource Configuration, or enter the CPU and memory settings you want to apply to the WebDAV application container.
After configuring the container settings, click Install to save the application configuration, deploy the app, and make the share(s) accessible.
After the installation completes, the application displays on the Installed applications screen.
The WebDAV widget on the Discover and WebDAV information screens includes the Installed badge.
Accept the default values in Application Name and Version. If you want to change the application name, enter a new name.
WebDAV configuration settings include the type of share authentication to use, none or basic. No Authentication means any system can discover TrueNAS and access the data shared by the WebDAV application share, so this is not recommended. Basic Authentication adds the Username and Password fields and provides some basic security.
The WebDAV application configuration includes all the settings you need to install the Docker container for the app. You can use the Docker container environment variables listed in the table below to further customize the WebDAV Docker container.
The default user and group for WebDAV is 666. To specify a different user, create the user and group before installing the application, then enter the user ID (UID) and group ID (GID) in the fields for these settings.
The container for the WebDAV app has Enable HTTP selected by default. The port for HTTP is 30034.
To add encryption to the web traffic between clients and the server, clear the checkmark in Enable HTTP and select Enable HTTPS. This changes the default port in HTTPS Port to 30035, and adds a system Certificate.
The default certificate is 0. You can use the default as the Certificate if no other specific certificate is available.
Create one or more shares in the Storage Configuration section. For the application to work, create at least one share. Click Add for each share you want to create. Each share must have a unique name.
To add a WebDAV share to the application, click Add to the right of Shares in the Storage Configuration section.
Enter a name in Share Name. The name can have upper and lowercase letters and numbers. It can include an underscore (_) and/or a dash (-).
Enter share purpose or other descriptive information about the share in Description. This is not required.
Enter or browse to the Host Path location where the app adds the WebDAV share. If you created a dataset before installing the app, you can browse to it here.
Select Read Only to disable write access to the share. When selected, data accessed by clients cannot be modified.
Select Fix Permissions to change the Host Path file system permissions. When enabled, the dataset owner becomes the User ID and Group ID set in the User and Group Configuration section. By default, this is the webdav user with UID and GID 666. Fix Permissions allows TrueNAS to apply the correct permissions to the WebDAV shares and directories and simplify app deployment. After first configuration, the WebDAV container runs as the dedicated webdav user (UID: 666).
WebDAV only supports Unix-style permissions. Deploying with Fix Permissions enabled destroys any existing permissions scheme on the shared dataset. We recommend only sharing newly created datasets that have the Share Type set to Generic.
By default, this application is limited to use no more than 4 CPU cores and 8 Gibibytes available memory. The application might use considerably less system resources.
Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
At the end of the installation process, test access to your WebDAV share.
In a browser, this is done by opening a new tab and entering the configured protocol, system host name or IP address, WebDAV port number, and Share Name.
Example: https://my-truenas-system.com:30001/mywebdavshare
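The share URL is assembled from the protocol, host name, port, and share name. A small sketch, using hypothetical host and share names and the app's default HTTPS port:

```shell
# Hypothetical values; substitute your system's host name, WebDAV port, and share name.
PROTO="https"
HOST="my-truenas-system.com"
PORT="30035"                 # the app's default HTTPS port
SHARE="mywebdavshare"

URL="${PROTO}://${HOST}:${PORT}/${SHARE}"
echo "$URL"

# With Basic Authentication enabled, a client such as curl passes credentials:
# curl -u username:password "$URL"
```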
When authentication is set to something other than No Authentication, a prompt requests a user name and password. Enter the saved Username and Password entered in the webdav application form to access the shared data.
To change files shared with the WebDAV protocol, use client software such as WinSCP to connect to the share and make changes. The WebDAV share and dataset permissions must be configured so that the User ID and Group ID can modify shared files.
WG Easy is the easiest way to install and manage WireGuard on any Linux host. The application is included in the Community catalog of applications.
WG Easy is a Docker image designed to simplify setting up and managing WireGuard connections. This app provides a pre-configured environment with all the necessary components and a web-based user interface to manage VPN connections.
WG Easy does not require advanced preparation before installing the application.
To install the WG Easy application, go to Apps, click Discover Apps, then either begin typing WG Easy into the search field or scroll down to locate the WG Easy application widget.
Click on the widget to open the WG Easy application information screen.
Click Install to open the WG Easy application configuration screen. Application configuration settings are presented in several sections. To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click the section heading in the navigation area in the upper-right corner.
After installing WG Easy the app displays on the Installed screen.
Click Web Portal on the Application Info widget to open the WG Easy web interface where you can add a new client.
Accept the default value or enter a name in the Application Name field. Accept the default version number in Version.
You can accept the defaults in the Configuration settings, or enter the configuration settings you want to use.
Enter the public host name or IP of your VPN server in Hostname or IP.
To protect access to the WG Easy web UI, enter a password in Password for WebUI.
Accept the default value in Persistent Keep Alive or change it to the number of seconds you want between keepalive packets. When set to zero, connections are not kept alive. A common alternative value is 25.
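Persistent Keep Alive corresponds to WireGuard's PersistentKeepalive option in a client peer configuration. A hypothetical client [Peer] section might look like this (the endpoint host and port are placeholders for your own values):

```text
[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820   # placeholder host and port
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25           # seconds between keepalive packets; 0 disables
```

Keepalive packets keep NAT and firewall mappings open so the server can reach clients behind them.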
Accept the default setting for WireGuard (1420) in Clients MTU or enter a new value.
Accept the default IPs in Clients IP Address Range and Clients DNS Server or enter the IP addresses the client uses. If not provided, the default value 1.1.1.1 is used.
To specify allowed IP addresses, click Add to the right of Allowed IPs for each IP address you want to enter. If you do not specify allowed IPs, the application uses 0.0.0.0/0.
To specify environment variables, click Add to the right of WG Easy Environment for each environment variable you want to add.
You can install WG Easy using the default settings or enter your own values to customize the storage settings.
Select Enable Custom Host Path for WG-Easy Configuration Volume to add the Host Path for WG-Easy Configuration Volume field. Enter or browse to select a preconfigured mount path for the host path.
To add additional host path volumes, click Add to the right of Extra Host Path Volumes.
Enter the path in Mount Path in Pod where you want to mount the volume inside the pod. Enter or browse to the host path for the WG Easy application dataset.
Accept the default port numbers in WireGuard UDP Node Port for WG-Easy and WebUI Node Port for WG-Easy. WireGuard always listens on 51820 inside the Docker container. Refer to the TrueNAS default port list for a list of assigned port numbers. To change the port numbers, enter a number within the range 9000-65535.
WG Easy does not require configuring advanced DNS options. Accept the default settings or click Add to the right of DNS Options to show the fields for option name and value.
Accept the default values in Resources Configuration or select Enable Pod resource limits to show the fields to enter new CPU and memory values for the destination system.
Enter CPU values as a plain integer value followed by the suffix m (milli). Default is 4000m.
Accept the default value 8Gi, or enter the memory limit in bytes. Enter a plain integer followed by the measurement suffix, for example 129M or 123Mi.
Click Save.
TrueNAS Enterprise
TrueNAS is certified with leading hypervisors and backup solutions to streamline storage operations and ensure compatibility with your existing IT infrastructure. TrueNAS Enterprise storage appliances deliver a wide range of features and scalability for virtualization and private cloud environments, with the ability to create off-site backups with scheduled sync and replication features.
TrueNAS applications expand the capabilities of your system by adding third-party software but can add significant risk to system stability and security. There are general best practices to keep in mind when using applications with TrueNAS SCALE:
We recommend users keep the container use case in mind when choosing a pool. Select a pool that has enough space for all the application containers you intend to use. TrueNAS creates an ix-applications dataset on the chosen pool and uses it to store all container-related data. This is for internal use only. If you intend to store your application data in a location that is separate from other storage on your system, create a new dataset.
Since TrueNAS considers shared host paths non-secure, apps that use shared host paths (such as paths also used by SMB services) might fail to deploy. The best practice is to create datasets for applications that do not share the same host path as an SMB or NFS share.
Kubernetes is an open-source container orchestration system that manages container scheduling and deployment, load balancing, auto-scaling, and storage. The default system-level Kubernetes Node IP settings are found in Apps > Settings > Advanced Settings.
The Custom App button starts the configuration wizard where users can install applications not included in the approved application catalog. You cannot interrupt the configuration wizard and save settings to leave and go create data storage or directories in the middle of the process. We recommend having your storage, user, or other configuration requirements ready before starting the wizard. You should have access to information such as:
TrueNAS SCALE allows you to configure an Active Directory or LDAP server to handle authentication and authorization services, domain, and other account settings. You should know your Kerberos realm and keytab information. You might need to supply your LDAP server host name, LDAP server base and bind distinguished names (DN), and the bind password.
Determine the container and node port numbers. TrueNAS SCALE requires a node port to be greater than 9000. Refer to the Default Ports for a list of used and available ports before changing default port assignments.
iXsystems Support can assist Enterprise customers with configuring directory service settings in SCALE with the information customers provide, but they do not configure customer Active Directory system settings.
Application maintenance is independent from TrueNAS SCALE version release cycles. This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes. To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker. To propose documentation changes for a separately versioned Docker-based app, first use the Product and Version dropdowns to switch to the Nightly version Apps documentation, then click Edit Page.
See Updating Content for more guidance on proposing documentation changes.
TrueNAS Enterprise
The instructions in this article apply to the Official TrueNAS Enterprise MinIO application. This smaller version of MinIO is tested and polished for a safe and supportable experience for TrueNAS Enterprise customers.
The Enterprise MinIO application is tested and verified as an immutable target for Veeam Backup and Replication.
Community members can add and use the MinIO Enterprise app or the default community version.
If your system has active sharing configurations (SMB, NFS, iSCSI), disable them in System Settings > Services before adding and configuring the MinIO application. Start any sharing services after MinIO completes the installation and starts.
This basic procedure covers the required MinIO Enterprise app settings. It does not provide instructions for optional settings.
To install the MinIO Enterprise app, go to Apps, click Discover Apps, then scroll down to locate the enterprise version of the MinIO widget.
Click on the MinIO Official Enterprise widget to open the MinIO information screen.
Click Install to open the Install MinIO configuration screen.
Accept the defaults in Application Name or enter a name for your MinIO application deployment.
Accept the default in Version, which populates with the current MinIO version. SCALE displays update information on the Installed application screen when an update becomes available.
Enter credentials to use as the MinIO administration user. If you have existing MinIO credentials, enter these or create new login credentials for the first time you log into MinIO. The Root User is the equivalent of the MinIO access key. The Root Password is the login password for that user or the MinIO secret key.
Accept the User and Group Configuration settings default values for MinIO Enterprise. If you configured SCALE with a new administration user for MinIO, enter the UID and GID.
Scroll down to or click Network Configuration on the list of sections at the right of the screen.
Do not select Host Network.
Select the certificate you created for MinIO from the Certificates dropdown list.
Enter the TrueNAS server IP address and the API port number 30000 as a URL in MinIO Server URL (API). For example, https://ipaddress:30000. Enter the TrueNAS server IP address and the web UI browser redirect port number 30001 as a URL in MinIO Browser Redirect URL. For example, https://ipaddress:30001.
MNMD MinIO installations require HTTPS for both MinIO Server URL and MinIO Browser Redirect URL to verify the integrity of each node. Standard or SNMD MinIO installations do not require HTTPS.
The Certificates setting is not required for a basic configuration but is required when setting up multi-mode configurations and when using MinIO as an immutable target for Veeam Backup and Replication. The Certificates dropdown list includes valid unrevoked certificates, added using Credentials > Certificates.
For a basic configuration without a certificate, enter the URLs with http instead. For example, http://ipaddress:30000 in MinIO Server URL (API) and http://ipaddress:30001 in MinIO Browser Redirect URL.
Scroll down to the Storage Configuration section.
Select the storage type you want to use. ixVolume (Dataset created automatically by the system) is the default storage type. This creates a dataset for your deployment and populates the rest of the storage fields.
To use an existing dataset, select Host Path (Path that already exists on the system). Mount Path populates with the default /data1. Browse to the dataset location and click on it to populate the Host Path.
If you are setting up a cluster configuration, select Enable Multi Mode (SNMD or MNMD), then click Add in MultiMode Configuration. MinIO recommends using MNMD for enterprise-grade performance and scalability. See the related MinIO articles listed below for SNMD and MNMD configuration tutorials.
The following section provides more detailed explanations of the settings in each section of the Install MinIO configuration screen.
Accept the default value or enter a name in the Application Name field. Accept the default version number in Version.
MinIO credentials establish the login credentials for the MinIO web portal and as the MinIO administration user.
If you have existing MinIO credentials, enter them or create new login credentials for the first time you log into MinIO. The Root User is the equivalent of the MinIO access key. The Root Password is the login password for that user or the MinIO secret key.
Enter a name of five to 20 characters in length for the root user (MinIO access key) in Root User. For example, admin or admin1.
Enter eight to 40 random characters for the root user password (MinIO secret key) in Root Password. For example, MySecr3tPa$$w0d4Min10.
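As a quick sanity check, the credential length rules above can be expressed in a few lines of Python. This is only an illustration of the limits stated in this tutorial; the function name is hypothetical:

```python
def valid_minio_credentials(user: str, password: str) -> bool:
    """Check the Root User (5-20 chars) and Root Password (8-40 chars)
    length rules stated in this tutorial."""
    return 5 <= len(user) <= 20 and 8 <= len(password) <= 40
```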
Accept the default values in User and Group Configuration. If you configured SCALE with a new administration user for MinIO, enter the UID and GID in these fields.
Accept the default port numbers in API Port and Web Port, which are the port numbers MinIO uses to communicate with the app and web portal.
Do not select Host Network.
MinIO does not require a certificate for a basic configuration and installation of MinIO Enterprise, but if installing and configuring multi-mode SNMD or MNMD, you must use a certificate. An SNMD configuration can use the same self-signed certificate created for MNMD, but an MNMD configuration cannot use a certificate created for an SNMD configuration because that certificate only includes the IP address for one system.
Enter the system IP address in URL format followed by the port number for the API separated by a colon in MinIO Server URL (API). For example, https://10.123.12.123:30000. Enter the system IP address in URL format followed by the port number for the web portal separated by a colon in MinIO Browser Redirect URL. For example, https://10.123.12.123:30001.
MNMD MinIO installations require HTTPS for both MinIO Server URL and MinIO Browser Redirect URL to verify the integrity of each node. Standard or SNMD MinIO installations do not require HTTPS.
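Both URLs follow the same simple pattern of IP address plus port. This hypothetical helper (the function name and defaults are assumptions, using the default app port numbers from this tutorial) shows how they are composed:

```python
def minio_urls(ip: str, api_port: int = 30000, redirect_port: int = 30001) -> tuple[str, str]:
    """Compose the MinIO Server URL (API) and MinIO Browser Redirect URL
    for one TrueNAS system, using the default app port numbers."""
    return (f"https://{ip}:{api_port}", f"https://{ip}:{redirect_port}")
```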
MinIO storage settings include the option to add storage volumes to use inside the container (pod). The default storage Type is ixVolume (Dataset created automatically by the system), which adds a storage volume for the application.
To select an existing dataset, select Host Path (Path that already exists on the system) from the Type dropdown list. The Host Path and Mount Path fields display. Enter or browse to select and populate the Host Path field.
Accept the default Mount Path /data1 for the first storage volume for a basic installation.
Click Add to add a block of storage volume settings.
When configuring multi-mode, click Add three times to add three additional storage volume blocks for the datasets that serve as the drives in these configurations. Multi mode uses four datasets named data1, data2, data3, and data4. Change the Mount Path for the added volumes to /data2, /data3, or /data4, then either enter or browse to select the dataset of the same name to populate the Host Path.
When configuring MNMD, repeat the storage settings on each system in the node.
Multi-mode installs the app in either a MinIO Single-Node Multi-Drive (SNMD) or Multi-Node Multi-Drive (MNMD) cluster. MinIO recommends using MNMD for enterprise-grade performance and scalability.
Click Enable Multi Mode (SNMD or MNMD) to enable multi-mode and display the Multi Mode (SNMD or MNMD) and Add options. Click Add to display the field where you enter the storage or system port and storage URL string.
Enter /data{1…4} in the field if configuring SNMD, where /data represents the dataset name and the curly brackets enclosing 1 and 4 separated by three dots represent the numeric range of the dataset names.
Enter https://10.123.123.10{0…3}:30000/data{1…4} in the field if configuring MNMD. The last number in the final octet of the IP address number is the first number in the {0…3} string. Separate the numbers in the curly brackets with three dots. If your sequential IP addresses are not using 100 - 103, for example 10.123.123.125 through 128, then enter them as https://10.123.123.12{5…8}:30000/data{1…4}.
If you do not have sequentially numbered IP addresses assigned to the four systems, assign sequentially numbered host names. For example, minio1.mycompany.com through minio4.mycompany.com. Enter https://minio{1…4}.mycompany.com:30000/data{1…4} in the Multi Mode (SNMD or MNMD) field.
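To make the ellipsis notation concrete, the sketch below expands a multi-mode string into the endpoint list it denotes. This is an illustration only; MinIO performs this expansion itself, and its syntax uses three literal dots between the range bounds:

```python
import re
from itertools import product

def expand(pattern: str) -> list[str]:
    """Expand MinIO-style {a...b} numeric ranges into the full endpoint list."""
    parts = re.split(r"\{(\d+)\.\.\.(\d+)\}", pattern)
    literals = parts[0::3]                      # text between the {a...b} groups
    ranges = [range(int(a), int(b) + 1)
              for a, b in zip(parts[1::3], parts[2::3])]
    endpoints = []
    for combo in product(*ranges):
        s = literals[0]
        for number, literal in zip(combo, literals[1:]):
            s += str(number) + literal
        endpoints.append(s)
    return endpoints

# The MNMD string from this tutorial denotes 4 nodes x 4 drives = 16 endpoints:
mnmd = expand("https://10.123.123.10{0...3}:30000/data{1...4}")
# The SNMD string denotes 4 drives on one node:
snmd = expand("/data{1...4}")
```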
Logging is an optional setting. If setting up logging, select Anonymous to hide sensitive information from logging or Quiet to omit (disable) startup information.
Select Enable Log Search API to enable LogSearch API and configure MinIO to use this function and add the configuration settings for LogSearch. This deploys a Postgres database to store the logs.
Enter the disk capacity LogSearch can use in Disk Capacity (GB).
Accept the default ixVolume in Postgres Data Storage to allow the app to create a storage volume. To select an existing dataset instead of the default, select Host Path from the dropdown list. Enter or browse to the dataset to populate the Host Path field.
Accept the default ixVolume in Postgres Backup Storage to allow the app to create the storage volume. To select an existing dataset instead of the default, select Host Path from the dropdown list. Enter or browse to the dataset to populate the Host Path field.
By default, TrueNAS limits this application to using no more than 4 CPU cores and 8 gigabytes of available memory. The application might use considerably fewer system resources.
Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
TrueNAS Enterprise
The instructions in this article apply to the TrueNAS MinIO Enterprise application installed in a Multi-Node Multi-Disk (MNMD) multi-mode configuration.
For more information on MinIO multi-mode configurations see MinIO Single-Node Multi-Drive (SNMD) or Multi-Node Multi-Drive (MNMD). MinIO recommends using MNMD (distributed) for enterprise-grade performance and scalability.
Community members can add and use the MinIO Enterprise app or the default community version.
To add the Enterprise MinIO application to the list of available applications, go to Apps and click on Discover Apps.
Click on Manage Catalogs at the top of the Discover screen to open the Catalog screen.
Click on the TRUENAS catalog to expand it, then click Edit to open the Edit Catalog screen.
Click in the Preferred Trains field, click on enterprise to add it to the list of trains, and then click Save.
Both the charts and enterprise train versions of the MinIO app widget display on the Discover application screen.
Complete these steps for every system (node) in the cluster.
Assign four sequential IP addresses or host names such as minio1.mycompany.com through minio4.mycompany.com to the TrueNAS SCALE systems. If you assign sequential IP address numbers, such as #.#.#.100 - 103 or #.#.#.134 - 137, you can use these in the command string in the Multi Mode field. If not using sequential IP addresses, use sequentially numbered host names. Add IP addresses on the Network screen, and enter host names on the Global Configuration screen.
When creating the certificate, enter the system IP address in Subject Alternate Names. If configuring MinIO in an MNMD cluster, enter the IP addresses for every system in the cluster.
If the system has active sharing configurations (SMB, NFS, iSCSI), disable these sharing services in System Settings > Services before adding and configuring the MinIO application. Start any sharing services after MinIO completes the install and starts.
Multi-mode configurations require a self-signed certificate. If creating a cluster, each system requires a self-signed certificate.
Add a self-signed certificate for the MinIO application to use. Create four datasets named data1, data2, data3, and data4. Do not nest these datasets under each other. Select the parent dataset, for example apps, before you click Create Dataset. Set the Share Type to apps for each dataset. MinIO assigns the correct properties during the installation process, so you do not need to configure the ACL or permissions.
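If you prefer scripting dataset creation over clicking through the UI, a sketch like the following can work against the TrueNAS REST API. Treat the endpoint, field names, and share_type value as assumptions to verify against the built-in API Docs for your release; HOST, API_KEY, and the parent path are placeholders:

```python
import json
import urllib.request

HOST = "https://truenas.local"  # placeholder for your TrueNAS address
API_KEY = "1-xxxx"              # placeholder API key from the Settings menu
PARENT = "tank/apps"            # parent dataset; data1..data4 must be siblings, not nested

def dataset_payload(n: int) -> dict:
    """Body for one dataset-creation call (field names assumed from the v2.0 API Docs)."""
    return {"name": f"{PARENT}/data{n}", "share_type": "APPS"}

def create_dataset(n: int) -> None:
    """POST one dataset to the assumed pool/dataset endpoint."""
    req = urllib.request.Request(
        f"{HOST}/api/v2.0/pool/dataset",
        data=json.dumps(dataset_payload(n)).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Example (requires a reachable TrueNAS system and a valid API key):
# for n in range(1, 5):
#     create_dataset(n)
```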
This procedure covers the required Enterprise MinIO App settings.
Repeat this procedure for every system (node) in the MNMD cluster.
To install the MinIO Enterprise app, go to Apps, click Discover Apps, then scroll down to locate the enterprise version of the MinIO widget.
Click on the MinIO Official Enterprise widget to open the MinIO information screen.
Click Install to open the Install MinIO configuration screen.
Accept the defaults in Application Name or enter a name for your MinIO application deployment.
Accept the default in Version, which populates with the current MinIO version. SCALE displays update information on the Installed application screen when an update becomes available.
Enter credentials to use as the MinIO administration user. If you have existing MinIO credentials, enter these or create new login credentials for the first time you log into MinIO. The Root User is the equivalent of the MinIO access key. The Root Password is the login password for that user or the MinIO secret key.
Accept the User and Group Configuration settings default values for MinIO Enterprise. If you configured SCALE with a new administration user for MinIO, enter the UID and GID.
Scroll down to or click Network Configuration on the list of sections at the right of the screen.
Do not select Host Network.
Select the certificate you created for MinIO from the Certificates dropdown list.
Enter the TrueNAS server IP address and the API port number 30000 as a URL in MinIO Server URL (API). For example, https://ipaddress:30000. Enter the TrueNAS server IP address and the web UI browser redirect port number 30001 as a URL in MinIO Browser Redirect URL. For example, https://ipaddress:30001.
MNMD MinIO installations require HTTPS for both MinIO Server URL and MinIO Browser Redirect URL to verify the integrity of each node. Standard or SNMD MinIO installations do not require HTTPS.
Scroll down to or click on Storage Configuration on the list of sections at the right of the screen. Click Add three times in the Storage Configuration section to add three more sets of storage volume settings. In the first set of storage volume settings, select Host Path (Path that already exists on the system) and accept the default /data1 in Mount Path. Enter or browse to the data1 dataset to populate Host Path with the mount path. For example, /mnt/tank/apps/data1.
Scroll down to the next set of storage volume settings and select Host Path (Path that already exists on the system). Change the Mount Path to /data2, and enter or browse to the location of the data2 dataset to populate the Host Path.
Scroll down to the next set of storage volume settings and select Host Path (Path that already exists on the system). Change the Mount Path to /data3, and enter or browse to the location of the data3 dataset to populate the Host Path.
Scroll down to the last set of storage volume settings and select Host Path (Path that already exists on the system). Change the Mount Path to /data4, and enter or browse to the location of the data4 dataset to populate the Host Path.
Select Enable Multi Mode (SNMD or MNMD), then click Add. If the systems in the cluster have sequentially assigned IP addresses, use the IP addresses in the command string you enter in the Multi Mode (SNMD or MNMD) field. For example, https://10.123.12.10{0…3}:30000/data{1…4} where the last number in the last octet of the IP address number is the first number in the {0…3} string. Separate the numbers in the curly brackets with three dots. If your sequential IP addresses are not using 100 - 103, for example 10.123.12.125 through 128, then enter them as https://10.123.12.12{5…8}:30000/data{1…4}. Enter the same string in the Multi Mode (SNMD or MNMD) field in all four systems in the cluster.
If you do not have sequentially numbered IP addresses assigned to the four systems, assign sequentially numbered host names. For example, minio1.mycompany.com through minio4.mycompany.com. Enter https://minio{1…4}.mycompany.com:30000/data{1…4} in the Multi Mode (SNMD or MNMD) field.
If you want to set up logging, select Anonymous to hide sensitive information from logging, or Quiet to disable startup information.
Select the optional Enable Log Search API to enable the LogSearch API and configure MinIO to use this function. This deploys a Postgres database to store the logs.
Specify the storage in gigabytes that the logs are allowed to occupy in Disk Capacity in GB. Accept the default ixVolume in Postgres Data Storage and Postgres Backup Storage to let the system create the datasets, or select Host Path to select an existing dataset on the system to use for these storage volumes.
Accept the default values in Resources Configuration, or enter new values in the CPU Resource Limit and Memory Limit fields to customize the CPU and memory allocated to the container (pod) the MinIO app uses. Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
By default, this application is limited to using no more than 4 CPU cores and 8 gigabytes of available memory. The application might use considerably fewer system resources.
Click Install to complete the installation.
The Installed applications screen opens showing the MinIO application in the Deploying state. It changes to Running when the application is ready to use.
Click Web Portal to open the MinIO sign-in screen.
After installing and getting the MinIO Enterprise application running in SCALE, log into the MinIO web portal and complete the MinIO setup.
Go to Monitoring > Metrics to verify the number of servers matches the total number of systems (nodes) you configured. Verify the number of drives matches the number you configured on each system: four systems, each with four drives (4 systems x 4 drives = 16 drives).
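Alongside the Metrics check, you can probe each node from a workstation; MinIO exposes an unauthenticated liveness endpoint at /minio/health/live. This sketch assumes the example addresses used in this tutorial and skips TLS verification because the nodes use self-signed certificates:

```python
import ssl
import urllib.request

NODES = [f"https://10.123.123.10{i}:30000" for i in range(4)]  # example IPs from this tutorial

def health_url(node: str) -> str:
    """Liveness-probe URL for one node's API endpoint."""
    return f"{node}/minio/health/live"

def probe(node: str, timeout: float = 5.0) -> int:
    """Return the HTTP status of the node's liveness check (200 means up).
    TLS verification is skipped because the nodes use self-signed certificates."""
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(health_url(node), context=ctx, timeout=timeout) as r:
        return r.status

# Example (requires the cluster to be running):
# for node in NODES:
#     print(node, probe(node))
```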
Refer to MinIO documentation for more information.
TrueNAS Enterprise
The instructions in this article apply to the TrueNAS MinIO Enterprise application installed in a Single-Node Multi-Disk (SNMD) multi-mode configuration.
For more information on MinIO multi-mode configurations see MinIO Single-Node Multi-Drive (SNMD) or Multi-Node Multi-Drive (MNMD). MinIO recommends using MNMD (distributed) for enterprise-grade performance and scalability.
Community members can add and use the MinIO Enterprise app or the default community version.
To add the Enterprise MinIO application to the list of available applications, go to Apps and click on Discover Apps.
Click on Manage Catalogs at the top of the Discover screen to open the Catalog screen.
Click on the TRUENAS catalog to expand it, then click Edit to open the Edit Catalog screen.
Click in the Preferred Trains field, click on enterprise to add it to the list of trains, and then click Save.
Both the charts and enterprise train versions of the MinIO app widget display on the Discover application screen.
This procedure covers the required Enterprise MinIO App settings.
To install the MinIO Enterprise app, go to Apps, click Discover Apps, then scroll down to locate the enterprise version of the MinIO widget.
Click on the MinIO Official Enterprise widget to open the MinIO information screen.
Click Install to open the Install MinIO configuration screen.
Accept the defaults in Application Name or enter a name for your MinIO application deployment.
Accept the default in Version, which populates with the current MinIO version. SCALE displays update information on the Installed application screen when an update becomes available.
Enter credentials to use as the MinIO administration user. If you have existing MinIO credentials, enter these or create new login credentials for the first time you log into MinIO. The Root User is the equivalent of the MinIO access key. The Root Password is the login password for that user or the MinIO secret key.
Accept the User and Group Configuration settings default values for MinIO Enterprise. If you configured SCALE with a new administration user for MinIO, enter the UID and GID.
Scroll down to or click Network Configuration on the list of sections at the right of the screen.
Do not select Host Network.
Select the certificate you created for MinIO from the Certificates dropdown list.
Enter the TrueNAS server IP address and the API port number 30000 as a URL in MinIO Server URL (API). For example, https://ipaddress:30000. Enter the TrueNAS server IP address and the web UI browser redirect port number 30001 as a URL in MinIO Browser Redirect URL. For example, https://ipaddress:30001.
MNMD MinIO installations require HTTPS for both MinIO Server URL and MinIO Browser Redirect URL to verify the integrity of each node. Standard or SNMD MinIO installations do not require HTTPS.
Scroll down to or click on Storage Configuration on the list of sections at the right of the screen. Click Add three times in the Storage Configuration section to add three more sets of storage volume settings. In the first set of storage volume settings, select Host Path (Path that already exists on the system) and accept the default /data1 in Mount Path. Enter or browse to the data1 dataset to populate Host Path with the mount path. For example, /mnt/tank/apps/data1.
Scroll down to the next set of storage volume settings and select Host Path (Path that already exists on the system). Change the Mount Path to /data2, and enter or browse to the location of the data2 dataset to populate the Host Path.
Scroll down to the next set of storage volume settings and select Host Path (Path that already exists on the system). Change the Mount Path to /data3, and enter or browse to the location of the data3 dataset to populate the Host Path.
Scroll down to the last set of storage volume settings and select Host Path (Path that already exists on the system). Change the Mount Path to /data4, and enter or browse to the location of the data4 dataset to populate the Host Path.
Select Enable Multi Mode (SNMD or MNMD), then click Add. Enter /data{1…4} in the Multi Mode (SNMD or MNMD) field.
If you want to set up logging, select Anonymous to hide sensitive information from logging, or Quiet to disable startup information.
Select the optional Enable Log Search API to enable the LogSearch API and configure MinIO to use this function. This deploys a Postgres database to store the logs.
Specify the storage in gigabytes that the logs are allowed to occupy in Disk Capacity in GB. Accept the default ixVolume in Postgres Data Storage and Postgres Backup Storage to let the system create the datasets, or select Host Path to select an existing dataset on the system to use for these storage volumes.
Accept the default values in Resources Configuration, or enter new values in the CPU Resource Limit and Memory Limit fields to customize the CPU and memory allocated to the container (pod) the MinIO app uses. Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
By default, this application is limited to using no more than 4 CPU cores and 8 gigabytes of available memory. The application might use considerably fewer system resources.
Click Install to complete the installation.
The Installed applications screen opens showing the MinIO application in the Deploying state. It changes to Running when the application is ready to use.
Click Web Portal to open the MinIO sign-in screen.
TrueNAS Enterprise
This article provides information on installing and using the TrueNAS Syncthing app.
SCALE has two versions of the Syncthing application, the community version in the charts train and a smaller version tested and polished for a safe and supportable experience for enterprise customers in the enterprise train. Community members can install either the enterprise or community version.
Syncthing is a file synchronization application that provides a simple and secure environment for file sharing between different devices and locations. Use it to synchronize files between different departments, teams, or remote workers.
Syncthing is tested and validated to work in harmony with TrueNAS platforms and underlying technologies such as ZFS, offering a turnkey means of keeping data synchronized across many systems. It integrates seamlessly with TrueNAS.
Syncthing does not use or need a central server or cloud storage. All data is encrypted and synchronized directly between devices to ensure files are protected from unauthorized access.
Syncthing is easy to use and configure. You can install on a wide range of operating systems, including Windows, MacOS, Linux, FreeBSD, iOS or Android. The Syncthing web UI provides users with easy management and configuration of the application software.
Users migrating data from an existing third-party NAS solution to TrueNAS SCALE 24.04 (Dragonfish) or newer can use the Syncthing Enterprise application to mount the source with a remote SMB share that preserves metadata.
See Third-Party SMB Data Migration for considerations and a full tutorial.
Create a self-signed certificate for the Syncthing enterprise app.
You can allow the app to create storage volumes, or use existing datasets created in SCALE. The TrueNAS Syncthing app requires a main configuration storage volume for application information. You can also mount existing datasets as storage volumes inside the container pod.
If you want to use existing datasets for the main storage volume, create any datasets before beginning the app installation process (for example, syncthing for the configuration storage volume). If also mounting a storage volume inside the container, create a second dataset named data1. If mounting multiple storage volumes, create a dataset for each volume (for example, data2, data3, etc.).
You can have multiple Syncthing app deployments (two or more Charts, two or more Enterprise, or Charts and Enterprise trains, etc.). Each Syncthing app deployment requires a unique name that can include numbers and dashes or underscores (for example, syncthing2, syncthing-test, syncthing_1, etc.).
Use a consistent file-naming convention to avoid conflict situations where data does not or cannot synchronize because of file name conflicts. Path and file names in the Syncthing app are case sensitive. For example, a file named MyData.txt is not the same as mydata.txt in Syncthing.
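Because Syncthing treats MyData.txt and mydata.txt as different files, syncing to a case-insensitive filesystem (such as a Windows SMB target) can produce conflicts. This hypothetical helper flags names in a folder that differ only by case before you share it:

```python
from collections import defaultdict

def case_collisions(names: list[str]) -> list[list[str]]:
    """Group file names that differ only by case; Syncthing keeps them
    distinct, but a case-insensitive target cannot."""
    groups: dict[str, list[str]] = defaultdict(list)
    for n in names:
        groups[n.lower()].append(n)
    return [g for g in groups.values() if len(g) > 1]
```

For example, `case_collisions(["MyData.txt", "mydata.txt", "other.txt"])` reports the first two names as a colliding pair.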
If not already assigned, set a pool for applications to use.
Either use the default user and group IDs or create the new user with Create New Primary Group selected. Make note of the UID/GID for the new user.
Go to Apps > Discover Apps, and locate the Syncthing enterprise app widget.
Click on the widget to open the Syncthing details screen.
Click Install to open the Install Syncthing screen.
Application configuration settings are presented in several sections, each explained below. To find specific fields click in the Search Input Fields search field, scroll down to a particular section, or click on the section heading in the navigation area in the upper-right corner.
Accept the default values in Application Name and Version.
Select the timezone where the TrueNAS server is located from the Timezone dropdown list.
Accept the default user and group ID settings. If selected, Host Network binds to the default host settings programmed for Syncthing. Accept the default web port 31000.
If changing ports, see Default Ports for a list of assigned port numbers.
Select the certificate created for Syncthing from the Certificates dropdown list.
Configure the storage settings. To allow Syncthing to create the configuration storage volume, leave Type set to ixVolume (Dataset created automatically by the system), then enter or browse to the location of the data1 dataset to populate the Host Path field under the Mount Path field.
To use an existing dataset created for Syncthing, select Host Path (Path that already exists on the system). Enter or browse to the dataset created to populate the Host Path field (for example, /mnt/tank/syncthing/config), then enter or browse to the location of the data1 dataset to populate the Host Path field under the Mount Path field.
To add another dataset path inside the container, see Storage Settings below for more information.
Click Install. The system opens the Installed Applications screen with the Syncthing app in the Deploying state. After installation completes the status changes to Running.
Click Web Portal on the Application Info widget to open the Syncthing web portal and begin configuring folders, devices, and other settings.
Secure Syncthing by setting up a username and password.
The following sections provide detailed explanations of the settings found in each section of the Install Syncthing screen for the Enterprise train app.
Accept the default value or enter a name in Application Name field. In most cases use the default name, but if adding a second deployment of the application you must change this name.
Accept the default version number in Version. When a new version becomes available, the application has an update badge. The Installed Applications screen shows the option to update applications.
Select the timezone where your TrueNAS SCALE system is located.
You can accept the default settings in User and Group Configuration, or enter new user and group IDs. The default value for User Id and Group ID is 568.
Accept the default port numbers in Web Port for Syncthing. The SCALE Syncthing chart app listens on port 31000. Before changing the default port and assigning a new port number, refer to the TrueNAS default port list for a list of assigned port numbers. To change the port numbers, enter a number within the range 9000-65535.
We recommend leaving Host Network unselected. Selecting it binds the app to the host network.
Select the self-signed certificate created in SCALE for Syncthing from the Certificate dropdown list. You can edit the certificate after deploying the application.
You can allow the Syncthing app to create the configuration storage volume or you can create datasets to use for the configuration storage volume and to use for storage within the container pod.
To allow the Syncthing app to create the configuration storage volume, leave Type set to ixVolume (Dataset created automatically…).
To use existing datasets, select Host Path (Path that already exists on the system) in Type to show the Host Path field, then enter or browse to and select an existing dataset created for the configuration storage volume.
If mounting a storage volume inside the container pod, enter or browse to the location of the data1 dataset to populate the Host Path field below the Mount Path field populated with data1.
In addition to the data1 dataset, you can mount additional datasets to use as other storage volumes within the pod. Click Add to the right of Additional Storage to show another set of Mount Path and Host Path fields for each dataset to mount. Enter or browse to the dataset to populate the Host Path and Mount Path fields.
The TrueNAS SCALE Syncthing Enterprise app includes the option to mount an SMB share inside the container pod. This allows data synchronization between the share and the app.
The SMB share mount does not include ACL protections at this time. Permissions are currently limited to those of the user that mounted the share. Alternate data streams (metadata), Finder color tags, previews, resource forks, and macOS metadata are stripped from the share along with filesystem permissions. This functionality is under active development, with implementation planned for a future TrueNAS SCALE release.
To mount an SMB share inside the Syncthing application when not mounting a dataset in the container pod, select SMB Share (Mounts a persistent volume claim to a system) in Type. If mounting a dataset inside the pod and you also want to mount an SMB share, click Add to the right of Additional Storage to add another set of storage settings, then select the SMB share option.
Enter the server for the SMB share in Server and the name of the share in Share, then enter the username and password credentials for the SMB share. Determine the total size of the SMB share to mount and access via TrueNAS SCALE and Syncthing, and enter this value in Size. You can edit the size after deploying the application if you need to increase the storage volume capacity for the share.
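For context, the mount the app performs is an ordinary CIFS mount. The sketch below illustrates an equivalent manual mount, not the literal command the app runs; the server, share, mount point, and credential values are all hypothetical:

```shell
# Mount an SMB share at a local path using CIFS (run as root).
mount -t cifs //fileserver/backup /mnt/smb-mount \
  -o username=smbuser,password=smbpass
```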
Accept the default values in Resources Configuration or enter new CPU and memory values. By default, this application is limited to no more than 4 CPU cores and 8 gigabytes of available memory. The application might use considerably fewer system resources.
To customize the CPU and memory allocated to the container (pod) Syncthing uses, enter new CPU values as a plain integer value followed by the suffix m (milli). The default is 4000m.
Accept the default 8 GB of allocated memory or enter a new limit in bytes. Enter a plain integer followed by the measurement suffix, for example, 129M or 123Mi.
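To make the suffixes concrete: CPU limits use milli-cores (4000m means 4 cores), and memory suffixes differ between decimal (M, G) and binary (Mi, Gi) units. The helper below is a sketch to show what each suffix resolves to in bytes; it is not part of TrueNAS:

```shell
# Convert an app memory limit with a suffix to bytes.
# M/G are decimal (10^6 / 10^9); Mi/Gi are binary (2^20 / 2^30).
to_bytes() {
  case "$1" in
    *Gi) echo $(( ${1%Gi} * 1073741824 )) ;;
    *Mi) echo $(( ${1%Mi} * 1048576 )) ;;
    *G)  echo $(( ${1%G} * 1000000000 )) ;;
    *M)  echo $(( ${1%M} * 1000000 )) ;;
    *)   echo "$1" ;;   # plain integer: already bytes
  esac
}

to_bytes 129M   # prints 129000000
to_bytes 123Mi  # prints 128974848
to_bytes 8Gi    # prints 8589934592 (the 8 GiB default, in bytes)
```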
Syncthing uses inotify to monitor filesystem events, with one inotify watcher per monitored directory. Linux defaults to a maximum of 8192 inotify watchers. Using the Syncthing Enterprise app to sync directories with greater than 8191 subdirectories (possibly lower if other services are also utilizing inotify) produces errors that prevent automatic monitoring of filesystem changes.
Increase inotify values to allow Syncthing to monitor all sync directories. Add a sysctl variable to ensure changes persist through reboot.
Go to System Settings > Advanced and locate the Sysctl widget.
Click Add to open the Add Sysctl screen.
Enter fs.inotify.max_user_watches in Variable.
Enter a Value larger than the number of directories monitored by Syncthing. Each inotify watcher has a small memory impact of 1080 bytes, so it is best to start with a lower number, such as 204800, and increase it if needed.
Enter a Description for the variable, such as Increase inotify limit.
Select Enabled and click Save.
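To estimate how large fs.inotify.max_user_watches needs to be, you can count the directories under your sync datasets from a shell session, since each monitored directory consumes one watcher. A sketch, where the dataset path in the example is hypothetical:

```shell
# Count the directories under a path; each one consumes an inotify
# watcher when Syncthing monitors it for filesystem changes.
watched_dirs() {
  find "$1" -type d 2>/dev/null | wc -l
}

echo "Directories to watch: $(watched_dirs /mnt/tank/syncthing/data1)"
```

Pick a Value comfortably above this count to leave headroom for other services that also use inotify.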
After installing and starting the Syncthing application, launch the Syncthing webUI. Go to Actions > Settings and set a user password for the web UI.
The Syncthing web portal allows administrators to monitor and manage the synchronization process, view logs, and adjust settings.
Folders lists the configured sync folders with details on sync status, file count, capacity, and more. To change folder configuration settings, click on a folder.
This Device displays the current system I/O status, including transfer and receive rates, number of listeners, total uptime, sync state, and the device ID and version.
Actions displays a dropdown list of options. Click Advanced to access GUI, LDAP, folder, device, and other settings.
You can manage directional settings for sync configurations, security, encryption, and UI server settings through the Actions options.
TrueNAS Sandboxes and Jailmaker are not supported by iXsystems. This is provided solely for users with advanced command-line, containerization, and networking experience.
There is significant risk that using Jailmaker causes conflicts with the built-in Apps framework within SCALE. Do not mix the two features unless you are capable of self-supporting and resolving any issues caused by using this solution.
Beginning with 24.04 (Dragonfish), TrueNAS SCALE includes the systemd-nspawn containerization program in the base system. This allows using tools like the open-source Jailmaker to build and run containers that are very similar to Jails from TrueNAS CORE or LXC containers on Linux. Using the Jailmaker tool allows deploying these containers without modifying the base TrueNAS system. These containers persist across upgrades in 24.04 (Dragonfish) and later SCALE major versions.
Log in to the web interface and go to Datasets.
Select your root pool and click Add Dataset:
a. Name the dataset jailmaker.
b. Leave all other settings at their defaults.
c. Click Save.
Open a Shell (SSH preferred) session and run these commands as root:
a.
cd /mnt/tank/jailmaker/
Replace tank with the name of your pool.
b.
curl --location --remote-name https://raw.githubusercontent.com/Jip-Hop/jailmaker/main/jlmkr.py
c.
chmod +x jlmkr.py
Before making any sandboxes, configure TrueNAS to run the Jailmaker tool when the system starts. This ensures the sandboxes start properly.
Log in to the web interface and go to System Settings > Advanced.
Find the Init/Shutdown Scripts widget and click Add:
a. Enter this or a similar note in Description: Jailmaker Startup
b. Set Type to Command.
c. Enter this string in Command:
/mnt/tank/jailmaker/jlmkr.py startup
Replace tank with the name of your pool.
d. Set When to Post Init.
e. Set the Enabled checkbox.
f. Leave Timeout at the default and click Save. If you intend to create many sandboxes, increase the timeout integer to a longer wait period.
With a TrueNAS dataset configured for sandboxes and the Jailmaker script set to run at system startup, sandboxes can now be created.
Creating and managing sandboxes is done only in TrueNAS Shell sessions using the jlmkr command.
For full usage documentation, refer to the open-source Jailmaker project.
From a TrueNAS Shell session, go to your sandboxes dataset and enter ./jlmkr.py -h for embedded usage notes.
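As a quick orientation, common jlmkr operations look like the following. These run from the jailmaker dataset on the TrueNAS system; mysandbox is a hypothetical sandbox name, and the Jailmaker project documentation remains the authoritative reference:

```shell
./jlmkr.py create mysandbox   # interactively create a new sandbox
./jlmkr.py start mysandbox    # start the sandbox
./jlmkr.py shell mysandbox    # open a shell inside the running sandbox
./jlmkr.py list               # show all sandboxes and their status
```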
Report any issues encountered when using Jailmaker to the project Issues Tracker.
TrueNAS has a built-in reporting engine that provides helpful graphs and information about the system.
Reporting data is saved to permit viewing and monitoring usage trends over time. This data is preserved across system upgrades and restarts.
TrueCommand offers enhanced features for reporting like creating custom graphs and comparing utilization across multiple systems.
Click and drag across a range of the graph to expand the information displayed for that selection. Use the zoom in and zoom out icons to adjust the graph magnification, and the forward and back arrow icons to move the graph timeline forward or backward.
Click Netdata from the Reporting screen to see the built-in Netdata UI. This UI bases metrics off your local system and browser time, which might be different from the default TrueNAS system time.
The Netdata UI opens in a new browser tab and automatically logs in.
A Netdata dialog also opens in the TrueNAS SCALE UI. If automatic log in fails, use the generated password from this dialog to access the Netdata UI.
A new password generates each time the Netdata button is clicked on the Reporting screen. Click Generate New Password on the dialog to force regeneration. The Netdata UI opens a log in prompt. Enter the new generated password to regain access.
See Dashboards and Charts from Netdata for more information about the Netdata UI.
You can configure TrueNAS to export Netdata information to any time-series database, reporting cloud service, or application, such as Graphite or Grafana, installed on a server or used as a cloud service.
Creating reporting exporters enables SCALE to send Netdata reporting metrics, formatted as a JSON object, to another reporting entity.
For more information on exporting Netdata records to other servers or services, refer to the Netdata exporting reference guide.
Graphite is a monitoring tool available as an application you can deploy on a server or use their cloud service. It stores and renders time-series data based on a plaintext database. Netdata exports reporting metrics to Graphite in the format prefix.hostname.chart.dimension. For additional information, see the Netdata Graphite exporting guide.
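To make the exported format concrete, the sketch below composes one metric line in the Graphite plaintext protocol using the prefix.hostname.chart.dimension scheme described above. The metric names in the example are hypothetical, not values TrueNAS itself generates:

```shell
# Compose one line of the Graphite plaintext protocol:
# "metric value timestamp"
graphite_line() {
  printf '%s %s %s\n' "$1" "$2" "$3"
}

# Hypothetical example: prefix DF, host truenas, chart system.cpu,
# dimension user, with the current Unix timestamp.
graphite_line "DF.truenas.system.cpu.user" "42.0" "$(date +%s)"
```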
To configure a reporting exporter in SCALE, you need the connection details for the destination server or service: the IP address, the port number it listens on, and the prefix and namespace for the file hierarchy.
For more information on reporting exporter settings, see Add Reporting Exporter.
Go to Reporting and click on Exporters to open the Reporting Exporters screen. Any reporting exporters configured on the system display on the Reporting Exporters screen.
Click Add to open the Add Reporting Exporter screen to configure a third party reporting integration.
Enter a unique name for the exporter configuration in Name. When configuring multiple exporter instances, give each a distinct name.
Select the report object format from Type. At present, only GRAPHITE is available. The screen shows the exporter configuration fields.
Select Enable to send reporting metrics to the configured exporter instance. Clearing the checkmark disables the exporter without removing configuration.
Enter the IP address for the data collection server or cloud service.
Enter the port number the data collection server or service listens on.
Enter the file hierarchy structure, that is, where in the collecting server to send the data. Enter the top level in Prefix and the data collection folder in Namespace. For example, entering DF in Prefix and test in Namespace creates two folders in Graphite, with DF as the parent of test.
You can accept the defaults for all other settings, or enter configuration settings to match your use case.
Click Save.
To view the Graphite web UI, enter the IP address and port number (IPaddress:Port) of the system hosting the application in a browser.
SCALE can now export the data records as Graphite-formatted JSON objects to the other report collection and processing application, service, or servers.
SCALE also populates the exporter screen with default settings. To view these settings, click Edit on the row for the exporter.
SCALE system management options are collected in this section of the UI and organized into a few different screens:
Update controls when the system applies a new version. There are options to download and install an update, have the system check daily and stage updates, or apply a manual update file to the system.
General shows system details and has basic, less intrusive management options, including web interface access, localization, and NTP server connections. This is also where users can input an Enterprise license or create a software bug ticket.
Advanced contains options that are more central to the system configuration or meant for advanced users. Specific options include configuring the system console, log, and dataset pool, managing sessions, adding custom system controls, kernel-level settings, scheduled scripting or commands, global two-factor authentication, and determining any isolated GPU devices. Warning: Advanced settings can be disruptive to system function if misconfigured.
Boot lists each ZFS boot environment stored on the system. These restore the system to a previous version or specific point in time.
Services displays each system component that runs continuously in the background. These typically control data sharing or other external access to the system. Individual services have their own configuration screens and activation toggles, and can be set to run automatically.
Shell allows users to enter TrueNAS command-line interface (CLI) commands directly in the web UI. It includes an experimental TrueNAS SCALE-specific CLI for configuring the system separately from the web interface. See the CLI Reference Guide for more information.
Alert Settings allows users to configure Alert Services and to adjust the threshold and frequency of various alert types. See Alerts Settings Screens for more information.
Enclosure appears when the system is attached to compatible SCALE hardware. This is a visual representation of the system with additional details about disks and other physical hardware components.
TrueNAS has several software branches (linear update paths) known as trains. If SCALE is in a prerelease train it can have various preview/early build releases of the software.
The Update Screen only displays the current train. When looking to upgrade SCALE to a new major version, make sure to upgrade SCALE along the path of major versions until the system is on the desired major version release. For more information on other available trains and the upgrade path from one version to the next, see Release Schedules.
See the Software Status page for the latest recommendations for software usage. Do not change to a prerelease or nightly release unless the system is intended to permanently remain on early versions and is not storing any critical data.
If you are using a non-production train, be prepared to experience bugs or other problems. Testers are encouraged to submit bug reports and debug files. For information on how to file an issue ticket see Filing an Issue Ticket in SCALE.
The TrueNAS SCALE Update screen provides users with two different methods to update the system, automatic or manual.
We recommend updating SCALE when the system is idle (no clients connected, no disk activity, etc.). The system restarts after an upgrade. Update during scheduled maintenance times to avoid disrupting user activities.
All auxiliary parameters are subject to change between major versions of TrueNAS due to security and development issues. We recommend removing all auxiliary parameters from TrueNAS configurations before upgrading.
If an update is available, click Apply Pending Update to install it.
The Save configuration settings from this machine before updating? window opens.
Click Export Password Secret Seed then click Save Configuration. The Apply Pending Updates window opens.
Click Confirm, then Continue to start the automatic installation process. TrueNAS SCALE downloads the configuration file and the update file, then starts the install.
After updating, clear the browser cache (CTRL+F5) before logging in to SCALE. This ensures stale data doesn’t interfere with loading the SCALE UI.
If the system detects an available update, to do a manual update click Download Updates and wait for the file to download to your system.
Manual update files for SCALE are available from the TrueNAS SCALE Download page.
Click Install Manual Update File. The Save configuration settings from this machine before updating? window opens. Click Export Password Secret Seed then click Save Configuration. The Manual Update screen opens.
Click Choose File to locate the update file on the system. Select a temporary location to store the update file: select Memory Device, or select one of the mount locations on the dropdown list to keep a copy on the server.
Click Apply Update to start the update process. A status window opens and displays the installation progress. When complete, a Restart window opens.
Click Confirm, then Continue to restart the system.
When a system update starts, an update icon appears in the toolbar at the top of the UI. Click the icon to see the current status of the update and which TrueNAS administrative account initiated it.
TrueNAS Enterprise
This procedure only applies to SCALE Enterprise (HA) systems. If attempting to migrate from CORE to SCALE, see Migrating from TrueNAS CORE.
If the system does not have an administrative user account, create the admin user as part of this procedure.
Take a screenshot of the license information found on the Support widget on the System Settings > General screen. You use this to verify the license after the update.
To update your Enterprise (HA) system to the latest SCALE release, log into the SCALE UI using the virtual IP (VIP) address and then:
Check for updates. Go to the main Dashboard and click Check for Updates on the System Information widget for the active controller. This opens the System Settings > Update screen. If an update is available it displays on this screen.
Save the password secret seed and configuration settings to a secure location. Click Install Manual Updates. The Save configuration settings window opens. Select Export Password Secret Seed then click Save Configuration. The system downloads the file. The file contains sensitive system data and should be maintained in a secure location.
Select the update file and start the process. Click Choose File and select the update file downloaded to your system, then click Apply Update to start the update process. After the system finishes updating, it reboots.
Sign into the SCALE UI. If using root to sign in, create the admin account now. If using admin, continue to the next step.
Verify the system license after the update. Go to System Settings > General. Verify the license information in the screenshot of the Support widget you took before the update matches the information on the Support widget after updating the system.
Verify the admin user settings, or if not created, create the admin user account now.
If you want the admin account to have the ability to execute sudo commands in an SSH session, select the option for the sudo access you want to allow.
Also, verify Shell is set to bash if you want the admin user to have the ability to execute commands in Shell.
To set a location where the admin user can save files, browse to and select the dataset in Home Directory. If set to the default /nonexistent, files are not saved for this user.
Test the admin user access to the UI.
a. Log out of the UI.
b. Enter the admin user credentials in the sign-in splash screen.
After validating access to the SCALE UI using the admin credentials, disable the root user password. Go to Credentials > Local User and edit the root user. Select Disable Password and click Save.
Finish the update by saving your updated system configuration file to a secure location and creating a new boot environment to use as a restore point if it becomes necessary.
The TrueNAS SCALE General Settings section provides settings options for support, graphic user interface, localization, NTP servers, and system configuration.
TrueNAS SCALE allows users to manage the system configuration by uploading or downloading configurations, or by resetting the system to the default configuration.
The Manage Configuration option on the System Settings > General screen provides three options:
The Download File option downloads your TrueNAS SCALE current configuration to the local machine.
When you download the configuration file, you have the option to Export Password Secret Seed, which includes encrypted passwords in the configuration file. This allows you to restore the configuration file to a different operating system device where the decryption seed is not already present. Users must physically secure configuration backups containing the seed to prevent unauthorized access or password decryption.
We recommend backing up the system configuration regularly. Doing so preserves settings when migrating, restoring, or fixing the system if it runs into any issues. Save the configuration file each time the system configuration changes.
Go to System Settings > General and click on Manage Configuration. Select Download File.
The Save Configuration dialog displays.
Click Export Password Secret Seed and then click Save. The system downloads the system configuration. Save this file in a safe location on your network where files are regularly backed up.
Anytime you change your system configuration, download the system configuration file again and keep it safe.
The Upload File option gives users the ability to replace the current system configuration with any previously saved TrueNAS SCALE configuration file.
All passwords are reset if the uploaded configuration file was saved without selecting Save Password Secret Seed.
TrueNAS Enterprise
Save the current system configuration with the Download File option before resetting the configuration to default settings! If you do not save the system configuration before resetting it, you could lose data that was not backed up, and you cannot revert to the previous configuration.
The Reset to Defaults option resets the system configuration to factory settings. After the configuration resets, the system restarts and users must set a new login password.
SCALE does not automatically back up the system configuration file to the system dataset.
Users who want to schedule an automatic backup of the system configuration file should:
Users can manually back up the SCALE config file by downloading and saving the file to a location that is automatically backed up.
The Support widget shows information about the TrueNAS version and system hardware. Links to the open source documentation, community forums, and official Enterprise licensing from iXsystems are also provided.
Add License opens the sidebar with a field to paste a TrueNAS Enterprise license (details).
File Ticket opens a window to provide feedback directly to the development team.
The GUI widget allows users to configure the TrueNAS SCALE web interface address. Click Settings to open the GUI Settings configuration screen.
The system uses a self-signed certificate to enable encrypted web interface connections. To change the default certificate, select a different certificate that was created or imported in the Certificates section from the GUI SSL Certificate dropdown list.
To set the web UI IP address when using IPv4, select a recent IP address from the Web Interface IPv4 Address dropdown list to limit the address used to access the administrative GUI. The built-in HTTP server binds to the wildcard address of 0.0.0.0 (any address) and issues an alert if the specified address becomes unavailable. When using IPv6, select a recent IP address from the Web Interface IPv6 Address dropdown list.
To allow configuring a non-standard port to access the GUI over HTTPS, enter a port number in the Web Interface HTTPS Port field.
Select the cryptographic protocols for securing client/server connections from the HTTPS Protocols dropdown list. Select the Transport Layer Security (TLS) versions TrueNAS SCALE can use for connection security.
To redirect HTTP connections to HTTPS, select Web Interface HTTP -> HTTPS Redirect. A GUI SSL Certificate is required for HTTPS. Activating this also sets the HTTP Strict Transport Security (HSTS) maximum age to 31536000 seconds (one year). This means that after a browser connects to the web interface for the first time, the browser continues to use HTTPS and renews this setting every year. A warning displays when setting this function.
Special consideration should be given when TrueNAS is installed in a VM, as VMs are not configured to use HTTPS. Enabling HTTPS redirect can interfere with the accessibility of some apps. To determine if HTTPS redirect is active, go to System Settings > General > GUI > Settings and locate the Web Interface HTTP -> HTTPS Redirect checkbox. To disable HTTPS redirects, clear this option and click Save, then clear the browser cache before attempting to connect to the app again.
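To check from the command line whether HSTS and the redirect are active, you can inspect the web interface response headers. A sketch, where truenas.example.com stands in for your system's address and -k allows the default self-signed certificate:

```shell
# Look for the Strict-Transport-Security header on the HTTPS response.
curl -skI https://truenas.example.com/ | grep -i strict-transport-security

# Check whether plain HTTP answers with a redirect status to HTTPS.
curl -sI http://truenas.example.com/ | head -n 1
```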
To send failed HTTP request data which can include client and server IP addresses, failed method call tracebacks, and middleware log file contents to iXsystems, select Crash Reporting.
To send anonymous usage statistics to iXsystems, select the Usage Collection option.
To display console messages in real time at the bottom of the browser, select the Show Console Messages option.
To change the WebUI on-screen language and set the keyboard to work with the selected language, click Settings on the System Settings > General > Localization widget. The Localization Settings configuration screen opens.
Select the language from the Language dropdown list, and then the keyboard layout in Console Keyboard Map.
Enter the time zone in Timezone and then select the local date and time formats to use.
Click Save.
The NTP Servers widget allows users to configure Network Time Protocol (NTP) servers. These sync the local system time with an accurate external reference. By default, new installations use several existing NTP servers. TrueNAS SCALE supports adding custom NTP servers.
The Email widget displays information about current system mail settings. When configured, an automatic script sends a nightly email to the administrator account containing important information such as the health of the disks.
To configure the system email send method, click Settings to open the Email Options screen. Select either SMTP or GMail OAuth to display the relevant configuration settings.
For users with a valid TrueNAS license, click Add License. Copy your license into the box and click Save.
When prompted to reload the page for the license to take effect, click RELOAD NOW. Log back into the web UI, where the End User License Agreement (EULA) displays. Read it thoroughly and completely. After you finish, click I AGREE. The system information updates to reflect the licensing specifics for the system.
Silver and Gold level Support customers can also enable Proactive Support on their hardware to automatically notify iXsystems if an issue occurs. To find more details about the different Warranty and Service Level Agreement (SLA) options available, see iXsystems Support.
When the system is ready to be in production, update the status by selecting This is a production system and then click the Proceed button. This sends an email to iXsystems declaring that the system is in production.
While not required for declaring the system is in production, TrueNAS has the option to include an initial debug with the email that can assist support in the future.
Silver/Gold Coverage Customers can enable iXsystems Proactive Support. This feature automatically emails iXsystems when certain conditions occur in a TrueNAS system.
To configure proactive support, click Get Support on the Support widget located on the System Settings > General screen. Select Proactive Support from the dropdown list.
Complete all available fields and select Enable iXsystems Proactive Support, then click Save.
An automatic script sends a nightly email to the administrator account containing important information such as the health of the disks. Configure the system to send these emails to the administrator remote email account for fast awareness and resolution of any critical issues.
Scrub Task issues and S.M.A.R.T. reports are mailed separately to the address configured in those services.
Configure the email address for the admin user as part of your initial system setup or using the procedure below. You can also configure email addresses for additional user accounts as needed.
Before configuring anything else, set the local administrator email address.
Add a new user as an administrative or non-administrative account and set up email for that user. Follow the directions in Configuring the Admin User Email Address above for an existing user or see Managing Users for a new user.
After setting up the admin email address, you need to set up the send method for email service.
There are two ways to access email configuration options: go to the System Settings > General screen and locate the Email widget to view the current configuration, or click the Alerts icon in the top right of the UI, click the gear icon, and select Email to open the General settings screen. Click Settings on the Email widget to open the Email Options configuration screen.
Send Mail Method shows two different options:
The configuration options change based on the selected method.
After configuring the send method, click Send Test Mail to verify the configured email settings are working. If the test email fails, verify that the Email field is correctly configured for the admin user. Return to Credentials > Users to edit the admin user.
Save stores the email configuration and closes the Email Options screen.
To set up SMTP service for the system email send method, you need the outgoing mail server and port number for the email address.
To set up the system email using Gmail OAuth, you need to log in to your Gmail account through the TrueNAS SCALE web UI.
If the system email send method is configured, the admin email address receives a nightly system health email.
You can also add/configure the Email Alert Service to send timely warnings when a system alert hits a warning level that is specified in Alert Settings.
From the Alerts panel, click the settings icon and then Alert Settings, or go to System Settings > Alert Settings. Locate Email under Alert Services, click the options icon, and then click Edit to open the Edit Alert Service screen. Add the system email address in the Email Address field.
Use the Level dropdown to adjust the email warning threshold or accept the default Warning.
Use Send Test Alert to generate a test alert and confirm the email address and alert service works.
Advanced Settings provides configuration options for the console, syslog, kernel, sysctl, replication, cron jobs, init/shutdown scripts, system dataset pool, isolated GPU device(s), self-encrypting drives, system access sessions, allowed IP addresses, audit logging, and global two-factor authentication.
Advanced settings have reasonable defaults in place. A warning message displays for some settings advising of the dangers of making changes. Changing advanced settings can be dangerous when done incorrectly. Use caution before saving changes.
Make sure you are comfortable with ZFS, Linux, and system configuration, backup, and restoration before making any changes.
This article provides information on sysctl, system dataset pool, setting the maximum number of simultaneous replication tasks the system can perform, and managing sessions.
Use the Allowed IP Addresses configuration screen, found on the System Settings > Advanced screen, to restrict access to the TrueNAS SCALE web UI and API.
Entering an IP address limits access to the system to only the address(es) entered here. To allow unrestricted access to all IP addresses, leave this list empty.
Use Add on the Sysctl widget to add a tunable that configures a kernel module parameter at runtime.
The Add Sysctl or Edit Sysctl configuration screens display the settings.
Enter the sysctl variable name in Variable. Sysctl tunables configure kernel module parameters while the system runs and generally take effect immediately.
Enter the value to assign to the variable in Value.
Enter a description and then select Enabled. To disable but not delete the variable, clear the Enabled checkbox.
Click Save.
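As a concrete illustration, a sysctl variable name maps to a file under /proc/sys on Linux, which is a quick way to check a parameter's current value before adding a tunable. The vm.swappiness tunable below is a standard Linux example, not specific to TrueNAS; substitute any parameter listed by sysctl -a:

```shell
# A sysctl variable name maps to a path under /proc/sys
# (dots become slashes). vm.swappiness is a standard Linux tunable:
cat /proc/sys/vm/swappiness    # prints the current value

# In the Add Sysctl form, the same tunable would be entered as:
#   Variable: vm.swappiness
#   Value:    10
```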
The Storage widget displays the pool configured as the system dataset pool and allows users to select the storage pool they want to hold the system dataset. The system dataset stores core files for debugging and keys for encrypted pools. It also stores Samba4 metadata, such as the user and group cache and share-level permissions.
Configure opens the Storage Settings configuration screen.
If the system has one pool, TrueNAS configures that pool as the system dataset pool. If your system has more than one pool, you can set the system dataset pool using the Select Pool dropdown. Users can move the system dataset to an unencrypted pool, or an encrypted pool without passphrases.
Users can move the system dataset to a key-encrypted pool, but cannot change the pool encryption type afterward. If the encrypted pool already has a passphrase set, you cannot move the system dataset to that pool.
Swap Size lets users enter an amount (in GiB) of hard disk space to use as a substitute for RAM when the system fully utilizes the actual RAM.
By default, the system creates all data disks with the specified swap amount. Changing the value does not affect the amount of swap on existing disks, only disks added after the change. Swap size does not affect log or cache devices.
The Replication widget displays the maximum number of replication tasks that can execute simultaneously on the system and allows users to adjust that limit.
Click Configure to open the Replication configuration screen.
Enter a number for the maximum number of simultaneous replication tasks you want to allow the system to process and click Save.
The Access widget displays a list of all active sessions, including the user who initiated the session and what time it started. It also displays the Token Lifetime setting for your current session. It allows administrators to manage other active sessions and to configure the token lifetime for their account.
The Terminate Other Sessions button ends all sessions except for the one you are currently using. You can also end individual sessions by clicking the logout button next to that session. You must check a confirmation box before the system allows you to end sessions.
The logout icon is inactive for the currently logged in administrator session and active for any other current sessions. It cannot be used to terminate the currently logged in active administrator session.
Token Lifetime displays the configured token duration for the current session (default five minutes). TrueNAS SCALE logs out user sessions that remain inactive for longer than the configured token lifetime. New activity resets the token counter.
If the configured token lifetime is exceeded, TrueNAS SCALE displays a Logout dialog showing the exceeded token lifetime value and the time the session is scheduled to terminate.
Click Extend Session to reset the token counter. If the button is not clicked, TrueNAS SCALE terminates the session automatically and returns to the login screen.
Click Configure to open the Token Settings screen and configure Token Lifetime for the current account.
Select a value that fits user needs and security requirements. Enter the value in seconds.
The default lifetime setting is 300 seconds, or five minutes.
The minimum value allowed is 30 seconds.
The maximum is 2147482 seconds, or 24 days, 20 hours, 31 minutes, and 22 seconds.
Click Save.
Cron jobs allow users to configure jobs that run specific commands or scripts on a regular schedule using cron(8). Cron jobs help users run repetitive tasks.
The Cron Jobs widget on the System > Advanced screen displays No Cron Jobs configured until you add a cron job, and then it displays information on cron job(s) configured on the system.
Click Add to open the Add Cron Job configuration screen and create a new cron job. If you want to modify an existing cron job, click anywhere on the item to open the Edit Cron Jobs configuration screen populated with the settings for that cron job. The Add Cron Job and Edit Cron Job configuration screens display the same settings.
Enter a description for the cron job.
Next, enter the full path to the command or script to run in Command. For example, to create a list of users on the system and write that list to a dated file, enter: cat /etc/passwd > users_$(date +%F).txt
Select a user account to run the command from the Run As User dropdown list. The user must have permissions allowing them to run the command or script.
Select a schedule preset or choose Custom to open the advanced scheduler. An in-progress cron task postpones any later scheduled instances of the task until the one already running completes.
If you want to hide standard output (stdout) from the command, select Hide Standard Output. If left cleared, TrueNAS emails any standard output to the user account that runs the cron job.
To hide error output (stderr) from the command, select Hide Standard Error. If left cleared, TrueNAS emails any error output to the user account that runs the cron job.
Select Enabled to enable this cron job. Leave this checkbox cleared to disable the cron job without deleting it.
Click Save.
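For a script-based cron job, a small POSIX shell script like the sketch below can serve as the target of the Command field. It expands the dated user-list example above; the output directory is a placeholder, and on a real system you would point it at a dataset path (for example, /mnt/tank/reports, a hypothetical name):

```shell
#!/bin/sh
# Sketch of a cron job script: writes a dated list of system users.
# OUTDIR is a placeholder; on a real system use a dataset path
# (e.g. /mnt/tank/reports -- a hypothetical pool/dataset name).
OUTDIR="/tmp/truenas-reports"
mkdir -p "$OUTDIR"
cat /etc/passwd > "$OUTDIR/users_$(date +%F).txt"
```

Save the script somewhere the Run As User account can read and execute it, then enter its full path in Command.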
The Console widget on the System Setting > Advanced screen displays current console settings for TrueNAS.
Click Configure to open the Console configuration screen. The Console configuration settings determine how the Console setup menu displays, the serial port it uses and the speed of the port, and the banner users see when it is accessed.
To display the console without being prompted to enter a password, select Show Text Console without Password Prompt. Leave it cleared to require a login prompt before showing the console menu.
Select Enable Serial Console to enable the serial console but do not select this if the serial port is disabled.
Enter the serial console port address in Serial Port and set the speed (in bits per second) from the Serial Speed dropdown list. Options are 9600, 19200, 38400, 57600 or 115200.
Finally, enter the message you want to display when a user logs in with SSH in MOTD Banner.
Click Save.
By default, TrueNAS writes system logs to the system boot device. The Syslog widget on the System > Advanced screen allows users to determine how and when the system sends log messages to a connected syslog server. The Syslog widget displays the existing system logging settings.
Before configuring your syslog server to use TLS as the Syslog Transport method, first make sure you add a certificate and certificate authority (CA) to the TrueNAS system. Go to Credentials > Certificates and use the Certificate Authority (CA) and Certificates widgets to verify you have the required certificates or to add them.
Click Configure to open the Syslog configuration screen. The Syslog configuration screen settings specify the logging level the system uses to record system events, the syslog server DNS host name or IP, the transport protocol it uses, and if using TLS, the certificate and certificate authority (CA) for that server, and finally if it uses the system dataset to store the logs.
Enter the remote syslog server DNS host name or IP address in Syslog Server. To use non-standard port numbers like mysyslogserver:1928, add a colon and the port number to the host name. Log entries are written to local logs and sent to the remote syslog server.
Select the transport protocol for the remote system log server connection in Syslog Transport. Selecting Transport Layer Security (TLS) displays the Syslog TLS Certificate and Syslog TLS Certificate Authority fields.
Next, select the TLS certificate for the remote system log server from the Syslog TLS Certificate dropdown list, and select the certificate authority (CA) for that certificate from the Syslog TLS Certificate Authority dropdown list.
Select Use FQDN for Logging to include the fully-qualified domain name (FQDN) in logs to precisely identify systems with similar host names.
Select the minimum log priority level to send to the remote syslog server from the Syslog Level dropdown list. The system only sends logs at or above this level.
Click Save.
The Init/Shutdown Scripts widget on the System > Advanced screen allows you to add scripts to run before or after initialization (start-up), or at shutdown. For example, you can add a script that backs up your system or runs a systemd command before the system shuts down.
Init/shutdown scripts are capable of making OS-level changes and can be dangerous when done incorrectly. Use caution before creating script or command tasks.
Make sure you are comfortable with ZFS, Linux, and system configuration, backup, and restoration before creating and executing script tasks.
The Init/Shutdown Scripts widget displays No Init/Shutdown Scripts configured until you add either a command or script, and then the widget lists the scripts configured on the system.
Click Add to open the Add Init/Shutdown Script configuration screen.
Enter a description and then select Command or Script from the Type dropdown list. Selecting Script displays additional options.
Enter the command string in Command, or if using a script, enter or browse to the path in Script. The script runs using dash(1).
Select the option from the When dropdown list for the time this command or script runs.
In Timeout, enter the number of seconds the system waits for the script to complete before stopping it.
Select Enable to enable the script. Leave it cleared to disable the script without deleting it.
Click Save.
Click a script listed on the Init/Shutdown Scripts widget to open the Edit Init/Shutdown Script configuration screen populated with the settings for that script.
You can change from a command to a script, and modify the script or command as needed.
To disable but not delete the command or script, clear the Enabled checkbox.
Click Save.
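Because TrueNAS runs these scripts with dash(1), keep them POSIX-compliant (avoid bash-only syntax). The sketch below is a minimal shutdown-type script that appends a timestamped line to a log file; the log path is a placeholder to adjust for your system:

```shell
#!/bin/sh
# Minimal POSIX (dash-compatible) init/shutdown script sketch.
# LOGFILE is a placeholder; on a real system point it at a dataset,
# e.g. /mnt/tank/logs/events.log (hypothetical path).
LOGFILE="/tmp/truenas-events.log"
echo "$(date '+%Y-%m-%d %H:%M:%S') shutdown script ran" >> "$LOGFILE"
```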
The Self-Encrypting Drive(s) widget on the System > Advanced screen allows you to set the user and global SED password in SCALE.
The Self-Encrypting Drive (SED) widget displays the ATA security user and password configured on the system.
Click Configure to open the Self-Encrypting Drive configuration screen, where users set the ATA security user and create a SED global password.
Select the user passed to camcontrol security -u to unlock SEDs from the ATA Security User dropdown list. Options are USER or MASTER.
Enter the global password to unlock SEDs in SED Password and in Confirm SED Password.
Click Save.
Systems with more than one graphics processing unit (GPU) installed can isolate additional GPU device(s) from the host operating system (OS) and allocate them for use by a virtual machine (VM). Isolated GPU devices are unavailable to the OS and for allocation to applications.
The Isolated GPU Device(s) widget on the System > Advanced screen shows configured isolated GPU device(s).
To isolate a GPU, you must have at least two in your system: one available to the host system for system functions and the other available to isolate for use by a VM. One isolated GPU device can be used by a single VM. Isolated GPUs cannot be allocated to applications.
To allocate an isolated GPU device, select it while creating or editing VM configuration. When allocated to a VM, the isolated GPU connects to the VM as if it were physically installed in that VM and becomes unavailable for any other allocations.
Click Configure on the Isolated GPU Device(s) widget to open the Isolate GPU PCI Ids screen, where you can select a GPU device to isolate.
Select the GPU device(s) to isolate from the dropdown list.
Click Save.
Global two-factor authentication (2FA) adds an extra layer of security to the system login process.
TrueNAS offers global 2FA to ensure that entities cannot use a compromised administrator root password to access the administrator interface.
To use 2FA, you need a mobile device with the current time and date, and an authenticator app installed. We recommend Google Authenticator. You can use other authenticator applications, but you must confirm the settings and QR codes generated in TrueNAS are compatible with your particular app before permanently activating 2FA.
Two-factor authentication is time-based and requires a correct system time setting. Ensuring Network Time Protocol (NTP) is functional before enabling two-factor authentication is strongly recommended!
Unauthorized users cannot log in since they do not have the randomized six-digit code.
Authorized employees can securely access systems from any device or location without jeopardizing sensitive information.
Internet access on the TrueNAS system is not required to use 2FA.
2FA requires an app to generate the 2FA code.
If the 2FA code is not working or users cannot get it, the system is inaccessible through the UI and SSH (if enabled). You can bypass or unlock 2FA using the CLI.
Set up a second 2FA device as a backup before proceeding.
Before you begin, download Google Authenticator to your mobile device.
Go to System Settings > Advanced, scroll down to the Global Two Factor Authentication widget, and click Config.
Check Enable Two Factor Authentication Globally, then click Save.
If you want to enable two-factor authentication for SSH logins, select Enable Two-Factor Auth for SSH before you click Save.
TrueNAS takes you to the Two-Factor Authentication screen to finish 2FA setup.
When using Google Authenticator, set Interval to 30 or the authenticator code might not function when logging in.
Click Show QR and scan the QR code using Google Authenticator.
After scanning the code click CLOSE to close the dialog on the Two-Factor Authentication screen.
Accounts that are already configured with individual 2FA are not prompted for 2FA login codes until Global 2FA is enabled. When Global 2FA is enabled, user accounts that have not configured 2FA settings yet are shown the Two-Factor Authentication screen on their next login to configure and enable 2FA authentication for that account.
Go to System Settings > Advanced, scroll down to the Global Two Factor Authentication widget, and click Config. Clear the Enable Two-Factor Authentication Globally checkbox and click Save.
If the device with the 2FA app is not available, you can use the system CLI to bypass 2FA with administrative IPMI or by physically accessing the system.
To unlock 2FA in the SCALE CLI, enter:
auth two_factor update enabled=false
If you want to enable 2FA again, go to System Settings > Advanced, scroll down to the Global Two Factor Authentication widget, and click Config.
Check Enable Two Factor Authentication Globally, then click Save. To change the system-generated Secret, go to Credentials > 2FA and click Renew 2FA Secret.
Enabling 2FA changes the login process for both the TrueNAS web interface and SSH logins.
The login screen adds another field for the randomized authenticator code. If this field is not immediately visible, try refreshing the browser.
Enter the code from the mobile device (without the space) in the login window and use the root username and password.
If you wait too long, a new number code displays in Google Authenticator, so you can retry.
Confirm that you set Enable Two-Factor Auth for SSH in System Settings > Advanced > Global Two Factor Authentication.
Go to System Settings > Services and edit the SSH service.
a. Set Log in as Admin with Password, then click Save.
b. Click the SSH toggle and wait for the service status to show that it is running.
Open the Google Authenticator app on your mobile device.
Open a terminal (such as PowerShell on Windows) and SSH into the system using the host name or IP address, the administrator account user name and password, and the 2FA code.
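For example, a login from a client with OpenSSH might look like the sketch below; the hostname and account are placeholders, and the exact prompt text depends on your PAM configuration:

```shell
# Hypothetical hostname and account -- substitute your own.
ssh admin@truenas.example.com
# The server prompts for the account password, then for the
# 2FA verification code from the authenticator app.
```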
Developer mode is for developers only. Users that enable this functionality will not receive support on any issues submitted to iXsystems.
Only enable when you are comfortable with debugging and resolving all issues encountered on the system. Never enable on a system that has production storage and workloads.
TrueNAS is an Open Source Storage appliance, not a standard Linux operating system (OS) that allows customization of the OS environment.
By default, the root/boot filesystem and tools such as apt are disabled to prevent accidental misconfiguration that renders the system inoperable or puts stored data at risk.
However, as an open-source appliance, there are circumstances in which software developers want to create a development environment to install new packages and do engineering or test work before creating patches to the TrueNAS project.
Do not make system changes using the TrueNAS UI web shell. Using package management tools in the web shell can result in middleware changes that render the system inaccessible.
Connect to the system using SSH or a physically connected monitor and keyboard before enabling or using developer mode.
To enable developer mode, log into the system as the root account and access the Linux shell.
Run the install-dev-tools command.
Running install-dev-tools removes the default TrueNAS read-only protections and installs a variety of tools needed for development environments on TrueNAS.
These changes do not persist across updates; install-dev-tools must be re-run after every system update.
System Settings > Boot contains options for monitoring and managing the ZFS pool and devices that store the TrueNAS operating system.
The Stats/Settings option displays current system statistics and provides the option to change the scrub interval, or how often the system runs a data integrity check on the operating system device.
Go to System Settings > Boot screen and click Stats/Settings. The Stats/Settings window displays statistics for the operating system device: Boot pool Condition as ONLINE or OFFLINE, Size in GiB and the space in use in Used, and Last Scrub Run with the date and time of the scrub. By default, the operating system device is scrubbed every 7 days.
To change the default scrub interval, input a different number in Scrub interval (in days) and click Update Interval.
From the System Settings > Boot screen, click the Boot Pool Status button to open the Boot Pool Status screen. This screen shows the boot-pool and expands to show the devices that are allocated to that pool. Read, write, or checksum errors are also shown for the pool.
A manual data integrity check (scrub) of the operating system device can be initiated at any time.
On the System Settings > Boot screen, click Scrub Boot Pool to open the Scrub dialog.
Click Confirm and then Start Scrub.
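The same status information is also available from the TrueNAS system shell if you prefer the command line; zpool is part of the ZFS tooling already present on TrueNAS, and boot-pool is the name SCALE gives the operating system pool:

```shell
# Show boot pool health, devices, and any read/write/checksum errors:
zpool status boot-pool

# Start a manual scrub of the boot pool from the shell:
zpool scrub boot-pool
```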
TrueNAS supports a ZFS feature known as boot environments. These are snapshot clones of the TrueNAS boot-pool install location that TrueNAS boots into. Only one boot environment is used for booting at a time.
A boot environment allows rebooting into a specific point in time and greatly simplifies recovering from system misconfigurations or other potential system failures. With multiple boot environments, the process of updating the operating system becomes a low-risk operation.
For example, the TrueNAS update process automatically creates a snapshot of the current boot environment and adds it to the boot menu before applying the update. If anything goes wrong during the update, the system administrator can activate the snapshot of the pre-update environment and reboot TrueNAS to restore system functionality.
Boot environments do not preserve or restore the state of any attached storage pools or apps, only the system boot-pool. Storage backups must be handled through the ZFS snapshot feature or other backup options. TrueNAS applications also use separate upgrade and container image management methods to provide app update and rollback features.
To view the list of boot environments on the system, go to System Settings > Boot. Each boot environment entry contains this information:
To access more options for a boot environment, click the icon for that entry to display the list of options:
System Settings > Services displays each system component that runs continuously in the background. These typically control data-sharing or other external access to the system. Individual services have configuration screens and activation toggles, and you can set them to run automatically.
Documented services related to data sharing or automated tasks are in their respective Shares and Tasks articles.
The File Transfer Protocol (FTP) is a simple option for data transfers. The SSH options provide secure transfer methods for critical objects like configuration files, while the Trivial FTP options provide simple file transfer methods for non-critical files.
Options for configuring FTP, SSH, and TFTP are in System Settings > Services. Click the edit icon to configure the related service.
FTP requires a new dataset and a local user account.
Go to Storage to add a new dataset to use as storage for files.
Next, add a new user. Go to Credentials > Local Users and click Add to create a local user on the TrueNAS.
Assign a user name and password, and link the newly created FTP dataset as the user home directory. You can do this for every user or create a global account for FTP (for example, OurOrgFTPaccnt).
Edit the file permissions for the new dataset. Go to Datasets, then click on the name of the new dataset. Scroll down to Permissions and click Edit.
Enter or select the new user account in the User and Group fields. Select Apply User and Apply Group. Select the Read, Write, and Execute for User, Group, and Other you want to apply. Click Save.
To configure FTP, go to System Settings > Services and find FTP, then click edit to open the Services > FTP screen.
Configure the options according to your environment and security considerations. Click Advanced Settings to display more options.
To confine FTP sessions to the home directory of a local user, select both chroot and Allow Local User Login.
Do not allow anonymous or root access unless it is necessary. Enable TLS when possible (especially when exposing FTP to a WAN). TLS effectively makes this FTPS for better security.
Click Save and then start the FTP service.
FTP requires a new dataset and a local user account.
Go to Storage and add a new dataset.
Next, add a new user. Go to Credentials > Local Users and click Add to create a local user on the TrueNAS.
Assign a user name and password, and link the newly created FTP dataset as the user home directory. Then, add ftp to the Auxiliary Groups field and click Save.
Edit the file permissions for the new dataset. Go to Datasets, then click on the name of the new dataset. Scroll down to Permissions and click Edit.
Enter or select the new user account in the User and Group fields. Enable Apply User and Apply Group. Select the Read, Write, and Execute for User, Group, and Other you want to apply, then click Save.
Go to System Settings > Services and find FTP, then click edit to open the Services > FTP screen.
Configure the options according to your environment and security considerations. Click Advanced Settings to display more options.
When configuring FTP bandwidth settings, we recommend manually entering the units you want to use, e.g. KiB, MiB, GiB.
To confine FTP sessions to the home directory of a local user, select chroot.
Do not allow anonymous or root access unless it is necessary. Enable TLS when possible (especially when exposing FTP to a WAN). TLS effectively makes this FTPS for better security.
Click Save, then start the FTP service.
Use a browser or FTP client to connect to the TrueNAS FTP share. The images below use FileZilla, which is free.
The user name and password are those of the local user account on the TrueNAS system. The default directory is the same as the user home directory. After connecting, you can create directories and upload or download files.
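If you prefer a command-line client to FileZilla, curl can exercise the share. The hostname and account below are placeholders; the --ssl-reqd option makes curl require TLS (FTPS), matching the security recommendation above:

```shell
# List the user's FTP home directory (curl prompts for the password
# when -u is given only a username):
curl --ssl-reqd -u ftpuser ftp://truenas.example.com/

# Upload a file to the share:
curl --ssl-reqd -u ftpuser -T notes.txt ftp://truenas.example.com/
```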
The Services > NFS configuration screen displays settings to customize the TrueNAS NFS service.
You can access it from System Settings > Services screen. Locate NFS and click edit to open the screen, or use the Config Service option on the Unix (NFS) Share widget options menu found on the main Sharing screen.
Select Start Automatically to activate the NFS service when TrueNAS boots.
We recommend using the default NFS settings unless you require specific settings.
Select the IP address from the Bind IP Addresses dropdown list if you want to use a specific static IP address, or leave this field blank for NFS to listen to all available addresses.
By default, TrueNAS dynamically calculates the number of threads the kernel NFS server uses. However, if you want to manually enter an optimal number of threads the kernel NFS server uses, clear Calculate number of threads dynamically and enter the number of threads you want in the Specify number of threads manually field.
If using NFSv4, select NFSv4 from Enabled Protocols. The NFSv3 ownership model for NFSv4 option clears, and you can select it or leave it cleared.
If you want to force NFS shares to fail if the Kerberos ticket is unavailable, select Require Kerberos for NFSv4.
Next, enter a port to bind to in the field that applies:
The UDP protocol is deprecated and not supported with NFS. It is disabled by default in the Linux kernel. Using UDP over NFS on modern networks (1Gb+) can lead to data corruption caused by fragmentation during high loads.
Only select Allow non-root mount if the NFS client requires it to allow serving non-root mount requests.
Select Manage Groups Server-side to allow the server to determine group IDs based on server-side lookups rather than relying solely on the information provided by the NFS client.
This can support more than 16 groups and provide more accurate group memberships.
It is equivalent to setting the --manage-gids flag for rpc.mountd.
This setting assumes group membership is configured correctly on the NFS server.
Click Save.
Start the NFS service.
When TrueNAS is already connected to Active Directory, setting NFSv4 and Require Kerberos for NFSv4 also requires a Kerberos Keytab.
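Once the service is running, a Linux client can mount an exported dataset. The hostname and dataset path below are placeholders; substitute the export path shown on your Unix (NFS) share:

```shell
# On a Linux NFS client (run as root; hypothetical host and path):
mkdir -p /mnt/nfs
mount -t nfs4 truenas.example.com:/mnt/tank/share /mnt/nfs
```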
There is a special consideration when installing TrueNAS in a virtual machine (VM), as S.M.A.R.T. services monitor actual physical devices, which are abstracted in a VM. After the TrueNAS installation completes on the VM, go to System Settings > Services and click the blue toggle button on the S.M.A.R.T. service to stop the service. Clear the Start Automatically checkbox so the service does not automatically start when the system reboots.
Use the Services > S.M.A.R.T. screen to configure when S.M.A.R.T. tests run and when to trigger alert warnings and send emails.
Click the edit icon to open the configuration screen.
In Check Interval, enter the time in minutes for smartd to wake up and check if any tests are configured to run.
Select the Power Mode from the dropdown list. Choices include Never, Sleep, Standby, and Idle. TrueNAS only performs tests when you select Never.
Set the temperatures that trigger alerts in Difference, Informational and Critical.
Click Save after changing any settings.
Start the service.
The Services > SMB screen displays after going to the Shares screen, finding the Windows (SMB) Shares section, and clicking Config Service. Alternatively, go to System Settings > Services and click the edit icon for the SMB service. The SMB Services screen displays setting options to configure TrueNAS SMB settings to fit your use case. In most cases, you can set the required fields and accept the rest of the setting defaults. If you have specific needs for your use case, click Advanced Options to display more settings.
Enter the name of the TrueNAS host system if not the default displayed in NetBIOS Name. This name is limited to 15 characters and cannot be the Workgroup name.
Enter any alias name or names that do not exceed 15 characters in the NetBIOS Alias field. Separate each alias name with a space between them.
Enter a name that matches the Windows workgroup name in Workgroup. When Active Directory or LDAP is active, TrueNAS detects and sets the correct workgroup from these services if this field is unconfigured.
If using SMB1 clients, select Enable SMB1 support to allow legacy SMB1 clients to connect to the server. Note: SMB1 is deprecated. We advise you to upgrade clients to operating system versions that support modern SMB protocol versions.
If you plan to use the insecure and vulnerable NTLMv1 encryption, select NTLMv1 Auth to allow smbd to attempt NTLMv1 authentication of users. This setting enables backward compatibility with older versions of Windows, but we do not recommend it. Do not use it on untrusted networks.
Enter any notes about the service configuration in Description.
For more advanced settings, see SMB Services Screen.
Use Auxiliary Parameters to enter additional smb.conf options. For example, to log more details when a client attempts to authenticate to the share, add log level = 1 auth_audit:5. Refer to the Samba Guide for more information on these settings.
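As a sketch, an Auxiliary Parameters entry like the one above uses standard smb.conf syntax; the auth_audit level of 5 is only a starting point, so raise or lower it to suit your troubleshooting needs:

```ini
# Keep general logging at level 1, but log authentication audit
# messages at level 5 to capture details of client login attempts
log level = 1 auth_audit:5
```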
Click Save.
Start the SMB service.
SNMP (Simple Network Management Protocol) monitors network-attached devices for conditions that warrant administrative attention. TrueNAS uses Net-SNMP to provide SNMP. To configure SNMP, go to the System Settings > Services page, find SNMP, and click the edit icon.
See SNMP Service Screen for setting information.
When you start the SNMP service, it listens for SNMP requests on UDP port 161.
Click to view or download a static copy of the SCALE 24.04 Dragonfish MIB file.
To download an MIB from your TrueNAS system, you can enable SSH and use a file transfer command like scp.
When using SSH, verify that the user logging in has SSH login permissions enabled and that the SSH service is active and using a known port (22 is the default).
Management Information Base (MIB) files are located in /usr/local/share/snmp/mibs/.
Example (replace mytruenas.example.com with your system IP address or hostname):
PS C:\Users\ixuser> scp admin@mytruenas.example.com:/usr/local/share/snmp/mibs/* .\Downloads\
admin@mytruenas.example.com's password:
TRUENAS-MIB.txt 100% 11KB 112.0KB/s 00:00
PS C:\Users\ixuser>
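After starting the SNMP service, you can verify that it responds by using the Net-SNMP command-line tools from another machine. A sketch, assuming the service's read-only community string is public; replace mytruenas.example.com with your system IP address or hostname:

```
# Walk the standard system subtree to confirm the SNMP service answers
snmpwalk -v2c -c public mytruenas.example.com system
```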
The SSH service lets users connect to TrueNAS with the Secure Shell (SSH) Transport Layer Protocol. When using TrueNAS as an SSH server, network users must run SSH client software to connect and transfer files over SSH.
Allowing external connections to TrueNAS is a security vulnerability! Do not enable SSH unless you require external connections. See Security Recommendations for more security considerations when using SSH.
To configure SSH, go to System Settings > Services, find SSH, and click edit to open the basic settings General Options configuration screen.
Use the Password Login Groups and Allow Password Authentication settings to allow specific TrueNAS account groups the ability to use password authentication for SSH logins.
Click Save. Select Start Automatically and enable the SSH service.
If your configuration requires more advanced settings, click Advanced Settings. The basic options continue to display above the Advanced Settings screen. Configure the options as needed to match your network environment.
These Auxiliary Parameters can be useful when troubleshooting SSH connectivity issues:
Increase the ClientAliveInterval if SSH connections tend to drop.
Increase the MaxStartups value (10 is the default) when you need more concurrent SSH connections.
Remember to enable the SSH service in System Settings > Services after making changes.
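Entered in Auxiliary Parameters, these options use sshd_config syntax. A sketch with example values only; tune them for your environment:

```
# Send a keepalive probe every 30 seconds so idle sessions are not dropped
ClientAliveInterval 30
# Allow up to 30 concurrent unauthenticated connections (default is 10)
MaxStartups 30
```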
Create and store SSH connections and keypairs to allow SSH access in Credentials > Backup Credentials or by editing an administrative user account. See Adding SSH Credentials for more information.
SFTP (SSH File Transfer Protocol) is available by enabling SSH remote access to the TrueNAS system. SFTP is more secure than standard FTP because it applies SSH encryption to all transfers by default.
Go to System Settings > Services, find the SSH entry, and click edit to open the Services > SSH basic settings configuration screen.
Select Allow Password Authentication.
Go to Credentials > Local Users. Click anywhere on the row of the user you want to give SSH access to expand the user entry, then click Edit to open the Edit User configuration screen. Make sure SSH password login enabled is selected. See Managing Users for more information.
SSH with root is a security vulnerability. It gives users full remote control of the NAS through a terminal instead of providing only SFTP transfer access.
Choose a non-root administrative user to allow SSH access.
Review the remaining options and configure them according to your environment or security needs.
Remember to enable the SSH service in System Settings > Services after making changes.
Create and store SSH connections and keypairs to allow SSH access in Credentials > Backup Credentials or by editing an administrative user account. See Adding SSH Credentials for more information.
Open an FTP client (like FileZilla) or command line. This article shows using FileZilla as an example.
Using FileZilla, enter SFTP://{TrueNAS IP}, {username}, {password}, and {port 22} to connect, where {TrueNAS IP} is the IP address of your TrueNAS system, {username} is the administrator login user name, and {password} is the administrator password.
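From a command line instead of FileZilla, the OpenSSH sftp client makes the same connection. A sketch; replace mytruenas.example.com with your system IP address or hostname and admin with your administrative user name:

```
# Connect over the default SSH port (22); use -P to specify another port
sftp -P 22 admin@mytruenas.example.com
```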
SFTP does not offer chroot locking. While chroot is not 100% secure, its absence lets users move up to the root directory and view internal system information. If this level of access is a concern, FTP with TLS might be the more secure choice.
An Uninterruptible Power Supply (UPS) is a power backup system that ensures continuous electricity during outages, preventing downtime and damage.
TrueNAS uses NUT (Network UPS Tools) to provide UPS support. For supported device and driver information, see their hardware compatibility list.
Report UPS bugs and feature requests to the NUT project.
Connect the TrueNAS system to the UPS device. To configure the UPS service, go to System Settings > Services, find UPS, and click edit.
See UPS Service Screen for details on the UPS service settings.
TrueNAS Enterprise
TrueNAS High Availability (HA) systems are not compatible with uninterruptible power supplies (UPS).
Some UPS models are unresponsive with the default polling frequency (two seconds).
TrueNAS displays the issue in logs as a recurring error like libusb_get_interrupt: Unknown error.
If you get an error, decrease the polling frequency by adding an entry to Auxiliary Parameters (ups.conf): pollinterval = 10.
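The Auxiliary Parameters (ups.conf) entry is a plain key = value line. A sketch; 10 seconds is only a suggested value:

```
# Poll the UPS every 10 seconds instead of the 2-second default
pollinterval = 10
```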
The SCALE Shell is convenient for running command line tools, configuring different system settings, or finding log files and debug information.
Warning! The supported mechanisms for making configuration changes are the TrueNAS WebUI, CLI, and API exclusively. All others are not supported and result in undefined behavior that can result in system failure!
The Set font size slider adjusts the Shell displayed text size. Restore Default resets the font size to default.
The Shell stores the command history for the current session.
Leaving the Shell screen clears the command history.
Click Reconnect to start a new session.
This section provides keyboard navigation shortcuts you can use in Shell.
zsh is the default shell, but you can change this by going to Credentials > Local Users. Select the admin or other user to expand it. Click Edit to open the Edit User screen. Scroll down to Shell and select a different option from the dropdown list. Options are nologin, TrueNAS CLI, TrueNAS Console, sh, bash, rbash, dash, tmux, and zsh. Click Save.
Most Linux command-line utilities are available in the Shell. Clicking other SCALE UI menu options closes the shell session and stops commands running in the Shell screen. Tmux allows you to detach sessions in Shell and then reattach them later. Commands continue to run in a detached session.
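For example, a sketch of detaching and reattaching a long-running task with tmux (the session name work is arbitrary):

```
tmux new -s work          # start a named session, then launch your command in it
# press Ctrl+b, then d, to detach; the command keeps running
tmux ls                   # list sessions, including detached ones
tmux attach -t work       # reattach to the session later
```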
The new SCALE command-line interface (CLI) lets you directly configure SCALE features using namespaces and commands based on the SCALE API.
TrueNAS CLI is still in active development. We are not accepting bug reports or feature requests at this time.
See SCALE CLI Reference Guide for more information on using the TrueNAS CLI.
We intend the CLI to be an alternative method for configuring TrueNAS features. Because of the variety of available features and configurations, we include CLI-specific instructions in their respective UI documentation sections.
TrueNAS SCALE auditing and logs provide a trail of all actions performed by a session, user, or service (SMB, middleware).
The audit function uses two backends: syslog and the Samba debug library. Syslog sends audit messages via an explicit syslog call with configurable priority (WARNING is the default) and facility (for example, USER); sending audit messages via syslog is the default. Debug sends audit messages through the Samba debug library, and these messages have a configurable severity (WARNING, NOTICE, or INFO).
The System Settings > Audit screen lists all session, user, or SMB events. Logs include who performed the action, timestamp, event type, and a short string of the action performed (event data).
SCALE includes a manual page with more information on the VFS auditing functions.
Administrative users can enter
man vfs_truenas_audit
in a SCALE command prompt to view the embedded manual page.
Events are organized into session and user auditing, and SMB auditing.
Session and user auditing events
Audit records contain information that establishes:
Each audit message is a single JSON file containing mandatory fields and can also include additional optional records. Message size is limited to 1024 bytes for maximum portability across different syslog implementations.
Use the Export to CSV button on an audit screen to download audit logs in a format readable in a spreadsheet program. Use the Copy to Clipboard option on the Event Data widget to copy the selected audit message event record to a text or JSON object file. The JSON object for an audit message contains the version information, the service (the name of the SMB share), a session ID, and the tree connection ID (tcon_id).
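As an illustration only, an SMB audit message object carrying the fields described above might look like the following sketch; the exact field names and any additional records vary by TrueNAS version and event type, so consult the vfs_truenas_audit manual page for the authoritative format:

```json
{
  "vers": { "major": 0, "minor": 1 },
  "service": "myshare",
  "session_id": "1234567890",
  "tcon_id": "987654321"
}
```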
Authentication and other events are captured by the TrueNAS audit logging functions. The event data TrueNAS SCALE logs varies based on the type of event tracked.
Users have access to audit information from three locations in the SCALE UI:
The audit screen includes basic and advanced search options. Click Switch to Basic to change to the basic search function or click Switch to Advanced to show the advanced search operators.
You can enter any filters in the basic Search field to show events matching the entry.
To enter advanced search parameters, use the format displayed in the field, for example, Service = “SMB” AND Event = “CLOSE” to show closed SMB events. Event types are listed in Auditing Event Types.
Advanced search uses a syntax similar to SQL/JQL and allows several custom variables for filtering. Parentheses define query priority. Clicking the advanced Search field prompts you with a dropdown of available event types, options, and operators to help you complete the search string.
For example, to search for any SMB connect or close event from the user smbuser or any non-authentication SMB events, enter (Service = "SMB" AND Event in ("Connect", "Close") AND User in ("smbuser")) OR (Event != "Authentication" AND Service = "SMB").
The advanced search automatically checks syntax and shows done when the syntax is valid and warning for invalid syntax.
Click on a row to show details of that event in the Metadata and Event Data widgets.
Export as CSV sends the event log data to a CSV file you can open in a spreadsheet program (e.g., MS Excel, Google Sheets) or another data management app that accepts CSV files.
The assignment (Copy to Clipboard) icon shows two options, Copy Text and Copy Json. Copy Text copies the event to a text file. Copy Json copies the event to a JSON object.
Configure and enable SMB auditing for an SMB share at creation or when modifying an existing share.
SMB auditing is only supported for SMB2 (or newer) protocol-negotiated SMB sessions. SMB1 connections to shares with auditing enabled are rejected.
From the Add SMB Share or Edit SMB Share screen, click Advanced Options and scroll down to Audit Logging.
Selecting Enable turns auditing on for the share you are creating or editing.
Use the Watch List and Ignore List functions to add audit logging groups to include or exclude. Click in Watch List to see a list of user groups on the system. Click on a group to add it to the list and record events generated by user accounts that are members of the group. Leave Watch List blank to include all groups; otherwise, auditing is restricted to only the added groups.
Click in Ignore List to see a list of user groups on the system. Click on a group to add it to the list and explicitly avoid recording any events generated by user accounts that are members of that group.
The Watch List takes precedence over the Ignore List when using both lists.
Click Save.
To configure Audit storage and retention settings, go to System Settings > Advanced, then click Configure on the Audit widget.
The Audit configuration screen sets the retention period, reservation size, quota size, and the percentage of used space in the audit dataset that triggers warning and critical alerts.
For example, to change the percent usage warning threshold for the storage allocated to the Audit database: