Network config is a joke.

Status
Not open for further replies.

Lightning

Dabbler
Joined
Nov 16, 2021
Messages
11
I wrote much of this extremely angry, but hopefully not disrespectfully. Regardless, let me just say that disrespect is not my intent. Okay, moving on. Skip to the bottom for a direct question.

First things first: I have been in the IT field in one form or another for 10 years professionally, and have been trying FreeNAS/TrueNAS on and off over its life span for a good chunk of that. To date I have NEVER gotten an install working reliably enough for me to trust any data, even junk, to it. That said, I figured now (with SCALE in Release Candidate status) would be a good time to test my assumption that BSD, or my unfamiliarity with it, was the shortfall and that TrueNAS was actually a solid product. So far with SCALE (being based on Linux, which I am far more familiar with) I am switching my assumption from "I was the problem" to "TrueNAS? More like TrueTRASH!" I KNOW it is impossible that a company and user base of this size could be promoting this if it really were this bad. So I have come to the forums requesting aid in showing either that I am unfit to be in my field OR that I am using this application far above and beyond its (pathetically low, if that is the case) use target.

Tech Specs:
Before I start getting hammered with "this is broken" or "that is broken": I KNOW my hardware is 100% good, as I spent over 4 weeks testing to make sure it all works. I am fairly certain my hardware will have no impact on the solution to the issue, but here are the specs:
Dell Poweredge R320 Chassis (4x3.5 HDD Config)
Intel(R) Xeon(R) CPU E5-2450L 0 @ 1.80GHz
192GB of DDR3 quad-rank RAM @ 1066 MHz in 6 DIMMs
2 SAS controllers:
LSI 9207-8e SAS HBA (IT Mode)
Dell H310 Mini (IT Mode)
One Intel x520-da2

The H310 has no drives attached. (Future use)
The 9207 has 42 3TB drives attached in 3 pools. (4x6 in rz2 "media" | 2x6 in rz2 "general" | 1x6 in rz2 "iscsi" [Used for a steam library])
Note: I got the pools working with another OS, as I am certain at this point that TrueNAS will damage them, given its shoddy behavior elsewhere.

I also have several other boxes with similar-ish specs that host my VM infrastructure. Their specs will not be listed as it is not relevant unless I can get to the point that I can actually start working with shares.

Okay now to the problem itself.
I install TrueNAS SCALE, and the first thing I go to do is set the IPs correctly. I have a multi-level network. All management interfaces are on their own vLAN, so I disable DHCP on the 2 connected 10G SFP+ interfaces (DAC cables) on the Intel x520 and set one of them to the static IP the management UI should be using. I hit "test settings" (or whatever it says), and nothing. All access to the box is lost. It takes the full 60 seconds to time out, and then some, and then comes back up as it was before. For about one day I BRIEFLY had the setting working while I was out using a VPN, so I assumed something funky was happening with my routing, but when I returned home to my lab and tested, everything worked as it should except the TrueNAS box. No routing tomfoolery present. So I checked the switch: the port has the correct vLANs on it, and the correct one set as native. I made sure the pfSense box was passing traffic along properly and blocking things that were NOT supposed to be bouncing around.

One thing I did notice that was extremely strange, and that led me to believe TrueNAS was the issue: the system worked out of the box (not a good thing in this case) when BOTH of the ports were plugged in, but with only one it failed. My first remark is "What the <expletive> is this <another expletive>?" My next is more a question: can TrueNAS even HANDLE multi-interface installs? Over all the years I have tried FreeNAS/TrueNAS I have ALWAYS had 2 or more NICs. It's going to be a storage server, after all, and I expect to push a significant amount of data through it, and I wanted at the very least my management UI on a separate interface so it would not be slowed down by the application actually doing its job. I guess this all boils down to a single question, but I decided to explain how I arrived at asking it, as it seems stupid to me without context.


If you don't care about the "how I got here" part, here is my question:
Can TrueNAS even handle more than a single network interface?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Can TrueNAS even handle more than a single network interface?

Yes. Many of us here run multiple interfaces. However, you might've gotten your sequencing wrong. If you're going to configure multiple interfaces, you should be doing so from IPMI out of band. Otherwise, you risk disconnecting yourself by working entirely in-band. This is basic system management 101.
 

Lightning

Dabbler
Joined
Nov 16, 2021
Messages
11
First before anything else, thank you.

Although you are normally correct, the TrueNAS documentation itself states that any changes made from the command line (and last time I checked, both IPMI and the local display get you a command line) are lost on reboot or upgrade. I actually found a few threads going back to the middle of last year where you even stated that changes made outside the GUI are lost on reboot. This is not to discredit your statement, because I know things change, so I tried it via IPMI, and to my surprise I DID have limited success. Not where I want to be yet, but in the right direction, I hope, so thank you for that. I would have assumed the old info was still correct. I do still find the idea of having to set it in the CLI and then reset it in the GUI to make the changes persistent a little stupid, but that is just me, I guess.

I proceeded to test the crap out of every possible set of interfaces I could use, and I encountered an issue that I can't wrap my head around. I have added an interface on the vLAN I am working on (same as my working PC), but trying to navigate to the interface I have set for the web UI refuses to connect. When I check my networking equipment for packet movements, the stupid box is trying to force-feed packets out a different interface than it is receiving them on. I do not remember ever setting something up to be controlled on a different vLAN while having an access interface on my working/core vLAN, but I have similar setups crossing other vLANs and never encountered this before. When testing with other vLANs I get similar results with TrueNAS. Is there a way to lock this down so that packet streams can only use the same interface they were initiated on? This seems too stupid to be a real problem. I have to be missing something.

Again, thank you for your time and patience.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
As a rule of thumb, random CLI commands aren't saved to the configuration database. However, the console menu, which is reachable from IPMI, and API CLI calls (e.g., midclt call xx.yy) do make changes to the configuration database which persist between reboots.
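For example, a persistent static-IP change through the API CLI might look like the sketch below. This is from memory of the SCALE middleware API; the service names (`interface.update`, `interface.commit`, `interface.checkin`) and argument shapes may differ between releases, and the interface name is a placeholder, so verify against `midclt call interface.query` on your own build first.

```shell
# List interfaces as the middleware sees them (these entries live in the
# config database, so changes made this way persist across reboots)
midclt call interface.query | jq '.[].id'

# Stage a static IPv4 address on one interface -- "enp3s0" is a placeholder
midclt call interface.update enp3s0 '{"ipv4_dhcp": false, "aliases": [{"type": "INET", "address": "10.31.200.201", "netmask": 24}]}'

# Apply with an automatic rollback window, then check in from a working
# session to make the change stick; miss the window and it reverts
midclt call interface.commit '{"rollback": true, "checkin_timeout": 60}'
midclt call interface.checkin
```

The rollback/checkin dance is the same safety net the GUI's "Test Changes" button provides, which is why doing this over IPMI is far less nerve-wracking than doing it over the interface being reconfigured.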

You'll have to diagram what you're trying to set up, because I'm unsure from your description what your packet flow is supposed to be.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
To clarify, the console runs a TrueNAS supplied scrolling text menu, which includes an option for configuring the network.

There is also an option to drop to Shell in that menu. That's useful for testing your configuration after you make your network changes.
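Assuming SCALE's Debian base, a couple of standard iproute2 commands from that Shell will confirm the basics before anything else gets blamed (the gateway address below is a placeholder for your own):

```shell
# One line per interface: state and addresses -- confirms the static IP took
ip -br addr show

# Confirm reachability of the gateway on the management vLAN
# (substitute your actual gateway address for the placeholder)
ping -c 3 192.0.2.1
```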
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Can TrueNAS even handle more than a single network interface?

It sure can. Here, I use 5 physical interfaces and 7 logical ones.

You need to be aware of things like routing, dual stacking, bridges, VLANs and more to do this correctly. These concepts may be too advanced for many, so be extra careful. Also, depending on what you are trying to do, you could end up with something that works but is unsafe.

42 3TB drives attached in 3 pools. (4x6 in rz2 "media" | 2x6 in rz2 "general" | 1x6 in rz2 "iscsi"

Surely not ideal... It is very rare that you benefit from multiple pools. If you do not need different pool-level options, you should have a single pool with separate datasets instead.

Also, Raid-Z2 for iSCSI is not a good idea either.

Considering that all your vdevs are RAID-Z2 and the same size, they really would have been better in a single pool.

I have a multi level network. All management interfaces are on their own vLAN

So you probably have a problem with routing for the return packets...

that the pfSense box was passing traffic along properly and blocking things that were NOT supposed to be bouncing around

Good to have such a firewall in front of your TrueNAS.

If you are comfortable with packet capture, I would suggest you do it from that pfSense box, on the interface facing TrueNAS, and see what happens when you try to reach it and it fails...
 

Lightning

Dabbler
Joined
Nov 16, 2021
Messages
11
Okay so I took an extra day to double check what I was seeing and to make sure the rest of my systems were set correctly.
You'll have to diagram what you're trying to set up, because I'm unsure from your description what your packet flow is supposed to be.
My PC -vLAN 301-> Switch -vLAN 301-> Router -vLAN 401-> Switch -vLAN 401-> TrueNAS
TrueNAS -vLAN 301-> Switch -vLAN 301-> My PC
is what is currently happening. This should not be so. The return traffic should take the first half of the path in reverse, as the TrueNAS box should be responding from the same address it received on. It receives on address 192.168.251.201 and sends from 10.31.200.201. Not at all normal (at least in my past 10 years of experience).
Note: 10.31.200.201 exists for storage shares only, not web UI access.
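On SCALE's Linux base you can ask the kernel directly which interface and source address it will pick for a given destination, which makes this kind of asymmetry easy to pin down. The client address below is a placeholder; substitute the PC's actual IP:

```shell
# The full routing table -- a single default route will pull every
# off-subnet reply toward one interface, regardless of where the
# request came in
ip route show

# Ask the kernel how it would reach the client PC (placeholder address);
# if this prints a dev/src pair on the wrong interface, that's the answer
ip route get 192.0.2.10
```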

To clarify, the console runs a TrueNAS supplied scrolling text menu, which includes an option for configuring the network.
Noted

Considering that all your vDev are Raid-Z2 with the same size, they would really have been better in a single pool.
No, not in this case. Each pool is made up of a different QUALITY of drives. I EXPECT the 1x6 to fail sooner rather than later, and as such it is only running a Steam library. (Some games do not work with non-local storage, so iSCSI is my only option.)
For the other 2 pools, one is a media library made of drives I have no experience with, and I would not be significantly harmed if I lost it. The other is a smaller set of drives I know are in good condition, that I have experience with, and that I am comfortable trusting higher-value data to.
As I said in my first post, I have never gotten to a point where I trusted ANY kind of data to FreeNAS/TrueNAS. So I am walling off the potential failure points in case the software is as poor as my previous experiences have led me to believe. You may think this an overly cautious or foolish approach, but I gain little from making a larger pool, and I gain quite a lot if my assumptions are correct and the lower-quality drives go bad and TrueNAS does not flag them early.

I probably should have stated that although there are 42 equal-sized drives, they are not identical drives, and they are broken up along model lines: 24 WD Purple, 12 WD Green (perfect condition), 6 WD Green (good but not perfect condition). I can tell the difference between drive failure and software failure, so I am not putting any really good drives in until I know TrueNAS will not shred them. (Again, not a good experience in years of trying, so I am being cautious.)

Good to have such a firewall in front of your TrueNAS.
It is not exclusive to the TrueNAS box; not sure if I was insinuating otherwise. It is pulling double duty as the router and firewall for my entire home network/workshop (not commercial, hobby).


If you are comfortable with packet capture, I would suggest you do it from that pfSense box, on the interface facing TrueNAS and see what happen when you try to reach it and it fails
I am, and I have. The returns from TrueNAS are not sent back the way they came. TrueNAS is trying to be "smart", I guess, thinking it knows a better route because different traffic can take another path?



Also thank you all again. I feel like I am getting better help than I do for commercial products that have paid contracts at my job.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
The returns from TrueNAS are not sent back the way they came.

Ok, so your problem is return packets.

One option is to NAT the management traffic on your pfSense box before sending it on to TrueNAS. That way, the reply from TrueNAS has to reach an IP that is directly connected, so it will not use its default gateway for that.

Another option is to deploy a management system directly in your out-of-band management network. That way, again, the reply is sent to a directly connected IP, so no routing.

Policy-based routing (routing based on the source and not the destination) exists, but it creates more problems. I would not recommend going in that direction.

Finally, filtering access to your TrueNAS management interface over its main IP is another option. You can run a VPN on your pfSense box and allow only IPs from that VPN to connect to TrueNAS on its management port; other IPs would be allowed only on the file-service ports. Such a VPN can enforce certificate-based authentication, 2FA using a RADIUS service like Duo, or expose an SSH port that requires a specific RSA key to connect. Here, I use all of these strong authentication methods for different purposes, and they all work great.
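For completeness, on SCALE's Linux underpinnings that policy-based approach would look roughly like the sketch below (the gateway is a placeholder). Note also that raw `ip` commands like these are exactly the kind of CLI change that does not survive a TrueNAS reboot, which is one more reason I do not recommend it:

```shell
# Replies sourced from the storage address consult table 100 instead of
# the main routing table
ip rule add from 10.31.200.201 lookup 100

# Table 100 sends that traffic back through the vLAN 401 gateway
# (placeholder address -- substitute the pfSense interface on that vLAN)
ip route add default via 192.0.2.1 table 100
```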

Hope this gives you a few ideas about how you can achieve what you are looking for.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
My PC -vLAN 301-> Switch -vLAN 301-> Router -vLAN 401-> Switch -vLAN 401-> TrueNAS
TrueNAS -vLAN 301-> Switch -vLAN 301-> My PC
is what is currently happening. This should not be so. The return traffic should take the first half of the path in reverse, as the TrueNAS box should be responding from the same address it received on. It receives on address 192.168.251.201 and sends from 10.31.200.201. Not at all normal (at least in my past 10 years of experience).
Note: 10.31.200.201 exists for storage shares only, not web UI access.

What's the routing table on your TrueNAS box?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It sure can. Here, I use 5 physical interfaces and 7 logical ones.

And that's actually trite. I think at the peak here we had a system with two separate LACP links, each serving its own subset of dozens of vlans (there are well north of a hundred vlans here). BSD networking is the cornerstone of a number of networking devices, and even what I describe here is not a LARGE network.

You need to be aware of things like routing, dual stacking, bridges, VLANs and more to do so correctly.

Improper network design is the typical issue when things don't seem to be working right. You really do need a properly designed network, and the improper introduction or configuration of firewalls, NAT, gateways, VLANs, etc., can hose you over very quickly. It seems clear from the discussion that you have some issues in the network, but I don't have any better guesses as to what they might be than the feedback the other members have already provided.

First, a comment. Reading this:

If you dont care about the "how I got here" part, here is my question:
Can TrueNAS even handle more than a single network interface?

had me mumbling to myself, "I can't even think of an example where I have it running on a single network interface."

Secondly, you might want to review


That may not be directly your problem, but the conceptual issues in how the network stack works at an abstract level are probably relevant. Many people are shocked to find that if they have:

PC at 192.168.1.10 defaultgw 192.168.1.1
NAS at 192.168.1.11 and 10.1.1.11
Router at 192.168.1.1 and 10.1.1.1

If the PC sends to 10.1.1.11, the traffic will go to the router because of the defaultgw; the router will send it to 10.1.1.11 because that is on a directly connected interface; and the NAS will return the packet "FROM" 10.1.1.11 out its 192.168.1.11 interface directly to 192.168.1.10, because 192.168.1.10 appears as a directly connected route.

And this is CORRECT. It is what is supposed to happen.
 

Lightning

Dabbler
Joined
Nov 16, 2021
Messages
11
BSD networking is the cornerstone of a number of networking devices, and even what I describe here is not a LARGE network.
I hate to be a prick, but in my years BSD has been the root cause of too many issues for me to count. More so than Windows, shocking as that might be to say. This is exactly why I mentioned SCALE in my original post:
So far with SCALE (being based on Linux which I am far more familiar with)
I am not using CORE; as far as I can tell, CORE is the BSD-based variant and SCALE is roughly based on Debian. I am nowhere near as comfortable on BSD as I am on the various *nixes, but I am fairly certain the networking of the two is different.

To your next point,
If the PC sends to 10.1.1.11, the traffic will go to the router, because of the defaultgw, the router will send to 10.1.1.11 because it is on a directly connected interface, and the NAS will return the packet "FROM" 10.1.1.11 out its 192.168.1.11 interface directly to 192.168.1.10, because 192.168.1.10 appears as a directly connected route.

And this is CORRECT. It is what is supposed to happen.
What is NOT correct is sending FROM 192.168.1.11 to the 192.168.x.x network in response to getting packets from the 10.x.x.x network. That is what this thing is doing. I find THAT problematic. It is responding from a different address on a different network, which is why the "client" is dropping the response packets, which are not, in fact, responses.

As an intermediate bandage I removed all but 2 interfaces: one for management and one for share access, which is not on the same vLAN as the PC I will use for testing. If everything else works I will take the time to revisit this and properly fix it, if it even can be fixed. I will, however, take a look at that thread now and see if it points me to any issues with the rest of my systems (I just finished rebuilding half of them, so I really hope not).

I decided to connect the pools and just see if I could even get something to share. Given my poor luck so far, I was not expecting much. To my complete non-surprise, TrueNAS fragged all three pools shortly after import. "Fragged" meaning it wrote nothing but crap data to them and failed them all out. I pulled the boot SSD, reinserted the one with my testing OS on it, and it picked up the pools and showed all disks as working (except the shredded data; and yes, I did wipe one of them and run some test data onto and off of it to be sure). I have removed the drives for the "important" pool and put them in a different box entirely to repair (read: restore from backup) overnight, and physically disconnected the media pool to avoid more headache when this continues to go south.

I do want to get to the bottom of this, if for no other reason than to be absolutely confident in my hatred of TrueNAS. That being said, I will re-download and re-install to see if I just had a f***ed up ISO before. I am also floating the possibility that part of my disk shelf may be damaged, or maybe a bad cable, or even a slightly loose SAS port. I know I said I tested it thoroughly in my original post, and I did, BUT I do move the rack everything is in to get easier access to part of it when working on it (half-height rack on rollers). Because it moves, AND because the disk shelf is a 4U box, I am going to test whether that movement shifts parts in the shelf and causes electrical connectivity issues. Time to stay up all night testing while slowly shifting the rack....... yay


Again I will say it: I do appreciate the help. My seemingly apathetic demeanor stems from the fact that this is just one macro problem in a very long line of problems I have been chasing down for well over a year now. I am just worn down, and was hoping this would not be the same gear-grindingly difficult task that everything else seems to be.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I hate to be a prick but in my years BSD has been the root cause of too many issues for me to count. More so than Windows as shocking as it might be to say. This is exactly why I mentioned SCALE in my original post

And yet, Linux, Windows, and OS X networking all evolved out of BSD, and it powers all sorts of stuff including that pfSense box.

What is NOT correct is sending FROM 192.168.1.11 to the 192.168.x.x network in response to getting packets from the 10.x.x.x network. That is what this, thing, is doing. I find THAT problematic. It is responding with a different address on the different network, which is why the "client" is dropping the response packets, that are not in fact responses.

That sounds like you have some broken NATing going on, and you may not be aware of what the actual ingress address is. "In response to getting packets from the 10.x.x.x network" may indicate a misunderstanding on your part. A general-purpose UNIX host generally doesn't care where traffic originates; ingress traffic via em0 on 192.168.1.11 and ingress traffic via em1 on 10.1.1.11 are handled identically. What matters is the source IP address. It is entirely possible for traffic to enter the NAS via the 10.1.1.11 interface from the 192.168.1.0/24 network and then have the reply return to the 192.168.1.0/24 host directly, because that appears as a connected route. But it is also possible for that traffic to have been NATed, or to have had other things happen to it on the way in, that cause strange-looking results, and I'm sort of suspecting you have something along those lines happening.

I decided to connect the pools and just see if I would even get something to share. Given my poor luck thus far with things working I was not expecting much. To my non surprise TrueNAS fragged all three pools shortly after import. Frag meaning that it wrote nothing but crap data to them and failed them all out. I pulled the boot SSD and reinserted the one with my testing OS on it and it picked up the pools and showed all disks as working (except the shredded data, and yes I did wipe one of them and run some test data onto and off of it to be sure). I have removed the drives for the "important" pool and put them in a different box entirely to repair (read as restore from backup) over night and disconnected the media pool physically to avoid more headache when this continues to go south.

This simply doesn't make much sense. ZFS itself is highly allergic to corrupting data, and most data-corruption issues come down to bad hardware. We usually see these kinds of problems when people are using RAID controllers or incorrect firmware. RAID controllers have a bad habit of corrupting things for various reasons documented elsewhere, and even IT-mode controllers can hose things up if the proper firmware isn't used. ZFS does have an Achilles heel in that it HAS to have reliable I/O to the disks, and most corruption comes as a result of design failures there. Most of the time this is easily identified by a console spew of horrid-sounding kernel messages, but it might be a good time to review some underlying assumptions here, such as whether your HBAs are actually crossflashed to IT mode and running firmware 20.00.07.00-IT.
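Checking that is quick if you have the LSI/Broadcom `sas2flash` utility available (it is a separate download, and the controller index below assumes a single adapter):

```shell
# Enumerate SAS2-generation LSI controllers with firmware/BIOS versions
sas2flash -listall

# Detail for the first adapter -- firmware should read 20.00.07.00 and
# the product should identify as IT (initiator-target) mode, not IR
sas2flash -list -c 0
```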

There are literally billions of aggregate problem-free run-hours of FreeNAS and TrueNAS on the LSI HBAs, but that is absolutely dependent on running the right firmware, etc. The reason FreeNAS and TrueNAS are so popular as a high-end storage solution is that they work so damn well. However, you really do have to cross all the t's and dot all the i's to get that blissful experience.
 

Lightning

Dabbler
Joined
Nov 16, 2021
Messages
11
You know the saying about Murphy's law? Anything that can go wrong will go wrong, and at the worst time? That is my life. I am 100% certain that you are 100% correct in everything you say, but damned be me, because I am me. That being said, I am going to strip everything to the studs and start from scratch, including taking everything out of the rack and reassembling that too (it was creaking when I moved it last). I did check the cards, as I had a rare lightbulb moment: they are running the right firmware. The other OS showed things working correctly, so I am leaning toward the physical rack movement and a bad install ISO. I will be back in a day or two to report. I will also look at routing while I am at it. Might as well rip everything down and start over if something is broken at the foundation.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
You know the saying about Murphy's law? Anything that can go wrong will go wrong, and at the worst time? That is my life.

We've all been there at some point or other in our IT careers. Good luck with your rebuild.
 

Lightning

Dabbler
Joined
Nov 16, 2021
Messages
11
So I ripped out and rebuilt basically everything, recreated most configs manually, and installed a clean copy of TrueNAS from a checksum-verified ISO. I also said to hell with it and rebuilt my AD domain in case that was adding weirdness somewhere; unlikely, but I am trying to rule everything out. The network issue is still there, but I am far less annoyed about it now than before, as it is inconsequential compared to the drives constantly erroring out.

I emptied the rack and disassembled and reassembled it as best I could. It is made from pieces, so being perfectly square and immovable is just not possible unless I buy/rent a welder and make it no longer bolted pieces. I squared everything up, gently moved it into place, and loaded everything in. Re-squared and did basic cabling. After bringing up the essentials and the TrueNAS box along with the disk shelf, everything "seemed" better, with all pools showing green. I removed everything from all of them and loaded "junk" (not important, but still real data) onto the two smaller ones just to see if things would work. It took the approximately 400GB of data and reported no errors. Ran a pool scrub: no errors. I took a rest and left it alone for a few hours. When I came back it was riddled with errors, and approximately 70% of the drives in the 2 smaller pools were degraded or faulted according to TrueNAS. I don't get it. A) Why did the data write and make the initial read with no errors, then proceed to error out later? And B) WTF was TrueNAS doing when nothing was interacting with it? Why was it interacting with the drives at all?
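For the record, the degraded/faulted states and per-drive error counts I am quoting come straight from ZFS; roughly what I have been running (the pool name is mine):

```shell
# Pool health with per-device read/write/checksum error counters and a
# list of any files hit by unrecoverable errors
zpool status -v

# Re-run a scrub on one pool and then watch its progress
zpool scrub general
zpool status general
```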

I found some forum threads elsewhere when digging up the few errors I could pull from the local screen (everything was locked up and required a hard reset to regain any form of access; even the local console was hosed), and they pointed at faulty cables and such. If that were the case, I should have expected issues when running my initial tests on the drives, no? I am going to pull all of the drives marked degraded or faulted and try to make a pool of the remaining good drives, but specifically put them in the slots the bad drives were in, to see if they too error out. If so, then I know I somehow got a bad backplane between the full tests and now. If they work, then I guess ZFS or TrueNAS just can't handle some drives.

I have been searching around for information about my drives to see if anyone else has issues with them, and it seems WD Greens are hit and miss: some people have no issues and others have numerous issues. WD Purples seem to have fewer users, but of those who report their experiences, fewer report major issues. So maybe my drives are part of the issue? Not that the drives are bad, but that they just do not play nice? It would be a shame, as I got these at a really good deal and planned to upgrade slowly over time. I don't know about any of you, but at $300+ a drive, buying 42 drives outright is not financially viable.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Greens and Purples aren't recommended for NAS applications due to their lack of TLER, which is probably what you're seeing with the errors. (See https://www.truenas.com/community/t...een-with-wdidle-set-to-disabled-drives.11280/.) Also, it's recommended to burn in your drives before using them to build a pool. See https://www.truenas.com/community/resources/hard-drive-burn-in-testing.92/.

Drives that are known to work well are the EFRX/EFZX Reds (avoid the EFAX Reds, which are SMR) and IronWolfs. Golds and Heliums also work, but are usually too pricey to be cost-effective. HGSTs also have a decent track record.
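You can check whether a given drive supports TLER (SCT Error Recovery Control) directly with smartctl; many Greens simply refuse the set command, and on drives that do accept it the setting typically doesn't survive a power cycle (device name below is an example):

```shell
# Read the current SCT error-recovery (TLER) setting; drives without
# support will report the command as failed or unsupported
smartctl -l scterc /dev/sda

# Try to cap read/write error recovery at 7 seconds (units are 100 ms)
smartctl -l scterc,70,70 /dev/sda
```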
 
Last edited:


Lightning

Dabbler
Joined
Nov 16, 2021
Messages
11
due to lack of TLER
Actually (I hate starting sentences that way), the spec sheet for the Purples I have states that they DO support TLER. (I can't get to WD's site right now, so this one will have to do. Will update when I get a direct link to WD's.)
This might very well explain my issues with the Greens, though, and why the Purples have not errored out yet. I found similar data to your first link about that, but also found more recent data stating that TLER is not an issue anymore. I'll take the safe route and abandon the Greens for now; further down the thread it mentioned a way to enable TLER on the Greens, so I may circle back when I get to the end of this and try that, to see if I can get them working again.

Also, it's recommended to burn in your drives before using them to build a pool. See https://www.truenas.com/community/resources/hard-drive-burn-in-testing.92/.
This was what my next question was going to be about, so thank you very much for being psychic and giving it to me beforehand.

I knew about this from long ago and did it before testing the drives. But thank you, and I will review this to be sure I did it right.
 

Lightning

Dabbler
Joined
Nov 16, 2021
Messages
11
So after thoroughly reading through the burn-in test thread, I can safely say I did this (and more) during my 4-week testing period before I even touched TrueNAS. So according to that, the disks are fine. This leaves me with A) TrueNAS just not playing nice with those disks, or B) TLER. I am, again, going to take the "safe" route to a working system and abandon the Greens for now. Once I get a fully working system I will circle back and revisit whether I can successfully enable TLER on the Greens. (I will probably move these drives to a different system regardless, but it would be good to know the cause of the problem.)
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
This may sound like a basic question, but did you check the SMART data on any of the drives that were giving you errors? Also, it would be quite helpful if you could post the actual errors generated, so others can help with the diagnosis.
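Something like this per drive, if it helps (the device name is an example):

```shell
# Full SMART identity, attributes, and error log for one drive
smartctl -a /dev/sda

# The attributes that usually precede real failures; a climbing UDMA CRC
# count in particular points at cabling/backplane rather than the disk
smartctl -A /dev/sda | grep -Ei 'reallocated|pending|uncorrectable|crc'
```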
 