> It was always intended to be a very loose rule of thumb (as @jgreco, who IIRC created it, can attest)
I certainly did not create the root of that rule; it came from earlier ZFS.
However, I substantially expanded and loosened it, and I have spent a huge amount of time explaining it over the years. The "rule" only exists to get people into the right ballpark. There's nothing specific about ZFS where it will start drowning puppies if you don't have 1GB of memory per 1TB of storage, but if you try to run a 30TB pool with only 8GB of RAM, you'll find that there's a lot of memory contention between the ARC, desperately trying to keep pool metadata (free space, etc.) in core, and the middleware, trying not to be swapped out, and performance tanks. (We also saw corruption and kernel panic issues early on with severe mismatches, especially on 4GB and 6GB systems.)
But if you give that 30TB of storage 32GB of RAM, what does the "30TB" mean? Does it mean 6 x 5TB HDD (30TB of raw space)? Does it mean 8 x 5TB HDD in RAIDZ2 (30TB of pool space)? Does it mean 30TB == 80% of a 38TB pool? Does it mean 30TB worth of stored data on a 100TB pool?
It could plausibly mean any of those things, but some of them will be more performant than others, and once you're out at 32GB of RAM, all of them are likely to be workable for at least some workloads. For enterprise use, where someone with a checkbook simply needed to know what to buy, I always encouraged them to use the raw disk interpretation, because that would give them the best experience, while budget hobbyists were told that they could take a less expensive option if they needed to. This overall description and explainer is definitely something I created.
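
To make the ambiguity concrete, here's a quick back-of-the-envelope sketch of my own (just illustrative Python, not anything official): it takes the hypothetical 8 x 5TB RAIDZ2 layout from above and shows what a "roughly 1GB of RAM per 1TB" reading suggests under each interpretation of "capacity". The 80% fill figure and the amount of data stored are made-up assumptions.

```python
# Illustrative sketch: same hardware, different readings of "TB",
# different RAM figures from the 1GB-per-1TB rule of thumb.

drives = 8
drive_tb = 5
parity_drives = 2                                # RAIDZ2 gives up two drives' worth of space

raw_tb = drives * drive_tb                       # 40TB of raw disk
pool_tb = (drives - parity_drives) * drive_tb    # 30TB of pool space
fill_80_tb = pool_tb * 0.8                       # 24TB if you respect an ~80% fill guideline
stored_tb = 18                                   # e.g. 18TB of data actually written (made up)

readings = {
    "raw disk":        raw_tb,
    "pool space":      pool_tb,
    "80% of the pool": fill_80_tb,
    "data stored":     stored_tb,
}

# Rule of thumb: roughly 1GB of RAM per 1TB counted.
for label, tb in readings.items():
    print(f"{label:>16}: {tb:5.1f} TB -> suggests ~{round(tb)} GB of RAM")
```

The point isn't the exact numbers; it's that the same box comes out anywhere from roughly 18GB to 40GB depending on which "TB" you count, which is exactly why the raw disk interpretation was the one I pushed on buyers who just needed a safe answer.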