No truth at all to this; Linux software RAID is far better in all aspects.
I'm speaking from experience. On paper, software RAID looks great, and we've had servers run it with no problems. My point is that WHEN a drive fails, software RAID is generally slower and less reliable at recovering.
When a disk partially fails in a software RAID array, the OS keeps trying to access the damaged sectors. This stalls disk I/O, and only after a much longer timeout than with hardware RAID is the disk finally dropped. If the server is rebooted, the delay happens all over again.
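If you're running Linux md RAID, you can at least spot the degraded state quickly by checking `/proc/mdstat`, where a failed mirror member shows up as an underscore in the status brackets. A minimal sketch (using sample `/proc/mdstat` text in place of the live file; array and device names are illustrative):

```shell
#!/bin/sh
# Sketch: detect a degraded md array by parsing /proc/mdstat-style output.
# A healthy 2-disk mirror shows "[UU]"; a failed member shows "[U_]".

# Sample text standing in for the real /proc/mdstat:
mdstat_sample='md0 : active raid1 sdb1[1] sda1[0]
      976630464 blocks super 1.2 [2/2] [UU]'

degraded_sample='md0 : active raid1 sda1[0]
      976630464 blocks super 1.2 [2/1] [U_]'

check() {
  # Look for an underscore inside the member-status brackets.
  if printf '%s\n' "$1" | grep -q '\[[U_]*_[U_]*\]'; then
    echo "DEGRADED"
  else
    echo "OK"
  fi
}

check "$mdstat_sample"    # prints OK
check "$degraded_sample"  # prints DEGRADED
```

In practice you'd feed `check` the contents of `/proc/mdstat` from a cron job or monitoring agent, so a degraded array gets noticed before the I/O stalls do.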
By comparison, a hardware-based RAID setup will drop the disk much earlier, with no delay on reboot.
It's all about the recovery time. As far as I'm concerned, if recovering my data is going to take a couple of hours on software RAID versus half an hour on hardware, I'll take hardware. Then you've got the CPU usage to take into account as well, although on a quad-core 2.5 GHz box you'll probably barely notice it.
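Where those hours come from is simple arithmetic: rebuild time is roughly array-member size divided by sustained rebuild throughput. A back-of-envelope sketch (the 1 TB size and 80 MB/s rate are illustrative assumptions, not measurements; real rates depend on RAID level, load, and controller):

```shell
#!/bin/sh
# Back-of-envelope rebuild-time estimate with assumed numbers:
# a 1 TB member rebuilt at a sustained 80 MB/s.
size_mb=1000000      # 1 TB expressed in MB (assumption)
rate_mb_s=80         # assumed sustained rebuild rate in MB/s
seconds=$((size_mb / rate_mb_s))
minutes=$((seconds / 60))
echo "${minutes} minutes"   # prints "208 minutes", i.e. roughly 3.5 hours
```

A dedicated controller that rebuilds faster, or while the host CPU is busy elsewhere, shrinks that window accordingly.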
So there's no truth in saying software RAID is better 'in all aspects'. Hardware RAID has the benefit of lower CPU load and faster recovery times. Then you've got the BBU on a hardware setup, reducing the risk of pending writes being lost during a power failure. Then there's hot swapping: if a drive goes bad in a hardware array, assuming you've got it set up correctly, it's a case of swapping out the bad drive, with no downtime. You can't do that with software; you can only use a hot spare.
So if performance, uptime, and reliability matter to you, hardware is the better option.