
Software RAID or hardware RAID: which one do you need? Read this article and you'll know.



Posted on 2017-2-16 14:31:12
I wanted to build a NAS using RAID 5. At first I knew the motherboard seemed to have a RAID feature, but after looking into it I found that onboard RAID is actually implemented in software, and its performance is rather poor, especially in a mode like RAID 5.
So I figured I would just use a RAID card instead, expecting a few hundred yuan to cover it. Who knew it really isn't that simple: controller cards that genuinely support RAID 5 well are expensive. I searched around, found the article below, and after reading it I felt relieved and decided to just go with a software-RAID NAS.

Original source: http://serverfault.com/questions ... nce-and-cache-usage

In short: if using a low-end RAID card (without cache), do yourself a favor and switch to software RAID. If using a mid-to-high-end card (with BBU or NVRAM), then hardware is often (but not always! see below) a good choice.

Long answer: when computing power was limited, hardware RAID cards had the significant advantage of offloading parity/syndrome calculation for the RAID schemes that require it (RAID 3/4/5, RAID 6, etc.).

However, with ever-increasing CPU performance, this advantage has basically disappeared: even my laptop's ancient CPU (Core i5 M 520, Westmere generation) has XOR performance of over 4 GB/s and RAID-6 syndrome performance of over 3 GB/s on a single execution core.
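
To make the numbers concrete, here is a minimal Python sketch of the math being benchmarked: plain XOR parity (RAID 5's P block) and the GF(2^8) Reed-Solomon syndrome (RAID 6's Q block, using the same field polynomial as the Linux kernel's RAID-6 code). This is purely illustrative; a real implementation uses heavily optimized SIMD kernels, not byte loops like these.

```python
def gf_mul2(b: int) -> int:
    """Multiply a GF(2^8) element by 2 (the RAID-6 generator)."""
    b <<= 1
    if b & 0x100:
        b ^= 0x11D  # reduce modulo x^8 + x^4 + x^3 + x^2 + 1
    return b & 0xFF

def raid6_syndromes(data_blocks: list[bytes]) -> tuple[bytes, bytes]:
    """Compute P (XOR parity) and Q (RS syndrome) over equal-sized blocks."""
    n = len(data_blocks[0])
    p = bytearray(n)
    q = bytearray(n)
    # Horner's rule: Q = D0 + 2*(D1 + 2*(D2 + ...)), iterating high to low.
    for block in reversed(data_blocks):
        for i in range(n):
            p[i] ^= block[i]
            q[i] = gf_mul2(q[i]) ^ block[i]
    return bytes(p), bytes(q)

if __name__ == "__main__":
    import os
    disks = [os.urandom(4096) for _ in range(4)]  # one stripe, 4 data disks
    p, q = raid6_syndromes(disks)
    # Single-disk failure: XOR of P and the survivors rebuilds the lost disk.
    rebuilt = bytearray(p)
    for k, d in enumerate(disks):
        if k != 2:
            for i in range(len(rebuilt)):
                rebuilt[i] ^= d[i]
    assert bytes(rebuilt) == disks[2]
    print("P/Q computed, single-disk rebuild OK")
```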

The advantage hardware RAID maintains today is the presence of a power-loss-protected DRAM cache, in the form of BBU or NVRAM. This protected cache gives very low latency for random write access (and for reads that hit it) and basically transforms random writes into sequential writes. A RAID controller without such a cache is near useless. Moreover, some low-end RAID controllers not only come without a cache, but also forcibly disable the disks' private DRAM cache, leading to slower performance than with no RAID card at all. Examples are DELL's PERC H200 and H300 cards: unless newer firmware has changed that, they totally disable the disk's private cache (and it cannot be re-enabled while the disks are connected to the RAID controller). Do yourself a favor and never, ever buy such controllers. While even higher-end controllers often disable the disk's private cache, they at least have their own protected cache, making the HDD's (but not the SSD's!) private cache somewhat redundant.
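
A toy model may make the "random writes become sequential" point clearer. The sketch below is not any real firmware, just the idea: writes are acknowledged the moment they land in protected DRAM, then destaged to disk in LBA order, so the HDD sees one mostly sequential stream instead of thousands of seeks.

```python
import random

class WriteBackCache:
    """Toy write-back cache: ack into protected DRAM, destage in LBA order."""
    def __init__(self, limit: int = 64):
        self.limit = limit
        self.dirty = {}                 # LBA -> data, held in protected DRAM

    def write(self, lba: int, data: bytes) -> None:
        self.dirty[lba] = data          # ack here: microseconds, not a seek
        if len(self.dirty) >= self.limit:
            self.destage()

    def destage(self) -> list[int]:
        ordered = sorted(self.dirty)    # ascending LBAs -> sequential I/O
        # a real controller would now write self.dirty[lba] for each lba
        self.dirty.clear()
        return ordered

cache = WriteBackCache()
for _ in range(255):                    # a burst of random 4 KiB writes
    cache.write(random.randrange(1 << 20), b"\0" * 4096)
print("destaged in order:", cache.destage()[:5], "...")
```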

This is not the end of it, though. Even capable controllers (the ones with a BBU or NVRAM cache) can give inconsistent results when used with SSDs, basically because SSDs really need a fast private cache for efficient FLASH page programming/erasing. And while some (most?) controllers let you re-enable the disk's private cache (e.g. the PERC H700/710/710P lets the user re-enable it), if that private cache is not write-protected you risk losing data in case of power loss. The exact behavior is really controller- and firmware-dependent (e.g. on a DELL S6/i with 256 MB WB cache and the disk's cache enabled, I had no losses during multiple planned power-loss tests), which creates uncertainty and much concern.

Open-source software RAID, on the other hand, is a much more controllable beast: its software is not enclosed inside proprietary firmware, and it has well-defined metadata formats and behaviors. Software RAID makes the (right) assumption that the disk's private DRAM cache is not protected but is at the same time critical for acceptable performance, so it typically does not disable that cache; rather, it uses ATA FLUSH / FUA commands to be certain that critical data lands on stable storage. As the disks often run from SATA ports attached to the chipset southbridge, their bandwidth is very good and driver support is excellent.
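
The FLUSH/FUA point is worth pinning down. md issues those commands from inside the kernel; from user space, the analogous durability knobs are fdatasync() and O_DSYNC, as in the minimal Linux-specific sketch below (the file path is made up, and whether the kernel uses an actual FUA write underneath depends on the device).

```python
import os

path = "/tmp/journal.bin"   # hypothetical file standing in for a journal

# Variant 1: buffered writes, then an explicit flush of the volatile caches.
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
os.write(fd, b"commit record\n")
os.fdatasync(fd)            # data must reach stable storage before this returns
os.close(fd)

# Variant 2: O_DSYNC -- each write() returns only once the data is durable
# (the kernel may use FUA writes where the device supports them).
fd = os.open(path, os.O_WRONLY | os.O_DSYNC)
os.write(fd, b"commit record\n")
os.close(fd)
os.remove(path)
```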

However, when used with mechanical HDDs, synchronized random-write access patterns (e.g. databases, virtual machines) will suffer greatly compared to a hardware RAID controller with a WB cache. On the other hand, when used with SSDs, software RAID often excels and gives results even better than what is achievable with hardware RAID cards.

Also consider that software RAIDs are not all created equal. Windows software RAID has a bad reputation performance-wise, and even Storage Spaces seems not too different. Linux MD RAID is exceptionally fast and versatile, but the Linux I/O stack is composed of multiple independent pieces that you need to understand carefully to extract maximum performance. ZFS ZRAID is extremely advanced, but if not configured correctly it will give you very poor IOPS. Moreover, it needs a fast SLOG device for synchronous write handling (the ZIL).
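
A quick way to see those "multiple independent pieces" on a Linux MD box is to inspect the standard sysfs/procfs knobs, as in this sketch (/dev/md0 is an assumed array name; adjust for your system):

```python
from pathlib import Path

def show(path: str) -> None:
    p = Path(path)
    body = p.read_text().strip() if p.exists() else "(not present)"
    print(f"{path}: {body}")

show("/proc/mdstat")                            # arrays, levels, sync state
show("/sys/block/md0/md/level")                 # RAID level of md0
show("/sys/block/md0/md/stripe_cache_size")     # RAID5/6 stripe cache, in pages
show("/sys/block/md0/queue/read_ahead_kb")      # readahead on the md device
# Raising the stripe cache (as root) often improves RAID5/6 write throughput:
#   echo 8192 > /sys/block/md0/md/stripe_cache_size
```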

Bottom line:

- if your workload is not sensitive to synchronized random writes, you don't need a RAID card;
- if you do need a RAID card, do not buy a RAID controller without a WB cache;
- if you plan to use SSDs, software RAID is preferred. My best choice is Linux MD RAID. If you need ZFS's advanced features, go with ZRAID, but think carefully about your vdev setup!
- if, even with SSDs, you really need a hardware RAID card, use SSDs with write-protected caches (Micron M500/550/600 have partial protection - not really sufficient, but better than nothing - while the Intel DC and S series have full power-loss protection, and the same can be said for enterprise Samsung SSDs);
- if you need RAID 6 and will use normal mechanical HDDs, consider buying a fast RAID card with 512 MB (or more) of WB cache. RAID 6 carries a high write-performance penalty, and a properly sized WB cache can at least provide fast intermediate storage for small synchronous writes (e.g. the filesystem journal);
- if you need RAID 6 with HDDs but can't or don't want to buy a hardware RAID card, think carefully about your software RAID setup. For example, a possible solution with Linux MD RAID is to use two arrays: a small RAID 10 array for journal writes / DB logs, and a RAID 6 array for raw storage (as a file server), as sketched after this list. On the other hand, software RAID 5/6 with SSDs is very fast, so you probably don't need a RAID card for an all-SSD setup.
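
For that last two-array MD layout, the mdadm invocations might look like the following sketch, here driven through Python for consistency with the other examples. Device and array names are assumptions, and --create destroys existing data, so adapt before running.

```python
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Small RAID 10 on four partitions for journals / DB logs (low-latency sync writes).
run(["mdadm", "--create", "/dev/md0", "--level=10", "--raid-devices=4",
     "/dev/sda1", "/dev/sdb1", "/dev/sdc1", "/dev/sdd1"])

# Large RAID 6 on the remaining capacity for bulk file storage.
run(["mdadm", "--create", "/dev/md1", "--level=6", "--raid-devices=6",
     "/dev/sda2", "/dev/sdb2", "/dev/sdc2", "/dev/sdd2", "/dev/sde2", "/dev/sdf2"])
```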
