rchisholm wrote:I know 1680's with the BIOS versions before 1.47 had problems with I/O under load, but have you had problems with them on 1.47 and the max payload size for the PCI-E slots set to 128? I did some very extensive testing with IOPS reaching over 100k for extended periods of time on these cards using 12 Intel X25-E SSD's on each of them and never ran into any strange I/O delays. If you are running into all these problems with at least version 1.47, I guess I'll have to consider myself very lucky. Worries me a little since I have 4 of the 24 port 1880's sitting here new in boxes getting ready to be installed. Of course, they will never be under extreme I/O loads since they will only end up with 24 7.2K SAS drives each.
anton (staff) wrote:We had issues ~2 years ago, so if the firmware has been fixed, it's fine. But the negative impression is still present.
rchisholm wrote:I know 1680's with the BIOS versions before 1.47 had problems with I/O under load, but have you had problems with them on 1.47 and the max payload size for the PCI-E slots set to 128? ...
Aitor_Ibarra wrote:On one of my two StarWind servers, both of which had 1680 cards, I ran into a situation where the server would not boot because the Areca wouldn't recognise the cards attached to it. Some of the time; actually, most of the time, and initially it seemed that the card wouldn't even allow the system to POST until I removed it from the slot and reinserted it. I tested the card in a different server and it was fine, so I sent the card and server back to the supplier, who tried lots of things, like changing the power supply and motherboard, but couldn't make the problem go away.
awedio wrote:Aitor, curious why you are switching to the 9280 & not the Areca 1880x?
Aitor_Ibarra wrote:The main drawbacks: well, LSI aren't yet compatible with the HP SAS expander, which is very popular as it provides loads of SAS ports for a low price; Areca is compatible. Also, Areca have an out-of-band management port, which is very useful, although I don't have issues with LSI's in-band management (in fact, I prefer it, but this is a matter of opinion). And I miss the 4GB cache, but with StarWind now using system RAM as cache, this is less of an advantage.
awedio wrote:I thought you used SuperMicro chassis? Did you switch to something else? The SMicro expanders are LSI compatible.
Aitor_Ibarra wrote:No, still with Supermicro chassis and motherboards, and yes, all Supermicro SAS stuff these days is LSI based. One of the reasons I switched to LSI rather than staying with Areca is that Supermicro support LSI RAID cards with their JBODs etc., but not Areca.
oxyi wrote:I don't feel that great reading this post now...
I just finished building two 50x 2TB Chenbro chassis servers; each is powered by 2 x Areca 1680 (4GB cache version) with 1.48 firmware.
How do I go about testing under high I/O load to see whether my Arecas would have a problem? We're still in the testing stage, so I can check for it...
Thanks!
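One way to reproduce the kind of sustained high-IOPS load rchisholm describes is a fio job file (fio runs on both Windows and Linux). This is only a sketch, not a recipe from this thread: the drive path, runtime, queue depth, and read/write mix are all assumptions you'd adjust for your own array, and pointing fio at a raw device with writes enabled will destroy data on it.

```ini
; Hypothetical sustained random-I/O stress job for fio.
; All values below are assumptions; tune them for your hardware.
[global]
ioengine=windowsaio   ; use libaio instead on Linux
direct=1              ; bypass the OS cache so the RAID card takes the load
time_based=1
runtime=3600          ; one hour of sustained load, similar to extended-run testing
group_reporting=1

[areca-stress]
filename=\\.\PhysicalDrive1   ; the volume under test: writes here WILL destroy data
rw=randrw
rwmixread=70          ; 70% reads / 30% writes
bs=4k
iodepth=64            ; deep queue to keep the controller saturated
numjobs=4
```

While it runs, watching the Areca's web GUI and the OS event log for timeouts or resets would be one way to surface the kind of I/O delays discussed earlier in the thread before the servers go into production.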