barrysmoke wrote:Is this the kind of performance we can expect out of StarWind ramdisk caching solutions coming in v8?
[ URL removed ]
Especially when paired with a PCIe SSD for level 2 cache, or similar?
barrysmoke wrote:Sorry, I wasn't debating using any of that software; I was simply trying to get a performance indicator of what is possible with StarWind RAM cache, and then look at v8, which does RAM cache plus an L2 SSD cache.
Since the URL was removed, I'll note that it showed a 700,000 IOPS capability.
What I'm trying to determine is what combination of hardware + StarWind would result in the best possible performance on the market today.
I saw your post in the FusionIO thread about their non-open drivers, and I agree. Since a driver would be required to hit those 900,000 IOPS, and ramdisk caching can hit 700,000 IOPS, I was thinking of just using StarWind with no extra equipment, just lots of RAM.
Then my next question would be: what would be the best L2 cache, multiple SSDs in RAID 0? You want those IOPS to be faster than your primary storage that is being cached. In my configuration, the primary storage is 24 hybrid SSD drives.
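A minimal, hypothetical sketch of the two-tier idea being discussed: reads are served from a RAM tier first, then an SSD tier, and only fall through to primary storage on a miss in both. The class, names, and sizes are invented for illustration and are not StarWind's actual cache implementation; the point is that an L2 hit only pays off if the SSD tier's latency sits below that of the 24-drive pool it fronts.

```python
# Hypothetical two-tier read cache sketch (L1 = RAM, L2 = SSD, then primary).
from collections import OrderedDict

class TwoTierCache:
    def __init__(self, l1_blocks, l2_blocks, primary_read):
        self.l1 = OrderedDict()          # RAM tier: fastest, smallest
        self.l2 = OrderedDict()          # SSD tier: slower than RAM, faster than primary
        self.l1_blocks = l1_blocks
        self.l2_blocks = l2_blocks
        self.primary_read = primary_read # callable: block number -> data

    def read(self, block):
        if block in self.l1:             # L1 (RAM) hit
            self.l1.move_to_end(block)
            return self.l1[block]
        if block in self.l2:             # L2 (SSD) hit: promote into L1
            data = self.l2.pop(block)
        else:                            # miss in both tiers: go to primary storage
            data = self.primary_read(block)
        self._put(self.l1, block, data, self.l1_blocks, spill=self.l2)
        return data

    def _put(self, tier, block, data, limit, spill=None):
        tier[block] = data
        tier.move_to_end(block)
        if len(tier) > limit:            # LRU eviction when the tier is full
            old_block, old_data = tier.popitem(last=False)
            if spill is not None:        # blocks evicted from L1 land in L2
                self._put(spill, old_block, old_data, self.l2_blocks)
```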
FusionIO is expensive, like $5k to $10k expensive. If I found a competitive product, like raiddrive2 or similar, for under $1k, I would consider it.
Would FusionIO, or similar, be worth it used as an L2 cache?
barrysmoke wrote:I'm going to continue this thread with the results I've seen trying to use large ramdisks as cache on v8 beta 2 on Windows 2012 in a VMware VM. Any time I tried to use a ramdisk or cache, I was losing data and getting bad performance. When I turned the cache off, StarWind worked fine.
I started trying different configs, even reverting to Windows Server 2008. Unfortunately v8 won't use ramdisks on that version. I still wanted to test 2008, so I got the StarWind free ramdisk software, installed it, and did some local testing. That software is limited (I was only able to create a 27 GB ramdisk), but it worked! I was able to run IOmeter locally and got some amazing IOPS for a virtual machine ramdisk, which proved the problem is not on the VMware side.

So I went back to Windows Server 2012, created a smaller ramdisk, and made sure my RAM usage wasn't out of hand. With v8 you are creating a ramdisk for use as an iSCSI target, so once it was created I went back to VMware. I noticed that what I had been doing before was blindly selecting VMFS5, since it is new and that's what I usually do with my datastores; this has a 256M block size. I wondered if that was the problem, so I selected VMFS3 this time, with the smallest block size. There was still some slowness creating the file system (my thinking here was that the ramdisk should have flown through this), and again slowness when I added a virtual disk onto this datastore after it was created. This time IOmeter worked just fine. The read test was bad, but all the write tests were almost as fast as my 2008 ramdisk test, where IOmeter ran locally (no iSCSI/network lag).
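For what it's worth, a rough sanity check that could be run alongside IOmeter: random 4 KB writes against a file on the ramdisk-backed volume, reporting IOPS. The drive letter and sizes below are placeholders, and buffered, single-threaded Python I/O will not match IOmeter's raw numbers, but it is enough to tell a healthy ramdisk from a broken one.

```python
# Hypothetical quick-and-dirty random-write IOPS check on a ramdisk volume.
import os, random, time

PATH = r"R:\iops_test.bin"   # assumed drive letter of the ramdisk volume
FILE_SIZE = 1 * 1024**3      # 1 GiB test file
BLOCK = 4096                 # 4 KB I/O size
OPS = 50_000                 # number of random writes to time

blocks = FILE_SIZE // BLOCK
payload = os.urandom(BLOCK)

# Pre-allocate the test file so writes land on existing blocks.
with open(PATH, "wb") as f:
    f.truncate(FILE_SIZE)

with open(PATH, "r+b", buffering=0) as f:
    start = time.perf_counter()
    for _ in range(OPS):
        f.seek(random.randrange(blocks) * BLOCK)
        f.write(payload)
    os.fsync(f.fileno())
    elapsed = time.perf_counter() - start

print(f"{OPS / elapsed:,.0f} random 4K write IOPS (buffered, single-threaded)")
```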
So, long story short, there are issues, but it looks like the main issue I was experiencing was VMFS5-related.
I'll do more testing and post logs here so we can troubleshoot this.
barrysmoke wrote:I know, sorry... it got long and complicated.
It's simple: I'm just testing what the v8 RAM cache can do. But since it didn't work, I had to try all the things mentioned to help you narrow down what the problem is.
What is the procedure for reporting a bug on the v8 beta software?
Re-read what I posted with that in mind. I was just troubleshooting for you. I'm not using third-party software; it's all StarWind products, just with different configs to try to make it work.
When I installed v8 on Windows 2008, creating a target with a ramdisk is no longer an option, and the ramdisk icon at the top is gone.