ramdrive caching v8

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

barrysmoke
Posts: 86
Joined: Tue Oct 15, 2013 5:11 pm

Tue Oct 22, 2013 4:57 pm

Is this the kind of performance we can expect out of the StarWind RAM disk caching solutions coming in V8?

[ URL removed ]

Especially when paired with a PCIe SSD for level 2, or similar?
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed Oct 23, 2013 9:29 am

1) RAM-based block caching has been available in StarWind since 2006 or so. You don't need to wait for V8, which will add a second layer (flash) to it.

2) There's a big difference in where the cache resides. We can do both initiator-side caching (loopback, when initiator and target run on the same machine,
say our vSAN or VSA scenario) and caching done on the server (classic SAN). The referenced software can (in theory) accelerate transactions on a single client only,
which is a HUGE waste of resources (think about a VDI scenario).

3) You should avoid software like FancyCache, VeloBit and FlashSoft, for a simple reason: doing write-back caching in RAM in a single-controller scenario
with no NVRAM/flash back end is simply dangerous. Any issue would result in a huge amount of lost data.
StarWind runs multiple controllers with multiple nodes keeping caches synchronized, so it's safe. FlashSoft added some protection in a
beta they showed at VMworld, but the others are out.
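
To picture why that single-controller write-back setup is dangerous, here is a minimal sketch (plain Python, not StarWind code): the cache acknowledges a write as soon as it lands in RAM, so with one controller and no NVRAM/flash behind it, a crash before the flush silently loses every acknowledged-but-unflushed write.

# Conceptual sketch: write-back caching in volatile RAM with a single controller.
# Writes are acknowledged once they land in RAM; anything not yet flushed is gone
# if that one node dies.

class WriteBackRamCache:
    def __init__(self, backing_store: dict):
        self.backing_store = backing_store   # stands in for the spinning disk
        self.dirty = {}                      # volatile RAM: acknowledged but unflushed writes

    def write(self, lba: int, data: bytes) -> str:
        self.dirty[lba] = data               # buffered in RAM only
        return "ACK"                         # caller believes the data is safe

    def flush(self):
        self.backing_store.update(self.dirty)  # lazily persisted later
        self.dirty.clear()

    def crash(self) -> int:
        lost = len(self.dirty)               # RAM contents vanish with the node
        self.dirty.clear()
        return lost

disk = {}
cache = WriteBackRamCache(disk)
for lba in range(1000):
    cache.write(lba, b"x")                   # 1000 writes acknowledged to the application
print(cache.crash(), "acknowledged writes lost")   # -> 1000, nothing ever reached disk
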
barrysmoke wrote: Is this the kind of performance we can expect out of the StarWind RAM disk caching solutions coming in V8?

[ URL removed ]

Especially when paired with a PCIe SSD for level 2, or similar?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

barrysmoke
Posts: 86
Joined: Tue Oct 15, 2013 5:11 pm

Tue Oct 29, 2013 9:55 pm

Sorry, I wasn't debating using any of that software; I was simply trying to get a performance indicator of what is possible with the StarWind RAM cache, and then look at V8, which does RAM cache plus L2 SSD cache.
Since the URL was removed, I'll note that it showed a 700,000 IOPS capability.
What I'm trying to determine is what combination of hardware + StarWind would result in the best possible performance on the market today.
I saw your post in the Fusion-io thread about their non-open drivers, and I agree. Since a driver would be required to hit those 900,000 IOPS, and RAM disk caching can hit 700,000 IOPS, I was thinking of just using StarWind: no extra equipment, just lots of RAM.
Then my next question would be: what would be the best L2 cache, multiple SSDs in RAID0? You want those IOPS to be faster than the primary storage that is being cached... in my configuration, the primary storage is 24 hybrid SSD drives.
Fusion-io is expensive, like $5k to $10k expensive. If I found a competitive product, like RAIDDrive II or similar, for under $1k, I would consider it.
Would Fusion-io or similar be worth it used as an L2 cache?
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Thu Oct 31, 2013 3:02 pm

1) It's no problem to debate anything, and we're not against competition, but you're not the first (or even the second, and so on...) who has tried to use StarWind with unsupported block cache software we don't control. See:

http://thehomeserverblog.com/home-serve ... si-target/

http://hardforum.com/showthread.php?t=1376255&page=49

http://www.romexsoftware.com/bbs2/en-us ... =25&p=5435

http://www.romexsoftware.com/bbs2/en-us ... =26&t=1132

(actually there are many Romex Software hits; search their support forum for the "StarWind" keyword)

http://community.spiceworks.com/topic/3 ... vers-setup

etc etc etc

The reason is simple. I repeat: if the cluster is in degraded mode (only one node is left alive), StarWind will 1) flush its own cache to disk with the highest priority and 2) turn the cache from write-back to write-through mode.
As we don't control the underlying hardware (and in this particular case, software), the cluster node would still have a HUGE amount of data unsubmitted to disk. Final node down -> disastrous data loss.

2) There's no magic about 700K IOPS. StarWind was doing more than a million a few years ago already. And keep in mind, that's with 10 GbE sitting in between the site and the client (no local cache!). So doing 1M+
with a locally configured cache (vSAN scenario) and a good cache hit ratio is trivial. Check the embedded link here:

http://www.starwindsoftware.com/forums/ ... 53-15.html

3) RAID0 is not going to work, for a simple reason: the cache works with smaller pages (4-16KB), so there is zero chance a whole stripe would be updated. I guess one good PCIe or DIMM-attached flash device is the way to go. We're talking to many
vendors about partnerships at this very moment.
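
As a rough model of that degraded-mode behavior (a conceptual sketch only, with made-up names, not StarWind's actual code): while two or more nodes hold mirrored copies of the cache, write-back is allowed; as soon as only one node is left, dirty pages are flushed at top priority and the cache switches to write-through.

# Conceptual sketch of the degraded-mode policy described above.

class MirroredCache:
    WRITE_BACK, WRITE_THROUGH = "write-back", "write-through"

    def __init__(self, backing_store: dict, nodes_alive: int = 2):
        self.backing_store = backing_store
        self.nodes_alive = nodes_alive
        self.mode = self.WRITE_BACK
        self.dirty = {}

    def write(self, lba: int, data: bytes):
        if self.mode == self.WRITE_BACK:
            self.dirty[lba] = data            # also mirrored in the partner node's RAM
        else:
            self.backing_store[lba] = data    # write-through: hits disk before the ACK

    def node_failed(self):
        self.nodes_alive -= 1
        if self.nodes_alive <= 1:             # cluster degraded: only one copy of the cache left
            self.backing_store.update(self.dirty)   # 1) flush with highest priority
            self.dirty.clear()
            self.mode = self.WRITE_THROUGH    # 2) stop buffering writes in RAM

disk = {}
cache = MirroredCache(disk)
cache.write(10, b"a")                         # buffered while the cluster is healthy
cache.node_failed()                           # degraded -> flush + switch modes
cache.write(11, b"b")                         # now goes straight to disk
print(cache.mode, len(disk))                  # write-through 2
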
barrysmoke wrote: Sorry, I wasn't debating using any of that software; I was simply trying to get a performance indicator of what is possible with the StarWind RAM cache, and then look at V8, which does RAM cache plus L2 SSD cache.
Since the URL was removed, I'll note that it showed a 700,000 IOPS capability.
What I'm trying to determine is what combination of hardware + StarWind would result in the best possible performance on the market today.
I saw your post in the Fusion-io thread about their non-open drivers, and I agree. Since a driver would be required to hit those 900,000 IOPS, and RAM disk caching can hit 700,000 IOPS, I was thinking of just using StarWind: no extra equipment, just lots of RAM.
Then my next question would be: what would be the best L2 cache, multiple SSDs in RAID0? You want those IOPS to be faster than the primary storage that is being cached... in my configuration, the primary storage is 24 hybrid SSD drives.
Fusion-io is expensive, like $5k to $10k expensive. If I found a competitive product, like RAIDDrive II or similar, for under $1k, I would consider it.
Would Fusion-io or similar be worth it used as an L2 cache?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

barrysmoke
Posts: 86
Joined: Tue Oct 15, 2013 5:11 pm

Wed Dec 04, 2013 4:26 am

I'm going to continue this thread with the results I've seen trying to use large RAM disks as cache with V8 beta 2 on Windows 2012 in a VMware VM. Any time I tried to use a RAM disk, or cache, I was losing data and getting bad performance. When I turned the cache off, StarWind worked fine.
I started trying different configs, even reverting to Windows Server 2008. Unfortunately, V8 won't use RAM disks on that version. I still wanted to test 2008, so I got the free StarWind RAM disk software, installed it, and did some local testing. That software is limited; I was only able to create a 27 GB RAM disk, but it worked! I was able to run Iometer locally and got some amazing IOPS for a virtual machine RAM disk. That proved the problem is not on the VMware side, so I went back to 2012 Server, created a smaller RAM disk, and made sure my RAM usage wasn't out of hand.

With V8, you are creating a RAM disk for use as an iSCSI target, so once it was created I went back to VMware. I noticed that what I had been doing before was blindly selecting VMFS5, since it is new and that's what I usually do with my datastores... this has a 256M block size. I wondered if that was the problem, so I selected VMFS3 this time, with the smallest block size. There was still some slowness creating the file system (my thinking here was that the RAM disk should have flown through this), and again some slowness when I added a virtual disk for testing onto this datastore after it was created. This time my Iometer run worked just fine. The read test was bad, but all the write tests were almost as fast as my 2008 RAM disk test, where Iometer ran locally (no iSCSI/network lag).
So, long story short, there are issues, but it looks like the main issue I was experiencing was VMFS5-related.
I'll do more testing and post logs here so we can troubleshoot this.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed Dec 04, 2013 8:46 am

1) If StarWind is losing data, you need to file a bug. You're definitely using "ramdisk" out of context, so I honestly don't understand what you mean. Is it the stand-alone driver? Is it the built-in RAM disk? Is it really the RAM cache? What's a "ramdisk"? The term "ramdisk caching" is Greek to me. What is that?

2) The RAM cache has been in StarWind for ages. The same goes for the built-in RAM disk, and the same for the stand-alone driver (a separate product). V6 definitely has them all.

3) It's not "limited software"; it's just the amount of non-paged memory the kernel can allocate. It has nothing to do with any restrictions.

4) You're listing a bunch of activities, and as you don't say what you want to accomplish, I don't understand where to start... What exactly do you want to do? What's your scenario?
barrysmoke wrote: I'm going to continue this thread with the results I've seen trying to use large RAM disks as cache with V8 beta 2 on Windows 2012 in a VMware VM. Any time I tried to use a RAM disk, or cache, I was losing data and getting bad performance. When I turned the cache off, StarWind worked fine.
I started trying different configs, even reverting to Windows Server 2008. Unfortunately, V8 won't use RAM disks on that version. I still wanted to test 2008, so I got the free StarWind RAM disk software, installed it, and did some local testing. That software is limited; I was only able to create a 27 GB RAM disk, but it worked! I was able to run Iometer locally and got some amazing IOPS for a virtual machine RAM disk. That proved the problem is not on the VMware side, so I went back to 2012 Server, created a smaller RAM disk, and made sure my RAM usage wasn't out of hand.

With V8, you are creating a RAM disk for use as an iSCSI target, so once it was created I went back to VMware. I noticed that what I had been doing before was blindly selecting VMFS5, since it is new and that's what I usually do with my datastores... this has a 256M block size. I wondered if that was the problem, so I selected VMFS3 this time, with the smallest block size. There was still some slowness creating the file system (my thinking here was that the RAM disk should have flown through this), and again some slowness when I added a virtual disk for testing onto this datastore after it was created. This time my Iometer run worked just fine. The read test was bad, but all the write tests were almost as fast as my 2008 RAM disk test, where Iometer ran locally (no iSCSI/network lag).
So, long story short, there are issues, but it looks like the main issue I was experiencing was VMFS5-related.
I'll do more testing and post logs here so we can troubleshoot this.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

barrysmoke
Posts: 86
Joined: Tue Oct 15, 2013 5:11 pm

Wed Dec 04, 2013 10:16 pm

I know, sorry... it got long and complicated.
It's simple: I'm just testing what the V8 RAM cache can do. But since it didn't work, I had to try all the things mentioned to help you narrow down what the problem is.

What is the procedure for reporting a bug on the V8 beta software?

Re-read what I posted with that in mind. I was just troubleshooting for you. I'm not using anything third-party; it's all StarWind products... just with different configs to try to make it work.
When I installed V8 on Windows 2008, creating a target with a RAM disk was not an option anymore, and the RAM disk icon at the top was gone.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Thu Dec 05, 2013 1:40 am

What you are doing is NOT caching )) You're allocating RAM as primary storage. A cache is non-addressable space added on top of your primary storage, which should be a spinning disk.

Can you post a screenshot of what you think "does not work"? Thank you!
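
To put the terminology side by side, here is a rough illustration (plain Python, hypothetical classes): a RAM disk is primary storage whose capacity the iSCSI target exports directly, while a RAM cache is a layer in front of some other primary storage and adds no addressable space of its own.

class RamDisk:
    """Primary storage that happens to live in RAM; its size is what the target exports."""
    def __init__(self, size_blocks: int):
        self.blocks = {}
        self.size_blocks = size_blocks

    def read(self, lba):  return self.blocks.get(lba, b"\x00")
    def write(self, lba, data): self.blocks[lba] = data

class CachedDisk:
    """Primary storage (ideally a spinning disk) with a RAM cache layered on top."""
    def __init__(self, disk, cache_blocks: int):
        self.disk = disk                      # exported capacity comes from here...
        self.cache = {}                       # ...the cache adds speed, not address space
        self.cache_blocks = cache_blocks

    def read(self, lba):
        if lba not in self.cache:             # miss: fetch from primary storage
            self.cache[lba] = self.disk.read(lba)
        return self.cache[lba]

    def write(self, lba, data):
        self.cache[lba] = data
        self.disk.write(lba, data)            # write-through shown here for simplicity

backing = RamDisk(size_blocks=1 << 20)        # stands in for any block device
volume = CachedDisk(backing, cache_blocks=4096)
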
barrysmoke wrote: I know, sorry... it got long and complicated.
It's simple: I'm just testing what the V8 RAM cache can do. But since it didn't work, I had to try all the things mentioned to help you narrow down what the problem is.

What is the procedure for reporting a bug on the V8 beta software?

Re-read what I posted with that in mind. I was just troubleshooting for you. I'm not using anything third-party; it's all StarWind products... just with different configs to try to make it work.
When I installed V8 on Windows 2008, creating a target with a RAM disk was not an option anymore, and the RAM disk icon at the top was gone.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

barrysmoke
Posts: 86
Joined: Tue Oct 15, 2013 5:11 pm

Thu Dec 05, 2013 7:36 am

No, I'm doing both, in order to fully test... please bear with me and don't jump to conclusions (not being rude, I'm just having a hard time getting everything across, I guess).

I tried:
1) iSCSI target, 8-drive RAID5, with L1 cache set to 100 GB.
The iSCSI target was used to create a new datastore in vSphere 5.1, formatted VMFS5 (took longer than normal).
I did not mount it locally, only on the VMware host. It took an extreme amount of time to create the datastore.
Uploaded an ISO file to the datastore.
Browsed the datastore from 2 different hosts to make sure the ISO file was there.
Added a 100 GB thick zeroed disk to my Iometer testing VM, placed on the new datastore (took an extremely long time, longer than normal disk creation).
Creation errored out, and I was unable to browse the datastore after the error, so the ISO file was lost.
Verified the procedure twice.

Turned off the L1 cache, and no more problems.

2) iSCSI target with a StarWind RAM disk (100 GB); creation succeeds, but using it on the VMware host has the same results as above:
Long time to format VMFS.
Long time to add a 100 GB disk to the testing VM, housed on this datastore.
Eventually errors out.

3) Installed V8 on 2008 Server to determine if it was a Windows 2012 issue.
Tried to add a RAM disk and L1 cache; no RAM disk/caching features available on 2008?
I still wanted to test with StarWind-based software, so I installed the StarWind RAM disk software and created a 27 GB RAM disk (the max the software allowed me to create).
I ran Iometer against the new RAM disk, since that software presents the RAM disk as a drive (and not a target)... and it flew. This told me it was not a StarWind RAM disk/VMware issue (assuming the RAM disk/caching code is still similar).
So I went back to Windows 2012.

4) Repeated the procedure for the RAM disk iSCSI target, only this time I formatted VMFS3 (with the smallest block size in the list) instead of VMFS5.
Still some slowness issues on the VMFS format and on drive creation for my Iometer VM, but I was able to get farther.
Successfully created the 100 GB thick zeroed drive (housed on the RAM disk iSCSI target).
Read tests were severely limited, but write tests were very close to the Iometer results from the 2008 test (where I tested against the RAM disk locally, not over iSCSI).

I hope that clarifies my procedures, and I think with this you can replicate the tests on your end.
barrysmoke
Posts: 86
Joined: Tue Oct 15, 2013 5:11 pm

Thu Dec 05, 2013 8:26 am

Oops, ignore the "no RAM disk" stuff about Windows 2008 Server; I had been testing between builds, and it was moved to Add Device (advanced).
My bad... also, the 27 GB max RAM disk was a limit of the Windows 2008 Server Standard edition; you need Datacenter to go past 32 GB.
I'll do another test.
Bohdan (staff)
Staff
Posts: 435
Joined: Wed May 23, 2007 12:58 pm

Thu Dec 05, 2013 2:03 pm

What is the number of virtual CPUs assigned to the VM with StarWind?
What is the virtual network adapter type? Is it VMXNET3? Are VMware Tools installed?
Could you show us your ESX networking settings (screenshot)?
barrysmoke
Posts: 86
Joined: Tue Oct 15, 2013 5:11 pm

Thu Dec 05, 2013 6:45 pm

2 CPUs with 2 cores each, 4 vCPUs total.
VMXNET3, with VMware Tools installed, and jumbo frames turned on for the iSCSI network.
Yes, I'll post in a sec.
2 separate networks, a management net and an iSCSI net... thinking about extending this to a third iSCSI net as well.
barrysmoke
Posts: 86
Joined: Tue Oct 15, 2013 5:11 pm

Thu Dec 05, 2013 6:57 pm

Here's the network config:
Attachments
netconfig.png
barrysmoke
Posts: 86
Joined: Tue Oct 15, 2013 5:11 pm

Sat Dec 07, 2013 2:04 am

StarWind couldn't replicate my issue on their end, and we have a remote support session scheduled for next week.
I'd be curious if anyone else has seen RAM disk or cache issues? vSphere 5.1 Update 1 is what I'm on.
barrysmoke
Posts: 86
Joined: Tue Oct 15, 2013 5:11 pm

Tue Dec 10, 2013 12:12 am

After StarWind support said they couldn't replicate my issue, I dug into the logs to see what the difference between a VMFS5 format and a VMFS3 format of the RAM disk looks like.
I came across this entry, which only hits the same sector in the VMFS3 log, but appears on many lines in the VMFS5 log.

Copy of both logs (post-VMFS3 format and post-VMFS5 format):
http://bsmokeman.no-ip.biz/fileshare/ramdiskcompare.zip

Basically, the VMFS3 format that succeeds has a repeat of this line, with the same sector number:
12/9 23:03:40.892 e6c SCSI: VAAI C&W: sector 49724, len 1 - miscompare at pos 0.

This one line repeats 13 times in the whole log.

While the VMFS5 format that fails has this (seems to be 2048 lines that then start over, at first glance; 98,725 rows total).
I had to trim it here; look at the zip file for the entire log:
12/9 22:23:05.133 e6c SCSI: VAAI C&W: sector 2048, len 1 - miscompare at pos 0.
12/9 22:23:06.225 e6c SCSI: VAAI C&W: sector 44688, len 1 - miscompare at pos 0.
12/9 22:23:06.225 e6c SCSI: VAAI C&W: sector 44689, len 1 - miscompare at pos 0.
12/9 22:23:06.226 e6c SCSI: VAAI C&W: sector 44690, len 1 - miscompare at pos 0.
12/9 22:23:06.226 e6c SCSI: VAAI C&W: sector 44691, len 1 - miscompare at pos 0.
12/9 22:23:06.226 e6c SCSI: VAAI C&W: sector 44692, len 1 - miscompare at pos 0.
12/9 22:23:06.228 e6c SCSI: VAAI C&W: sector 44693, len 1 - miscompare at pos 0.
12/9 22:23:06.228 e6c SCSI: VAAI C&W: sector 44694, len 1 - miscompare at pos 0.
12/9 22:23:06.228 e6c SCSI: VAAI C&W: sector 44695, len 1 - miscompare at pos 0.
12/9 22:23:06.229 e6c SCSI: VAAI C&W: sector 44696, len 1 - miscompare at pos 0.
12/9 22:23:06.229 e6c SCSI: VAAI C&W: sector 44697, len 1 - miscompare at pos 0.
12/9 22:23:06.230 e6c SCSI: VAAI C&W: sector 44698, len 1 - miscompare at pos 0.
12/9 22:23:06.230 e6c SCSI: VAAI C&W: sector 44699, len 1 - miscompare at pos 0.
12/9 22:23:06.230 e6c SCSI: VAAI C&W: sector 44700, len 1 - miscompare at pos 0.
12/9 22:23:06.231 e6c SCSI: VAAI C&W: sector 44701, len 1 - miscompare at pos 0.
12/9 22:23:06.231 e6c SCSI: VAAI C&W: sector 44702, len 1 - miscompare at pos 0.
12/9 22:23:06.231 e6c SCSI: VAAI C&W: sector 44703, len 1 - miscompare at pos 0.
12/9 22:23:06.232 e6c SCSI: VAAI C&W: sector 44704, len 1 - miscompare at pos 0.
12/9 22:23:06.232 e6c SCSI: VAAI C&W: sector 44705, len 1 - miscompare at pos 0.
12/9 22:23:06.233 e6c SCSI: VAAI C&W: sector 44706, len 1 - miscompare at pos 0.
12/9 22:23:06.233 e6c SCSI: VAAI C&W: sector 44707, len 1 - miscompare at pos 0.
12/9 22:23:06.233 e6c SCSI: VAAI C&W: sector 44708, len 1 - miscompare at pos 0.
12/9 22:23:06.234 e6c SCSI: VAAI C&W: sector 44709, len 1 - miscompare at pos 0.
12/9 22:23:06.234 e6c SCSI: VAAI C&W: sector 44710, len 1 - miscompare at pos 0.
12/9 22:23:06.234 e6c SCSI: VAAI C&W: sector 44711, len 1 - miscompare at pos 0.
12/9 22:23:06.235 e6c SCSI: VAAI C&W: sector 44712, len 1 - miscompare at pos 0.
12/9 22:23:06.235 e6c SCSI: VAAI C&W: sector 44713, len 1 - miscompare at pos 0.
12/9 22:23:06.236 e6c SCSI: VAAI C&W: sector 44714, len 1 - miscompare at pos 0.
12/9 22:23:06.236 e6c SCSI: VAAI C&W: sector 44715, len 1 - miscompare at pos 0.
12/9 22:23:06.236 e6c SCSI: VAAI C&W: sector 44716, len 1 - miscompare at pos 0.
12/9 22:23:06.237 e6c SCSI: VAAI C&W: sector 44717, len 1 - miscompare at pos 0.
12/9 22:23:06.237 e6c SCSI: VAAI C&W: sector 44718, len 1 - miscompare at pos 0.
12/9 22:23:06.237 e6c SCSI: VAAI C&W: sector 44719, len 1 - miscompare at pos 0.
12/9 22:23:06.238 e6c SCSI: VAAI C&W: sector 44720, len 1 - miscompare at pos 0.
12/9 22:23:06.238 e6c SCSI: VAAI C&W: sector 44721, len 1 - miscompare at pos 0.
12/9 22:23:06.239 e6c SCSI: VAAI C&W: sector 44722, len 1 - miscompare at pos 0.
12/9 22:23:06.242 e6c SCSI: VAAI C&W: sector 44723, len 1 - miscompare at pos 0.
12/9 22:23:06.242 e6c SCSI: VAAI C&W: sector 44724, len 1 - miscompare at pos 0.
12/9 22:23:06.243 e6c SCSI: VAAI C&W: sector 44725, len 1 - miscompare at pos 0.
12/9 22:23:06.243 e6c SCSI: VAAI C&W: sector 44726, len 1 - miscompare at pos 0.
12/9 22:23:06.243 e6c SCSI: VAAI C&W: sector 44727, len 1 - miscompare at pos 0.
12/9 22:23:06.244 e6c SCSI: VAAI C&W: sector 44728, len 1 - miscompare at pos 0.
12/9 22:23:06.244 e6c SCSI: VAAI C&W: sector 44729, len 1 - miscompare at pos 0.
12/9 22:23:06.244 e6c SCSI: VAAI C&W: sector 44730, len 1 - miscompare at pos 0.
12/9 22:23:06.245 e6c SCSI: VAAI C&W: sector 44731, len 1 - miscompare at pos 0.
12/9 22:23:06.245 e6c SCSI: VAAI C&W: sector 44732, len 1 - miscompare at pos 0.
12/9 22:23:06.245 e6c SCSI: VAAI C&W: sector 44733, len 1 - miscompare at pos 0.
12/9 22:23:06.246 e6c SCSI: VAAI C&W: sector 44734, len 1 - miscompare at pos 0.
12/9 22:23:06.246 e6c SCSI: VAAI C&W: sector 44735, len 1 - miscompare at pos 0.
12/9 22:23:06.246 e6c SCSI: VAAI C&W: sector 44736, len 1 - miscompare at pos 0.
12/9 22:23:06.247 e6c SCSI: VAAI C&W: sector 44737, len 1 - miscompare at pos 0.
12/9 22:23:06.247 e6c SCSI: VAAI C&W: sector 44738, len 1 - miscompare at pos 0.
12/9 22:23:06.247 e6c SCSI: VAAI C&W: sector 44739, len 1 - miscompare at pos 0.
12/9 22:23:06.248 e6c SCSI: VAAI C&W: sector 44740, len 1 - miscompare at pos 0.
12/9 22:23:06.248 e6c SCSI: VAAI C&W: sector 44741, len 1 - miscompare at pos 0.
12/9 22:23:06.248 e6c SCSI: VAAI C&W: sector 44742, len 1 - miscompare at pos 0.
12/9 22:23:06.249 e6c SCSI: VAAI C&W: sector 44743, len 1 - miscompare at pos 0.
12/9 22:23:06.249 e6c SCSI: VAAI C&W: sector 44744, len 1 - miscompare at pos 0.
12/9 22:23:06.249 e6c SCSI: VAAI C&W: sector 44745, len 1 - miscompare at pos 0.
12/9 22:23:06.250 e6c SCSI: VAAI C&W: sector 44746, len 1 - miscompare at pos 0.
12/9 22:23:06.250 e6c SCSI: VAAI C&W: sector 44747, len 1 - miscompare at pos 0.
12/9 22:23:06.250 e6c SCSI: VAAI C&W: sector 44748, len 1 - miscompare at pos 0.
12/9 22:23:06.251 e6c SCSI: VAAI C&W: sector 44749, len 1 - miscompare at pos 0.
12/9 22:23:06.251 e6c SCSI: VAAI C&W: sector 44750, len 1 - miscompare at pos 0.
12/9 22:23:06.251 e6c SCSI: VAAI C&W: sector 44751, len 1 - miscompare at pos 0.
12/9 22:23:06.252 e6c SCSI: VAAI C&W: sector 44752, len 1 - miscompare at pos 0.
12/9 22:23:06.252 e6c SCSI: VAAI C&W: sector 44753, len 1 - miscompare at pos 0.
12/9 22:23:06.252 e6c SCSI: VAAI C&W: sector 44754, len 1 - miscompare at pos 0.
12/9 22:23:06.253 e6c SCSI: VAAI C&W: sector 44755, len 1 - miscompare at pos 0.
12/9 22:23:06.253 e6c SCSI: VAAI C&W: sector 44756, len 1 - miscompare at pos 0.
12/9 22:23:06.253 e6c SCSI: VAAI C&W: sector 44757, len 1 - miscompare at pos 0.
12/9 22:23:06.253 e6c SCSI: VAAI C&W: sector 44758, len 1 - miscompare at pos 0.
12/9 22:23:06.254 e6c SCSI: VAAI C&W: sector 44759, len 1 - miscompare at pos 0.
12/9 22:23:06.254 e6c SCSI: VAAI C&W: sector 44760, len 1 - miscompare at pos 0.
12/9 22:23:06.254 e6c SCSI: VAAI C&W: sector 44761, len 1 - miscompare at pos 0.
12/9 22:23:06.255 e6c SCSI: VAAI C&W: sector 44762, len 1 - miscompare at pos 0.
12/9 22:23:06.255 e6c SCSI: VAAI C&W: sector 44763, len 1 - miscompare at pos 0.
12/9 22:23:06.255 e6c SCSI: VAAI C&W: sector 44764, len 1 - miscompare at pos 0.
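
For what it's worth, a quick way to summarize those miscompare lines is a small script like the one below (the file names are just examples for the two logs in the zip; adjust as needed). It counts the "VAAI C&W ... miscompare" entries and the distinct sectors they touch, which makes the VMFS3 vs VMFS5 difference obvious at a glance.

# Summarize "VAAI C&W ... miscompare" lines in a StarWind log excerpt.
import re
from collections import Counter

PATTERN = re.compile(r"VAAI C&W: sector (\d+), len (\d+) - miscompare")

def summarize(log_path: str):
    sectors = Counter()
    with open(log_path, errors="replace") as f:
        for line in f:
            m = PATTERN.search(line)
            if m:
                sectors[int(m.group(1))] += 1
    total = sum(sectors.values())
    print(f"{log_path}: {total} miscompare lines over {len(sectors)} distinct sectors")
    for sector, count in sectors.most_common(5):
        print(f"  sector {sector}: {count} hits")

# Example usage against the two extracted logs (hypothetical file names):
# summarize("starwind_vmfs3_format.log")   # expected: a handful of hits on one sector
# summarize("starwind_vmfs5_format.log")   # expected: tens of thousands of sequential sectors
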