ZFS is a combined file system and logical volume manager designed by Sun Microsystems. Starting with Proxmox VE 3.4, the native Linux kernel port of the ZFS file system is introduced as an optional file system and also as an additional selection for the root file system. There is no need to manually compile ZFS modules - all packages are included.

My storage server running oi_151a7 has 16GB of RAM. The last scrub came back clean: scrub repaired 0 in 1h37m with 0 errors on Tue Aug 20 07:09:55 2013. I enabled deduplication on my zpool for a volume called data1/itfiles/home, defined to be 900GB. Via iSCSI, this volume is attached to a Windows Server 2008 file server. From the file server I attempted to copy about 100GB of data onto that remote volume.

Most of it deduped fine, with a dedupratio of 2.30x. However, copying three fairly deep directories caused the volume to detach from the file server. I was able to reproduce the detaching behavior by attempting to copy those directories again. If dedup is turned off for the volume, there is no problem with the volume detaching when using those same directories. I also tried copying the directories to a different ZFS volume with dedup on and off, with the same effect. Hence, I think there is a problem with deduping. I tried to trim the test case down, but it seems to take several hundred MB before the problem occurs, and it doesn't consistently fail with smaller test cases. I don't appear to have run out of RAM for the DDTs. I have no way of clearing the DDTs, so I can't start over with a clean system without destroying and re-creating my pool. I have turned off dedup on all volumes for now. The only unusual thing I could find with that RAIDZ1 pool was that the I/O counts seemed uneven - I thought they would be about the same across all RAID disks. I tried smartmontools, but it did not yield any useful info. So I am wondering if I have some kind of disk error on c4t3d0 and c4t5d0.

Dedup and compression are both set at the dataset level, so you can have deduped/compressed data on the same pool as other data. In roughly 100% of cases there is no value in turning off compression, as the lz4 compression algorithm is very good at not wasting time compressing data that can't be meaningfully reduced in size. With the recent introduction of allocation classes (and 'special' vdevs) you might get good enough performance with deduplication on a non-SSD pool, assuming you have a fast SSD to hold the metadata (which includes the deduplication tables). Before you add a 'special' vdev, though, do some experiments and get to know the feature.
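To sanity-check the claim that the DDTs did not exhaust RAM, here is a back-of-the-envelope sketch. The figures are rule-of-thumb assumptions, not measurements from this pool: roughly 320 bytes of RAM per in-core DDT entry is a commonly cited estimate, and 128K is the default ZFS recordsize.

```shell
# Worst-case DDT RAM estimate for a 900GB deduped volume.
# Assumptions (not measured on this pool): ~320 bytes per in-core
# DDT entry, 128K average block size, every block unique.
data_gib=900          # size of the deduped volume
block_kib=128         # assumed average block size (default recordsize)
bytes_per_entry=320   # rule-of-thumb in-core DDT entry size

# Number of unique blocks in the worst case (no dedup hits at all).
entries=$(( data_gib * 1024 * 1024 / block_kib ))
# Total in-core DDT footprint, converted to MiB.
ddt_mib=$(( entries * bytes_per_entry / 1024 / 1024 ))

echo "~${entries} DDT entries, ~${ddt_mib} MiB of RAM"
```

That works out to roughly 2.25GiB even in the worst case, comfortably inside a 16GB machine, which is consistent with the observation above. To see the actual in-core table sizes rather than an estimate, `zdb -DD <pool>` prints per-pool DDT statistics.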