The file system driver has very few options to tune. I used a max label length of 3 because it seemed like a reasonable choice. The driver was only tested on a ReiserFS file system; I also tried both ext3 and a memory file system, but both ran out of inodes before all of the zone data had been created. Even an ext3 partition of 9.5 GB with 4k per inode specified (4k is also the block size) ran out of space.
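If you want to retry this on ext3, one untested workaround is to format with a higher inode density. Note that mke2fs will not allocate more than one inode per block, so the 4k/inode setting above was already the ceiling for 4k blocks; shrinking the block size is what buys more inodes. The device name below is a placeholder:

    # Check inode headroom on an existing file system:
    df -i /dns_data

    # Reformat with one inode per 1k block, the densest mke2fs allows
    # (/dev/sdXn is a placeholder -- this destroys the partition's contents):
    mkfs.ext3 -b 1024 -i 1024 /dev/sdXn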
The test machine used two Western Digital Raptor 10K RPM SATA drives, which are fast. Currently hdparm does not offer much in the way of tuning for SATA, but SATA drives seem to run at their top speed anyway. Here is the output from hdparm -tT /dev/md3, the RAID 1 ReiserFS partition on my test system (-T times reads from the buffer cache, i.e. memory throughput; -t times sequential reads from the disk itself):
    /dev/md3:
     Timing buffer-cache reads:   2652 MB in  2.00 seconds = 1326.00 MB/sec
     Timing buffered disk reads:   132 MB in  3.02 seconds =   43.71 MB/sec
The configuration below was used in testing the file system driver.
dlz "file system zone" { database "filesystem /dns_data/reiserfs/filesystem/data3/ .dns .xfr 3 ~"; }; |
Because of the file system driver's low performance and high latency, I would not recommend using it on a production system. However, it is very easy to use: it requires only a standard disk file system and no additional databases or libraries, which makes it very attractive for experimentation.