nanotaya.blogg.se

Openzfs draid

The microzap on-disk format does not include a hash tree; one is expected to be built in RAM during mzap_open(). The built tree is linked to the DMU user buffer and freed when the original DMU buffer is dropped from cache. I've found that workloads accessing many large directories, with active eviction from the DMU cache, spend a significant amount of time building and then destroying these trees. I've also found that for each 64-byte mzap element an additional 64-byte tree element is allocated, which is a waste of memory and CPU caches.

Improve memory efficiency of the hash tree by switching from an AVL tree to a B-tree. This saves 24 bytes per element just on pointers. Save 32 bits on mze_hash by storing only the upper 32 bits, since the lower 32 bits are always zero for microzaps. Save 16 bits on mze_chunkid, since a microzap can never have that many elements. Respectively, with at most 16 bits of chunk IDs there can be no more than 16 bits of collision differentiators. As a result, struct mzap_ent drops from 48 bytes (rounded up to a 64-byte allocation) to 8 bytes.
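To make the savings concrete, here is a minimal sketch of an 8-byte entry with the layout the text implies (upper 32 hash bits, a 16-bit collision differentiator, a 16-bit chunk ID). The mze_hash and mze_chunkid names come from the commit text; mze_cd is a placeholder of mine, and the actual OpenZFS definition may differ:

```c
#include <stdint.h>

/*
 * Sketch of a compact in-RAM microzap hash-tree entry: 4 + 2 + 2 = 8
 * bytes, versus the previous 48-byte entry that was rounded up to a
 * 64-byte allocation.  Field names beyond mze_hash/mze_chunkid are
 * illustrative.
 */
typedef struct mzap_ent {
	uint32_t mze_hash;	/* upper 32 bits; lower 32 are always zero */
	uint16_t mze_cd;	/* collision differentiator fits in 16 bits */
	uint16_t mze_chunkid;	/* a microzap never has more chunks than this */
} mzap_ent_t;

_Static_assert(sizeof (mzap_ent_t) == 8, "mzap_ent_t should pack into 8 bytes");
```

Storing these entries in B-tree leaf arrays also removes the three per-element AVL link pointers, which is where the 24 bytes per element mentioned above come from on a 64-bit system.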
Reduce BTREE_CORE_ELEMS from 128 to 126 so that struct zfs_btree_core with 8-byte elements packs into 2KB instead of 4KB. Aside from microzaps, this should also help 32-bit range trees.

Allow a custom B-tree leaf size to reduce memmove() time.
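A rough back-of-the-envelope check of the 2KB target on a 64-bit system, assuming a core node holds N eight-byte elements, N+1 child pointers, and a small header; the 16-byte header and the size model below are assumptions of mine, not the real zfs_btree_core layout:

```c
#include <stdio.h>

/*
 * Rough model of a B-tree core node with 8-byte elements: a small
 * header, N+1 child pointers, and N elements.  Sizes are illustrative,
 * not the actual OpenZFS definitions.
 */
#define ELEM_SIZE	8
#define HDR_SIZE	16	/* assumed header size */

static size_t
core_node_size(unsigned n)
{
	return (HDR_SIZE + (n + 1) * sizeof (void *) + n * ELEM_SIZE);
}

int
main(void)
{
	/* 128 elements: 16 + 129*8 + 128*8 = 2072 bytes -> spills past 2KB */
	printf("128 elems: %zu bytes\n", core_node_size(128));
	/* 126 elements: 16 + 127*8 + 126*8 = 2040 bytes -> fits in 2KB */
	printf("126 elems: %zu bytes\n", core_node_size(126));
	return (0);
}
```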
Split zap_name_alloc() into zap_name_alloc() and zap_name_init_str(), which avoids wasting time allocating and freeing memory when processing multiple names in a loop during mzap_open().
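The split enables the usual allocate-once, initialize-per-iteration pattern. A minimal sketch of the idea with stand-in definitions; the real zap_name_alloc()/zap_name_init_str() have different signatures and fields:

```c
#include <stdlib.h>

/*
 * Illustrative stand-ins only: the real zap_name_alloc() and
 * zap_name_init_str() live in the ZFS ZAP code and differ from these.
 * The point is the pattern: allocate the scratch structure once, then
 * re-initialize it for every name instead of allocating per name.
 */
typedef struct zap_name {
	const char *zn_name;	/* assumed field, for illustration */
} zap_name_t;

static zap_name_t *
zap_name_alloc(void)
{
	return (calloc(1, sizeof (zap_name_t)));
}

static void
zap_name_init_str(zap_name_t *zn, const char *name)
{
	zn->zn_name = name;	/* the real code also derives the hash here */
}

void
process_names(const char **names, int count)
{
	zap_name_t *zn = zap_name_alloc();	/* one allocation ... */

	for (int i = 0; i < count; i++) {
		zap_name_init_str(zn, names[i]);  /* ... reused for each name */
		/* look up / insert the entry using zn */
	}
	free(zn);
}
```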
Together, on a pool with 10K directories of 1800 files each and the DMU cache limited to 128MB, this reduces the time of `find . -name zzz` by 41%, from 7.63s to 4.47s, and saves an additional ~30% of CPU time on DMU cache reclamation.

Signed-off-by: Alexander Motin
Sponsored by: iXsystems, Inc.


#Openzfs draid update#

Richard Yao's Coverity model file update: upon review, it was found that the model for malloc() was incorrect. In addition, several general purpose memory allocation functions were missing models:

* kmem_vasprintf()
* kmem_asprintf()
* kmem_strdup()
* kmem_strfree()
* spl_vmem_alloc()
* spl_vmem_zalloc()
* spl_vmem_free()
* calloc()

As an experiment to try to find more bugs, some less general purpose memory allocation functions were also given models:

* zfsvfs_create()
* zfsvfs_free()
* nvlist_alloc()
* nvlist_dup()
* nvlist_free()
* nvlist_pack()
* nvlist_unpack()
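As an illustration of what such a model can look like, here is a hedged sketch for a hypothetical allocator/free pair; example_alloc()/example_free() and their signatures are made up and are not the actual ZFS model file contents, while the __coverity_*__() calls are Coverity's modeling primitives (the two named below in the list, plus __coverity_alloc__()/__coverity_free__(), which are standard in such model files):

```c
/*
 * Hypothetical Coverity model for an allocator/free pair.  Model files
 * are consumed only by Coverity's analysis engine, which supplies the
 * __coverity_*__() primitives, so nothing here has a real definition.
 */
typedef unsigned long size_t;

void *__coverity_alloc__(size_t size);
void __coverity_free__(void *ptr);
void __coverity_negative_sink__(long long value);
void __coverity_mark_as_uninitialized_buffer__(void *ptr);

void *
example_alloc(size_t size)	/* stand-in for an spl_vmem_alloc()-style model */
{
	void *buf;

	__coverity_negative_sink__(size);		/* flag negative sizes */
	buf = __coverity_alloc__(size);			/* result is a fresh allocation */
	__coverity_mark_as_uninitialized_buffer__(buf);	/* contents are not zeroed */
	return (buf);
}

void
example_free(void *ptr)		/* stand-in for the matching free model */
{
	__coverity_free__(ptr);	/* releases the allocation in the model */
}
```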
Finally, the models were improved using additional Coverity primitives:

* __coverity_negative_sink__()
* __coverity_writeall0__()
* __coverity_mark_as_uninitialized_buffer__()
* __coverity_mark_as_afm_allocated__()

In addition, an attempt was made to inform Coverity that certain modelled functions read entire buffers, by adding the following to certain models:

int first = buf
int last = buf

This was inspired by the QEMU model file. No additional false positives were found by this, but it is believed that the more accurate model file will help to catch false positives in the future.

Signed-off-by: Richard Yao
Pull-request: #14048 part 1/1
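A sketch of that first-byte/last-byte idiom in context, using a made-up example_memdup() rather than anything taken from the actual model file:

```c
/*
 * Hypothetical model for a buffer-duplicating helper.  Touching the
 * first and last bytes tells Coverity the whole input buffer is read,
 * so it can warn when callers pass an undersized or uninitialized one.
 */
typedef unsigned long size_t;

void *__coverity_alloc__(size_t size);
void __coverity_writeall0__(void *ptr);
void __coverity_negative_sink__(long long value);

void *
example_memdup(const void *buf, size_t len)
{
	const char *p = (const char *)buf;
	int first, last;
	void *ret;

	__coverity_negative_sink__(len);

	first = p[0];		/* whole buffer is read ... */
	last = p[len - 1];	/* ... from first byte to last */
	(void) first;
	(void) last;

	ret = __coverity_alloc__(len);	/* result behaves like a fresh allocation */
	__coverity_writeall0__(ret);	/* and is treated as fully written */
	return (ret);
}
```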
