```c
static void xfs_lookup(fuse_req_t req, fuse_ino_t parent, const char *name)
{
    struct xfs_inode *ip = xfs_iget(parent);
    xfs_dirent_t *de = xfs_dir_lookup(ip, name);

    if (!de) {
        fuse_reply_err(req, ENOENT);
        return;
    }

    fuse_reply_entry(req, &(struct fuse_entry_param){
        .ino           = de->inumber,
        .generation    = ip->i_generation,
        .attr_timeout  = 1.0,
        .entry_timeout = 1.0,
    });
}
```

XFS divides the disk into equal-sized Allocation Groups (AGs). In fuse-xfs, each AG is an mmap() of a region in a backing file (/var/lib/fuse-xfs/ag0.bin). Reads and writes become pointer dereferences.
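The AG-as-mmap arrangement is easy to sketch. What follows is a hedged illustration, not the fuse-xfs source: `struct ag_map` and `ag_open()` are hypothetical names, and the path and AG size passed in are placeholders.

```c
#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical per-AG handle: one mmap'd window over a backing file. */
struct ag_map {
    uint8_t *map;   /* base of the AG in virtual memory */
    size_t   len;   /* AG size in bytes */
};

/* Map one allocation group from its backing file (created and sized
 * here for the sake of a self-contained example). */
static int ag_open(struct ag_map *ag, const char *path, size_t len)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, (off_t)len) < 0) {
        close(fd);
        return -1;
    }
    ag->map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                     /* the mapping outlives the fd */
    if (ag->map == MAP_FAILED)
        return -1;
    ag->len = len;
    return 0;
}

static void ag_close(struct ag_map *ag)
{
    munmap(ag->map, ag->len);
}
```

After ag_open(), reading an on-disk structure really is a pointer dereference into ag->map, and a store through the same pointer dirties the backing file through the shared mapping.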
```c
struct xfs_agf *agf = (struct xfs_agf *)(ag->map + XFS_AGF_OFFSET);
if (be32_to_cpu(agf->agf_magicnum) != XFS_AGF_MAGIC)
    return -EINVAL;  /* or crash, which is more fun */
```

No buffer cache. No I/O scheduling. Just the filesystem's raw data laid out in virtual memory.

XFS's extent B+tree is elegant: internal nodes point to other blocks, leaves point to extents. In kernel space, traversing it is cheap. In fuse-xfs, every bmap lookup might require reading several blocks, each of which is a pread() or a memory access, depending on your cache.
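To make the bmap cost concrete, here is a toy leaf-level lookup over sorted extent records. It is a sketch under assumptions: `struct extent` and `bmap_lookup()` are invented names, the records are simplified (startoff, startblock, blockcount) triples rather than XFS's packed on-disk format, and a real walk would first descend the internal nodes, paying a block read per level before it ever reaches a leaf like this.

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified in-core extent record: which file blocks map to which
 * disk blocks. Loosely modeled on XFS's extent triples; not the
 * on-disk encoding. */
struct extent {
    uint64_t startoff;    /* first file block covered */
    uint64_t startblock;  /* corresponding disk block */
    uint64_t blockcount;  /* length in blocks */
};

/* Binary-search a sorted leaf's records for the extent covering
 * fileblock. Returns the mapped disk block, or 0 for a hole. */
static uint64_t bmap_lookup(const struct extent *recs, size_t n,
                            uint64_t fileblock)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (recs[mid].startoff + recs[mid].blockcount <= fileblock)
            lo = mid + 1;
        else
            hi = mid;
    }
    if (lo == n || recs[lo].startoff > fileblock)
        return 0;  /* hole: no extent covers this block */
    return recs[lo].startblock + (fileblock - recs[lo].startoff);
}
```

In-memory this is a handful of comparisons; the expensive part in fuse-xfs is fetching the leaf (and every node above it) in the first place.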
Why the low-level FUSE API? Because XFS inodes have a generation number (to handle inode reuse), and the low-level API lets us pass that back to the kernel's dcache via fuse_reply_entry().
This is where the kernel-to-userspace shift gets interesting. In the kernel, XFS wraps metadata reads in an xfs_buf_t whose b_ops verify the contents before anything trusts them. In fuse-xfs, we just cast, as the AGF magic check above shows.
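If you do want kernel-style checking in userspace, the b_ops idea ports directly: a per-structure table of verifier callbacks consulted before any cast is trusted. A minimal sketch under assumptions: `struct buf_ops` and `agf_verify()` are hypothetical names, and only the magic number is checked, whereas the kernel's verifiers also look at CRCs, UUIDs, and field sanity.

```c
#include <arpa/inet.h>   /* ntohl: XFS on-disk fields are big-endian */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define XFS_AGF_MAGIC 0x58414746u  /* "XAGF" */

/* Userspace stand-in for the kernel's xfs_buf_ops: a named verifier
 * run over a raw buffer before its contents are trusted. */
struct buf_ops {
    const char *name;
    int (*verify_read)(const uint8_t *buf, size_t len);
};

/* Verify an AGF buffer: check the magic number at offset 0. */
static int agf_verify(const uint8_t *buf, size_t len)
{
    uint32_t magic;
    if (len < sizeof magic)
        return -1;
    memcpy(&magic, buf, sizeof magic);
    return ntohl(magic) == XFS_AGF_MAGIC ? 0 : -1;
}

static const struct buf_ops agf_ops = {
    .name = "xfs_agf",
    .verify_read = agf_verify,
};
```

The cast stays a cast; the verifier just runs first, so a corrupted AG header returns an error instead of sending a garbage pointer on a tour of the address space.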