5.0 NuttX File System

Please credit the source when reposting: http://blog.csdn.net/alvin_jack/article/details/71250007
This article is entirely my own understanding; please point out any mistakes.

Preface
     I spent a while recently poking at a few drivers (PWM, serial, I2C). This time I want to dig into the NuttX file system. The manual mentions quite a few xxxFS variants, including one that is said to execute binary files, which sounds interesting, so let's see how all of this is actually implemented.

Pseudo Root File System
     The NuttX root file system is a simple pseudo file system that lives entirely in memory; it needs no storage medium or block device driver. My understanding is that a region is set aside in RAM and the file system is built there, and its contents are generated on the fly as they are referenced through standard file system operations. Put more plainly, this file system exposes the usual interfaces (open, close, read, write, ...). Recall from the earlier articles that NuttX abstracts every driver upward into exactly this kind of interface.
     Similarly, the Linux /proc file system is also a pseudo file system (so says the original text; I am not a Linux expert and have not studied it closely). When the drivers we wrote earlier (character drivers, block drivers, ...) are registered with register_driver(...), they are registered into this pseudo root file system. At bottom, the "file system" here is really just a collection of fixed-structure nodes (we will see below that it is a tree of struct inode); I will leave that as a claim for now and back it up with concrete code later.
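To make this concrete, here is a minimal sketch, written by me rather than taken from the NuttX documentation, of what "referencing the pseudo file system through standard file operations" looks like from application code. The device path /dev/myled is a hypothetical character driver that some board bring-up code is assumed to have already registered with register_driver():

#include <fcntl.h>
#include <unistd.h>

/* Hypothetical: assumes board bring-up already did something like
 * register_driver("/dev/myled", &g_myled_fops, 0666, NULL);
 * so an inode named "myled" exists under "dev" in the pseudo file system.
 */

int blink_once(void)
{
  int fd = open("/dev/myled", O_WRONLY);  /* Resolved via the in-RAM inode tree */
  if (fd < 0)
    {
      return -1;
    }

  write(fd, "1", 1);   /* Dispatched to the driver's write() method */
  write(fd, "0", 1);
  close(fd);           /* Drops the reference taken by open() */
  return 0;
}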

Mounted File Systems
     The simple in-memory pseudo file system described above can be extended to give access to real storage through block devices. NuttX supports the standard mount() interface, which allows a block device to be bound to a mount point inside the pseudo file system. NuttX currently supports the standard VFAT and ROMFS file systems, its own NXFFS, and a network file system client (NFS client). The original wording is a bit muddled; the short version is that the NuttX pseudo file system provides the mount points onto which these file systems are mounted.

Comparison to Linux
    Compared with Linux: the NuttX root file system is a pseudo file system, and real file systems (which live on real media) may be mounted inside it, whereas the Linux root file system itself lives on real media. My reading of this is still a guess: the pseudo file system is simply never persisted and disappears at power-off, yet it offers all the interfaces and behavior of a file system (I should dig up concrete examples to confirm this some day).

The text below draws on the article 《NuttX文件系统学习之关键数据结构及设备注册》 (a study of the key data structures and device registration in the NuttX file system).
     As usual, let's start by laying out a framework; the translated passages above don't leave a very clear structure and feel a bit foggy, so let's sort things out directly.

The NuttX file system defines character driver devices, block driver devices, mountpoint devices and so on. Recall that the two key data structures of the earlier character drivers were xxx_dev_s and xxx_ops_s. Below we first walk through how a character device is registered, to get a feel for the NuttX file system framework.

Character driver registration flow
     Previously, after filling in the xxx_dev_s and xxx_ops_s structures, we simply called register_driver() to register the device;

int register_driver(FAR const char *path,
                    FAR const struct file_operations *fops,
                    mode_t mode, FAR void *priv)
{
  FAR struct inode *node;
  int ret;

  /* Insert a dummy node -- we need to hold the inode semaphore because we
   * will have a momentarily bad structure.
   */

  inode_semtake();
  ret = inode_reserve(path, &node);
  if (ret >= 0)
    {
      /* We have it, now populate it with driver specific information.
       * NOTE that the initial reference count on the new inode is zero.
       */

      INODE_SET_DRIVER(node);

      node->u.i_ops   = fops;
#ifdef CONFIG_FILE_MODE
      node->i_mode    = mode;
#endif
      node->i_private = priv;
      ret             = OK;
    }

  inode_semgive();
  return ret;
}
Clearly, the driver data structures we filled in are registered into a struct inode; this inode is the entry point for the rest of the analysis.
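For context, a character driver's side of this is just a statically initialized struct file_operations plus a register_driver() call. The sketch below continues the hypothetical /dev/myled driver from earlier; the names (myled_*, board_myled_set) are invented for illustration and the field order of the initializer may differ slightly between NuttX versions:

#include <nuttx/fs/fs.h>

/* Hypothetical write method: interprets '0'/'1' to switch an LED */

static ssize_t myled_write(FAR struct file *filep, FAR const char *buffer,
                           size_t buflen)
{
  if (buflen > 0)
    {
      board_myled_set(buffer[0] == '1');  /* hypothetical board helper */
    }

  return buflen;  /* Pretend everything was consumed */
}

static const struct file_operations g_myled_fops =
{
  NULL,         /* open  (optional in NuttX, may be NULL) */
  NULL,         /* close */
  NULL,         /* read */
  myled_write,  /* write */
  NULL,         /* seek */
  NULL          /* ioctl */
};

/* Creates the /dev/myled inode: g_myled_fops lands in node->u.i_ops and
 * priv in node->i_private, exactly as register_driver() above shows.
 */

int myled_register(FAR void *priv)
{
  return register_driver("/dev/myled", &g_myled_fops, 0666, priv);
}

Now back to struct inode itself: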

/* This structure represents one inode in the Nuttx pseudo-file system */

struct inode
{
  FAR struct inode *i_peer;     /* Link to same level inode */
  FAR struct inode *i_child;    /* Link to lower level inode */
  int16_t           i_crefs;    /* References to inode */
  uint16_t          i_flags;    /* Flags for inode */
  union inode_ops_u u;          /* Inode operations */
#ifdef CONFIG_FILE_MODE
  mode_t            i_mode;     /* Access mode flags */
#endif
  FAR void         *i_private;  /* Per inode driver private data */
  char              i_name[1];  /* Name of inode (variable) */
};
Note two key members here, which the original post highlighted in red: the operations union u and the per-inode private pointer i_private. Both will come up again when we analyze block device drivers. Let's keep expanding:
/* These are the various kinds of operations that can be associated with
 * an inode.
 */

union inode_ops_u
{
  FAR const struct file_operations    *i_ops;    /* Driver operations for inode */
#ifndef CONFIG_DISABLE_MOUNTPOINT
  FAR const struct block_operations   *i_bops;   /* Block driver operations */
  FAR const struct mountpt_operations *i_mops;   /* Operations on a mountpoint */
#endif
#ifdef CONFIG_FS_NAMED_SEMAPHORES
  FAR struct nsem_inode_s             *i_nsem;   /* Named semaphore */
#endif
#ifndef CONFIG_DISABLE_MQUEUE
  FAR struct mqueue_inode_s           *i_mqueue; /* POSIX message queue */
#endif
#ifdef CONFIG_PSEUDOFS_SOFTLINKS
  FAR char                            *i_link;   /* Full path to link target */
#endif
};

Two things stand out here. First, NuttX consistently uses the xxx_dev_s / xxx_ops_s pattern in its frameworks, and inode is no exception. Second, inode_ops_u defines several kinds of ops (file_operations, block_operations, mountpt_operations): the first is the familiar abstract file interface for character and other simple drivers, while the latter two clearly target block devices and mounted file systems. The next question is how these drivers, once registered into an inode, actually live inside the NuttX file system.
     One more point: the inode is the abstraction of the interface to every kind of device, and what distinguishes the device classes is really just the xxx_ops_s they carry, which is exactly why the ops field is a union. Inside the registration interface, the code first walks the global structure of existing nodes using the device name; if a node with the same name already exists, the device has already been registered, otherwise a struct inode is allocated for it. The inode's members are then initialized from the device's name, its ops (bops in the block device case), mode and priv, which completes the registration. In short, these inodes live in a tree structure inside NuttX. Here is inode_reserve(), which performs the actual insertion into that tree:
/****************************************************************************
 * Name: inode_reserve
 *
 * Description:
 *   Reserve an (initialized) inode the pseudo file system.  The initial
 *   reference count on the new inode is zero.
 *
 * Input parameters:
 *   path  - The path to the inode to create
 *   inode - The location to return the inode pointer
 *
 * Returned Value:
 *   Zero on success (with the inode point in 'inode'); A negated errno
 *   value is returned on failure:
 *
 *   EINVAL - 'path' is invalid for this operation
 *   EEXIST - An inode already exists at 'path'
 *   ENOMEM - Failed to allocate in-memory resources for the operation
 *
 * Assumptions:
 *   Caller must hold the inode semaphore
 *
 ****************************************************************************/

int inode_reserve(FAR const char *path, FAR struct inode **inode)
{
  struct inode_search_s desc;
  FAR struct inode *left;
  FAR struct inode *parent;
  FAR const char *name;
  int ret;

  /* Assume failure */

  DEBUGASSERT(path != NULL && inode != NULL);
  *inode = NULL;

  /* Handle paths that are interpreted as the root directory */

  if (path[0] == '\0' || path[0] != '/')
    {
      return -EINVAL;
    }

  /* Find the location to insert the new subtree */

  SETUP_SEARCH(&desc, path, false);

  ret = inode_search(&desc);
  if (ret >= 0)
    {
      /* It is an error if the node already exists in the tree (or if it
       * lies within a mountpoint, we don't distinguish here).
       */

      ret = -EEXIST;
      goto errout_with_search;
    }

  /* Now we know where to insert the subtree */

  name   = desc.path;
  left   = desc.peer;
  parent = desc.parent;

  for (; ; )
    {
      FAR struct inode *node;

      /* Create a new node -- we need to know if this is the
       * the leaf node or some intermediary.  We can find this
       * by looking at the next name.
       */

      FAR const char *nextname = inode_nextname(name);
      if (*nextname != '\0')
        {
          /* Insert an operationless node */

          node = inode_alloc(name);
          if (node != NULL)
            {
              inode_insert(node, left, parent);

              /* Set up for the next time through the loop */

              name   = nextname;
              left   = NULL;
              parent = node;
              continue;
            }
        }
      else
        {
          node = inode_alloc(name);
          if (node != NULL)
            {
              inode_insert(node, left, parent);
              *inode = node;
              ret = OK;
              break;
            }
        }

      /* We get here on failures to allocate node memory */

      ret = -ENOMEM;
      break;
    }

errout_with_search:
  RELEASE_SEARCH(&desc);
  return ret;
}
Let's continue with the inode_insert() function:
/****************************************************************************
 * Name: inode_insert
 ****************************************************************************/

static void inode_insert(FAR struct inode *node,
                         FAR struct inode *peer,
                         FAR struct inode *parent)
{
  /* If peer is non-null, then new node simply goes to the right
   * of that peer node.
   */

  if (peer)
    {
      node->i_peer = peer->i_peer;
      peer->i_peer = node;
    }

  /* If parent is non-null, then it must go at the head of its
   * list of children.
   */

  else if (parent)
    {
      node->i_peer    = parent->i_child;
      parent->i_child = node;
    }

  /* Otherwise, this must be the new root_inode */

  else
    {
      node->i_peer = g_root_inode;
      g_root_inode = node;
    }
}
At this point the driver registration path is essentially complete; if you are interested you can keep digging into the finer details. Next we move on to how block device drivers are registered.
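Before that, a quick illustration of what the tree ends up looking like. The layout below is my own schematic, not taken from the original post; the exact peer ordering depends on inode_search()/inode_compare(), so treat it as indicative only:

/* Schematic in-RAM inode tree after registering, say, /dev/console and
 * /dev/ttyS0 (peer order is illustrative only):
 *
 *   g_root_inode
 *        |
 *        v  (i_child)
 *      "dev"                  <- operationless intermediate node
 *        |
 *        v  (i_child)
 *     "console" --i_peer--> "ttyS0"
 *
 * Each leaf carries its driver's file_operations in u.i_ops and the
 * driver state in i_private.
 */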


Block device driver registration flow
     The analysis above already sketches the overall framework. Block devices differ in a few small ways, but the pattern is the same, so let's extend it by analogy.
Like character devices, block devices also exist on the VFS tree in the abstract form of an inode; the difference is that the xxx_ops_s side now has dedicated structures: i_bops ("Block driver operations") and i_mops ("Operations on a mountpoint"). The extra one is i_mops, which corresponds to the file system interface, i.e. file systems such as fat/nfs/nxffs/romfs/smartfs..., all of which live under nuttx/fs/.
     As for i_bops, it differs little from the character-driver ops except for the extra geometry method, which describes the block device's physical properties (sector size, number of sectors and so on). Since both kinds of node exist as inodes anyway, we will focus on i_mops below;
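Before moving on to i_mops, here is struct block_operations for reference, as I read it in include/nuttx/fs/fs.h; the exact field set and argument types vary a little between NuttX versions, so treat this as a snapshot. Note the geometry method just mentioned:

struct block_operations
{
  int     (*open)(FAR struct inode *inode);
  int     (*close)(FAR struct inode *inode);
  ssize_t (*read)(FAR struct inode *inode, FAR unsigned char *buffer,
                  size_t start_sector, unsigned int nsectors);
  ssize_t (*write)(FAR struct inode *inode, FAR const unsigned char *buffer,
                   size_t start_sector, unsigned int nsectors);
  int     (*geometry)(FAR struct inode *inode, FAR struct geometry *geometry);
  int     (*ioctl)(FAR struct inode *inode, int cmd, unsigned long arg);
#ifndef CONFIG_DISABLE_PSEUDOFS_OPERATIONS
  int     (*unlink)(FAR struct inode *inode);
#endif
};

And struct mountpt_operations, which is what we will walk through next, is reproduced below: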
struct mountpt_operations
{
  /* The mountpoint open method differs from the driver open method
   * because it receives (1) the inode that contains the mountpoint
   * private data, (2) the relative path into the mountpoint, and (3)
   * information to manage privileges.
   */

  int     (*open)(FAR struct file *filp, FAR const char *relpath,
                  int oflags, mode_t mode);

  /* The following methods must be identical in signature and position because
   * the struct file_operations and struct mountp_operations are treated like
   * unions.
   */

  int     (*close)(FAR struct file *filp);
  ssize_t (*read)(FAR struct file *filp, FAR char *buffer, size_t buflen);
  ssize_t (*write)(FAR struct file *filp, FAR const char *buffer, size_t buflen);
  off_t   (*seek)(FAR struct file *filp, off_t offset, int whence);
  int     (*ioctl)(FAR struct file *filp, int cmd, unsigned long arg);

  /* The two structures need not be common after this point.  The following
   * are extended methods needed to deal with the unique needs of mounted
   * file systems.
   *
   * Additional open-file-specific mountpoint operations:
   */

  int     (*sync)(FAR struct file *filp);
  int     (*dup)(FAR const struct file *oldp, FAR struct file *newp);

  /* Directory operations */

  int     (*opendir)(FAR struct inode *mountpt, FAR const char *relpath,
                     FAR struct fs_dirent_s *dir);
  int     (*closedir)(FAR struct inode *mountpt, FAR struct fs_dirent_s *dir);
  int     (*readdir)(FAR struct inode *mountpt, FAR struct fs_dirent_s *dir);
  int     (*rewinddir)(FAR struct inode *mountpt, FAR struct fs_dirent_s *dir);

  /* General volume-related mountpoint operations: */

  int     (*bind)(FAR struct inode *blkdriver, FAR const void *data,
                  FAR void **handle);
  int     (*unbind)(FAR void *handle, FAR struct inode **blkdriver);
  int     (*statfs)(FAR struct inode *mountpt, FAR struct statfs *buf);

  /* Operations on paths */

  int     (*unlink)(FAR struct inode *mountpt, FAR const char *relpath);
  int     (*mkdir)(FAR struct inode *mountpt, FAR const char *relpath,
                   mode_t mode);
  int     (*rmdir)(FAR struct inode *mountpt, FAR const char *relpath);
  int     (*rename)(FAR struct inode *mountpt, FAR const char *oldrelpath,
                    FAR const char *newrelpath);
  int     (*stat)(FAR struct inode *mountpt, FAR const char *relpath,
                  FAR struct stat *buf);

  /* NOTE: More operations will be needed here to support:  disk usage stats
   * file stat(), file attributes, file truncation, etc.
   */
};
Getting to the point: in NuttX, a mount point and a block device are different things. When a block device acts as a device in its own right, it behaves just like a character device: it builds its own xxx_dev_s (ftl_struct_s / smart_struct_s / ...) and its xxx_ops_s (the i_bops side), and then registers itself through a similar call, register_blockdriver();
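register_blockdriver() itself is not shown in the post, but its body closely mirrors register_driver(). The sketch below is reconstructed from memory of fs/driver/fs_registerblockdriver.c, so check your own tree for the exact version:

int register_blockdriver(FAR const char *path,
                         FAR const struct block_operations *bops,
                         mode_t mode, FAR void *priv)
{
  FAR struct inode *node;
  int ret;

  /* Reserve an inode at 'path' in the pseudo file system, exactly as the
   * character driver case does.
   */

  inode_semtake();
  ret = inode_reserve(path, &node);
  if (ret >= 0)
    {
      /* Mark the inode as a block driver and hook up i_bops instead of i_ops */

      INODE_SET_BLOCK(node);
      node->u.i_bops  = bops;
#ifdef CONFIG_FILE_MODE
      node->i_mode    = mode;
#endif
      node->i_private = priv;
      ret = OK;
    }

  inode_semgive();
  return ret;
}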
     The mount point side is a little less familiar. In the mount() function in nuttx/fs/fs_mount.c we find the declaration FAR struct inode *mountpt_inode;, which shows that a mount point is also abstracted as an inode, and from there the rest becomes easy to understand;
union inode_ops_u
{
  FAR const struct file_operations    *i_ops;  /* Driver operations for inode */
#ifndef CONFIG_DISABLE_MOUNTPOINT
  FAR const struct block_operations   *i_bops; /* Block driver operations */
  FAR const struct mountpt_operations *i_mops; /* Operations on a mountpoint */
#endif
};
So we can think of NuttX inodes as coming in three flavors (character driver, block driver, mount point); and looking at struct mountpt_operations, its members are simply the abstract interface that a mounted file system has to implement.

File system mount flow
Having covered block device registration, what remains is the file system mount flow;
/****************************************************************************
 * Name: mount
 *
 * Description:
 *   mount() attaches the filesystem specified by the 'source' block device
 *   name into the root file system at the path specified by 'target.'
 *
 * Return:
 *   Zero is returned on success; -1 is returned on an error and errno is
 *   set appropriately:
 *
 *   EACCES A component of a path was not searchable or mounting a read-only
 *      filesystem was attempted without giving the MS_RDONLY flag.
 *   EBUSY 'source' is already mounted.
 *   EFAULT One of the pointer arguments points outside the user address
 *      space.
 *   EINVAL 'source' had an invalid superblock.
 *   ENODEV 'filesystemtype' not configured
 *   ENOENT A pathname was empty or had a nonexistent component.
 *   ENOMEM Could not allocate a memory to copy filenames or data into.
 *   ENOTBLK 'source' is not a block device
 *
 ****************************************************************************/

int mount(FAR const char *source, FAR const char *target,
          FAR const char *filesystemtype, unsigned long mountflags,
          FAR const void *data)
{
#if defined(BDFS_SUPPORT) || defined(NONBDFS_SUPPORT)
#ifdef BDFS_SUPPORT
  FAR struct inode *blkdrvr_inode = NULL;
#endif
  FAR struct inode *mountpt_inode;
  FAR const struct mountpt_operations *mops;
  void *fshandle;
  int errcode;
  int ret;

  /* Verify required pointer arguments */

  DEBUGASSERT(target && filesystemtype);

  /* Find the specified filesystem.  Try the block driver file systems first */

#ifdef BDFS_SUPPORT
  if (source && (mops = mount_findfs(g_bdfsmap, filesystemtype)) != NULL)
    {
      /* Make sure that a block driver argument was provided */

      DEBUGASSERT(source);

      /* Find the block driver */

      ret = find_blockdriver(source, mountflags, &blkdrvr_inode);
      if (ret < 0)
        {
          fdbg("Failed to find block driver %s\n", source);
          errcode = -ret;
          goto errout;
        }
    }
  else
#endif /* BDFS_SUPPORT */
#ifdef NONBDFS_SUPPORT
  if ((mops = mount_findfs(g_nonbdfsmap, filesystemtype)) != NULL)
    {
    }
  else
#endif /* NONBDFS_SUPPORT */
    {
      fdbg("Failed to find file system %s\n", filesystemtype);
      errcode = ENODEV;
      goto errout;
    }

  /* Insert a dummy node -- we need to hold the inode semaphore
   * to do this because we will have a momentarily bad structure.
   */

  inode_semtake();
  ret = inode_reserve(target, &mountpt_inode);
  if (ret < 0)
    {
      /* inode_reserve can fail for a couple of reasons, but the most likely
       * one is that the inode already exists. inode_reserve may return:
       *
       *  -EINVAL - 'path' is invalid for this operation
       *  -EEXIST - An inode already exists at 'path'
       *  -ENOMEM - Failed to allocate in-memory resources for the operation
       */

      fdbg("Failed to reserve inode\n");
      errcode = -ret;
      goto errout_with_semaphore;
    }

  /* Bind the block driver to an instance of the file system.  The file
   * system returns a reference to some opaque, fs-dependent structure
   * that encapsulates this binding.
   */

  if (!mops->bind)
    {
      /* The filesystem does not support the bind operation ??? */

      fdbg("Filesystem does not support bind\n");
      errcode = EINVAL;
      goto errout_with_mountpt;
    }

  /* Increment reference count for the reference we pass to the file system */

#ifdef BDFS_SUPPORT
#ifdef NONBDFS_SUPPORT
  if (blkdrvr_inode)
#endif
    {
      blkdrvr_inode->i_crefs++;
    }
#endif

  /* On failure, the bind method returns -errorcode */

#ifdef BDFS_SUPPORT
  ret = mops->bind(blkdrvr_inode, data, &fshandle);
#else
  ret = mops->bind(NULL, data, &fshandle);
#endif
  if (ret != 0)
    {
      /* The inode is unhappy with the blkdrvr for some reason.  Back out
       * the count for the reference we failed to pass and exit with an
       * error.
       */

      fdbg("Bind method failed: %d\n", ret);
#ifdef BDFS_SUPPORT
#ifdef NONBDFS_SUPPORT
      if (blkdrvr_inode)
#endif
        {
          blkdrvr_inode->i_crefs--;
        }
#endif
      errcode = -ret;
      goto errout_with_mountpt;
    }

  /* We have it, now populate it with driver specific information. */

  INODE_SET_MOUNTPT(mountpt_inode);

  mountpt_inode->u.i_mops  = mops;
#ifdef CONFIG_FILE_MODE
  mountpt_inode->i_mode    = mode;
#endif
  mountpt_inode->i_private = fshandle;
  inode_semgive();

  /* We can release our reference to the blkdrver_inode, if the filesystem
   * wants to retain the blockdriver inode (which it should), then it must
   * have called inode_addref().  There is one reference on mountpt_inode
   * that will persist until umount() is called.
   */

#ifdef BDFS_SUPPORT
#ifdef NONBDFS_SUPPORT
  if (blkdrvr_inode)
#endif
    {
      inode_release(blkdrvr_inode);
    }
#endif
  return OK;

  /* A lot of goto's!  But they make the error handling much simpler */

errout_with_mountpt:
  mountpt_inode->i_crefs = 0;
  inode_remove(target);
  inode_semgive();
#ifdef BDFS_SUPPORT
#ifdef NONBDFS_SUPPORT
  if (blkdrvr_inode)
#endif
    {
      inode_release(blkdrvr_inode);
    }
#endif

  inode_release(mountpt_inode);
  goto errout;

errout_with_semaphore:
  inode_semgive();
#ifdef BDFS_SUPPORT
#ifdef NONBDFS_SUPPORT
  if (blkdrvr_inode)
#endif
    {
      inode_release(blkdrvr_inode);
    }
#endif

errout:
  set_errno(errcode);
  return ERROR;

#else
  fdbg("No filesystems enabled\n");
  set_errno(ENOSYS);
  return ERROR;
#endif /* BDFS_SUPPORT || NONBDFS_SUPPORT */
}

source: the block device node name, e.g. /dev/ram1
target: the path where the file system is mounted, e.g. /mnt/ramdisk0
filesystemtype: the file system type, e.g. procfs, tmpfs, vfat, smartfs, ...
mountflags: attributes applied at mount time, e.g. read-only
data: private data, reserved for extension; pass NULL when unused (a minimal call using these parameters is sketched just below)
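Here is a minimal usage sketch matching the parameters above. It assumes /dev/ram1 has already been registered as a block driver (for example by a RAM disk driver) and that FAT support is enabled in the configuration; the function name mount_ramdisk() is my own:

#include <sys/mount.h>
#include <errno.h>
#include <stdio.h>

int mount_ramdisk(void)
{
  int ret = mount("/dev/ram1", "/mnt/ramdisk0", "vfat", 0, NULL);
  if (ret < 0)
    {
      printf("mount failed: %d\n", errno);  /* e.g. ENODEV if vfat not configured */
      return ret;
    }

  /* ... use /mnt/ramdisk0 through open()/read()/write() ... */

  return umount("/mnt/ramdisk0");
}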
     Internally, the first step of mount() is to use the file system type to search the global arrays that record each supported file system's type and operations, in order to find the operations (read/write methods and so on) of the file system being mounted (see the mount_findfs() sketch after the tables);
static const struct fsmap_t g_bdfsmap[] =
{
  { "vfat",    &fat_operations },
  { "romfs",   &romfs_operations },
  { "smartfs", &smartfs_operations },
  { NULL,      NULL },
};

static const struct fsmap_t g_nonbdfsmap[] =
{
  { "nxffs",   &nxffs_operations },
  { "tmpfs",   &tmpfs_operations },
  { "nfs",     &nfs_operations },
  { "binfs",   &binfs_operations },
  { "procfs",  &procfs_operations },
  { "hostfs",  &hostfs_operations },
  { NULL,      NULL },
};
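mount_findfs() itself is just a linear scan over one of these tables. The sketch below is simplified from memory of fs/mount/fs_mount.c, so treat it as an approximation of the real helper:

#include <string.h>

struct fsmap_t
{
  FAR const char                      *fs_filesystemtype;
  FAR const struct mountpt_operations *fs_mops;
};

static FAR const struct mountpt_operations *
mount_findfs(FAR const struct fsmap_t *fstab, FAR const char *filesystemtype)
{
  FAR const struct fsmap_t *fsmap;

  /* Walk the table until the NULL sentinel, comparing type names */

  for (fsmap = fstab; fsmap->fs_filesystemtype; fsmap++)
    {
      if (strcmp(filesystemtype, fsmap->fs_filesystemtype) == 0)
        {
          return fsmap->fs_mops;
        }
    }

  return NULL;  /* Unknown or unconfigured file system type */
}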
The second step is to take the device node name given as source, e.g. /dev/ram1, and find the inode describing that device (by traversing the tree rooted at g_root_inode);
The third step is to reserve a new inode for the mount point at the path target and insert it into the tree under g_root_inode;
The fourth step is to call the bind() method from the operations found in step one, which initializes the file system.
Regarding mount: every file system has its own internal data structure for managing its state, and those structures all differ from one another;
but each of them contains one common member, struct inode *fs_blkdriver, which points to the inode of the underlying block device.
Internally, bind() first allocates that per-file-system structure and initializes members such as
fs->fs_blkdriver = blkdriver;
fs->fs_sem
It then calls the file system's own mount routine, e.g. smartfs_mount()/procfs_mount()/fat_mount(), which performs the real mount work for that particular file system;
and finally it hands the internal structure back to the caller through the fshandle pointer.
Note: each file system's internal mount implementation follows that file system's own on-media design, and they differ a great deal, so the details are omitted here.
The last step is for mount() to initialize the mountpt_inode members shown in the code above (u.i_mops and i_private), which completes mounting the volume; a schematic bind() is sketched below.
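To tie the bind() story together, here is a schematic of the pattern described above. The struct and function names (myfs_*) are invented for illustration; real file systems (fat_bind()/fat_mount(), smartfs_bind(), ...) each have their own state layout, so treat this purely as a sketch:

#include <nuttx/fs/fs.h>
#include <nuttx/kmalloc.h>
#include <semaphore.h>
#include <errno.h>

/* Hypothetical per-mount state: every real file system has its own version
 * of this, but each one keeps a pointer to the block driver inode.
 */

struct myfs_mountpt_s
{
  FAR struct inode *fs_blkdriver;  /* The block driver inode passed to bind() */
  sem_t             fs_sem;        /* Serializes access to the volume */

  /* ... superblock / FAT / allocation state would follow here ... */
};

static int myfs_bind(FAR struct inode *blkdriver, FAR const void *data,
                     FAR void **handle)
{
  FAR struct myfs_mountpt_s *fs;

  /* 1. Allocate the private per-mount structure */

  fs = (FAR struct myfs_mountpt_s *)kmm_zalloc(sizeof(struct myfs_mountpt_s));
  if (fs == NULL)
    {
      return -ENOMEM;
    }

  /* 2. Remember the block driver and initialize the lock */

  fs->fs_blkdriver = blkdriver;
  sem_init(&fs->fs_sem, 0, 1);

  /* 3. Read/validate the on-media structures (the real work done by
   *    fat_mount(), smartfs_mount(), ...); omitted here.
   */

  /* 4. Hand the state back to mount(), which stores it in
   *    mountpt_inode->i_private.
   */

  *handle = (FAR void *)fs;
  return OK;
}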