Ubuntu 16.04 + FastDFS + Nginx Distributed File System


Introduction to FastDFS

FastDFS is an open-source, lightweight distributed file system written in C. It manages files and provides file storage, file synchronization, and file access (upload and download), solving the problems of large-capacity storage and load balancing. It is particularly well suited to online services built around files, such as photo-album and video sites.
Comparable distributed file systems include Google's GFS, HDFS (Hadoop), and TFS (Taobao).
FastDFS has two roles: Tracker and Storage.
Tracker: handles scheduling and load balancing; it manages all Storage nodes and Groups. Each Storage node connects to the Tracker after startup, reports which Group it belongs to, and maintains a periodic heartbeat.
Storage: the storage node, providing capacity and replication. Storage is organized in Groups; each Group can contain multiple Storage nodes, which mirror each other's data.

Architecture Diagram

Environment

  • OS: Ubuntu 16.04
  • FastDFS version: 5.0.5
  • Nginx version: 1.11.7

Tracker:192.168.0.3
Storage1:192.168.0.3
Storage2:192.168.0.4

Installing FastDFS (required on every server)

  1. FastDFS 5.0.5 depends on libfastcommon, so install libfastcommon first:

wget https://github.com/happyfish100/libfastcommon/archive/V1.0.7.tar.gz
tar -zxvf V1.0.7.tar.gz
cd libfastcommon-1.0.7
./make.sh
./make.sh install

  2. Create the symbolic links:

ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
ln -s /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so
ln -s /usr/lib64/libfdfsclient.so /usr/local/lib/libfdfsclient.so
ln -s /usr/lib64/libfdfsclient.so /usr/lib/libfdfsclient.so

  3. Install FastDFS:

wget https://github.com/happyfish100/fastdfs/archive/V5.05.tar.gz
tar -zxvf V5.05.tar.gz
cd fastdfs-5.05
./make.sh
./make.sh install

Configuring the Tracker and Storage

After FastDFS is installed, an fdfs directory appears under /etc. Inside it you will find three sample files with a .sample suffix.

  1. Configure the Tracker server (this article uses 192.168.0.3):

cp tracker.conf.sample tracker.conf
vim tracker.conf

Open tracker.conf and modify the following line:

base_path=/data/fastdfs/tracker

Create the /data/fastdfs/tracker directory:

mkdir -p /data/fastdfs/tracker

Start the tracker service:

fdfs_trackerd /etc/fdfs/tracker.conf start

Stop the tracker service:

fdfs_trackerd /etc/fdfs/tracker.conf stop

After starting the tracker service, check that it is listening:

netstat -unltp|grep fdfs

Inspect the /data/fastdfs/tracker directory; two new subdirectories have appeared, holding data and logs:

ls /data/fastdfs/tracker
data  logs

The tracker is now installed successfully.

  2. Configure the Storage servers (both 192.168.0.3 and 192.168.0.4):

cp storage.conf.sample storage.conf
vim storage.conf

Open storage.conf and modify the following. Note: if clients will access the cluster from outside your LAN, tracker_server must be set to a public IP; a private IP is fine for LAN-only access.

# the base path to store data and log files
base_path=/data/fastdfs/storage
# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
store_path0=/data/fastdfs/storage
# tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
# tracker server IP and port
tracker_server=192.168.0.3:22122

Create the /data/fastdfs/storage directory:

mkdir -p /data/fastdfs/storage

Start the storage service:

fdfs_storaged /etc/fdfs/storage.conf start

If startup fails, check the logs under /data/fastdfs/storage/logs.
Inspect /data/fastdfs/storage; the logs and data directories have been created.
After starting the storage service, check that it is listening:

netstat -unltp|grep fdfs

Storage listens on port 23000 by default.
The Storage node is now installed successfully.

Once all storage nodes are up, you can check the cluster status from any storage node with:

/usr/bin/fdfs_monitor /etc/fdfs/storage.conf

(Screenshots of the fdfs_monitor output omitted.)

The monitor output shows the status of both Storage nodes.

Testing File Upload

Pick any of the servers; here I use 192.168.0.3.

Again go into /etc/fdfs and edit client.conf:

cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf
vim /etc/fdfs/client.conf

Modify as follows:

# the base path to store log files
base_path=/data/fastdfs/client
# tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
# tracker server IP and port
tracker_server=192.168.0.3:22122

Create the /data/fastdfs/client directory:

mkdir -p /data/fastdfs/client

Upload an image (named test.jpg) from the /opt directory:

fdfs_test /etc/fdfs/client.conf upload /opt/test.jpg

If the upload succeeds, the command prints the resulting file ID. On both Storage servers, the file can then be found under /data/fastdfs/storage/data/00/00, confirming that it was stored and replicated.
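The upload returns a file ID of the form group/M00/xx/xx/filename. As a rough sketch (FILE_ID below is a hypothetical example, and STORE_PATH0 is the store_path0 value configured in storage.conf above), POSIX shell parameter expansion shows how such an ID maps to the on-disk location we just checked:

```shell
#!/bin/sh
# Hypothetical file ID of the shape returned by fdfs_test
FILE_ID="group1/M00/00/00/wKgAA1oXweqAVyBDAAFsYaVScOM276.jpg"
STORE_PATH0="/data/fastdfs/storage"   # store_path0 from storage.conf

group=${FILE_ID%%/*}       # the group name, e.g. group1
rest=${FILE_ID#*/M00/}     # path below M00; M00 denotes store_path0
disk_path="$STORE_PATH0/data/$rest"   # where the replica lives on each Storage node

echo "group:     $group"
echo "disk path: $disk_path"
```

The M00 token in the ID stands for store_path0, and the file itself is placed under that path's data subdirectory, which is why the test file shows up in /data/fastdfs/storage/data/00/00.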

Download and install fastdfs-nginx-module on all Storage nodes

FastDFS stores files on Storage servers under the coordination of the Tracker, but storage servers within the same group must replicate files to each other, which introduces synchronization delay. Suppose the Tracker directs an upload to 192.168.0.3; once the upload succeeds, the file ID is returned to the client. The FastDFS cluster then synchronizes the file to the peer storage node 192.168.0.4. If a client uses that file ID to fetch the file from 192.168.0.4 before replication has finished, the file will not be found. fastdfs-nginx-module solves this by redirecting such requests to the source server, so clients never see "file not found" errors caused by replication lag.

Install fastdfs-nginx-module:

wget http://jaist.dl.sourceforge.net/project/fastdfs/FastDFS%20Nginx%20Module%20Source%20Code/fastdfs-nginx-module_v1.16.tar.gz
tar -zxvf fastdfs-nginx-module_v1.16.tar.gz
cd /opt/fastdfs/fastdfs-nginx-module/src
vim config

Change the following line; without this change the Nginx build will fail later:

CORE_INCS="$CORE_INCS /usr/local/include/fastdfs /usr/local/include/fastcommon/"

to:

CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"

Copy the configuration file from the fastdfs-nginx-module source into /etc/fdfs and edit it:

cd fastdfs-nginx-module/src
cp mod_fastdfs.conf /etc/fdfs
cd /etc/fdfs
vim mod_fastdfs.conf

tracker_server=192.168.0.3:22122   # tracker server IP and port
url_have_group_name=true           # prefix access URLs with the group name
store_path0=/data/fastdfs/storage  # file storage path

Installing and Configuring Nginx

apt-get install build-essential
apt-get install libtool
apt-get install libpcre3 libpcre3-dev
apt-get install zlib1g-dev
apt-get install openssl
wget http://nginx.org/download/nginx-1.11.7.tar.gz
tar -zxvf nginx-1.11.7.tar.gz
cd nginx-1.11.7
./configure --prefix=/usr/local/nginx --add-module=/root/fastdfs-nginx-module/src
make && make install

Start, stop, reload the configuration, and test whether the configuration file is valid:

sudo /usr/local/nginx/sbin/nginx                  # start
sudo /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf   # start with an explicit config file
sudo /usr/local/nginx/sbin/nginx -t               # test whether the configuration file is valid
sudo /usr/local/nginx/sbin/nginx -s stop          # stop
sudo /usr/local/nginx/sbin/nginx -s reload        # reload the configuration file

Nginx listens on port 80 by default; check with:

netstat -anp|grep 80

Configuring Nginx access on FastDFS Storage (on the Storage machines)

Copy the relevant FastDFS configuration files into /etc/fdfs:

cd /opt/fastdfs/fastdfs-5.05/conf
cp http.conf mime.types /etc/fdfs

Edit nginx.conf so that the listen port matches http.server_port=8888 in /etc/fdfs/storage.conf:

listen       8888;
location /group1/M00 {
    root /data/fastdfs/storage/;
    ngx_fastdfs_module;
}

Notes
The listen port 8888 must match http.server_port=8888 in /etc/fdfs/storage.conf; 8888 is the default, so if you want to serve on port 80 instead, change both places accordingly.
When a Storage host serves multiple groups, the access path includes the group name, e.g. /group1/M00/00/00/xxx, and the corresponding Nginx configuration is:

location ~/group([0-9])/M00 {
    ngx_fastdfs_module;
}

If downloads keep returning 404, change user nobody on the first line of nginx.conf to user root and restart Nginx.
Reload the Nginx configuration:

/usr/local/nginx/sbin/nginx -s reload
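As a quick sanity check, the multi-group location pattern above can be exercised against sample request URIs with grep, which accepts the same extended-regex syntax Nginx uses here (the URIs below are made-up examples):

```shell
#!/bin/sh
# Check sample request URIs against the location pattern /group([0-9])/M00
for uri in /group1/M00/00/00/a.jpg /group9/M00/11/22/b.png /groupX/M00/c.jpg; do
    if echo "$uri" | grep -qE '/group([0-9])/M00'; then
        echo "match:    $uri"
    else
        echo "no match: $uri"
    fi
done
```

Note that [0-9] matches a single digit, so a deployment with ten or more groups would need [0-9]+ in the pattern.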

Configuring Nginx access on the FastDFS Tracker (on the Tracker machine)

vim /usr/local/nginx/conf/nginx.conf

upstream fdfs_group1 {
    server 192.168.0.3:8888 weight=1 max_fails=2 fail_timeout=30s;
    server 192.168.0.4:8888 weight=1 max_fails=2 fail_timeout=30s;
}

listen       80;

# load-balancing settings for the group
location /group1/M00 {
    proxy_next_upstream http_502 http_504 error timeout invalid_header;
    proxy_pass http://fdfs_group1;
    expires 30d;
}

Reload the Nginx configuration:

/usr/local/nginx/sbin/nginx -s reload

Testing Access to Uploaded Files

Access the uploaded file from a browser. Suppose the upload returned the file ID group1/M00/00/00/wKgAA1oXweqAVyBDAAFsYaVScOM276_big.jpg.

Via a Storage node: http://ip:8888/group1/M00/00/00/wKgAA1oXweqAVyBDAAFsYaVScOM276_big.jpg

Via the Tracker: http://ip/group1/M00/00/00/wKgAA1oXweqAVyBDAAFsYaVScOM276_big.jpg
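Given one file ID, the two access URLs differ only in host and port: the Storage URL goes straight to that node's Nginx on 8888, while the Tracker URL hits the load-balancing proxy on 80. A small sketch using the example addresses from this article (a real deployment would substitute its own hosts):

```shell
#!/bin/sh
FILE_ID="group1/M00/00/00/wKgAA1oXweqAVyBDAAFsYaVScOM276_big.jpg"
STORAGE_HOST="192.168.0.3"   # Storage Nginx listens on 8888 (http.server_port)
TRACKER_HOST="192.168.0.3"   # Tracker Nginx listens on 80 and proxies to the group

echo "via Storage: http://$STORAGE_HOST:8888/$FILE_ID"
echo "via Tracker: http://$TRACKER_HOST/$FILE_ID"
```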

Common Commands

/usr/local/nginx/sbin/nginx                  # start Nginx
/usr/local/nginx/sbin/nginx -s reload        # reload the Nginx configuration
fdfs_trackerd /etc/fdfs/tracker.conf start   # start the tracker service
fdfs_storaged /etc/fdfs/storage.conf start   # start the storage service

Accessing FastDFS from Java

package org.csource.fastdfs;

import org.csource.common.MyException;
import org.csource.common.NameValuePair;
import org.junit.Before;
import org.junit.Test;

import java.io.IOException;
import java.net.InetSocketAddress;

/**
 * <p>fastdfs java api</p>
 * Created by zhezhiyong@163.com on 2017/11/27.
 */
public class TestAll {

    private StorageClient client;
    private TrackerServer trackerServer;
    private StorageClient1 client1;

    @Before
    public void testBefore() throws IOException {
        ClientGlobal.setG_connect_timeout(2 * 1000);
        ClientGlobal.setG_network_timeout(30 * 1000);
        ClientGlobal.setG_anti_steal_token(false);
        ClientGlobal.setG_charset("UTF-8");
        ClientGlobal.setG_secret_key(null);
        InetSocketAddress[] trackerServerList = new InetSocketAddress[1];
        for (int i = 0; i < 1; i++) {
            trackerServerList[i] = new InetSocketAddress("ip", 22122);
        }
        ClientGlobal.setG_tracker_group(new TrackerGroup(trackerServerList));
        TrackerClient tracker = new TrackerClient();
        trackerServer = tracker.getConnection();
        client = new StorageClient(trackerServer, null);
        client1 = new StorageClient1(trackerServer, null);
    }

    /**
     * Upload a file
     */
    @Test
    public void upload_appender_file() throws Exception {
        byte[] file_buff;
        NameValuePair[] meta_list;
        String[] results;
        String file_ext_name = "txt";
        String remote_filename;
        String group_name;
        meta_list = new NameValuePair[4];
        meta_list[0] = new NameValuePair("width", "800");
        meta_list[1] = new NameValuePair("heigth", "600");
        meta_list[2] = new NameValuePair("bgcolor", "#FFFFFF");
        meta_list[3] = new NameValuePair("author", "Mike");
        file_buff = "this is a test2".getBytes(ClientGlobal.g_charset);
        System.out.println("file length: " + file_buff.length);
        results = client.upload_appender_file(file_buff, file_ext_name, meta_list);
        /*
        group_name = "";
        results = client.upload_appender_file(group_name, file_buff, "txt", meta_list);
        */
        if (results == null) {
            System.err.println("upload file fail, error code: " + client.getErrorCode());
        } else {
            group_name = results[0];
            remote_filename = results[1];
            System.err.println("group_name: " + group_name + ", remote_filename: " + remote_filename);
            System.err.println(client.get_file_info(group_name, remote_filename));
        }
    }

    /**
     * Upload a file (file-ID variant)
     */
    @Test
    public void upload_appender_file1() throws Exception {
        byte[] file_buff;
        NameValuePair[] meta_list;
        String appender_file_id;
        String file_ext_name = "txt";
        meta_list = new NameValuePair[4];
        meta_list[0] = new NameValuePair("width", "800");
        meta_list[1] = new NameValuePair("heigth", "600");
        meta_list[2] = new NameValuePair("bgcolor", "#FFFFFF");
        meta_list[3] = new NameValuePair("author", "Mike");
        file_buff = "this is a upload_appender_file1".getBytes(ClientGlobal.g_charset);
        System.out.println("file length: " + file_buff.length);
        appender_file_id = client1.upload_appender_file1(file_buff, file_ext_name, meta_list);
        System.out.println("appender_file_id = " + appender_file_id);
        if (appender_file_id == null) {
            System.err.println("upload file fail, error code: " + client.getErrorCode());
        } else {
            System.err.println(client1.get_file_info1(appender_file_id));
        }
    }

    /**
     * Append to a file
     */
    @Test
    public void append_file() throws IOException, MyException {
        byte[] file_buff = "\r\nthis is a slave buff".getBytes(ClientGlobal.g_charset);
        String appender_filename = "M00/00/00/wKgAA1obwKyECQ_4AAAAAP4ZzcQ523.txt";
        String group_name = "group1";
        int errno = client.append_file(group_name, appender_filename, file_buff);
        if (errno == 0) {
            System.err.println(client.get_file_info(group_name, appender_filename));
        } else {
            System.err.println("append file fail, error no: " + errno);
        }
    }
}

Reference: https://github.com/happyfish100/fastdfs-client-java

Troubleshooting

If the Java code fails with the following error:

java.net.SocketTimeoutException: connect timed out
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.PlainSocketImpl.doConnect(Unknown Source)
    at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
    at java.net.PlainSocketImpl.connect(Unknown Source)
    at java.net.SocksSocketImpl.connect(Unknown Source)
    at java.net.Socket.connect(Unknown Source)
    at net.mikesu.fastdfs.client.StorageClientImpl.getSocket(StorageClientImpl.java:25)
    at net.mikesu.fastdfs.client.StorageClientImpl.upload(StorageClientImpl.java:69)
    at net.mikesu.fastdfs.FastdfsClientImpl.upload(FastdfsClientImpl.java:71)
    at net.mikesu.fastdfs.FastdfsClientImpl.upload(FastdfsClientImpl.java:274)
    at net.mikesu.fastdfs.FastdfsClientImpl.upload(FastdfsClientImpl.java:270)
    at net.mikesu.fastdfs.FastdfsClientTest.testFastdfsClient(FastdfsClientTest.java:27)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)

Solution: edit storage.conf and set tracker_server to the server's public IP:

root@zhengzy:/etc/fdfs# vim storage.conf
tracker_server=<public IP>:22122