OpenStack Kilo on VirtualBox with CentOS 7: Deployment Problems and Solutions


1. Setting up the NTP time server on the controller node - chronyd and firewalld

CentOS 7 ships with (and enables) chronyd, a time-sync daemon that is simpler to configure and is managed with chronyc. Still, you can disable chronyd first and then install and configure the more familiar ntpd.

# systemctl stop chronyd && systemctl disable chronyd // stop and disable chronyd

# yum install ntp // already installed on a default system

# vim /etc/ntp.conf // edit the configuration file
# Upstream time servers.
#server 0.centos.pool.ntp.org iburst    # present by default; iburst keeps probing even when the server is unreachable
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 1.cn.pool.ntp.org prefer         # three servers reachable from China; prefer marks this one as preferred
server 1.asia.pool.ntp.org
server 0.asia.pool.ntp.org
server  127.127.1.0                     # if none of the servers above are reachable, fall back to the local clock
fudge   127.127.1.0 stratum 12          # pin the local clock at stratum 12 so it is easy to recognize
# Who may query this time server on the controller node.
restrict 127.0.0.1                      # localhost is unrestricted by default
restrict ::1                            # localhost is unrestricted by default
# Add the two lines below: all IPv4 and IPv6 addresses may request time synchronization,
# but kiss-of-death packets, modification of the server and the trap service are refused;
# "default" here means all IPs.
restrict -4 default kod notrap nomodify
restrict -6 default kod notrap nomodify
#restrict 10.0.0.0 mask 255.255.255.0 notrap nomodify
# Enable NTP logging and adjust the log file's ownership and SELinux context - optional step.
logfile /var/log/ntpd.log

# chown ntp:ntp /var/log/ntpd.log

# chcon -t ntpd_log_t /var/log/ntpd.log

# systemctl enable ntpd.service // enable at boot

# systemctl start ntpd.service      
# netstat -pln | grep 123 // verify that ntpd is running and port 123 is open
udp        0      0 10.0.0.11:123        0.0.0.0:*               6639/ntpd
udp        0      0 127.0.0.1:123        0.0.0.0:*               6639/ntpd
udp        0      0 0.0.0.0:123          0.0.0.0:*               6639/ntpd
# systemctl status firewalld.service // CentOS 7 ships with firewalld enabled by default, replacing the now-disabled iptables service
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
   Active: active (running)
# ntpdate -d controller // with the firewall up, even a manual sync from another node against the controller's time server fails
26 Sep 23:16:25 ntpdate[4012]: ntpdate 4.2.6p5@1.2349-o Tue Jun 23 23:48:53 UTC 2015 (1)
Looking for host controller and service ntp
host found : controller
transmit(10.0.0.11)
transmit(10.0.0.11)
transmit(10.0.0.11)
transmit(10.0.0.11)
transmit(10.0.0.11)
10.0.0.11: Server dropped: no data
server 10.0.0.11, port 123
stratum 0, precision 0, leap 00, trust 000
refid [10.0.0.11], delay 0.00000, … (omitted)
26 Sep 23:16:34 ntpdate[4012]: no server suitable for synchronization found
# ntpq -p
     remote           refid      st t  when poll reach   delay   offset  jitter
==============================================================================
 controller      .INIT.          16 u     -   64     0    0.000    0.000   0.000
# ntpstat
unsynchronised
  time server re-starting
   polling server every 8 s
# systemctl stop firewalld.service && systemctl disable firewalld.service // so, to keep things simple, just stop and disable the firewall on the controller node and check again
# ntpq -p // the * marks the server currently selected for synchronization
     remote           refid      st t  when poll reach   delay   offset  jitter
==============================================================================
 news.neu.edu.cn .INIT.          16 u     -   64     0    0.000    0.000   0.000
 ns02.hns.net.in .INIT.          16 u     -   64     0    0.000    0.000   0.000
*web10.hnshostin 158.43.128.33    2 u    46   64    37  423.988  -268.71 140.830
 LOCAL(0)        .LOCL.          12 l   245   64    30    0.000    0.000   0.000
# ntpq -c assoc // a sys.peer entry means everything is OK
ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1  1820  8011   yes    no  none    reject    mobilize  1
  2  1821  8011   yes    no  none    reject    mobilize  1
  3  1822  963a   yes   yes  none  sys.peer    sys_peer  3
  4  1823  903a   yes   yes  none    reject    sys_peer  3
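If the firewall has to stay on, an alternative to disabling it is to open the NTP service on the controller with firewall-cmd - a minimal sketch of the standard firewalld commands, not part of the original setup:

# firewall-cmd --permanent --add-service=ntp // allow udp/123 permanently
# firewall-cmd --reload // apply the rule without restarting firewalld
# firewall-cmd --list-services // ntp should now appear in the list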

# tcpdump | grep ntp // on the controller node, watch each node's synchronization traffic

00:06:03.583400 IP network.ntp > controller.ntp: NTPv4, Client, length 48
00:06:03.583542 IP controller.ntp > network.ntp: NTPv4, Server, length 48
00:06:04.419712 IP compute1.ntp > controller.ntp: NTPv4, Client, length 48
00:06:04.419833 IP controller.ntp > compute1.ntp: NTPv4, Server, length 48
00:06:11.652623 IP block1.ntp > controller.ntp: NTPv4, Client, length 48
00:06:11.652784 IP controller.ntp > block1.ntp: NTPv4, Server, length 48
00:06:12.055254 IP object1.ntp > controller.ntp: NTPv4, Client, length 48
00:06:12.055485 IP controller.ntp > object1.ntp: NTPv4, Server, length 48
00:11:53.377608 IP object2.ntp > controller.ntp: NTPv4, Client, length 48
00:11:53.377770 IP controller.ntp > object2.ntp: NTPv4, Server, length 48

# tcpdump | grep "length 48" //或者tcpdump | grep NTPv4//

02:11:45.525619 IP ntp4.ntp > ntp1.ntp: NTPv4, Client, length 48
02:11:45.525742 IP ntp1.ntp > ntp4.ntp: NTPv4, Server, length 48
02:11:49.770662 IP ntp3.ntp > ntp1.ntp: NTPv4, Client, length 48
02:11:49.770803 IP ntp1.ntp > ntp3.ntp: NTPv4, Server, length 48
02:11:54.093182 IP ntp2.ntp > ntp1.ntp: NTPv4, Client, length 48
02:11:54.093352 IP ntp1.ntp > ntp2.ntp: NTPv4, Server, length 48
02:12:04.426254 IP ntp1.ntp > news.neu.edu.cn.ntp: NTPv4, Client, length 48
02:12:04.467897 IP news.neu.edu.cn.ntp > ntp1.ntp: NTPv4, Server, length 48
02:12:05.422005 IP ntp1.ntp > frontier.innolan.net.ntp: NTPv4, Client, length 48
02:12:05.615618 IP frontier.innolan.net.ntp > ntp1.ntp: NTPv4, Server, length 48
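For completeness: the other nodes (network, compute1, block1, object1, object2) should sync against the controller rather than against the public pool. A minimal sketch of their /etc/ntp.conf - the standard client-side counterpart of the server config above, assuming the controller hostname resolves on every node:

server controller iburst // sync only against the controller's time server

# systemctl enable ntpd.service && systemctl start ntpd.service // then enable and start ntpd on each node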

2. Service status errors while deploying the object storage nodes

[root@object2 ~]# systemctl status openstack-swift-object-updater.service  openstack-swift-object-replicator.service openstack-swift-object-auditor.service -l

openstack-swift-object-updater.service - OpenStack Object Storage (swift) - Object Updater
   Loaded: loaded (/usr/lib/systemd/system/openstack-swift-object-updater.service; enabled)
   Active: active (running) since Fri 2015-12-18 03:47:57 HKT; 3min 5s ago
 Main PID: 1840 (swift-object-up)
   CGroup: /system.slice/openstack-swift-object-updater.service
           └─1840 /usr/bin/python /usr/bin/swift-object-updater /etc/swift/object-server.conf
Dec 18 03:47:57 object2 systemd[1]: Starting OpenStack Object Storage (swift) - Object Updater...
Dec 18 03:47:57 object2 systemd[1]: Started OpenStack Object Storage (swift) - Object Updater.
Dec 18 03:48:25 object2 object-updater[1840]: Begin object update sweep
Dec 18 03:48:25 object2 object-updater[1875]: UNCAUGHT EXCEPTION#012Traceback (most recent call last):#012  File "/usr/bin/swift-object-updater", line 23, in <module>#012    run_daemon(ObjectUpdater, conf_file, **options)#012  File "/usr/lib/python2.7/site-packages/swift/common/daemon.py", line 110, in run_daemon#012    klass(conf).run(once=once, **kwargs)#012  File "/usr/lib/python2.7/site-packages/swift/common/daemon.py", line 57, in run#012    self.run_forever(**kwargs)#012  File "/usr/lib/python2.7/site-packages/swift/obj/updater.py", line 91, in run_forever#012    self.object_sweep(os.path.join(self.devices, device))#012  File "/usr/lib/python2.7/site-packages/swift/obj/updater.py", line 141, in object_sweep#012    for asyncdir in os.listdir(device):#012OSError: [Errno 13] Permission denied: '/srv/node/sdb1'
Dec 18 03:48:25 object2 object-updater[1840]: Object update sweep completed: 0.04s
openstack-swift-object-replicator.service - OpenStack Object Storage (swift) - Object Replicator
   Loaded: loaded (/usr/lib/systemd/system/openstack-swift-object-replicator.service; enabled)
   Active: active (running) since Fri 2015-12-18 03:47:57 HKT; 6min ago
 Main PID: 1839 (swift-object-re)
   CGroup: /system.slice/openstack-swift-object-replicator.service
           └─1839 /usr/bin/python /usr/bin/swift-object-replicator /etc/swift/object-server.conf
Dec 18 03:52:58 object2 object-replicator[1839]: Nothing replicated for 0.000877857208252 seconds.
Dec 18 03:52:58 object2 object-replicator[1839]: Object replication complete. (0.00 minutes)
Dec 18 03:53:28 object2 object-replicator[1839]: Starting object replication pass.
Dec 18 03:53:28 object2 object-replicator[1839]: ERROR creating /srv/node/sdb1/objects: #012Traceback (most recent call last):#012  File "/usr/lib/python2.7/site-packages/swift/obj/replicator.py", line 428, in process_repl#012    mkdirs(obj_path)#012  File "/usr/lib/python2.7/site-packages/swift/common/utils.py", line 770, in mkdirs#012    os.makedirs(path)#012  File "/usr/lib64/python2.7/os.py", line 157, in makedirs#012    mkdir(name, mode)#012OSError: [Errno 13] Permission denied: '/srv/node/sdb1/objects'
Dec 18 03:53:28 object2 object-replicator[1839]: Nothing replicated for 0.000908136367798 seconds.
Dec 18 03:53:28 object2 object-replicator[1839]: Object replication complete. (0.00 minutes)
Dec 18 03:53:58 object2 object-replicator[1839]: Starting object replication pass.
Dec 18 03:53:58 object2 object-replicator[1839]: ERROR creating /srv/node/sdb1/objects: #012Traceback (most recent call last):#012  File "/usr/lib/python2.7/site-packages/swift/obj/replicator.py", line 428, in process_repl#012    mkdirs(obj_path)#012  File "/usr/lib/python2.7/site-packages/swift/common/utils.py", line 770, in mkdirs#012    os.makedirs(path)#012  File "/usr/lib64/python2.7/os.py", line 157, in makedirs#012    mkdir(name, mode)#012OSError: [Errno 13] Permission denied: '/srv/node/sdb1/objects'
Dec 18 03:53:58 object2 object-replicator[1839]: Nothing replicated for 0.000730991363525 seconds.
Dec 18 03:53:58 object2 object-replicator[1839]: Object replication complete. (0.00 minutes)
openstack-swift-object-auditor.service - OpenStack Object Storage (swift) - Object Auditor
   Loaded: loaded (/usr/lib/systemd/system/openstack-swift-object-auditor.service; enabled)
   Active: active (running) since Fri 2015-12-18 03:47:57 HKT; 9min ago
 Main PID: 1838 (swift-object-au)
   CGroup: /system.slice/openstack-swift-object-auditor.service
           └─1838 /usr/bin/python /usr/bin/swift-object-auditor /etc/swift/object-server.conf
Dec 18 03:55:59 object2 object-auditor[1995]: Begin object audit "forever" mode (ZBF)
Dec 18 03:55:59 object2 object-auditor[1995]: ERROR: Unable to run auditing: [Errno 13] Permission denied: '/srv/node/sdb1'
Dec 18 03:56:29 object2 object-auditor[2003]: Begin object audit "forever" mode (ALL)
Dec 18 03:56:29 object2 object-auditor[2003]: ERROR: Unable to run auditing: [Errno 13] Permission denied: '/srv/node/sdb1'
Dec 18 03:56:29 object2 object-auditor[2002]: Begin object audit "forever" mode (ZBF)
Dec 18 03:56:29 object2 object-auditor[2002]: ERROR: Unable to run auditing: [Errno 13] Permission denied: '/srv/node/sdb1'
Dec 18 03:56:59 object2 object-auditor[2019]: Begin object audit "forever" mode (ZBF)
Dec 18 03:56:59 object2 object-auditor[2019]: ERROR: Unable to run auditing: [Errno 13] Permission denied: '/srv/node/sdb1'
Dec 18 03:56:59 object2 object-auditor[2020]: Begin object audit "forever" mode (ALL)
Dec 18 03:56:59 object2 object-auditor[2020]: ERROR: Unable to run auditing: [Errno 13] Permission denied: '/srv/node/sdb1'
Solution: the Permission denied errors come from the SELinux labels on the freshly mounted storage device rather than from file permissions; restore the default context:
# restorecon -R /srv/node
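If the errors persist after restorecon, it is worth checking the label and the ownership of the mount point as well - a quick sketch of the usual checks (paths as in this setup; the swift ownership is the one the install guide expects):

# ls -dZ /srv/node/sdb1 // should now show a swift-related SELinux type instead of a generic one
# chown -R swift:swift /srv/node // the swift daemons run as the swift user and need to own the tree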


3. Verifying Swift on the controller node, swift stat fails with Account not found - firewalld

# swift -V 3 stat

Account HEAD failed: http://controller:8080/v1/AUTH_394088b333474f6fb943392309d11cc7 503 Internal Server Error

# swift -V 3 stat

Authorization Failure. Authorization failed: An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-78f593b7-7515-4107-a086-fed3488935ba)

# swift --debug stat

DEBUG:keystoneclient.auth.identity.v2:Making authentication request to http://controller:5000/v2.0/tokens
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): controller
DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 2799
DEBUG:iso8601.iso8601:Parsed 2015-12-17T21:12:19Z into {'tz_sign': None, 'second_fraction': None, 'hour': u'21', 'daydash': u'17', 'tz_hour': None, 'month': None, 'timezone': u'Z', 'second': u'19', 'tz_minute': None, 'year': u'2015', 'separator': u'T', 'monthdash': u'12', 'day': None, 'minute': u'12'} with default timezone <iso8601.iso8601.Utc object at 0x7fcf54016950>
DEBUG:iso8601.iso8601:Got u'2015' for 'year' with default None
DEBUG:iso8601.iso8601:Got u'12' for 'monthdash' with default 1
DEBUG:iso8601.iso8601:Got 12 for 'month' with default 12
DEBUG:iso8601.iso8601:Got u'17' for 'daydash' with default 1
DEBUG:iso8601.iso8601:Got 17 for 'day' with default 17
DEBUG:iso8601.iso8601:Got u'21' for 'hour' with default None
DEBUG:iso8601.iso8601:Got u'12' for 'minute' with default None
DEBUG:iso8601.iso8601:Got u'19' for 'second' with default None
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): controller
DEBUG:requests.packages.urllib3.connectionpool:"HEAD /v1/AUTH_50af18d09386413c9c0095533fde34e9 HTTP/1.1" 503 0
INFO:swiftclient:REQ: curl -i http://controller:8080/v1/AUTH_50af18d09386413c9c0095533fde34e9 -I -H "X-Auth-Token: 85f21325df544c42924da253833bbbd3"
INFO:swiftclient:RESP STATUS: 503 Internal Server Error
INFO:swiftclient:RESP HEADERS: [('date', 'Thu, 17 Dec 2015 20:12:19 GMT'), ('content-length', '0'), ('content-type', 'text/html; charset=UTF-8'), ('connection', 'keep-alive'), ('x-trans-id', 'tx72f2f06156b34861a7f0a-0056731723')]
DEBUG:requests.packages.urllib3.connectionpool:"HEAD /v1/AUTH_50af18d09386413c9c0095533fde34e9 HTTP/1.1" 503 0
INFO:swiftclient:REQ: curl -i http://controller:8080/v1/AUTH_50af18d09386413c9c0095533fde34e9 -I -H "X-Auth-Token: 85f21325df544c42924da253833bbbd3"
INFO:swiftclient:RESP STATUS: 503 Internal Server Error
INFO:swiftclient:RESP HEADERS: [('date', 'Thu, 17 Dec 2015 20:12:20 GMT'), ('content-length', '0'), ('content-type', 'text/html; charset=UTF-8'), ('connection', 'keep-alive'), ('x-trans-id', 'tx70470c17cd794866921b5-0056731724')]
DEBUG:requests.packages.urllib3.connectionpool:"HEAD /v1/AUTH_50af18d09386413c9c0095533fde34e9 HTTP/1.1" 503 0
INFO:swiftclient:REQ: curl -i http://controller:8080/v1/AUTH_50af18d09386413c9c0095533fde34e9 -I -H "X-Auth-Token: 85f21325df544c42924da253833bbbd3"
INFO:swiftclient:RESP STATUS: 503 Internal Server Error
INFO:swiftclient:RESP HEADERS: [('date', 'Thu, 17 Dec 2015 20:12:22 GMT'), ('content-length', '0'), ('content-type', 'text/html; charset=UTF-8'), ('connection', 'keep-alive'), ('x-trans-id', 'tx5555ca48c68f48178be62-0056731726')]
DEBUG:requests.packages.urllib3.connectionpool:"HEAD /v1/AUTH_50af18d09386413c9c0095533fde34e9 HTTP/1.1" 503 0
INFO:swiftclient:REQ: curl -i http://controller:8080/v1/AUTH_50af18d09386413c9c0095533fde34e9 -I -H "X-Auth-Token: 85f21325df544c42924da253833bbbd3"
INFO:swiftclient:RESP STATUS: 503 Internal Server Error
INFO:swiftclient:RESP HEADERS: [('date', 'Thu, 17 Dec 2015 20:12:26 GMT'), ('content-length', '0'), ('content-type', 'text/html; charset=UTF-8'), ('connection', 'keep-alive'), ('x-trans-id', 'tx25515024cbcb4d63832dc-005673172a')]
DEBUG:requests.packages.urllib3.connectionpool:"HEAD /v1/AUTH_50af18d09386413c9c0095533fde34e9 HTTP/1.1" 503 0
INFO:swiftclient:REQ: curl -i http://controller:8080/v1/AUTH_50af18d09386413c9c0095533fde34e9 -I -H "X-Auth-Token: 85f21325df544c42924da253833bbbd3"
INFO:swiftclient:RESP STATUS: 503 Internal Server Error
INFO:swiftclient:RESP HEADERS: [('date', 'Thu, 17 Dec 2015 20:12:34 GMT'), ('content-length', '0'), ('content-type', 'text/html; charset=UTF-8'), ('connection', 'keep-alive'), ('x-trans-id', 'txa5437cbfd543446ebb811-0056731732')]
DEBUG:requests.packages.urllib3.connectionpool:"HEAD /v1/AUTH_50af18d09386413c9c0095533fde34e9 HTTP/1.1" 503 0
INFO:swiftclient:REQ: curl -i http://controller:8080/v1/AUTH_50af18d09386413c9c0095533fde34e9 -I -H "X-Auth-Token: 85f21325df544c42924da253833bbbd3"
INFO:swiftclient:RESP STATUS: 503 Internal Server Error
INFO:swiftclient:RESP HEADERS: [('date', 'Thu, 17 Dec 2015 20:12:50 GMT'), ('content-length', '0'), ('content-type', 'text/html; charset=UTF-8'), ('connection', 'keep-alive'), ('x-trans-id', 'tx671207cc0aa0481680a86-0056731742')]
ERROR:swiftclient:Account HEAD failed: http://controller:8080/v1/AUTH_50af18d09386413c9c0095533fde34e9 503 Internal Server Error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1243, in _retry
    rv = func(self.url, self.token, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 528, in head_account
    http_response_content=body)
ClientException: Account HEAD failed: http://controller:8080/v1/AUTH_50af18d09386413c9c0095533fde34e9 503 Internal Server Error
Traceback (most recent call last):
  File "/usr/bin/swift", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/site-packages/swiftclient/shell.py", line 1287, in main
    globals()['st_%s' % args[0]](parser, argv[1:], output)
  File "/usr/lib/python2.7/site-packages/swiftclient/shell.py", line 492, in st_stat
    stat_result = swift.stat()
  File "/usr/lib/python2.7/site-packages/swiftclient/service.py", line 443, in stat
    raise SwiftError('Account not found', exc=err)
swiftclient.service.SwiftError: 'Account not found'
The cause was that the firewalls on object1 and object2 had not been turned off. After turning them off, regenerate the rings and then restart swift-proxy and memcached:

# systemctl restart openstack-swift-proxy.service memcached.service

# swift stat

        Account: AUTH_50af18d09386413c9c0095533fde34e9
     Containers: 0
        Objects: 0
          Bytes: 0
   Content-Type: text/plain; charset=utf-8
     Connection: keep-alive
    X-Timestamp: 1450386224.14965
     X-Trans-Id: txa53f7365721847f4bc5e9-0056732330
X-Put-Timestamp: 1450386224.14965
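Turning the firewalls off on the object nodes is the quick fix. If they must stay on, the storage-node ports used by the proxy and by replication can be opened instead - a sketch assuming the Kilo default bind ports (6000 object, 6001 container, 6002 account) plus rsync:

# firewall-cmd --permanent --add-port=6000-6002/tcp --add-port=873/tcp // on object1 and object2
# firewall-cmd --reload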

4. In the configuration files of all services and components, do not put a comment on the same line as a setting.
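For example, with the INI-style files the OpenStack services use, an inline comment can end up being read as part of the value. A hypothetical nova.conf-style snippet illustrating the pitfall:

# wrong - the trailing "# management IP" may be parsed as part of the value
my_ip = 10.0.0.11  # management IP
# right - keep the comment on its own line
# management IP
my_ip = 10.0.0.11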

5. Error on the controller node while installing and configuring ceilometer

controller swift-proxy-server[12934]: LookupError: No section 'ceilometer' (prefixed by 'app' or 'application' or 'composite' or 'composit' or 'pipeline' or 'filter-app') found in config /etc/swift/proxy-server.conf
In proxy-server.conf the pipeline must read pipeline = xxx ceilometer proxy-server; the order of ceilometer and proxy-server must not be reversed, otherwise the error above appears.
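A sketch of the relevant part of /etc/swift/proxy-server.conf (the other filters in the pipeline are abbreviated here, and the contents of the [filter:ceilometer] section should be taken from the Kilo install guide - the factory line below is what I believe that guide uses, so verify it against your version):

[pipeline:main]
# ceilometer goes right before proxy-server; proxy-server must stay the last element
pipeline = catch_errors healthcheck cache authtoken keystoneauth proxy-logging ceilometer proxy-server

[filter:ceilometer]
# the section name referenced in the pipeline must exist, otherwise the LookupError above appears
paste.filter_factory = ceilometermiddleware.swift:filter_factory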

6. While setting up ceilometer, the verification command ceilometer meter-list fails with: The service catalog is empty.

# ceilometer meter-list

The service catalog is empty.

# ceilometer --version

1.0.13
# cat admin-openrc.sh // inspect the rc file
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=evandeng
export OS_AUTH_URL=http://controller:35357/v3
export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=2

The problem is that Keystone is being used through the v3 API, which does not play well with ceilometer client 1.0. Either upgrade to ceilometer 1.5 (better not - it is risky and may break other services), or, the recommended route, create a second admin rc file that falls back to v2.0:

# cat bdmin-openrc.sh

unset OS_PROJECT_DOMAIN_ID
unset OS_USER_DOMAIN_ID
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=evandeng
export OS_AUTH_URL=http://controller:35357/v2.0
export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=2
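With the v2.0 rc file in place, source it and the meters come back - a short usage sketch using the file name above:

# source bdmin-openrc.sh
# ceilometer meter-list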


7. cinder-volume on the block node stays down (hit while working with OpenStack Juno, where for simplicity no time server had been set up to keep the controller and volume nodes in sync)

# cinder service-list

# cinder-manage service list

The configuration files checked out fine; it finally turned out that the controller and block node clocks differed by tens of minutes. Setting up NTP time synchronization fixed it.
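A quick way to spot such a skew before combing through config files is to compare the two clocks directly - a one-liner sketch, assuming SSH access from the controller to the block node:

# date; ssh block1 date // the two timestamps should differ by no more than a few seconds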


8. Managing services after an all-in-one DevStack installation of OpenStack

After a DevStack install, the services are operated through screen, and the OpenStack screen session is named stack - oddly enough, I could not find this documented on the official DevStack pages. After running devstack/rejoin-stack.sh:

# screen -x stack /* attach to this screen session; or: # screen -r

Each service gets its own numbered window, with the service name abbreviated - glance-registry, for example, is g-reg. The keys below switch between the service windows; the current window's service name is marked with an asterisk *.

ctrl+a n /* next service window: press ctrl+a, then n (next)

ctrl+a p /* previous service window: p (previous)

ctrl+c /* press this inside a service window to stop that service; the corresponding log output scrolls by

UP Enter /* press the Up arrow so the shell recalls the service's start command (e.g. /usr/bin/cinder-scheduler --config-file /etc/cinder/cinder.conf), then press Enter to start it again - combined with ctrl+c this stops a service, re-reads its config file and starts it back up

ctrl+a d /* finally, detach from the screen session; or ctrl+a :quit
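Two more standard GNU screen bindings (not DevStack-specific) are handy when there are dozens of service windows:

ctrl+a " /* show a selectable list of all windows
ctrl+a ' /* prompt for a window number or name and jump straight to it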

More screen commands are covered in the quick reference: http://aperiodic.net/screen/quick_reference


9. Windows guest (Windows Server 2012 R2): the Remote Desktop session shows a plain blue background (not the familiar crash blue screen) and reports a network interruption within about 10 seconds

More concretely: at the very same moment some clients hit the problem while other clients could still connect normally, and the affected clients had been connecting fine before, with nothing in their environment changed when the problem appeared. A packet capture showed that the server had simply stopped answering the client's request packets.

That is when MTU came to mind. On the network node I had set the MTU to 1450 (why not the Ethernet default of 1500? To leave some headroom for OpenStack's tunnel encapsulation and head off potential, hard-to-diagnose network problems):

$ cat /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1450

The Linux guests show the MTU below, which follows the cloud's setting:

# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet internal_ip_addr  netmask 255.255.255.0  broadcast internal_broadcast_ip
        inet6 fe80::f816:3eff:fece:a281  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:ce:a2:81  txqueuelen 1000  (Ethernet)
        RX packets 9281096  bytes 858475572 (818.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9346189  bytes 772906813 (737.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
However, checking from an elevated command prompt, the Windows guests still keep the default MTU of 1500:

PS C:\Users\Administrator> netsh interface ipv4 show subinterfaces

       MTU  MediaSenseState    Bytes In   Bytes Out  Interface
----------  ---------------  ----------  ----------  -------------
4294967295                1           0           0  Loopback Pseudo-Interface 1
      1500                1  3936380497  2236953882  以太网
Change it to 1450 right away and show the result:

PS C:\Users\Administrator> netsh interface ipv4 set subinterface "以太网" mtu=1450 store=persistent
Ok.

PS C:\Users\Administrator> netsh interface ipv4 show subinterfaces

       MTU  MediaSenseState    Bytes In   Bytes Out  Interface
----------  ---------------  ----------  ----------  -------------
4294967295                1           0           0  Loopback Pseudo-Interface 1
      1450                1  3936791751  2237534635  以太网
Reconnect over Remote Desktop - problem solved.
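To confirm that a chosen MTU really fits through the tenant network, a do-not-fragment ping is useful - a sketch, where 1422 is 1450 minus the 28 bytes of IP and ICMP headers and the target address is just an example:

# ping -M do -s 1422 10.0.0.1 // Linux guest: this must succeed, while -s 1423 should fail
PS C:\Users\Administrator> ping -f -l 1422 10.0.0.1 // Windows guest equivalent (-f sets DF, -l sets the payload size)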
