Notes on building a Puppet MCollective deployment

Source: Internet · Editor: 程序博客网 · Posted: 2024/06/07 08:01

Configure ActiveMQ

The numbers below mark which host a step runs on:
1 ===> master
2 ===> agent


Check the certificate paths
1.
sudo puppet master --configprint certdir,privatekeydir
2.
sudo puppet agent --configprint certdir,privatekeydir
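The `--configprint` output is plain `key = value` lines, so the paths are easy to capture in a script. A minimal sketch; the sample output is hard-coded here so the snippet does not assume a live Puppet install (the values shown are the usual defaults, but your hosts may differ):

```shell
# Sample of what `puppet agent --configprint certdir,privatekeydir` prints;
# hard-coded so this sketch runs without Puppet installed.
output='certdir = /var/lib/puppet/ssl/certs
privatekeydir = /var/lib/puppet/ssl/private_keys'

# Split each "key = value" line into a variable.
certdir=$(printf '%s\n' "$output" | awk -F' = ' '$1 == "certdir" {print $2}')
privatekeydir=$(printf '%s\n' "$output" | awk -F' = ' '$1 == "privatekeydir" {print $2}')
echo "$certdir"
echo "$privatekeydir"
```

The same pattern works for any other setting `--configprint` can report.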


Generate the certificates
1.
[ACTIVEMQ CERT]
sudo puppet cert generate activemq.test.cn (the certificate already produced by puppet agent -t can be reused)
[SHARED SERVER KEYS]
sudo puppet cert generate mcollective-servers
2.
[SERVER CERTS]
Reuse $certdir/.pem and $privatekeydir/.pem.
1.
[CLIENT CERTS]
If a new administrator needs to be added, generate a certificate for them:
sudo puppet cert generate


Install ActiveMQ
yum install -y activemq
Replace the stock activemq.xml (in /etc/activemq) with the configuration below:

<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:amq="http://activemq.apache.org/schema/core"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd
  http://activemq.apache.org/camel/schema/spring http://activemq.apache.org/camel/schema/spring/camel-spring.xsd">

    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>file:${activemq.base}/conf/credentials.properties</value>
        </property>
    </bean>

    <!--
      For more information about what MCollective requires in this file,
      see http://docs.puppetlabs.com/mcollective/deploy/middleware/activemq.html
    -->

    <!--
      WARNING: The elements that are direct children of <broker> MUST BE IN
      ALPHABETICAL ORDER. This is fixed in ActiveMQ 5.6.0, but affects
      previous versions back to 5.4.
      https://issues.apache.org/jira/browse/AMQ-3570
    -->
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" useJmx="true" schedulePeriodForDestinationPurge="60000">
        <!--
          MCollective generally expects producer flow control to be turned off.
          It will also generate a limitless number of single-use reply queues,
          which should be garbage-collected after about five minutes to conserve
          memory.

          For more information, see:
          http://activemq.apache.org/producer-flow-control.html
        -->
        <destinationPolicy>
          <policyMap>
            <policyEntries>
              <policyEntry topic=">" producerFlowControl="false"/>
              <policyEntry queue="*.reply.>" gcInactiveDestinations="true" inactiveTimoutBeforeGC="300000" />
            </policyEntries>
          </policyMap>
        </destinationPolicy>

        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <plugins>
          <statisticsBrokerPlugin/>
          <!--
            This configures the users and groups used by this broker. Groups
            are referenced below, in the write/read/admin attributes
            of each authorizationEntry element.
          -->
          <simpleAuthenticationPlugin>
            <users>
              <authenticationUser username="mcollective" password="marionette" groups="mcollective,everyone"/>
              <authenticationUser username="admin" password="secret" groups="mcollective,admins,everyone"/>
            </users>
          </simpleAuthenticationPlugin>
          <!--
            Configure which users are allowed to read and write where. Permissions
            are organized by group; groups are configured above, in the
            authentication plugin.

            With the rules below, both servers and admin users belong to group
            mcollective, which can both issue and respond to commands. For an
            example that splits permissions and doesn't allow servers to issue
            commands, see:
            http://docs.puppetlabs.com/mcollective/deploy/middleware/activemq.html#detailed-restrictions
          -->
          <authorizationPlugin>
            <map>
              <authorizationMap>
                <authorizationEntries>
                  <authorizationEntry queue=">" write="admins" read="admins" admin="admins" />
                  <authorizationEntry topic=">" write="admins" read="admins" admin="admins" />
                  <authorizationEntry topic="mcollective.>" write="mcollective" read="mcollective" admin="mcollective" />
                  <authorizationEntry queue="mcollective.>" write="mcollective" read="mcollective" admin="mcollective" />
                  <!--
                    The advisory topics are part of ActiveMQ, and all users need access to them.
                    The "everyone" group is not special; you need to ensure every user is a member.
                  -->
                  <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" write="everyone" admin="everyone"/>
                </authorizationEntries>
              </authorizationMap>
            </map>
          </authorizationPlugin>
        </plugins>

        <!--
          The systemUsage controls the maximum amount of space the broker will
          use for messages. For more information, see:
          http://docs.puppetlabs.com/mcollective/deploy/middleware/activemq.html#memory-and-temp-usage-for-messages-systemusage
        -->
        <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage limit="20 mb"/>
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="1 gb" name="foo"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="100 mb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <!--
          The transport connectors allow ActiveMQ to listen for connections over
          a given protocol. MCollective uses Stomp, and other ActiveMQ brokers
          use OpenWire. You'll need different URLs depending on whether you are
          using TLS. For more information, see:
          http://docs.puppetlabs.com/mcollective/deploy/middleware/activemq.html#transport-connectors
        -->
        <transportConnectors>
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
            <transportConnector name="stomp+nio" uri="stomp+nio://0.0.0.0:61613"/>
            <!-- If using TLS, uncomment this and comment out the previous connector:
              <transportConnector name="stomp+ssl" uri="stomp+ssl://0.0.0.0:61614?needClientAuth=true"/>
            -->
        </transportConnectors>
    </broker>

    <!--
      Enable web consoles, REST and Ajax APIs and demos.
      It also includes Camel (with its web console); see ${ACTIVEMQ_HOME}/conf/camel.xml for more info.
      See ${ACTIVEMQ_HOME}/conf/jetty.xml for more details.
    -->
    <import resource="jetty.xml"/>
</beans>

Change the password in the configuration file:

<simpleAuthenticationPlugin>
  <users>
    <authenticationUser username="mcollective" password="password" groups="mcollective,everyone"/>

Change the port and protocol:

<transportConnectors>
  <transportConnector name="stomp+ssl" uri="stomp+ssl://0.0.0.0:61614?needClientAuth=true"/>
</transportConnectors>

Generate the ActiveMQ keystores (Java keystores)
[Certificates ActiveMQ needs]
A copy of the site's CA certificate
A certificate signed by the site's CA
A private key to match its certificate
Puppet has already generated one for us: activemq.test.cn.pem (guard this private key carefully)

[Create the truststore] The ActiveMQ password from above can be used here.
(The truststore determines which certificates are allowed to connect to ActiveMQ.)
cd /var/lib/puppet/ssl/certs
sudo keytool -import -alias "activemq" -file ca.pem -keystore truststore.jks
Enter a password; this produces the truststore.jks file.

Check that the fingerprints match:
sudo keytool -list -keystore truststore.jks (enter the password chosen above)
sudo openssl x509 -in ca.pem -fingerprint -md5 (confirm the MD5 fingerprints agree)
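Comparing fingerprints by eye is error-prone, so the openssl side of the check can be scripted. A sketch, using a throwaway self-signed certificate as a stand-in for ca.pem (the keytool half is omitted since it needs a JDK and interactive passwords):

```shell
# Create a throwaway self-signed cert as a stand-in for ca.pem
# (in the real setup you would point at /var/lib/puppet/ssl/certs/ca.pem).
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.pem" -days 1 2>/dev/null

# -noout suppresses the PEM dump, leaving only the fingerprint line,
# e.g. "MD5 Fingerprint=AB:CD:...".
fp=$(openssl x509 -in "$tmpdir/ca.pem" -noout -fingerprint -md5)
echo "$fp"
rm -rf "$tmpdir"
```

Capturing the line this way makes it easy to diff against the fingerprint keytool reports for the truststore entry.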

[Create the keystore]
(The keystore contains the ActiveMQ broker's certificate and private key, which it uses to identify itself to the applications that connect to it.)

sudo cat /var/lib/puppet/ssl/private_keys/activemq.test.cn.pem /var/lib/puppet/ssl/certs/activemq.test.cn.pem > temp.pem (private key first, signed certificate second)

sudo openssl pkcs12 -export -in temp.pem -out activemq.p12 -name activemq.test.cn (use the same password as before)

Import the PKCS12 bundle to create the keystore (use the same password as before):
sudo keytool -importkeystore -destkeystore keystore.jks -srckeystore activemq.p12 -srcstoretype PKCS12 -alias activemq.test.cn

Check that the fingerprints match:
sudo keytool -list -keystore keystore.jks
sudo openssl x509 -in activemq.test.cn.pem -fingerprint -md5

[Copy the truststore and keystore into ActiveMQ's configuration directory, /etc/activemq]
cp truststore.jks keystore.jks /etc/activemq/

[Configure ActiveMQ to use them] (place this between the plugins and systemUsage elements)

<sslContext>
  <sslContext keyStore="keystore.jks" keyStorePassword="password"
              trustStore="truststore.jks" trustStorePassword="password"/>
</sslContext>
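After hand-editing activemq.xml it is easy to leave a tag unclosed, which stops the broker from starting with a cryptic Spring error. A well-formedness check before restarting catches this; a sketch assuming python3 is on the path (xmllint or any other XML parser works the same way, and the file name here is a scratch stand-in for /etc/activemq/activemq.xml):

```shell
# Write a small fragment to a scratch file standing in for activemq.xml.
cat > /tmp/sslcontext-check.xml <<'EOF'
<sslContext>
  <sslContext keyStore="keystore.jks" keyStorePassword="password"
              trustStore="truststore.jks" trustStorePassword="password"/>
</sslContext>
EOF

# Parse it; a well-formed document prints "well-formed", a broken one
# raises a ParseError and prints nothing.
result=$(python3 -c 'import xml.etree.ElementTree as ET; ET.parse("/tmp/sslcontext-check.xml"); print("well-formed")')
echo "$result"
```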

[Restart the ActiveMQ service]
service activemq restart
Check that it started:
netstat -nlatp | grep 61614

Or verify through the web console:
http://puppet.test.cn:8161/admin/


Configure MCollective

1. Install MCollective

yum -y install mcollective-*
Three steps:
Locate and place the credentials
Populate the fact file
Write the server config file, with appropriate settings
1.

The mcollective-servers certificate was generated earlier (put both its private and public keys into the module), and Puppet syncs it to all agents, so a single shared credential is used and there is no per-node certificate management to worry about. /var/lib/puppet/ssl/certs/mcollective-servers.pem can then communicate with every MCollective server.

file { '/etc/mcollective':
  ensure  => directory,
  source  => 'puppet:///modules/mcollective/pem',
  owner   => root,
  group   => root,
  mode    => '0640',
  recurse => remote,
  notify  => Service['mcollective'],
}

Every MCollective server needs /etc/mcollective/facts.yaml:

file { '/etc/mcollective/facts.yaml':
  owner    => root,
  group    => root,
  mode     => 400,
  loglevel => debug, # reduce noise in Puppet reports
  content  => inline_template("<%= scope.to_hash.reject { |k,v| k.to_s =~ /(uptime_seconds|timestamp|free)/ }.to_yaml %>"), # exclude rapidly changing facts
}
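The inline_template above drops rapidly changing facts so facts.yaml does not churn on every Puppet run. What the exclusion regex does can be illustrated with a plain grep over a sample fact list (fact values here are illustrative):

```shell
# Sample facts (illustrative values).
cat > /tmp/facts-sample.yaml <<'EOF'
osfamily: RedHat
fqdn: node1.test.cn
uptime_seconds: 12345
memoryfree: 1.20 GB
timestamp: 2024-06-07 08:01
EOF

# Same exclusion pattern as the template: any key matching
# uptime_seconds, timestamp, or *free* is considered volatile and dropped.
grep -Ev '(uptime_seconds|timestamp|free)' /tmp/facts-sample.yaml > /tmp/facts-stable.yaml
cat /tmp/facts-stable.yaml
```

Only the stable facts (osfamily, fqdn) survive; note that the `free` alternative also catches memoryfree and swapfree.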

Next, configure the MCollective server itself, using a template:

<% ssldir = '/var/lib/puppet/ssl' %>
# /etc/mcollective/server.cfg

# ActiveMQ connector settings:
connector = activemq
direct_addressing = 1
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = <%= @activemq_server %>
plugin.activemq.pool.1.port = 61614
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = <%= @mcollective_password %>
plugin.activemq.pool.1.ssl = 1
plugin.activemq.pool.1.ssl.ca = <%= ssldir %>/certs/ca.pem
plugin.activemq.pool.1.ssl.cert = <%= ssldir %>/certs/<%= scope.lookupvar('::clientcert') %>.pem
plugin.activemq.pool.1.ssl.key = <%= ssldir %>/private_keys/<%= scope.lookupvar('::clientcert') %>.pem
plugin.activemq.pool.1.ssl.fallback = 0

# SSL security plugin settings:
securityprovider = ssl
plugin.ssl_client_cert_dir = /etc/mcollective/clients
plugin.ssl_server_private = /etc/mcollective/server_private.pem
plugin.ssl_server_public = /etc/mcollective/server_public.pem

plugin.puppet.resource_allow_managed_resources = true
plugin.puppet.resource_type_whitelist = exec,file

# Facts, identity, and classes:
identity = <%= scope.lookupvar('::fqdn') %>
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
classesfile = /var/lib/puppet/state/classes.txt

# No additional subcollectives:
collectives = mcollective
main_collective = mcollective

# Registration:
# We don't configure a listener, and only send these messages to keep the
# Stomp connection alive. This will use the default "agentlist" registration
# plugin.
registerinterval = 600

# Auditing (optional):
# If you turn this on, you must arrange to rotate the log file it creates.
rpcaudit = 1
rpcauditprovider = logfile
plugin.rpcaudit.logfile = /var/log/mcollective-audit.log

# Logging:
logger_type = file
loglevel = info
logfile = /var/log/mcollective.log
keeplogs = 5
max_log_size = 2097152
logfacility = user

# Platform defaults:
# These settings differ based on platform; the default config file created by
# the package should include correct values. If you are managing settings as
# resources, you can ignore them, but with a template you'll have to account
# for the differences.
<% if scope.lookupvar('::osfamily') == 'RedHat' -%>
libdir = /usr/libexec/mcollective
daemonize = 1
<% elsif scope.lookupvar('::osfamily') == 'Debian' -%>
libdir = /usr/share/mcollective/plugins
daemonize = 1
<% else -%>
# INSERT PLATFORM-APPROPRIATE VALUES FOR LIBDIR AND DAEMONIZE
<% end %>

1. Configure the MCollective client
Two steps:
Request, retrieve, and place their credentials
Write the client config file, with appropriate sitewide and per-user settings

Adding a new user involves:
Issuing the user a signed SSL certificate, while assuring the user that no one else has ever had custody of their private key.
Adding a copy of the user's certificate to every MCollective server (done via Puppet).
Giving the user a copy of the shared server public key, the CA cert, and the ActiveMQ username/password.

Edit the client configuration file:

# ~/.mcollective
# or
# /etc/mcollective/client.cfg (the default location)

The configuration file contents:

# ActiveMQ connector settings:
connector = activemq
direct_addressing = 1
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = puppet.test.cn
plugin.activemq.pool.1.port = 61614
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = test
plugin.activemq.pool.1.ssl = 1
plugin.activemq.pool.1.ssl.ca = /var/lib/puppet/ssl/certs/ca.pem
plugin.activemq.pool.1.ssl.cert = /var/lib/puppet/ssl/certs/activemq.test.cn.pem
plugin.activemq.pool.1.ssl.key = /var/lib/puppet/ssl/private_keys/activemq.test.cn.pem
plugin.activemq.pool.1.ssl.fallback = 0

# SSL security plugin settings:
securityprovider = ssl
plugin.ssl_server_public = /var/lib/puppet/ssl/public_keys/mcollective-servers.pem
plugin.ssl_client_private = /var/lib/puppet/ssl/private_keys/activemq.test.cn.pem
plugin.ssl_client_public = /var/lib/puppet/ssl/certs/activemq.test.cn.pem

# Interface settings:
default_discovery_method = mc
direct_addressing_threshold = 10
ttl = 60
color = 1
rpclimitmethod = first

# No additional subcollectives:
collectives = mcollective
main_collective = mcollective

# Platform defaults:
# These settings differ based on platform; the default config file created
# by the package should include correct values or omit the setting if the
# default value is fine.
libdir = /usr/libexec/mcollective

# Logging:
logger_type = console
loglevel = warn

1. Next, put the server pieces into a Puppet module so they are synced to every MCollective server.

Create the mcollective module:
mkdir -p /etc/puppet/modules/mcollective/{manifests,templates,files}
mkdir -p /etc/puppet/modules/mcollective/files/pem/clients

Copy the Puppet master's credentials into the module's files directory:
cp /var/lib/puppet/ssl/certs/activemq.test.cn.pem /etc/puppet/modules/mcollective/files/pem/clients
cp /var/lib/puppet/ssl/private_keys/mcollective-servers.pem /etc/puppet/modules/mcollective/files/pem/server_private.pem
cp /var/lib/puppet/ssl/public_keys/mcollective-servers.pem /etc/puppet/modules/mcollective/files/pem/server_public.pem

Grant MCollective access to the files, otherwise agents will hit a permission error when they sync:
sudo chmod 755 /etc/puppet/modules/mcollective/files/pem/*
sudo chmod 755 /etc/puppet/modules/mcollective/files/pem/clients/*

Put the server manifest from earlier into the module:
vim /etc/puppet/modules/mcollective/manifests/init.pp

Set up the template, pasting in the server.cfg template from above:
vim /etc/puppet/modules/mcollective/templates/server.cfg.erb

Add the node to the site manifest:
vim /etc/puppet/manifests/site.pp

node tjcyzg66.test.cn {
  class { 'mcollective':
    activemq_server      => 'puppet.test.cn',
    mcollective_password => 'password',
  }
}
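The module layout built above can be smoke-tested in a scratch directory before touching /etc/puppet. A sketch, with empty placeholder files standing in for the real credentials:

```shell
# Build the same skeleton under a scratch directory instead of /etc/puppet.
base=$(mktemp -d)/modules/mcollective
mkdir -p "$base"/manifests "$base"/templates "$base"/files/pem/clients

# Stand-ins for the copied credentials (the real files come from
# /var/lib/puppet/ssl on the master).
touch "$base"/files/pem/server_private.pem "$base"/files/pem/server_public.pem
chmod 755 "$base"/files/pem/server_private.pem "$base"/files/pem/server_public.pem

# List the resulting tree for inspection.
find "$base" -mindepth 1 | sort
```

If the tree matches, repeat the same commands against /etc/puppet/modules/mcollective.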

2. First test that the manifests are correct:
puppet agent -t --noop

To troubleshoot problems between MCollective and ActiveMQ, check their log files:
/var/log/mcollective.log and /var/log/activemq.log

Test that MCollective is installed correctly and works with Puppet

1. mco ping
If the nodes show up in the output, the setup works.

mco puppet runonce --server nodename
Try triggering a Puppet run through MCollective, as a replacement for puppet kick.
