Heat Auto Scaling

Source: Internet | Editor: 程序博客网 | Time: 2024/05/01 21:28
Heat Auto Scaling Operations Guide


1 Introduction to Heat
The Orchestration service provides a template-based orchestration for describing a cloud application by running OpenStack API calls to generate running cloud applications. The software integrates other core components of OpenStack into a one-file template system. The templates allow you to create most OpenStack resource types, such as instances, floating IPs, volumes, security groups and users. It also provides advanced functionality, such as instance high availability, instance auto-scaling, and nested stacks. This enables OpenStack core projects to receive a larger user base.
The service enables deployers to integrate with the Orchestration service directly or through custom plug-ins.
The Orchestration service consists of the following components:
heat command-line client
A CLI that communicates with the heat-api to run AWS CloudFormation APIs. End developers can directly use the Orchestration REST API.
heat-api component
An OpenStack-native REST API that processes API requests by sending them to the heat-engine over Remote Procedure Call (RPC).
heat-api-cfn component
An AWS Query API that is compatible with AWS CloudFormation. It processes API requests by sending them to the heat-engine over RPC.
heat-engine
Orchestrates the launching of templates and provides events back to the API consumer.


The concept of AutoScaling first appeared in AWS. AutoScaling is a web service that launches or terminates virtual machines according to user-defined policies, schedules and health checks, thereby scaling capacity automatically.
In OpenStack, auto scaling is implemented by the Heat and Ceilometer modules working together. Ceilometer collects and processes performance data; once a threshold defined in the Heat template is reached, it sends an alarm to heat-engine, which then drives the other OpenStack resources defined in the template to perform the scaling.


2 Orchestration with Heat
From the beginning, OpenStack has provided a command line and Horizon for managing resources. However, typing commands one by one, or clicking through a browser, is time-consuming and laborious. Even when the commands are saved as scripts, extra scripting is needed to handle inputs, outputs and dependencies, and the result is hard to maintain and not easy to extend. Writing programs directly against the REST API introduces additional complexity and is equally hard to maintain and extend. None of this helps users manage OpenStack resources in bulk, let alone orchestrate resources to support IT applications.
Heat was born in this context. Heat adopts the template approach popular in the industry to design and define orchestrations. A user only needs to open a text editor and write a key-value based template to obtain the desired orchestration. For convenience, Heat provides a large number of example templates; most of the time users simply pick the orchestration they want and complete the template by copy and paste.
Heat supports orchestration at four levels. First come the basic infrastructure resources OpenStack itself provides: compute, networking and storage. By orchestrating these resources a user obtains a basic VM, and during VM orchestration simple scripts can be supplied to perform simple configuration. Second, users can apply complex configuration to VMs, such as installing and configuring software, through Heat's Software Configuration and Software Deployment resources. Third, for advanced needs, such as a group of VMs that scales automatically with load or a load-balanced group of VMs, Heat provides AutoScaling and Load Balancing support; implementing these features by hand would cost considerable time and code, whereas with Heat a single template suffices. Heat's support for complex applications such as AutoScaling and Load Balancing is mature, with a wide variety of reference templates. Finally, if an application is complex enough, or already has deployments based on a popular configuration-management tool, for example existing Chef cookbooks, Heat can integrate with Chef to reuse those cookbooks, saving a great deal of development or migration time. Each of these four aspects is introduced below.
Figure 5. Orchestration with Heat
 
3 Heat Templates
HOT templates are defined in YAML and take the following form:
heat_template_version: 2015-10-15


description:
  # a description of the template


parameter_groups:
  # a declaration of input parameter groups and order


parameters:
  # declaration of input parameters


resources:
  # declaration of template resources


outputs:
  # declaration of output parameters


conditions:
  # declaration of conditions


1 heat_template_version
The Heat template version number: it indicates not only the format of the template, but also the features it supports.


2 description (optional keyword)
This section gives a detailed description of what the current template does.


3 parameter_groups (optional keyword)
This is an optional section of the template; it defines how the input parameters should be grouped, and in what order.


4 parameters (optional keyword)
This is an optional section of the template; it defines the input parameters that must be provided when the template is instantiated.
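As a minimal sketch of this section (the parameter name, default and allowed values below are illustrative, not taken from any particular template), a parameters declaration might look like this:

```yaml
parameters:
  image:
    type: string
    description: Name of the image used for the servers
    default: cirros                               # illustrative default
    constraints:
      - allowed_values: [cirros, ubuntu-16.04]    # illustrative values
        description: Must be an image known to this cloud
```

When the stack is created, any parameter without a default must be supplied, for example with -P on the heat command line.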


5 resources
This section contains the declarations of the template's resources. Every template should contain at least one resource in this section; otherwise the template effectively does nothing. Dependencies between resources can also be expressed, for example creating a port first and then using that port to create a VM.
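The port-then-VM dependency just mentioned can be sketched as follows; referencing one resource from another with get_resource makes Heat create the port before the server (the network, image and flavor names are assumptions for illustration):

```yaml
resources:
  server_port:
    type: OS::Neutron::Port
    properties:
      network: private                          # assumed network name
  server:
    type: OS::Nova::Server
    properties:
      image: cirros                             # assumed image name
      flavor: m1.tiny                           # assumed flavor name
      networks:
        - port: { get_resource: server_port }   # implicit dependency on the port
```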


6 outputs (optional keyword)
This is an optional section of the template; it describes the output parameters visible to the user once the template has been instantiated. Outputs can be consumed by users directly, or passed as inputs to other stacks.
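For example, a template that defines an OS::Nova::Server named server could expose its IP address roughly like this (a sketch; the resource name server is an assumption):

```yaml
outputs:
  server_ip:
    description: First IP address of the server, shown after stack creation
    value: { get_attr: [server, first_address] }
```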


7 conditions (optional keyword)
Note: Support for this section is added in the Newton version.




4 Orchestrating Infrastructure with Heat
Heat provides a corresponding resource type for each kind of resource. For a VM, for example, Heat provides OS::Nova::Server. OS::Nova::Server takes parameters such as key, image and flavor; these can be specified directly, supplied by the user when the stack is created, or derived from other parameters in context.
Listing 1. Creating a VM
resources:
  server:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      image: { get_param: image }
      flavor: { get_param: flavor }
      user_data: |
        #!/bin/bash
        echo "10.10.10.10 testvm" >> /etc/hosts
In the VM-creation example above, the values that OS::Nova::Server needs are taken from input parameters, and user_data performs some simple configuration.
5 Orchestrating Software Configuration and Deployment with Heat
Heat provides a number of resource types to support orchestrating software configuration and deployment, listed below:
OS::Heat::CloudConfig: boot-time configuration for a VM, referenced by OS::Nova::Server
OS::Heat::SoftwareConfig: describes a software configuration
OS::Heat::SoftwareDeployment: performs a software deployment
OS::Heat::SoftwareDeploymentGroup: performs a software deployment on a group of VMs
OS::Heat::SoftwareComponent: describes software configurations for the different parts of the software life cycle
OS::Heat::StructuredConfig: like OS::Heat::SoftwareConfig, but expresses the configuration as a map
OS::Heat::StructuredDeployment: applies the configuration of an OS::Heat::StructuredConfig
OS::Heat::StructuredDeploymentGroup: applies an OS::Heat::StructuredConfig configuration to a group of VMs
The most commonly used are OS::Heat::SoftwareConfig and OS::Heat::SoftwareDeployment.
OS::Heat::SoftwareConfig
The following shows how OS::Heat::SoftwareConfig is used; it specifies the configuration details.
Listing 2. Typical OS::Heat::SoftwareConfig usage
resources:
  install_db_sofwareconfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      outputs:
        - name: result
      config: |
        #!/bin/bash -v
        yum -y install mariadb mariadb-server httpd wordpress
        touch /var/log/mariadb/mariadb.log
        chown mysql.mysql /var/log/mariadb/mariadb.log
        systemctl start mariadb.service
OS::Heat::SoftwareDeployment
The following shows how OS::Heat::SoftwareDeployment is used; it specifies which configuration to run on which server, and also the signal transport used to communicate with Heat.
Listing 3. An OS::Heat::SoftwareDeployment example
sw_deployment:
  type: OS::Heat::SoftwareDeployment
  properties:
    config: { get_resource: install_db_sofwareconfig }
    server: { get_resource: server }
    signal_transport: HEAT_SIGNAL
The OS::Heat::SoftwareConfig and OS::Heat::SoftwareDeployment workflow
For OS::Heat::SoftwareConfig and OS::Heat::SoftwareDeployment to work together, a set of Heat agent tools is required; these tools are all OpenStack sub-projects.
First, os-collect-config calls the Heat API to fetch the metadata of the corresponding VM.
Once the metadata has been updated, os-refresh-config takes over; it mainly runs the scripts contained in the following directories:
/opt/stack/os-config-refresh/pre-configure.d
/opt/stack/os-config-refresh/configure.d
/opt/stack/os-config-refresh/post-configure.d
/opt/stack/os-config-refresh/migration.d
/opt/stack/os-config-refresh/error.d
Each directory corresponds to a phase of the software life cycle: pre-configure, configure, post-configure and migration. If a script in any phase fails, the error-handling scripts in the error.d directory are run. In the configure phase, os-refresh-config invokes predefined tools such as heat-config, which in turn invokes os-apply-config. What lives in heat-config and os-apply-config are scripts, also called hooks; Heat ships hook scripts for a variety of tools, and users can define their own.
Once everything has completed without error, heat-config-notify is invoked; it signals Heat that the software deployment has finished.
When Heat receives the signal from heat-config-notify, it sets the state of the corresponding software deployment resource to COMPLETE; if any error occurred, the state becomes CREATE_FAILED instead.
Figure 6. The OS::Heat::SoftwareConfig and OS::Heat::SoftwareDeployment workflow
 
6 Orchestrating Auto Scaling with Heat
Automatic scaling of infrastructure is an advanced feature. Heat provides the scaling group OS::Heat::AutoScalingGroup and the scaling policy OS::Heat::ScalingPolicy; combined with the Ceilometer-based OS::Ceilometer::Alarm, they enable automatic scaling of resources according to conditions such as load.
Figure 7. The Heat auto-scaling workflow

Listing 4. Defining a scaling group
auto_scale_group:
  type: OS::Heat::AutoScalingGroup
  properties:
    min_size: 1
    max_size: 4
    resource:              # required: the definition of the resource to scale
      type: OS::Nova::Server
      properties:
        image: { get_param: image }
        flavor: { get_param: flavor }
Listing 5. Defining a scaling policy
server_scaleup_policy:
  type: OS::Heat::ScalingPolicy
  properties:
    adjustment_type: change_in_capacity
    auto_scaling_group_id: {get_resource: auto_scale_group}
    cooldown: 60
    scaling_adjustment: 1
Listing 6. Defining the alarms
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-up if the average CPU > 50% for 1 minute
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50
      alarm_actions:
        - {get_attr: [server_scaleup_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: gt


  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-down if the average CPU < 15% for 10 minutes
      meter_name: cpu_util
      statistic: avg
      period: 600
      evaluation_periods: 1
      threshold: 15
      alarm_actions:
        - {get_attr: [server_scaledown_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: lt
7 Orchestrating Load Balancing with Heat
Load balancing is another advanced application, and it too is implemented through a group of resource types:
OS::Neutron::Pool: defines a pool of resources, usually made up of VMs
OS::Neutron::PoolMember: defines a member of the pool
OS::Neutron::HealthMonitor: defines a health monitor that checks member state using a chosen protocol, such as TCP, and feeds the results to OS::Neutron::Pool so it can adjust how requests are distributed
OS::Neutron::LoadBalancer: associates the pool to define the load balancer as a whole.
Figure 8. Load balancing with Heat

Listing 7. Defining the monitor
monitor:
  type: OS::Neutron::HealthMonitor
  properties:
    type: TCP
    delay: 3
    max_retries: 5
    timeout: 3


Listing 8. Defining the pool members
member:
  type: OS::Neutron::PoolMember
  properties:
    pool_id: {get_resource: pool}
    address: {get_attr: [my_server,first_address]}
    protocol_port: 80
other_member:
  type: OS::Neutron::PoolMember
  properties:
    pool_id: {get_resource: pool}
    address: {get_attr: [my_other_server,first_address]}
    protocol_port: 80


Listing 9. Defining the pool


pool:
  type: OS::Neutron::Pool
  properties:
    protocol: HTTP
    monitors: [{get_resource: monitor}]
    subnet_id: {get_param: network_subnet_lb_pool}
    lb_method: ROUND_ROBIN
    vip:
      protocol_port: 80


Listing 10. Defining the LoadBalancer


lb:
  type: OS::Neutron::LoadBalancer
  properties:
    protocol_port: 80
    pool_id: {get_resource: pool}

8 The AutoScaling Definition Workflow


First, define an Auto Scaling Group; the group defines the type of resource it can hold, along with the maximum and minimum number of resources.


Next, define the alarm trigger conditions as required, for example triggering an alarm when the average CPU utilization exceeds 50% within one minute; the supported metrics can be listed with the ceilometer meter-list command.
For a specific alarm, define a policy, for example when CPU utilization stays high for a long time, launch another identical instance in the AutoScalingGroup; the policy must be bound to the group defined in the first step.


To make better use of resources, load balancing (Neutron LBaaS) can be defined alongside the automatic scaling mechanism.
The resources involved in defining AutoScaling are shown in the figure below:
 
9 How AutoScaling Works
Ceilometer collects the instance's monitoring metrics and detects that some metric is abnormal and matches an already-defined alarm trigger rule.


The policy is triggered. When the policy and alarm are created, the alarm's alarm_actions attribute is set; its value can be understood as the URL for invoking a specific policy service, and at this point that URL is called.


The policy runs and, according to its configuration, decides whether to add or remove instances.
The overall workflow is roughly as follows:
 



10 Heat AutoScaling Resources


OS::Heat::AutoScalingGroup
A scaling group is a collection of instances serving the same application scenario; it defines the maximum and minimum number of instances in the group, the cooldown period, and so on.


Note
Available since 2014.1 (Icehouse)
An autoscaling group that can scale arbitrary resources.
An autoscaling group allows the creation of a desired count of similar resources, which are defined with the resource property in HOT format. If there is a need to create many of the same resources (e.g. one hundred sets of Server, WaitCondition and WaitConditionHandle or even Neutron Nets), AutoScalingGroup is a convenient and easy way to do that.
Required Properties
max_size
Maximum number of resources in the group.
Integer value expected.
Can be updated without replacement.
The value must be at least 0.
min_size
Minimum number of resources in the group.
Integer value expected.
Can be updated without replacement.
The value must be at least 0.
resource
Resource definition for the resources in the group, in HOT format. The value of this property is the definition of a resource just as if it had been declared in the template itself.
Map value expected.
Can be updated without replacement.
Optional Properties
cooldown
Cooldown period, in seconds.
Integer value expected.
Can be updated without replacement.
Note: the cooldown period is a lock-out interval after a scaling action, during which no further scaling action can take place.
desired_capacity
Desired initial number of resources.
Integer value expected.
Can be updated without replacement.
rolling_updates
Policy for rolling updates for this scaling group.
Map value expected.
Can be updated without replacement.
Defaults to "{'min_in_service': 0, 'max_batch_size': 1, 'pause_time': 0}".
Map properties:
max_batch_size
The maximum number of resources to replace at once.
Integer value expected.
Can be updated without replacement.
Defaults to "1".
The value must be at least 1.
min_in_service
The minimum number of resources in service while rolling updates are being executed.
Integer value expected.
Can be updated without replacement.
Defaults to "0".
The value must be at least 0.
pause_time
The number of seconds to wait between batches of updates.
Number value expected.
Can be updated without replacement.
Defaults to "0".
The value must be at least 0.
Attributes
current_size
Note
Available since 2015.1 (Kilo)
The current size of AutoscalingResourceGroup.
outputs
Note
Available since 2014.2 (Juno)
A map of resource names to the specified attribute of each individual resource that is part of the AutoScalingGroup. This map specifies output parameters that are available once the AutoScalingGroup has been instantiated.
outputs_list
Note
Available since 2014.2 (Juno)
A list of the specified attribute of each individual resource that is part of the AutoScalingGroup. This list of attributes is available as an output once the AutoScalingGroup has been instantiated.
refs
Note
Available since 7.0.0 (Newton)
A list of resource IDs for the resources in the group.
refs_map
Note
Available since 7.0.0 (Newton)
A map of resource names to IDs for the resources in the group.
show
Detailed information about resource.
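As an illustrative sketch, these attributes can be consumed from a template's outputs section, for example collecting the first IP address of each member of a group named auto_scaling_group (the group name and the first_address attribute of its scaled resource are assumptions):

```yaml
outputs:
  member_ips:
    description: First IP address of each resource in the scaling group
    value: { get_attr: [auto_scaling_group, outputs_list, first_address] }
  member_ids:
    description: IDs of the resources in the group
    value: { get_attr: [auto_scaling_group, refs] }
```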
HOT Syntax
heat_template_version: 2015-04-30
...
resources:
  ...
  the_resource:
    type: OS::Heat::AutoScalingGroup
    properties:
      cooldown: Integer
      desired_capacity: Integer
      max_size: Integer
      min_size: Integer
      resource: {...}
      rolling_updates: {"min_in_service": Integer, "max_batch_size": Integer, "pause_time": Number}




OS::Heat::ScalingPolicy
Adds a scaling policy to an auto scaling group; it defines the concrete scale-out or scale-in action and the number of resources to adjust by.




A resource to manage scaling of OS::Heat::AutoScalingGroup.
Note: while it may incidentally support AWS::AutoScaling::AutoScalingGroup for now, please don't use it for that purpose and use AWS::AutoScaling::ScalingPolicy instead.
Resource to manage scaling for OS::Heat::AutoScalingGroup, i.e. define which metric should be scaled and scaling adjustment, set cooldown etc.
Required Properties
adjustment_type
Type of adjustment (absolute or percentage).
String value expected.
Can be updated without replacement.
Allowed values: change_in_capacity, exact_capacity, percent_change_in_capacity
auto_scaling_group_id
AutoScaling group ID to apply policy to.
String value expected.
Updates cause replacement.
scaling_adjustment
Size of adjustment.
Number value expected.
Can be updated without replacement.
Optional Properties
cooldown
Cooldown period, in seconds.
Number value expected.
Can be updated without replacement.
min_adjustment_step
Minimum number of resources that are added or removed when the AutoScaling group scales up or down. This can be used only when specifying percent_change_in_capacity for the adjustment_type property.
Integer value expected.
Can be updated without replacement.
The value must be at least 0.
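As a sketch of percent_change_in_capacity used together with min_adjustment_step (the resource names are illustrative), a policy that grows the group by 50%, but always by at least one resource, might read:

```yaml
scaleup_policy:
  type: OS::Heat::ScalingPolicy
  properties:
    adjustment_type: percent_change_in_capacity
    auto_scaling_group_id: { get_resource: auto_scaling_group }
    scaling_adjustment: 50      # grow by 50% of current capacity...
    min_adjustment_step: 1      # ...but never by fewer than 1 resource
    cooldown: 60
```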
Attributes
alarm_url
A signed url to handle the alarm.
show
Detailed information about resource.
signal_url
Note
Available since 5.0.0 (Liberty)
A url to handle the alarm using native API.
HOT Syntax
heat_template_version: 2015-04-30
...
resources:
  ...
  the_resource:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: String
      auto_scaling_group_id: String
      cooldown: Number
      min_adjustment_step: Integer
      scaling_adjustment: Number






AutoScaling in Heat is used together with OS::Ceilometer::Alarm: the alarm monitors the running state of the instances and raises an alert once a threshold is crossed.
OS::Ceilometer::Alarm
Heat can obtain resource-usage data from a VM in two ways: through the OpenStack Ceilometer service, or through the heat-cfntools tools inside the VM. This article covers the former.
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-up if the average CPU > 50% for 1 minute
      meter_name: cpu_util
      statistic: avg
      period: 300
      evaluation_periods: 1
      repeat_actions: true
      threshold: 80
      alarm_actions:
        - {get_attr: [server_scaleup_policy, alarm_url]}
      matching_metadata: {'server_group': {get_param: "OS::stack_id"}}
      comparison_operator: gt


  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-down if the average CPU < 15% for 10 minutes
      meter_name: cpu_util
      statistic: avg
      period: 600
      evaluation_periods: 1
      repeat_actions: true
      threshold: 10
      alarm_actions:
        - {get_attr: [server_scaledown_policy, alarm_url]}
      matching_metadata: {'server_group': {get_param: "OS::stack_id"}}
      comparison_operator: lt


11 Heat AutoScaling Templates
https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml


Template 1
This template creates a server group of one to three instances; driven by alarm messages, the scaling policies add or remove one instance at a time.


heat_template_version: 2015-10-15


description: A simple auto scaling group without lbaas


parameters:
  image:
    type: string
    description: Image used for servers
  key:
    type: string
    description: SSH key to connect to the servers
  flavor:
    type: string
    description: flavor used by the web servers
  network:
    type: string
    description: Network used by the server


resources:
  auto_scaling_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          flavor: {get_param: flavor}
          # Optional Properties
          image: {get_param: image}
          key_name: {get_param: key}
          networks: [{network: {get_param: network} }]
          metadata: {"metering.server_group": {get_param: "OS::stack_id"}}
      # Optional Properties
      cooldown: 60
      desired_capacity: 1
          
  server_scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: auto_scaling_group}
      scaling_adjustment: 1
      # Optional Properties
      cooldown: 60


  server_scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: auto_scaling_group}
      scaling_adjustment: -1
      # Optional Properties
      cooldown: 60


  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-up if the average CPU > 80% for 1 minute
      meter_name: cpu_util
      statistic: avg
      period: 300
      evaluation_periods: 1
      repeat_actions: true
      threshold: 80
      alarm_actions:
        - {get_attr: [server_scaleup_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.server_group': {get_param: "OS::stack_id"}}
      comparison_operator: gt


  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-down if the average CPU < 10% for 10 minutes
      meter_name: cpu_util
      statistic: avg
      period: 600
      evaluation_periods: 1
      repeat_actions: true
      threshold: 10
      alarm_actions:
        - {get_attr: [server_scaledown_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.server_group': {get_param: "OS::stack_id"}}
      comparison_operator: lt




Template 2


heat_template_version: 2015-10-15
description: AutoScaling Wordpress
parameters:
  image:
    type: string
    description: Image used for servers
  key:
    type: string
    description: SSH key to connect to the servers
  flavor:
    type: string
    description: flavor used by the web servers
  network:
    type: string
    description: Network used by the server
  subnet_id:
    type: string
    description: subnet on which the load balancer will be located
  external_network_id:
    type: string
    description: UUID of a Neutron external network
resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: lb_server.yaml
        properties:
          flavor: {get_param: flavor}
          image: {get_param: image}
          key_name: {get_param: key}
          network: {get_param: network}
          pool_id: {get_resource: pool}
          metadata: {"metering.stack": {get_param: "OS::stack_id"}}
          user_data: |
            #!/bin/sh
            while :; do echo >/dev/null; done &
      # Optional Properties
      cooldown: 60
      desired_capacity: 2
  web_server_scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: asg}
      cooldown: 60
      scaling_adjustment: 1
  web_server_scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: asg}
      cooldown: 60
      scaling_adjustment: -1
      
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-up if the average CPU > 50% for 1 minute
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50
      alarm_actions:
        - {get_attr: [web_server_scaleup_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: gt
  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-down if the average CPU < 15% for 10 minutes
      meter_name: cpu_util
      statistic: avg
      period: 600
      evaluation_periods: 1
      threshold: 15
      alarm_actions:
        - {get_attr: [web_server_scaledown_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: lt


  monitor:
    type: OS::Neutron::HealthMonitor
    properties:
      type: TCP
      delay: 5
      max_retries: 5
      timeout: 5
  pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      monitors: [{get_resource: monitor}]
      subnet_id: {get_param: subnet_id}
      lb_method: ROUND_ROBIN
      vip:
        protocol_port: 80
  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      protocol_port: 80
      pool_id: {get_resource: pool}


  # assign a floating ip address to the load balancer
  # pool.
  lb_floating:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: {get_param: external_network_id}
      port_id: {get_attr: [pool, vip, port_id]}


outputs:
  pool_ip_address:
    value: {get_attr: [pool, vip, address]}
    description: The IP address of the load balancing pool




#lb_server.yaml
heat_template_version: 2015-10-15
description: A load-balancer server
parameters:
  image:
    type: string
    description: Image used for servers
  key_name:
    type: string
    description: SSH key to connect to the servers
  flavor:
    type: string
    description: flavor used by the servers
  pool_id:
    type: string
    description: Pool to contact
  user_data:
    type: string
    description: Server user_data
  metadata:
    type: json
  network:
    type: string
    description: Network used by the server


resources:
  server:
    type: OS::Nova::Server
    properties:
      flavor: {get_param: flavor}
      image: {get_param: image}
      key_name: {get_param: key_name}
      metadata: {get_param: metadata}
      user_data: {get_param: user_data}
      user_data_format: RAW
      networks: [{network: {get_param: network} }]
  member:
    type: OS::Neutron::PoolMember
    properties:
      pool_id: {get_param: pool_id}
      address: {get_attr: [server, first_address]}
      protocol_port: 80


outputs:
  server_ip:
    description: IP Address of the load-balanced server.
    value: { get_attr: [server, first_address] }
  lb_member:
    description: LB member details.
    value: { get_attr: [member, show] }




12 AutoScaling in Practice


Creating the stack
Use Template 1; first look up the relevant OpenStack values: image, network, key and flavor.
The template creates a server group of one to three instances; driven by alarm messages, the scaling policies add or remove one instance at a time.
Run:
root@node111:~# heat stack-create -f autoscaling.yml -P "image=cirros;network=private;key=mykey;flavor=m1.tiny" stack_cirros 
+--------------------------------------+--------------+--------------------+---------------------+--------------+
| id                                   | stack_name   | stack_status       | creation_time       | updated_time |
+--------------------------------------+--------------+--------------------+---------------------+--------------+
| d7a68309-ab44-42d3-bb83-3c891444a1be | stack_cirros | CREATE_IN_PROGRESS | 2017-10-19T08:02:41 | None         |
+--------------------------------------+--------------+--------------------+---------------------+--------------+
root@node111:~# heat stack-list
+--------------------------------------+--------------+-----------------+---------------------+--------------+
| id                                   | stack_name   | stack_status    | creation_time       | updated_time |
+--------------------------------------+--------------+-----------------+---------------------+--------------+
| d7a68309-ab44-42d3-bb83-3c891444a1be | stack_cirros | CREATE_COMPLETE | 2017-10-19T08:02:41 | None         |
+--------------------------------------+--------------+-----------------+---------------------+--------------+
root@node111:~# nova list
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------+
| ID                                   | Name                                                  | Status | Task State | Power State | Networks           |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------+
| 390fd353-417c-449f-b351-a00f610959fe | st-aling_group-svsevkr46xo3-urkfepwbjkdt-lrzein64bbbd | ACTIVE | -          | Running     | private=10.0.0.119 |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------+


Log output:
2017-10-19 16:02:30.005 2505 INFO heat.engine.service [req-1ecc1055-6c3e-4e5b-8abe-17aff5d8e75f 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Creating stack stack_cirros
2017-10-19 16:02:30.332 2505 INFO heat.engine.resource [req-1ecc1055-6c3e-4e5b-8abe-17aff5d8e75f 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating AutoScalingResourceGroup "auto_scaling_group"
2017-10-19 16:02:30.620 2505 INFO heat.engine.resource [req-1ecc1055-6c3e-4e5b-8abe-17aff5d8e75f 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating Server "qionvsem7ygr"
2017-10-19 16:02:33.530 2505 INFO heat.engine.resource [req-1ecc1055-6c3e-4e5b-8abe-17aff5d8e75f 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating AutoScalingPolicy "server_scaledown_policy"
2017-10-19 16:02:33.569 2505 INFO heat.engine.resource [req-1ecc1055-6c3e-4e5b-8abe-17aff5d8e75f 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating CeilometerAlarm "cpu_alarm_low"
2017-10-19 16:02:33.591 2505 INFO heat.engine.resource [req-1ecc1055-6c3e-4e5b-8abe-17aff5d8e75f 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating AutoScalingPolicy "server_scaleup_policy"
2017-10-19 16:02:33.592 2505 INFO heat.engine.resource [req-1ecc1055-6c3e-4e5b-8abe-17aff5d8e75f 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating CeilometerAlarm "cpu_alarm_high"
2017-10-19 16:02:34.131 2505 WARNING oslo_config.cfg [req-1ecc1055-6c3e-4e5b-8abe-17aff5d8e75f 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Option "username" from group "trustee" is deprecated. Use option "user-name" from group "trustee".
2017-10-19 16:02:34.436 2499 INFO heat.engine.service [req-6d69be8f-627f-4cfb-aa09-6dc48847b88b - -] Service d544d0c8-aea5-4a71-94ff-31159d5eab1d is updated
2017-10-19 16:02:36.957 2503 INFO heat.engine.service [req-4ed427f8-f237-4bc9-888e-d2ff685f6faf - -] Service f4c93d77-37ff-4953-b06e-c74990da43f4 is updated
2017-10-19 16:02:37.205 2501 INFO heat.engine.service [req-cdcead0c-0596-4b2f-9e5f-45efad378fa0 - -] Service 4b806624-30eb-4507-b557-c9a687ef4cac is updated
2017-10-19 16:02:41.689 2505 INFO heat.engine.service [req-bb8a48b6-a369-4052-8cf7-c2eea3a22bda - -] Service 10638771-2663-42e5-8a15-17f0515747c3 is updated
2017-10-19 16:02:42.512 2505 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (stack_cirros): Stack CREATE started
2017-10-19 16:02:42.591 2505 INFO heat.engine.resource [-] creating AutoScalingResourceGroup "auto_scaling_group" Stack "stack_cirros" [d7a68309-ab44-42d3-bb83-3c891444a1be]
2017-10-19 16:02:42.809 2505 INFO oslo.messaging._drivers.impl_rabbit [-] Connecting to AMQP server on 192.168.1.111:5672
2017-10-19 16:02:42.815 2505 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on 192.168.1.111:5672
2017-10-19 16:02:42.924 2503 INFO heat.engine.service [req-1ecc1055-6c3e-4e5b-8abe-17aff5d8e75f 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Creating stack stack_cirros-auto_scaling_group-svsevkr46xo3
2017-10-19 16:02:42.938 2503 INFO heat.engine.resource [req-1ecc1055-6c3e-4e5b-8abe-17aff5d8e75f 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating Server "urkfepwbjkdt"
2017-10-19 16:02:45.260 2503 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (stack_cirros-auto_scaling_group-svsevkr46xo3): Stack CREATE started
2017-10-19 16:02:45.307 2503 INFO heat.engine.resource [-] creating Server "urkfepwbjkdt" Stack "stack_cirros-auto_scaling_group-svsevkr46xo3" [f16bb623-194b-4db4-a5b9-e557112e3081]
2017-10-19 16:02:55.209 2503 INFO heat.engine.stack [-] Stack CREATE COMPLETE (stack_cirros-auto_scaling_group-svsevkr46xo3): Stack CREATE completed successfully
2017-10-19 16:02:55.998 2505 INFO heat.engine.resource [-] creating AutoScalingPolicy "server_scaledown_policy" Stack "stack_cirros" [d7a68309-ab44-42d3-bb83-3c891444a1be]
2017-10-19 16:02:57.081 2505 INFO heat.engine.resource [-] creating AutoScalingPolicy "server_scaleup_policy" Stack "stack_cirros" [d7a68309-ab44-42d3-bb83-3c891444a1be]
2017-10-19 16:02:58.823 2505 INFO heat.engine.resource [-] creating CeilometerAlarm "cpu_alarm_low" Stack "stack_cirros" [d7a68309-ab44-42d3-bb83-3c891444a1be]
2017-10-19 16:02:59.839 2505 INFO heat.engine.resource [-] creating CeilometerAlarm "cpu_alarm_high" Stack "stack_cirros" [d7a68309-ab44-42d3-bb83-3c891444a1be]
2017-10-19 16:03:01.452 2505 INFO heat.engine.stack [-] Stack CREATE COMPLETE (stack_cirros): Stack CREATE completed successfully


List the stack's resources:
root@node111:~# heat resource-list stack_cirros
+-------------------------+--------------------------------------+----------------------------+-----------------+---------------------+
| resource_name           | physical_resource_id                 | resource_type              | resource_status | updated_time        |
+-------------------------+--------------------------------------+----------------------------+-----------------+---------------------+
| auto_scaling_group      | f16bb623-194b-4db4-a5b9-e557112e3081 | OS::Heat::AutoScalingGroup | CREATE_COMPLETE | 2017-10-19T08:02:42 |
| cpu_alarm_high          | ed5f70a9-247b-42fe-ac72-2901426bc118 | OS::Ceilometer::Alarm      | CREATE_COMPLETE | 2017-10-19T08:02:42 |
| cpu_alarm_low           | c9028950-3e25-444e-a4c7-5b808f665c16 | OS::Ceilometer::Alarm      | CREATE_COMPLETE | 2017-10-19T08:02:42 |
| server_scaledown_policy | d0a6046ec080418bb77f0262e0b7fa84     | OS::Heat::ScalingPolicy    | CREATE_COMPLETE | 2017-10-19T08:02:42 |
| server_scaleup_policy   | bda70c3439f64eeb8e69bbcd1a44e05f     | OS::Heat::ScalingPolicy    | CREATE_COMPLETE | 2017-10-19T08:02:42 |
+-------------------------+--------------------------------------+----------------------------+-----------------+---------------------+
Of these resources, auto_scaling_group (AutoScalingGroup) is used by server_scaleup_policy (ScalingPolicy), and server_scaleup_policy is in turn used by cpu_alarm_high (Ceilometer::Alarm).


Checking the alarm information
The list of alarms:
root@node111:~# ceilometer alarm-list
+--------------------------------------+------------------------------------------+-------------------+----------+---------+------------+---------------------------------+------------------+
| Alarm ID                             | Name                                     | State             | Severity | Enabled | Continuous | Alarm condition                 | Time constraints |
+--------------------------------------+------------------------------------------+-------------------+----------+---------+------------+---------------------------------+------------------+
| c9028950-3e25-444e-a4c7-5b808f665c16 | stack_cirros-cpu_alarm_low-5ch6usj54ch3  | insufficient data | low      | True    | True       | cpu_util < 15.0 during 1 x 600s | None             |
| ed5f70a9-247b-42fe-ac72-2901426bc118 | stack_cirros-cpu_alarm_high-a2clwq4pjjpy | insufficient data | low      | True    | True       | cpu_util > 50.0 during 1 x 60s  | None             |
+--------------------------------------+------------------------------------------+-------------------+----------+---------+------------+---------------------------------+------------------+
An alarm is an object that monitors a specific metric. An alarm can be in one of the following states:
1. OK: the metric is within the normal range.
2. ALARM: the metric is abnormal. If the alarm stays in this state for several consecutive evaluation periods, it triggers one or more policies, which in turn scale the scaling group out or in.
3. INSUFFICIENT_DATA: no data is available. This state usually means that no samples of the monitored metric have been collected, and an alarm in this state will not fire. For testing purposes, you can shorten the collection interval by adjusting the interval parameter in /etc/ceilometer/pipeline.yaml.
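For reference, a minimal sketch of the relevant pipeline.yaml fragment (this assumes the default layout of that file; the 60-second value is only an example, and the ceilometer compute agent must be restarted after editing):

```yaml
# /etc/ceilometer/pipeline.yaml (fragment, sketch only)
# Shorten the sampling interval so cpu_util samples arrive more often
# than the alarm evaluation period.
sources:
    - name: cpu_source
      interval: 60          # default is 600 seconds
      meters:
          - "cpu"
      sinks:
          - cpu_sink
```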




Alarm history:
root@node111:~# ceilometer alarm-history c9028950-3e25-444e-a4c7-5b808f665c16
+----------+----------------------------+-----------------------------------------------------------------+
| Type     | Timestamp                  | Detail                                                          |
+----------+----------------------------+-----------------------------------------------------------------+
| creation | 2017-10-19T08:02:59.376000 | name: stack_cirros-cpu_alarm_low-5ch6usj54ch3                   |
|          |                            | description: Scale-down if the average CPU < 15% for 10 minutes |
|          |                            | type: threshold                                                 |
|          |                            | rule: cpu_util < 15.0 during 1 x 600s                           |
|          |                            | severity: low                                                   |
|          |                            | time_constraints: None                                          |
+----------+----------------------------+-----------------------------------------------------------------+


Alarm details:
root@node111:~# ceilometer alarm-show c9028950-3e25-444e-a4c7-5b808f665c16
+---------------------------+--------------------------------------------------------------------------+
| Property                  | Value                                                                    |
+---------------------------+--------------------------------------------------------------------------+
| alarm_actions             | [u'http://192.168.1.125:8000/v1/signal/arn%3Aopenstack%3Aheat%3A%3A5c0c0 |
|                           | f0aeb4c4a3c9ecae63a6cc1a6c0%3Astacks%2Fstack_cirros%2Fd7a68309-ab44-42d3 |
|                           | -bb83-3c891444a1be%2Fresources%2Fserver_scaledown_policy?Timestamp=2017- |
|                           | 10-19T08%3A02%3A42Z&SignatureMethod=HmacSHA256&AWSAccessKeyId=f115100fbe |
|                           | 944de7af5f9852281841ac&SignatureVersion=2&Signature=vjazy%2BV24BbbxUwE%2 |
|                           | BzG7etMRDRqw%2F0IFdhDNh%2FI4iuA%3D']                                     |
| alarm_id                  | c9028950-3e25-444e-a4c7-5b808f665c16                                     |
| comparison_operator       | lt                                                                       |
| description               | Scale-down if the average CPU < 15% for 10 minutes                       |
| enabled                   | True                                                                     |
| evaluation_periods        | 1                                                                        |
| exclude_outliers          | False                                                                    |
| insufficient_data_actions | None                                                                     |
| meter_name                | cpu_util                                                                 |
| name                      | stack_cirros-cpu_alarm_low-5ch6usj54ch3                                  |
| ok_actions                | None                                                                     |
| period                    | 600                                                                      |
| project_id                | 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0                                         |
| query                     | metadata.user_metadata.stack == d7a68309-ab44-42d3-bb83-3c891444a1be AND |
|                           | project_id == 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0                           |
| repeat_actions            | True                                                                     |
| severity                  | low                                                                      |
| state                     | insufficient data                                                        |
| statistic                 | avg                                                                      |
| threshold                 | 15.0                                                                     |
| type                      | threshold                                                                |
| user_id                   | 6d3eb238e11740369526863b1f1bba04                                         |
+---------------------------+--------------------------------------------------------------------------+




Problems encountered during creation
1 The alarm state stays at insufficient data for a long time
Cause 1:
In the template's resources section, the metadata property of auto_scaling_group is set to:
metadata: {"metering.server_group": {get_param: "OS::stack_id"}}
while matching_metadata in cpu_alarm_high and cpu_alarm_low is set to:
matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
These keys do not match, so the alarm query never matches any samples.
The output of ceilometer alarm-show c9028950-3e25-444e-a4c7-5b808f665c16 contains:


| query                     | metadata.user_metadata.stack == d7a68309-ab44-42d3-bb83-3c891444a1be AND |
|                           | project_id == 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0                           |


which does not match the metadata of the generated instance shown by nova show:
root@node111:~# nova show 390fd353-417c-449f-b351-a00f610959fe
+--------------------------------------+-------------------------------------------------------------------+
| Property                             | Value                                                             |
+--------------------------------------+-------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                            |
| OS-EXT-AZ:availability_zone          | nova                                                              |
| OS-EXT-STS:power_state               | 1                                                                 |
| OS-EXT-STS:task_state                | -                                                                 |
| OS-EXT-STS:vm_state                  | active                                                            |
| OS-SRV-USG:launched_at               | 2017-10-19T08:02:53.000000                                        |
| OS-SRV-USG:terminated_at             | -                                                                 |
| accessIPv4                           |                                                                   |
| accessIPv6                           |                                                                   |
| config_drive                         |                                                                   |
| created                              | 2017-10-19T08:02:46Z                                              |
| flavor                               | m1.tiny (1)                                                       |
| hostId                               | 194de285682af8ea0191bdc3e90632207358af8e5ec0a5c9bd3a2584          |
| id                                   | 390fd353-417c-449f-b351-a00f610959fe                              |
| image                                | cirros (65571f2c-9e69-4a1a-96a5-32837c90a2ac)                     |
| key_name                             | mykey                                                             |
| metadata                             | {"metering.server_group": "d7a68309-ab44-42d3-bb83-3c891444a1be"} |
| name                                 | st-aling_group-svsevkr46xo3-urkfepwbjkdt-lrzein64bbbd             |
| os-extended-volumes:volumes_attached | []                                                                |
| private network                      | 10.0.0.119                                                        |
| progress                             | 0                                                                 |
| security_groups                      | default                                                           |
| status                               | ACTIVE                                                            |
| tenant_id                            | 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0                                  |
| updated                              | 2017-10-19T08:02:53Z                                              |
| user_id                              | 6d3eb238e11740369526863b1f1bba04                                  |
+--------------------------------------+-------------------------------------------------------------------+
As the output shows, the instance metadata is: {"metering.server_group": "d7a68309-ab44-42d3-bb83-3c891444a1be"}


Therefore the two settings must use the same metadata key: either change server_group to stack, or change stack to server_group.
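A sketch of the corrected template fragments (the key names shown are illustrative; Ceilometer republishes an instance metadata key metering.X as metadata.user_metadata.X, so the suffixes must agree):

```yaml
auto_scaling_group:
  type: OS::Heat::AutoScalingGroup
  properties:
    # ...other properties unchanged...
    resource:
      type: OS::Nova::Server
      properties:
        # "metering.stack" on the instance becomes
        # "metadata.user_metadata.stack" in Ceilometer samples.
        metadata: {"metering.stack": {get_param: "OS::stack_id"}}

cpu_alarm_high:
  type: OS::Ceilometer::Alarm
  properties:
    # ...other properties unchanged...
    matching_metadata:
      'metadata.user_metadata.stack': {get_param: "OS::stack_id"}
```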




Cause 2:
If the alarm state stays at insufficient data for a long time, Ceilometer has not collected any samples of the monitored metric. For a better demonstration you can shorten the collection interval in /etc/ceilometer/pipeline.yaml. The default interval is 600 seconds; set it to a value smaller than the cooldown or the alarm period.


2 The repeat_actions attribute
After the alarms defined in this template are created, the alarm list shows the Continuous attribute as False (in the detail view this corresponds to the repeat_actions attribute). When it is False, the alarm_actions are executed only once, so for a better demonstration change it to True.
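A sketch of an alarm resource with repeat_actions enabled (the threshold values mirror the cpu_alarm_high output shown earlier; the alarm_actions wiring is an assumption based on the stack's resource names):

```yaml
cpu_alarm_high:
  type: OS::Ceilometer::Alarm
  properties:
    description: Scale-up if the average CPU > 50% for 1 minute
    meter_name: cpu_util
    statistic: avg
    period: 60
    evaluation_periods: 1
    threshold: 50
    comparison_operator: gt
    # With repeat_actions: true, the alarm_actions are re-sent on every
    # evaluation while the alarm remains in the ALARM state, instead of
    # only once on the state transition.
    repeat_actions: true
    alarm_actions:
      - {get_attr: [server_scaleup_policy, alarm_url]}
```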


3 Unable to retrieve stack
The following error can appear in the current version; see https://review.openstack.org/#/c/92887/ for a fix:
2014-08-01 05:38:08.410 3834 ERROR heat.engine.service [req-96a84baa-6b6f-4a4e-a2f3-90c0a02612e7 None] Unable to retrieve stack 40e7560e-848e-4d78-bac0-8eb4f26ac22f for periodic task




13 Load-balancing orchestration practice
Create the stack
Use template 2, and look up the required OpenStack resources first: image, network, key, and flavor.
The template creates a server group of 1 to 3 instances; based on alarm notifications, the scaling policies add or remove one instance at a time.
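The resource types that appear in the heat-engine log further down suggest a template modeled on the upstream autoscaling-plus-LBaaS example. A minimal sketch of its load-balancer part (property values are assumptions, not the exact lbaas.yml contents):

```yaml
monitor:
  type: OS::Neutron::HealthMonitor
  properties:
    type: TCP
    delay: 5
    max_retries: 5
    timeout: 5

pool:
  type: OS::Neutron::Pool
  properties:
    protocol: HTTP
    subnet_id: {get_param: subnet_id}
    lb_method: ROUND_ROBIN
    monitors: [{get_resource: monitor}]
    vip:
      protocol_port: 80

lb:
  type: OS::Neutron::LoadBalancer
  properties:
    protocol_port: 80
    pool_id: {get_resource: pool}
```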
Run the command:
root@node111:~# heat stack-create -f lbaas.yml -P "image=cirros;network=private;key=mykey;flavor=m1.tiny;subnet_id=10ea7dd8-3b13-4bb5-97eb-c9b77381c5b8;external_network_id=2390f0f1-43b3-45ff-92ca-1e9fcc29a7ac" stack_lbaas1400
+--------------------------------------+-----------------+--------------------+---------------------+--------------+
| id                                   | stack_name      | stack_status       | creation_time       | updated_time |
+--------------------------------------+-----------------+--------------------+---------------------+--------------+
| f9d8c9b0-48d2-48b2-88c6-4062b968c38f | stack_lbaas1400 | CREATE_IN_PROGRESS | 2017-10-26T06:38:48 | None         |
+--------------------------------------+-----------------+--------------------+---------------------+--------------+
root@node111:~# heat stack-list
+--------------------------------------+-----------------+-----------------+---------------------+--------------+
| id                                   | stack_name      | stack_status    | creation_time       | updated_time |
+--------------------------------------+-----------------+-----------------+---------------------+--------------+
| f9d8c9b0-48d2-48b2-88c6-4062b968c38f | stack_lbaas1400 | CREATE_COMPLETE | 2017-10-26T06:38:48 | None         |
+--------------------------------------+-----------------+-----------------+---------------------+--------------+
root@node111:~# nova list
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------+
| ID                                   | Name                                                  | Status | Task State | Power State | Networks           |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------+
| a5d4b4b1-c04f-4bc2-8d65-a80d9ed9d6d1 | st-4dfr-2chggxltixwk-cnaim3mcwesv-server-eb7cfz2vplta | ACTIVE | -          | Running     | private=10.0.0.136 |
| b22ba98b-5cee-4d85-8a3c-93b12a7e1541 | st-4dfr-j3awdcs7i2mw-ph3j6j4aswi5-server-dgnorr5hntjc | ACTIVE | -          | Running     | private=10.0.0.135 |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------+


Logs:


root@node111:~# tailf /var/log/heat/heat-engine.log
2017-10-26 14:38:47.621 2505 INFO heat.engine.service [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Creating stack stack_lbaas1400
2017-10-26 14:38:47.643 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating HealthMonitor "monitor"
2017-10-26 14:38:47.644 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating Pool "pool"
2017-10-26 14:38:47.683 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating FloatingIP "lb_floating"
2017-10-26 14:38:47.700 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating LoadBalancer "lb"
2017-10-26 14:38:47.701 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating AutoScalingResourceGroup "asg"
2017-10-26 14:38:47.713 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating TemplateResource "zzrthyvb5fww"
2017-10-26 14:38:47.721 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating Server "server"
2017-10-26 14:38:48.583 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating PoolMember "member"
2017-10-26 14:38:48.586 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating TemplateResource "jrakyi2gctff"
2017-10-26 14:38:48.597 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating Server "server"
2017-10-26 14:38:48.625 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating PoolMember "member"
2017-10-26 14:38:48.626 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating AutoScalingPolicy "web_server_scaleup_policy"
2017-10-26 14:38:48.627 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating CeilometerAlarm "cpu_alarm_high"
2017-10-26 14:38:48.628 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating AutoScalingPolicy "web_server_scaledown_policy"
2017-10-26 14:38:48.628 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating CeilometerAlarm "cpu_alarm_low"
2017-10-26 14:38:49.883 2505 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (stack_lbaas1400): Stack CREATE started
2017-10-26 14:38:49.953 2505 INFO heat.engine.resource [-] creating HealthMonitor "monitor" Stack "stack_lbaas1400" [f9d8c9b0-48d2-48b2-88c6-4062b968c38f]
2017-10-26 14:38:50.310 2505 INFO heat.engine.resource [-] creating Pool "pool" Stack "stack_lbaas1400" [f9d8c9b0-48d2-48b2-88c6-4062b968c38f]
2017-10-26 14:38:53.898 2503 INFO heat.engine.service [req-918bd126-b092-4375-b362-5a6d2d0eb401 - -] Service f4c93d77-37ff-4953-b06e-c74990da43f4 is updated
2017-10-26 14:38:55.902 2505 INFO heat.engine.resource [-] creating FloatingIP "lb_floating" Stack "stack_lbaas1400" [f9d8c9b0-48d2-48b2-88c6-4062b968c38f]
2017-10-26 14:38:56.927 2505 INFO heat.engine.resource [-] creating LoadBalancer "lb" Stack "stack_lbaas1400" [f9d8c9b0-48d2-48b2-88c6-4062b968c38f]
2017-10-26 14:38:57.041 2505 INFO heat.engine.resource [-] creating AutoScalingResourceGroup "asg" Stack "stack_lbaas1400" [f9d8c9b0-48d2-48b2-88c6-4062b968c38f]
2017-10-26 14:38:57.517 2501 INFO heat.engine.service [req-86571e2d-7ee9-4f24-b148-60dfc0907ff0 - -] Service 4b806624-30eb-4507-b557-c9a687ef4cac is updated
2017-10-26 14:38:57.520 2503 INFO heat.engine.service [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Creating stack stack_lbaas1400-asg-eo57lc6o4dfr
2017-10-26 14:38:57.533 2503 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating TemplateResource "j3awdcs7i2mw"
2017-10-26 14:38:57.542 2503 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating Server "server"
2017-10-26 14:38:57.648 2503 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating PoolMember "member"
2017-10-26 14:38:57.649 2503 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating TemplateResource "2chggxltixwk"
2017-10-26 14:38:57.676 2503 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating Server "server"
2017-10-26 14:38:57.708 2503 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating PoolMember "member"
2017-10-26 14:38:58.794 2503 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (stack_lbaas1400-asg-eo57lc6o4dfr): Stack CREATE started
2017-10-26 14:38:58.892 2503 INFO heat.engine.resource [-] creating TemplateResource "j3awdcs7i2mw" Stack "stack_lbaas1400-asg-eo57lc6o4dfr" [c2e8d16b-94d6-4f27-8060-91e89199a089]
2017-10-26 14:38:59.204 2501 INFO heat.engine.service [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Creating stack stack_lbaas1400-asg-eo57lc6o4dfr-j3awdcs7i2mw-ph3j6j4aswi5
2017-10-26 14:38:59.217 2501 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating Server "server"
2017-10-26 14:38:59.414 2501 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating PoolMember "member"
2017-10-26 14:38:59.641 2503 INFO heat.engine.resource [-] creating TemplateResource "2chggxltixwk" Stack "stack_lbaas1400-asg-eo57lc6o4dfr" [c2e8d16b-94d6-4f27-8060-91e89199a089]
2017-10-26 14:38:59.765 2501 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (stack_lbaas1400-asg-eo57lc6o4dfr-j3awdcs7i2mw-ph3j6j4aswi5): Stack CREATE started
2017-10-26 14:38:59.797 2505 INFO heat.engine.service [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Creating stack stack_lbaas1400-asg-eo57lc6o4dfr-2chggxltixwk-cnaim3mcwesv
2017-10-26 14:38:59.822 2501 INFO heat.engine.resource [-] creating Server "server" Stack "stack_lbaas1400-asg-eo57lc6o4dfr-j3awdcs7i2mw-ph3j6j4aswi5" [ac16333d-d980-4c36-ba80-4deac546588c]
2017-10-26 14:38:59.829 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating Server "server"
2017-10-26 14:39:00.588 2505 INFO heat.engine.service [req-565dc59a-408f-466e-9ecd-3d06afc23284 - -] Service 10638771-2663-42e5-8a15-17f0515747c3 is updated
2017-10-26 14:39:00.605 2505 INFO heat.engine.resource [req-ae00184e-d4aa-4a29-9af8-3ec937ed9d82 6d3eb238e11740369526863b1f1bba04 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0] Validating PoolMember "member"
2017-10-26 14:39:01.063 2505 INFO heat.engine.stack [-] Stack CREATE IN_PROGRESS (stack_lbaas1400-asg-eo57lc6o4dfr-2chggxltixwk-cnaim3mcwesv): Stack CREATE started
2017-10-26 14:39:01.114 2505 INFO heat.engine.resource [-] creating Server "server" Stack "stack_lbaas1400-asg-eo57lc6o4dfr-2chggxltixwk-cnaim3mcwesv" [5cc1e0d1-c0e1-4a79-84c7-40cf3cb51afc]
2017-10-26 14:39:08.270 2501 INFO heat.engine.resource [-] creating PoolMember "member" Stack "stack_lbaas1400-asg-eo57lc6o4dfr-j3awdcs7i2mw-ph3j6j4aswi5" [ac16333d-d980-4c36-ba80-4deac546588c]
2017-10-26 14:39:13.109 2501 INFO heat.engine.stack [-] Stack CREATE COMPLETE (stack_lbaas1400-asg-eo57lc6o4dfr-j3awdcs7i2mw-ph3j6j4aswi5): Stack CREATE completed successfully
2017-10-26 14:39:13.287 2505 INFO heat.engine.resource [-] creating PoolMember "member" Stack "stack_lbaas1400-asg-eo57lc6o4dfr-2chggxltixwk-cnaim3mcwesv" [5cc1e0d1-c0e1-4a79-84c7-40cf3cb51afc]
2017-10-26 14:39:13.931 2499 INFO heat.engine.service [req-c9bb45f8-b178-4723-9c81-0ea84f0522f7 - -] Service d544d0c8-aea5-4a71-94ff-31159d5eab1d is updated
2017-10-26 14:39:15.347 2505 INFO heat.engine.stack [-] Stack CREATE COMPLETE (stack_lbaas1400-asg-eo57lc6o4dfr-2chggxltixwk-cnaim3mcwesv): Stack CREATE completed successfully
2017-10-26 14:39:16.193 2503 INFO heat.engine.stack [-] Stack CREATE COMPLETE (stack_lbaas1400-asg-eo57lc6o4dfr): Stack CREATE completed successfully
2017-10-26 14:39:17.564 2505 INFO heat.engine.resource [-] creating AutoScalingPolicy "web_server_scaleup_policy" Stack "stack_lbaas1400" [f9d8c9b0-48d2-48b2-88c6-4062b968c38f]
2017-10-26 14:39:18.302 2505 INFO heat.engine.resource [-] creating AutoScalingPolicy "web_server_scaledown_policy" Stack "stack_lbaas1400" [f9d8c9b0-48d2-48b2-88c6-4062b968c38f]
2017-10-26 14:39:20.123 2505 INFO heat.engine.resource [-] creating CeilometerAlarm "cpu_alarm_high" Stack "stack_lbaas1400" [f9d8c9b0-48d2-48b2-88c6-4062b968c38f]
2017-10-26 14:39:21.253 2505 INFO heat.engine.resource [-] creating CeilometerAlarm "cpu_alarm_low" Stack "stack_lbaas1400" [f9d8c9b0-48d2-48b2-88c6-4062b968c38f]
2017-10-26 14:39:23.612 2505 INFO heat.engine.stack [-] Stack CREATE COMPLETE (stack_lbaas1400): Stack CREATE completed successfully






View alarm information


root@node111:~# ceilometer alarm-list
+--------------------------------------+---------------------------------------------+-------+----------+---------+------------+---------------------------------+------------------+
| Alarm ID                             | Name                                        | State | Severity | Enabled | Continuous | Alarm condition                 | Time constraints |
+--------------------------------------+---------------------------------------------+-------+----------+---------+------------+---------------------------------+------------------+
| 8c1aab82-59eb-4e95-b337-3f90236833fa | stack_lbaas1400-cpu_alarm_high-cmymzpbjg44q | ok    | low      | True    | True       | cpu_util > 80.0 during 1 x 300s | None             |
| e089d214-21d0-4729-8562-f9f49754ea6a | stack_lbaas1400-cpu_alarm_low-y7dqnq3mv432  | ok    | low      | True    | True       | cpu_util < 1.0 during 1 x 600s  | None             |
+--------------------------------------+---------------------------------------------+-------+----------+---------+------------+---------------------------------+------------------+
root@node111:~# 
root@node111:~# ceilometer alarm-history 8c1aab82-59eb-4e95-b337-3f90236833fa
+------------------+----------------------------+-------------------------------------------------------------+
| Type             | Timestamp                  | Detail                                                      |
+------------------+----------------------------+-------------------------------------------------------------+
| state transition | 2017-10-26T06:40:45.477000 | state: ok                                                   |
| creation         | 2017-10-26T06:39:20.715000 | name: stack_lbaas1400-cpu_alarm_high-cmymzpbjg44q           |
|                  |                            | description: Scale-up if the average CPU > 80% for 1 minute |
|                  |                            | type: threshold                                             |
|                  |                            | rule: cpu_util > 80.0 during 1 x 300s                       |
|                  |                            | severity: low                                               |
|                  |                            | time_constraints: None                                      |
+------------------+----------------------------+-------------------------------------------------------------+






View load balancer information
root@node111:~# neutron lb-pool-list
+--------------------------------------+-----------------------------------+----------+-------------+----------+----------------+--------+
| id                                   | name                              | provider | lb_method   | protocol | admin_state_up | status |
+--------------------------------------+-----------------------------------+----------+-------------+----------+----------------+--------+
| 8b505df7-7778-4dbe-b157-769c57d8d85f | stack_lbaas1400-pool-e55c6twxrtst | haproxy  | ROUND_ROBIN | HTTP     | True           | ACTIVE |
+--------------------------------------+-----------------------------------+----------+-------------+----------+----------------+--------+
root@node111:~# neutron lb-member-list
+--------------------------------------+------------+---------------+--------+----------------+--------+
| id                                   | address    | protocol_port | weight | admin_state_up | status |
+--------------------------------------+------------+---------------+--------+----------------+--------+
| 73db56ec-11e8-4965-81f8-a7adaf17cfdf | 10.0.0.137 |            80 |      1 | True           | ACTIVE |
| 767fc8eb-e46b-4f2f-80d6-48397966509f | 10.0.0.136 |            80 |      1 | True           | ACTIVE |
| 93056fe4-0ceb-4ec4-9cf7-3c4f9fe0103b | 10.0.0.135 |            80 |      1 | True           | ACTIVE |
+--------------------------------------+------------+---------------+--------+----------------+--------+
root@node111:~# neutron lb-vip-list
+--------------------------------------+----------+------------+----------+----------------+--------+
| id                                   | name     | address    | protocol | admin_state_up | status |
+--------------------------------------+----------+------------+----------+----------------+--------+
| 700fe3b3-4997-4454-9cea-3803191e80f0 | pool.vip | 10.0.0.134 | HTTP     | True           | ACTIVE |
+--------------------------------------+----------+------------+----------+----------------+--------+
root@node111:~# neutron lb-healthmonitor-list
+--------------------------------------+------+----------------+
| id                                   | type | admin_state_up |
+--------------------------------------+------+----------------+
| 4624300b-13db-49c1-923f-2e85af00cd63 | TCP  | True           |
+--------------------------------------+------+----------------+
root@node111:~# neutron lb-pool-show 8b505df7-7778-4dbe-b157-769c57d8d85f
+------------------------+--------------------------------------------------------------------------------------------------------+
| Field                  | Value                                                                                                  |
+------------------------+--------------------------------------------------------------------------------------------------------+
| admin_state_up         | True                                                                                                   |
| description            |                                                                                                        |
| health_monitors        | 4624300b-13db-49c1-923f-2e85af00cd63                                                                   |
| health_monitors_status | {"monitor_id": "4624300b-13db-49c1-923f-2e85af00cd63", "status": "ACTIVE", "status_description": null} |
| id                     | 8b505df7-7778-4dbe-b157-769c57d8d85f                                                                   |
| lb_method              | ROUND_ROBIN                                                                                            |
| members                | 73db56ec-11e8-4965-81f8-a7adaf17cfdf                                                                   |
|                        | 767fc8eb-e46b-4f2f-80d6-48397966509f                                                                   |
|                        | 93056fe4-0ceb-4ec4-9cf7-3c4f9fe0103b                                                                   |
| name                   | stack_lbaas1400-pool-e55c6twxrtst                                                                      |
| protocol               | HTTP                                                                                                   |
| provider               | haproxy                                                                                                |
| status                 | ACTIVE                                                                                                 |
| status_description     |                                                                                                        |
| subnet_id              | 10ea7dd8-3b13-4bb5-97eb-c9b77381c5b8                                                                   |
| tenant_id              | 5c0c0f0aeb4c4a3c9ecae63a6cc1a6c0                                                                       |
| vip_id                 | 700fe3b3-4997-4454-9cea-3803191e80f0                                                                   |
+------------------------+--------------------------------------------------------------------------------------------------------+


haproxy information:


root@node122:~# ip netns exec qlbaas-8b505df7-7778-4dbe-b157-769c57d8d85f ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
17: tap9878d151-e0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
    link/ether fa:16:3e:d0:29:49 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.134/24 brd 10.0.0.255 scope global tap9878d151-e0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fed0:2949/64 scope link 
       valid_lft forever preferred_lft forever
root@node122:~# 
root@node122:~# ps -ef | grep haproxy
nobody     664     1  0 Oct26 ?        00:00:09 haproxy -f /var/lib/neutron/lbaas/8b505df7-7778-4dbe-b157-769c57d8d85f/conf -p /var/lib/neutron/lbaas/8b505df7-7778-4dbe-b157-769c57d8d85f/pid -sf 21960
root@node122:~# 
root@node122:~# cat /var/lib/neutron/lbaas/8b505df7-7778-4dbe-b157-769c57d8d85f/conf 
global
daemon
user nobody
group haproxy
log /dev/log local0
log /dev/log local1 notice
stats socket /var/lib/neutron/lbaas/8b505df7-7778-4dbe-b157-769c57d8d85f/sock mode 0666 level user
defaults
log global
retries 3
option redispatch
timeout connect 5000
timeout client 50000
timeout server 50000
frontend 700fe3b3-4997-4454-9cea-3803191e80f0
option tcplog
bind 10.0.0.134:80
mode http
default_backend 8b505df7-7778-4dbe-b157-769c57d8d85f
option forwardfor
backend 8b505df7-7778-4dbe-b157-769c57d8d85f
mode http
balance roundrobin
option forwardfor
timeout check 5s
server 73db56ec-11e8-4965-81f8-a7adaf17cfdf 10.0.0.137:80 weight 1 check inter 5s fall 5
server 767fc8eb-e46b-4f2f-80d6-48397966509f 10.0.0.136:80 weight 1 check inter 5s fall 5
server 93056fe4-0ceb-4ec4-9cf7-3c4f9fe0103b 10.0.0.135:80 weight 1 check inter 5s fall 5
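The `backend` section above carries one `server` line per pool member. As an illustration (not part of the deployment itself), a small Python sketch can parse those lines to recover each member's address and weight; the config excerpt is copied verbatim from the output above:

```python
import re

# Excerpt of the haproxy backend config shown above.
HAPROXY_CONF = """\
backend 8b505df7-7778-4dbe-b157-769c57d8d85f
mode http
balance roundrobin
server 73db56ec-11e8-4965-81f8-a7adaf17cfdf 10.0.0.137:80 weight 1 check inter 5s fall 5
server 767fc8eb-e46b-4f2f-80d6-48397966509f 10.0.0.136:80 weight 1 check inter 5s fall 5
server 93056fe4-0ceb-4ec4-9cf7-3c4f9fe0103b 10.0.0.135:80 weight 1 check inter 5s fall 5
"""

def parse_members(conf):
    """Return (address, weight) pairs for each backend 'server' line."""
    pattern = re.compile(r"^server \S+ (\S+) weight (\d+)", re.MULTILINE)
    return [(addr, int(w)) for addr, w in pattern.findall(conf)]

members = parse_members(HAPROXY_CONF)
print(members)
```

Each member here corresponds to one Nova instance in the autoscaling group; as the group scales, neutron-lbaas rewrites this file and reloads haproxy.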




Testing the load balancer
Run the following command:
root@node111:~# heat stack-show stack_lbaas1400
The output shows that the VIP is 10.0.0.134:
"output_value": "10.0.0.134", 
"description": "The IP address of the load balancing pool",
"output_key": "pool_ip_address"
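The same VIP can also be read programmatically from the stack's `outputs` list. A minimal sketch, assuming the outputs JSON has already been fetched (for example via `heat stack-show` or python-heatclient); the fragment below is the excerpt shown above:

```python
import json

# Stack outputs excerpt, as shown in the heat stack-show output above.
outputs_json = """
[{"output_value": "10.0.0.134",
  "description": "The IP address of the load balancing pool",
  "output_key": "pool_ip_address"}]
"""

def get_output(outputs, key):
    """Find a stack output value by its output_key."""
    for out in outputs:
        if out["output_key"] == key:
            return out["output_value"]
    return None

vip = get_output(json.loads(outputs_json), "pool_ip_address")
print(vip)  # → 10.0.0.134
```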


On servers 135 and 136, run the following command to start a process listening on port 80, simulating an httpd listener (change the message on each host to match its server name):
while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to server135" | sudo nc -l -p 80 ; done&   


Use curl to verify that requests are distributed in round-robin fashion:
root@node111:~# ip net exec qdhcp-abfcade3-df6f-4716-9757-2bb19fba457b curl 10.0.0.134
Welcome to server135
root@node111:~# ip net exec qdhcp-abfcade3-df6f-4716-9757-2bb19fba457b curl 10.0.0.134
Welcome to server136
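The alternating responses reflect haproxy's `balance roundrobin` policy: each new connection goes to the next member in turn. A minimal Python model of that behavior, using the two members that exist before scale-out:

```python
from itertools import cycle

# Pool members before the autoscaling group scales out.
members = ["server135", "server136"]

def round_robin(members, n_requests):
    """Return which member serves each of n_requests connections
    under a simple round-robin policy."""
    rr = cycle(members)
    return [next(rr) for _ in range(n_requests)]

print(round_robin(members, 4))
# → ['server135', 'server136', 'server135', 'server136']
```

Real haproxy round-robin also honors per-server weights (all 1 here) and skips members that fail their health checks, so the live distribution can deviate from this idealized cycle.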




Apply CPU load on servers 135 and 136; the alarm then fires, and a new server, 137, is created:
root@node111:~# ceilometer alarm-history 8c1aab82-59eb-4e95-b337-3f90236833fa
+------------------+----------------------------+-------------------------------------------------------------+
| Type             | Timestamp                  | Detail                                                      |
+------------------+----------------------------+-------------------------------------------------------------+
| state transition | 2017-10-26T08:24:45.569000 | state: alarm                                                |
| state transition | 2017-10-26T06:40:45.477000 | state: ok                                                   |
| creation         | 2017-10-26T06:39:20.715000 | name: stack_lbaas1400-cpu_alarm_high-cmymzpbjg44q           |
|                  |                            | description: Scale-up if the average CPU > 80% for 1 minute |
|                  |                            | type: threshold                                             |
|                  |                            | rule: cpu_util > 80.0 during 1 x 300s                       |
|                  |                            | severity: low                                               |
|                  |                            | time_constraints: None                                      |
+------------------+----------------------------+-------------------------------------------------------------+
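The alarm rule in the history above, `cpu_util > 80.0 during 1 x 300s`, means: one evaluation period of 300 seconds whose average `cpu_util` exceeds 80 puts the alarm into the `alarm` state, which triggers the scale-up policy. A simplified sketch of that threshold evaluation (an illustration, not Ceilometer's actual evaluator code); each sample below stands for one 300-second period average:

```python
def evaluate_threshold(period_averages, threshold=80.0, periods=1):
    """Simplified Ceilometer-style check: 'alarm' if every one of the
    last `periods` per-period averages exceeds `threshold`."""
    if len(period_averages) < periods:
        return "insufficient data"
    window = period_averages[-periods:]
    return "alarm" if all(avg > threshold for avg in window) else "ok"

# One 300s window averaging 92% CPU -> scale up is triggered.
print(evaluate_threshold([92.0]))  # → alarm
print(evaluate_threshold([35.0]))  # → ok
```

When the alarm transitions to `alarm`, Ceilometer POSTs to the alarm's webhook URL, which is the signal URL of the Heat `OS::Heat::ScalingPolicy`; heat-engine then adds an instance to the group, as the `nova list` output below shows.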
root@node111:~# nova list
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------+
| ID                                   | Name                                                  | Status | Task State | Power State | Networks           |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------+
| 7d9fc18d-1431-4979-8047-a024a473e736 | ceilometertest                                        | ACTIVE | -          | Running     | private=10.0.0.118 |
| a5d4b4b1-c04f-4bc2-8d65-a80d9ed9d6d1 | st-4dfr-2chggxltixwk-cnaim3mcwesv-server-eb7cfz2vplta | ACTIVE | -          | Running     | private=10.0.0.136 |
| 776408d5-7042-441f-ba52-161318883b02 | st-4dfr-2relygvkt5v4-tjshpbr2jagp-server-agrinflzk63q | ACTIVE | -          | Running     | private=10.0.0.137 |
| b22ba98b-5cee-4d85-8a3c-93b12a7e1541 | st-4dfr-j3awdcs7i2mw-ph3j6j4aswi5-server-dgnorr5hntjc | ACTIVE | -          | Running     | private=10.0.0.135 |
+--------------------------------------+-------------------------------------------------------+--------+------------+-------------+--------------------+


On the newly created server 137, run the same command to start a port-80 listener, with the message changed to match the new server:
while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to server137" | sudo nc -l -p 80 ; done&   


Use curl again; requests are now distributed across all three servers:
root@node111:~# ip net exec qdhcp-abfcade3-df6f-4716-9757-2bb19fba457b curl 10.0.0.134
Welcome to server135
root@node111:~# ip net exec qdhcp-abfcade3-df6f-4716-9757-2bb19fba457b curl 10.0.0.134
Welcome to server136
root@node111:~# ip net exec qdhcp-abfcade3-df6f-4716-9757-2bb19fba457b curl 10.0.0.134
Welcome to server137






14 Senlin
There is now a separate autoscaling API project, Senlin.


References:
1 wiki: https://wiki.openstack.org/wiki/Heat/AutoScaling