JStorm: storm.yaml configuration

Source: Internet | Editor: 程序博客网 | Time: 2024/05/16 07:26

I had JStorm configured earlier and everything ran fine in local mode, but whenever I tried cluster mode it kept reporting that no live nimbus could be found. After reading the documentation carefully, I finally realized that before running in cluster mode you must start the nimbus and supervisor processes: run "jstorm nimbus" on the nimbus node and "jstorm supervisor" on each supervisor node. After that, you can submit your jar and it runs. Below is how to set up the storm.yaml configuration file.

The storm.yaml configuration file

Here are the contents of the configuration file in my JStorm installation:

########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
- "192.168.2.191"
- "192.168.2.169"
- "192.168.2.170"
storm.zookeeper.root: "/jstorm"
# cluster.name: "default"

###nimbus.host/nimbus.host.start.supervisor is being used by $JSTORM_HOME/bin/start.sh

# it only supports IP, please don't set hostname
# For example
# nimbus.host: "10.132.168.10, 10.132.168.45"
nimbus.host: "192.168.2.191"
# nimbus.host.start.supervisor: false
# %JSTORM_HOME% is the jstorm home directory
storm.local.dir: "%JSTORM_HOME%/data"
# please set absolute path, default path is JSTORM_HOME/logs
# jstorm.log.dir: "absolute path"
# java.library.path: "/usr/local/lib:/opt/local/lib:/usr/lib"
# if supervisor.slots.ports is null,
# the port list will be generated by cpu cores and system memory size
# for example,
# cpu_num = system_physical_cpu_num/supervisor.slots.port.cpu.weight
# mem_num = system_physical_memory_size/(worker.memory.size * supervisor.slots.port.mem.weight)
# The final port number is min(cpu_num, mem_num)
# supervisor.slots.ports.base: 6800
# supervisor.slots.port.cpu.weight: 1.2
# supervisor.slots.port.mem.weight: 0.7
# supervisor.slots.ports: null
supervisor.slots.ports:
- 6800
- 6801
- 6802
- 6803
# Default disable user-define classloader
# If there are jar conflict between jstorm and application,
# please enable it
# topology.enable.classloader: false
# enable supervisor use cgroup to make resource isolation
# Before enable it, you should make sure:
# 1. Linux version (>= 2.6.18)
# 2. Have installed cgroup (check the file's existence: /proc/cgroups)
# 3. You should start your supervisor on root
# You can get more about cgroup:
# http://t.cn/8s7nexU
# supervisor.enable.cgroup: false
### Netty will send multiple messages in one batch
### Setting true will improve throughput, but more latency
# storm.messaging.netty.transfer.async.batch: true
### if this setting is true, it will use disruptor as internal queue, which size is limited
### otherwise, it will use LinkedBlockingDeque as internal queue , which size is unlimited
### generally when this setting is true, the topology will be more stable,
### but when there is a data loop flow, for example A -> B -> C -> A
### and the data flow occur blocking, please set this as false
# topology.buffer.size.limited: true
### default worker memory size, unit is byte
# worker.memory.size: 2147483648
# Metrics Monitor
# topology.performance.metrics: it is the switch flag for performance
# purpose. When it is disabled, the data of timer and histogram metrics
# will not be collected.
# topology.alimonitor.metrics.post: If it is disable, metrics data
# will only be printed to log. If it is enabled, the metrics data will be
# posted to alimonitor besides printing to log.
# topology.performance.metrics: true
# topology.alimonitor.metrics.post: false
# UI MultiCluster
# Following is an example of multicluster UI configuration
ui.clusters:
- {
    name: "jstorm.share",
    zkRoot: "/jstorm",
    zkServers:
      ["192.168.2.191", "192.168.2.169", "192.168.2.170"],
    zkPort: 2181,
  }
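The comments in the file describe how JStorm derives the default slot-port list when supervisor.slots.ports is left null: divide the physical CPU count by the cpu weight, divide physical memory by worker memory times the mem weight, and take the smaller result as the number of ports starting from the base port. A small sketch of that arithmetic (the function and parameter names are illustrative, not JStorm API):

```python
# Sketch of the default slot-port generation described in the config
# comments above. All names here are illustrative, not JStorm API.
def default_slot_ports(physical_cpus, physical_mem_bytes,
                       worker_mem_bytes=2 * 1024**3,  # worker.memory.size default
                       cpu_weight=1.2,                # supervisor.slots.port.cpu.weight
                       mem_weight=0.7,                # supervisor.slots.port.mem.weight
                       base_port=6800):               # supervisor.slots.ports.base
    cpu_num = int(physical_cpus / cpu_weight)
    mem_num = int(physical_mem_bytes / (worker_mem_bytes * mem_weight))
    n = min(cpu_num, mem_num)
    return [base_port + i for i in range(n)]

# e.g. a 4-core machine with 8 GB of RAM gets 3 slots:
print(default_slot_ports(4, 8 * 1024**3))  # → [6800, 6801, 6802]
```

With the explicit list in my file above, none of this applies; the four ports 6800 to 6803 are used as given.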
First, I set up three virtual machines and gave them fixed IP addresses: "192.168.2.191", "192.168.2.169", and "192.168.2.170".
storm.zookeeper.servers: the list of ZooKeeper node addresses.
storm.zookeeper.root: the default value is fine; you don't need to change it.
nimbus.host: the nimbus master node; here I configured "192.168.2.191" as the nimbus node.
storm.local.dir: I created a data directory inside the JStorm installation folder and pointed this setting at it.
supervisor.slots.ports: the defaults work as-is; just remove the leading # to uncomment them.
ui.clusters: I use the JStorm UI to display cluster information, so I uncommented this section and changed only the addresses.
Once the above is configured (the UI part is optional), you can run JStorm programs. Write your code in a Maven project, package it as a jar, then submit it to the cluster.
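One pitfall when copying a storm.yaml like the one above from a web page: word-processor "smart" quotes (“ ”) are not string delimiters in YAML, so a parser will treat them as part of the value and the cluster addresses will be wrong. A quick stdlib-only normalizer (a sketch; the function name is mine) can clean a pasted snippet before saving it:

```python
def normalize_yaml_quotes(text):
    # Replace curly word-processor quotes with plain ASCII quotes so a
    # YAML parser sees real string delimiters.
    return (text.replace('\u201c', '"').replace('\u201d', '"')
                .replace('\u2018', "'").replace('\u2019', "'"))

pasted = 'nimbus.host: \u201c192.168.2.191\u201d'
print(normalize_yaml_quotes(pasted))  # → nimbus.host: "192.168.2.191"
```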
