Viewing a Running Spark Process's JVM Configuration and Memory Usage


Checking the JVM configuration and per-generation memory usage of a running Spark process is a common way to monitor jobs in production. The steps are as follows:


1. Find the PID with the ps command

ps -ef | grep 5661

You can locate the PID by grepping for a distinctive string in the process's command line; here that is the Spark UI port 5661, which was passed to spark-submit as --conf spark.ui.port=5661 (visible in the jinfo output below).
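For example, a minimal sketch that narrows the ps output to the submit process and prints only the PID (the grep pattern and awk field are illustrative additions, assuming the job was launched with --conf spark.ui.port=5661 as shown below):

ps -ef | grep "spark.ui.port=5661" | grep -v grep | awk '{print $2}'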




2. Query the process's JVM settings with the jinfo command

jinfo 105007

This prints the JVM configuration in detail:

Attaching to process ID 105007, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 24.65-b04
Java System Properties:

spark.local.dir = /diskb/sparktmp,/diskc/sparktmp,/diskd/sparktmp,/diske/sparktmp,/diskf/sparktmp,/diskg/sparktmp
java.runtime.name = Java(TM) SE Runtime Environment
java.vm.version = 24.65-b04
sun.boot.library.path = /usr/java/jdk1.7.0_67-cloudera/jre/lib/amd64
java.vendor.url = http://java.oracle.com/
java.vm.vendor = Oracle Corporation
path.separator = :
file.encoding.pkg = sun.io
java.vm.name = Java HotSpot(TM) 64-Bit Server VM
sun.os.patch.level = unknown
sun.java.launcher = SUN_STANDARD
user.country = CN
user.dir = /opt/bin/spark_dev_job
java.vm.specification.name = Java Virtual Machine Specification
java.runtime.version = 1.7.0_67-b01
java.awt.graphicsenv = sun.awt.X11GraphicsEnvironment
SPARK_SUBMIT = true
os.arch = amd64
java.endorsed.dirs = /usr/java/jdk1.7.0_67-cloudera/jre/lib/endorsed
spark.executor.memory = 24g
line.separator =
java.io.tmpdir = /tmp
java.vm.specification.vendor = Oracle Corporation
os.name = Linux
spark.driver.memory = 15g
spark.master = spark://10.130.2.220:7077
sun.jnu.encoding = UTF-8
java.library.path = :/opt/cloudera/parcels/CDH/lib/hadoop/lib/native:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
sun.nio.ch.bugLevel =
java.class.version = 51.0
java.specification.name = Java Platform API Specification
sun.management.compiler = HotSpot 64-Bit Tiered Compilers
spark.submit.deployMode = client
spark.executor.extraJavaOptions = -XX:PermSize=8m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
os.version = 2.6.32-573.8.1.el6.x86_64
user.home = /root
user.timezone = PRC
java.awt.printerjob = sun.print.PSPrinterJob
file.encoding = UTF-8
java.specification.version = 1.7
spark.app.name = com.hexun.streaming.NewsTopNRealRankOffsetRise
spark.eventLog.enabled = true
user.name = root
java.class.path = /opt/cloudera/parcels/CDH/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/opt/modules/spark-1.6.1-bin-hadoop2.6/conf/:/opt/modules/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar:/opt/modules/spark-1.6.1-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/modules/spark-1.6.1-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/opt/modules/spark-1.6.1-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/etc/hadoop/conf/
java.vm.specification.version = 1.7
sun.arch.data.model = 64
sun.java.command = org.apache.spark.deploy.SparkSubmit --master spark://10.130.2.220:7077 --conf spark.driver.memory=15g --conf spark.executor.extraJavaOptions=-XX:PermSize=8m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps --conf spark.ui.port=5661 --class com.hexun.streaming.NewsTopNRealRankOffsetRise --executor-memory 24g --total-executor-cores 24 --jars /opt/bin/sparkJars/kafka_2.10-0.8.2.1.jar,/opt/bin/sparkJars/spark-streaming-kafka_2.10-1.6.1.jar,/opt/bin/sparkJars/metrics-core-2.2.0.jar,/opt/bin/sparkJars/mysql-connector-java-5.1.26-bin.jar NewsTopNRealRankOffsetRise.jar
java.home = /usr/java/jdk1.7.0_67-cloudera/jre
user.language = zh
java.specification.vendor = Oracle Corporation
awt.toolkit = sun.awt.X11.XToolkit
spark.ui.port = 5661
java.vm.info = mixed mode
java.version = 1.7.0_67
java.ext.dirs = /usr/java/jdk1.7.0_67-cloudera/jre/lib/ext:/usr/java/packages/lib/ext
sun.boot.class.path = /usr/java/jdk1.7.0_67-cloudera/jre/lib/resources.jar:/usr/java/jdk1.7.0_67-cloudera/jre/lib/rt.jar:/usr/java/jdk1.7.0_67-cloudera/jre/lib/sunrsasign.jar:/usr/java/jdk1.7.0_67-cloudera/jre/lib/jsse.jar:/usr/java/jdk1.7.0_67-cloudera/jre/lib/jce.jar:/usr/java/jdk1.7.0_67-cloudera/jre/lib/charsets.jar:/usr/java/jdk1.7.0_67-cloudera/jre/lib/jfr.jar:/usr/java/jdk1.7.0_67-cloudera/jre/classes
java.vendor = Oracle Corporation
file.separator = /
spark.cores.max = 24
spark.eventLog.dir = hdfs://nameservice1/spark-log
java.vendor.url.bug = http://bugreport.sun.com/bugreport/
sun.io.unicode.encoding = UnicodeLittle
sun.cpu.endian = little
spark.jars = file:/opt/bin/sparkJars/kafka_2.10-0.8.2.1.jar,file:/opt/bin/sparkJars/spark-streaming-kafka_2.10-1.6.1.jar,file:/opt/bin/sparkJars/metrics-core-2.2.0.jar,file:/opt/bin/sparkJars/mysql-connector-java-5.1.26-bin.jar,file:/opt/bin/spark_dev_job/NewsTopNRealRankOffsetRise.jar
sun.cpu.isalist =

VM Flags:
-Xms15g -Xmx15g -XX:MaxPermSize=256m
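Because the full dump is long, individual values can also be pulled out directly. A minimal sketch, assuming the JDK 7 jinfo used above (the flag and property names here are taken from the dump; substitute whichever ones you need):

jinfo -flag MaxPermSize 105007            # print a single VM flag, e.g. -XX:MaxPermSize=268435456
jinfo -sysprops 105007 | grep "^spark."   # only the spark.* system properties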


3. Inspect per-generation memory usage with jmap

jmap -heap 105007

This shows the Java process's memory usage in detail, including the young and old generations:

Attaching to process ID 105007, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 24.65-b04

using thread-local object allocation.
Parallel GC with 18 thread(s)

Heap Configuration:
   MinHeapFreeRatio = 0
   MaxHeapFreeRatio = 100
   MaxHeapSize      = 16106127360 (15360.0MB)
   NewSize          = 1310720 (1.25MB)
   MaxNewSize       = 17592186044415 MB
   OldSize          = 5439488 (5.1875MB)
   NewRatio         = 2
   SurvivorRatio    = 8
   PermSize         = 21757952 (20.75MB)
   MaxPermSize      = 268435456 (256.0MB)
   G1HeapRegionSize = 0 (0.0MB)

Heap Usage:
PS Young Generation
Eden Space:
   capacity = 4945084416 (4716.0MB)
   used     = 2674205152 (2550.320770263672MB)
   free     = 2270879264 (2165.679229736328MB)
   54.07804856369109% used
From Space:
   capacity = 217579520 (207.5MB)
   used     = 37486624 (35.750030517578125MB)
   free     = 180092896 (171.74996948242188MB)
   17.22893036991717% used
To Space:
   capacity = 206045184 (196.5MB)
   used     = 0 (0.0MB)
   free     = 206045184 (196.5MB)
   0.0% used
PS Old Generation
   capacity = 10737418240 (10240.0MB)
   used     = 7431666880 (7087.389831542969MB)
   free     = 3305751360 (3152.6101684570312MB)
   69.2127913236618% used
PS Perm Generation
   capacity = 268435456 (256.0MB)
   used     = 128212824 (122.27327728271484MB)
   free     = 140222632 (133.72672271728516MB)
   47.762998938560486% used
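jmap -heap gives a single snapshot. To watch how the young and old generations change over time, jstat (also shipped with the JDK) can be run against the same PID; a minimal sketch, with the sampling interval and count chosen arbitrarily here:

jstat -gcutil 105007 1000 10   # survivor/Eden/old/perm occupancy in %, every 1000 ms, 10 samples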


