Problems encountered importing data from MySQL with Sqoop

2014-11-04 09:56:54,398 FATAL [LeaseRenewer:hdfs@cdhnamenode.com:8020] org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[LeaseRenewer:hdfs@cdhnamenode.com:8020,5,main] threw an Error.  Shutting down now...
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.Arrays.copyOf(Arrays.java:2271)
    at java.util.zip.ZipCoder.getBytes(ZipCoder.java:89)
    at java.util.zip.ZipFile.getEntry(ZipFile.java:306)
    at java.util.jar.JarFile.getEntry(JarFile.java:227)
    at java.util.jar.JarFile.getJarEntry(JarFile.java:210)
    at sun.misc.URLClassPath$JarLoader.getResource(URLClassPath.java:840)
    at sun.misc.URLClassPath.getResource(URLClassPath.java:199)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:358)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:406)
    at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
    at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
    at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
    at java.lang.Thread.run(Thread.java:745)
2014-11-04 09:56:54,402 INFO [LeaseRenewer:hdfs@cdhnamenode.com:8020] org.apache.hadoop.util.ExitUtil: Halt with status -1 Message: HaltException
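The "GC overhead limit exceeded" was thrown inside a task JVM (note the YarnUncaughtExceptionHandler), so one common remedy is to give the map-task containers a larger heap. A minimal sketch of what that looks like for a Sqoop import is below; the connection string, table name, and memory sizes are placeholders, not values from the original job:

# Hypothetical Sqoop import with a larger map-task heap.
# The -D generic options must come before the Sqoop-specific arguments.
sqoop import \
  -D mapreduce.map.memory.mb=2048 \
  -D mapreduce.map.java.opts=-Xmx1638m \
  --connect jdbc:mysql://dbhost:3306/mydb \
  --username myuser -P \
  --table mytable \
  --target-dir /user/hdfs/mytable \
  -m 4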

References

Notes on fixing java.lang.OutOfMemoryError: GC overhead limit exceeded

Resolving java.lang.OutOfMemoryError: GC overhead limit exceeded

JVM series, part 3: JVM parameter settings and analysis




Check the JVM options actually in effect on the HDFS daemons:


jps -lv
44812 sun.tools.jps.Jps -Dapplication.home=/usr/local/jdk1.7.0_60 -Xms8m
42075 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode -Dproc_secondarynamenode -Xmx1000m -Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-cmf-hdfs-SECONDARYNAMENODE-cdhnamenode.com.log.out -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xms1073741824 -Xmx1073741824 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:-CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -Dhadoop.security.logger=INFO,RFAS
41996 org.apache.hadoop.hdfs.server.namenode.NameNode -Dproc_namenode -Xmx1000m -Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-cmf-hdfs-NAMENODE-cdhnamenode.com.log.out -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xms1073741824 -Xmx1073741824 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:-CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -Dhadoop.security.logger=INFO,RFAS
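Note that each daemon's command line carries the heap setting twice: an early -Xmx1000m and a later -Xms1073741824 -Xmx1073741824 (1 GiB), the latter presumably injected by Cloudera Manager. HotSpot honors the last occurrence of a repeated flag, so the effective maximum heap is 1 GiB. Assuming the JDK tools are on the PATH, one way to confirm this against a running process:

# Print the effective max heap of the NameNode JVM
# (41996 is the NameNode pid from the jps output above;
#  run as the same user that owns the process)
jinfo -flag MaxHeapSize 41996
# Expected output: -XX:MaxHeapSize=1073741824   (= 1 GiB)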


References

[Solved] With vm.swappiness = 0, does a swap partition, once created, just sit there unused?


The Linux 2.6 kernel introduced the tunable
/proc/sys/vm/swappiness

The Linux 2.6 kernel added a new kernel parameter called swappiness to let administrators tweak the way Linux swaps.
https://www.linux.com/news/softw ... ut-linux-swap-space

The default value is 60.
A value of 0 tells the kernel to avoid swapping pages out merely to grow the disk cache, which in practice comes close to disabling swap.

A value of 0 will avoid ever swapping out just for caching space. Using 100 will always favor making the disk cache bigger. Most distributions set this value to be 60, tuned toward moderately aggressive swapping to increase disk cache.
http://www.westnet.com/~gsmith/content/linux-pdflush.htm

If so, doesn't any swap partition, however large or small, become effectively unused?
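For reference, this is how the tunable is typically inspected and changed; the value 10 below is purely illustrative, not a recommendation from the original post:

# Show the current swappiness (default 60 on most distributions)
cat /proc/sys/vm/swappiness
# Change it at runtime; takes effect immediately, lost on reboot
sysctl -w vm.swappiness=10
# Persist the setting across reboots
echo 'vm.swappiness = 10' >> /etc/sysctl.conf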

References

JVM internals: garbage collectors


[root@cdhnamenode dev]# fdisk -l

Disk /dev/sda: 999.7 GB, 999653638144 bytes
255 heads, 63 sectors/track, 121534 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x180c3818

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        2611    20971520   83  Linux
/dev/sda2            2611       15666   104857600   83  Linux
/dev/sda3           15666       22193    52428800   83  Linux
/dev/sda4           22193      121535   797965312    5  Extended
/dev/sda5           22193       28720    52428800   83  Linux
/dev/sda6           28720       35247    52428800   83  Linux
/dev/sda7           35247       41774    52428800   83  Linux
/dev/sda8           41774       48302    52428800   83  Linux
/dev/sda9           48302       50913    20971520   83  Linux
/dev/sda10          50913       53523    20971520   82  Linux swap / Solaris
/dev/sda11          53524      121535   546299904   83  Linux

[root@cdhnamenode dev]# swapon -s
Filename                Type        Size    Used    Priority
/dev/sda10                              partition    20971512    0    -1
[root@cdhnamenode dev]# swapoff /dev/sda10

[root@cdhnamenode dev]# swapon -s
Filename                Type        Size    Used    Priority
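After the swapoff, swapon -s lists nothing, confirming swap is disabled. To turn it back on, or to add swap space without repartitioning, something like the following would work; the swap-file size is illustrative, and the reference below covers the details:

# Re-enable the existing swap partition
swapon /dev/sda10
# Or create and enable a 2 GiB swap file instead (size is illustrative)
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# Verify
swapon -s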

Adding, removing, and modifying Linux swap partitions
