Problems encountered importing data from MySQL into HDFS with Sqoop
2014-11-04 09:56:54,398 FATAL [LeaseRenewer:hdfs@cdhnamenode.com:8020] org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[LeaseRenewer:hdfs@cdhnamenode.com:8020,5,main] threw an Error. Shutting down now...
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.Arrays.copyOf(Arrays.java:2271)
        at java.util.zip.ZipCoder.getBytes(ZipCoder.java:89)
        at java.util.zip.ZipFile.getEntry(ZipFile.java:306)
        at java.util.jar.JarFile.getEntry(JarFile.java:227)
        at java.util.jar.JarFile.getJarEntry(JarFile.java:210)
        at sun.misc.URLClassPath$JarLoader.getResource(URLClassPath.java:840)
        at sun.misc.URLClassPath.getResource(URLClassPath.java:199)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:358)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:406)
        at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
        at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
        at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
        at java.lang.Thread.run(Thread.java:745)
2014-11-04 09:56:54,402 INFO [LeaseRenewer:hdfs@cdhnamenode.com:8020] org.apache.hadoop.util.ExitUtil: Halt with status -1 Message: HaltException
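The fatal error above is a java.lang.OutOfMemoryError: GC overhead limit exceeded thrown inside the job's JVM: the heap was too small for the import, so the JVM spent nearly all its time in garbage collection. A minimal sketch of one common mitigation, raising the map-task container and heap sizes through Hadoop's generic -D options (the connection details dbhost, mydb, and mytable are placeholders, not taken from the original job):

sqoop import \
  -D mapreduce.map.memory.mb=2048 \
  -D mapreduce.map.java.opts=-Xmx1600m \
  --connect jdbc:mysql://dbhost/mydb \
  --username user -P \
  --table mytable \
  --target-dir /user/hdfs/mytable \
  -m 4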
Reference links:
Lessons learned from java.lang.OutOfMemoryError: GC overhead limit exceeded
Resolving java.lang.OutOfMemoryError: GC overhead limit exceeded
JVM Series, Part 3: JVM parameter settings and analysis
Check the running JVMs and their arguments:
jps -lv
44812 sun.tools.jps.Jps -Dapplication.home=/usr/local/jdk1.7.0_60 -Xms8m
42075 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode -Dproc_secondarynamenode -Xmx1000m -Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-cmf-hdfs-SECONDARYNAMENODE-cdhnamenode.com.log.out -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xms1073741824 -Xmx1073741824 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:-CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -Dhadoop.security.logger=INFO,RFAS
41996 org.apache.hadoop.hdfs.server.namenode.NameNode -Dproc_namenode -Xmx1000m -Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-cmf-hdfs-NAMENODE-cdhnamenode.com.log.out -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xms1073741824 -Xmx1073741824 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:-CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -Dhadoop.security.logger=INFO,RFAS
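Both daemons above run with -Xms1073741824 -Xmx1073741824, i.e. a fixed 1 GB heap, under the CMS collector. To confirm whether a given JVM really is spending most of its time in GC, one quick check is to sample its GC counters with jstat (41996 is the NameNode pid from the listing above; the 1-second interval and 10 samples are arbitrary choices):

jstat -gcutil 41996 1000 10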
Reference link:
[Solved] With vm.swappiness = 0, does a swap partition, once created, effectively go unused?
Introduced in Linux kernel 2.6:
/proc/sys/vm/swappiness
The Linux 2.6 kernel added a new kernel parameter called swappiness to let administrators tweak the way Linux swaps.
https://www.linux.com/news/softw ... ut-linux-swap-space
The default is 60.
If it is set to 0, swapping is effectively disabled.
A value of 0 will avoid ever swapping out just for caching space. Using 100 will always favor making the disk cache bigger. Most distributions set this value to be 60, tuned toward moderately aggressive swapping to increase disk cache.
http://www.westnet.com/~gsmith/content/linux-pdflush.htm
In that case, no matter how large or small a swap partition is created, it is effectively unused, right?
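To inspect and change the setting, a minimal sketch (assumes root privileges and a distribution that reads /etc/sysctl.conf at boot):

cat /proc/sys/vm/swappiness                    # show the current value (60 by default)
sysctl -w vm.swappiness=0                      # apply to the running kernel
echo "vm.swappiness = 0" >> /etc/sysctl.conf   # persist across reboots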
Reference link:
JVM Deep Dive: Garbage Collectors
[root@cdhnamenode dev]# fdisk -l
Disk /dev/sda: 999.7 GB, 999653638144 bytes
255 heads, 63 sectors/track, 121534 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x180c3818
   Device Boot      Start       End        Blocks   Id  System
/dev/sda1   *           1      2611      20971520   83  Linux
/dev/sda2            2611     15666     104857600   83  Linux
/dev/sda3           15666     22193      52428800   83  Linux
/dev/sda4           22193    121535     797965312    5  Extended
/dev/sda5           22193     28720      52428800   83  Linux
/dev/sda6           28720     35247      52428800   83  Linux
/dev/sda7           35247     41774      52428800   83  Linux
/dev/sda8           41774     48302      52428800   83  Linux
/dev/sda9           48302     50913      20971520   83  Linux
/dev/sda10          50913     53523      20971520   82  Linux swap / Solaris
/dev/sda11          53524    121535     546299904   83  Linux
[root@cdhnamenode dev]# swapon -s
Filename      Type        Size      Used  Priority
/dev/sda10    partition   20971512  0     -1
[root@cdhnamenode dev]# swapoff /dev/sda10
[root@cdhnamenode dev]# swapon -s
Filename      Type        Size      Used  Priority
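swapoff only disables the partition until the next reboot. To make the change permanent, a sketch under the assumption that /etc/fstab contains a swap entry for /dev/sda10 (as in the partition table above):

sed -i.bak 's|^/dev/sda10|#/dev/sda10|' /etc/fstab   # comment out the swap entry
grep sda10 /etc/fstab                                # verify the change
# swapon /dev/sda10                                  # re-enable later if needed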
Adding, deleting, and modifying a Linux swap partition