How to exit the Spark shell correctly, and how to fix the errors a forced exit causes


Spark is started from its root directory by running, in a terminal:

 ./bin/spark-shell

The correct way to exit is to type, at the scala> prompt:

:quit

A common mistake, however, is to break out of the shell with:

Ctrl+C or Ctrl+Z

A forced exit like this does not shut down the embedded Derby metastore cleanly, so the next startup fails with an error like the one below.
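Before restarting, you can check whether the forced exit left a stale lock behind. This is a sketch, not from the original post; `db.lck` is the file embedded Derby creates to mark a database as in use:

```shell
# Check for a stale Derby lock left by a forced exit (run from the Spark root).
# db.lck is the lock file embedded Derby keeps while the database is open.
if [ -e metastore_db/db.lck ]; then
  echo "stale Derby lock found"
fi
```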

wugaosheng:spark-2.2.0-bin-hadoop2.7 eric$ ./bin/spark-shell
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/10/06 17:15:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/10/06 17:15:26 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/10/06 17:15:33 ERROR Schema: Failed initialising database.
Unable to open a test connection to the given database. JDBC url = jdbc:derby:;databaseName=metastore_db;create=true, username = APP. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------
java.sql.SQLException: Failed to start database 'metastore_db' with class loader org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@30517a57, see the next exception for details.
	at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
	at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source)
	at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source)
	at java.security.AccessController.doPrivileged(Native Method)
	at org.apache.derby.jdbc.InternalDriver.getNewEmbedConnection(Unknown Source)
	at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
	at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
	at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
	at java.sql.DriverManager.getConnection(DriverManager.java:664)
	at java.sql.DriverManager.getConnection(DriverManager.java:208)
	at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
	at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
	at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
	at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:501)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:298)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
	at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
	at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
	at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
	at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
	at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
	at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
	at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:365)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:394)
	at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:291)
	at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
	.......

Spark prints this long error dump, and even normal commands entered afterwards fail.

The fix is to remove (or move aside) the metastore_db directory under the Spark root. For example, I renamed mine:

mv metastore_db/ metastore_db1/

After this, restarting spark-shell no longer produces the flood of errors.
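If you would rather not discard the metastore contents, a lighter-weight alternative (my assumption, not from the original post) is to delete only Derby's lock files and keep the directory:

```shell
# Remove only Derby's lock files; the metastore data itself is kept.
rm -f metastore_db/db.lck metastore_db/dbex.lck
```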

If you also want to kill the process still holding port 4040, first find it (on my Mac):

sudo lsof -i :4040

This prints something like:

COMMAND   PID USER   FD   TYPE            DEVICE SIZE/OFF NODE NAME
java    38639 eric  262u  IPv6 0x731427fd0077f5d      0t0  TCP *:yo-main (LISTEN)

Then run:

sudo kill -9 38639
The port is then freed and can be used again.
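The two steps can also be combined: lsof's -t flag prints only the PID, so its output can feed kill directly. Note that kill -9 (SIGKILL) terminates the process without giving it a chance to clean up, so prefer a plain kill first and use -9 only when that fails:

```shell
# Kill whatever is listening on port 4040 in one step.
# $(lsof -ti :4040) expands to the PID(s) lsof finds on that port.
sudo kill -9 $(lsof -ti :4040)
```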



