Analyzing and Fixing "Unable to find encoder for type stored in a Dataset." in Spark 2.0 DataFrame map Operations
With the new release of Spark now fairly stable, I recently set out to upgrade our existing framework to Spark 2.0. It was exciting; SQL performance in particular is noticeably faster.
However, one operation got stuck during the migration. It was a dataframe.map call that ran fine on Spark 1.x but no longer compiles on Spark 2.0.
The reported error was:
error: Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.
resDf_upd.map(row => {
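The post does not show the full failing code, but a minimal sketch of the pattern that triggers this error might look like the following (the DataFrame name `resDf_upd` comes from the error message above; the column names and the mapped result type are assumptions):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("encoder-demo").getOrCreate()
val resDf_upd = spark.read.json("people.json")  // hypothetical input

// On Spark 1.x, DataFrame.map returned an RDD and needed no Encoder.
// On Spark 2.0, DataFrame is Dataset[Row], and Dataset.map requires an
// implicit Encoder for the result type. Map[String, Any] has no
// pre-defined encoder, so this fails to compile:
resDf_upd.map(row => row.getValuesMap[Any](List("name", "age")))
// error: Unable to find encoder for type stored in a Dataset. ...
```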
There is surprisingly little material about this error online. My guess was that once Dataset unified DataFrame and RDD, new requirements appeared for these operations.
Checking the official Spark documentation, I found the following description:
Dataset is Spark SQL’s strongly-typed API for working with structured data, i.e. records with a known schema.
Datasets are lazy and structured query expressions are only triggered when an action is invoked. Internally, a Dataset represents a logical plan that describes the computation query required to produce the data (for a given Spark SQL session).
A Dataset is a result of executing a query expression against data storage like files, Hive tables or JDBC databases. The structured query expression can be described by a SQL query, a Column-based SQL expression or a Scala/Java lambda function. And that is why Dataset operations are available in three variants.
From this we can see that operating on a Dataset requires a corresponding encode step. In particular, the example given on the official site:
// No pre-defined encoders for Dataset[Map[K,V]], define explicitly
implicit val mapEncoder = org.apache.spark.sql.Encoders.kryo[Map[String, Any]]
// Primitive types and case classes can be also defined as
// implicit val stringIntMapEncoder: Encoder[Map[String, Any]] = ExpressionEncoder()
// row.getValuesMap[T] retrieves multiple columns at once into a Map[String, T]
teenagersDF.map(teenager => teenager.getValuesMap[Any](List("name", "age"))).collect()
// Array(Map("name" -> "Justin", "age" -> 19))
As this shows, before calling map you must first define an Encoder.
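Applied to the code from the error message, the encoder-based fix might look like this sketch (again, `resDf_upd` and the selected columns are assumptions; `Encoders.kryo` serializes the result type generically):

```scala
import org.apache.spark.sql.{Encoder, Encoders}

// Register an explicit Kryo encoder for the non-Product result type...
implicit val mapEncoder: Encoder[Map[String, Any]] =
  Encoders.kryo[Map[String, Any]]

// ...after which Dataset.map resolves the implicit and compiles.
val mapped = resDf_upd.map(row => row.getValuesMap[Any](List("name", "age")))
```

Note that Kryo-encoded values are stored as opaque binary, so the resulting Dataset loses its column structure for SQL purposes; this is fine for a map-then-collect pipeline but not if you need to keep querying the result as structured data.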
That would add a heavy extra burden to the upgrade. Fortunately, there is a simpler route: Dataset also provides a conversion back to an RDD, so all that is needed is to change the original dataframe.map into dataframe.rdd.map.
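The workaround is simply to hop through the RDD API, where no Encoder is required. A minimal sketch, with the same assumed names as above:

```scala
// dataframe.rdd gives an RDD[org.apache.spark.sql.Row]; RDD.map takes a
// plain function and does not need an implicit Encoder, so the Spark 1.x
// code works again with a one-token change:
val resultRdd = resDf_upd.rdd.map(row => row.getValuesMap[Any](List("name", "age")))
```

If a DataFrame is needed again afterwards, it can be rebuilt from the RDD (for example with spark.createDataFrame and an explicit schema), at the cost of losing the Catalyst optimizations that a pure Dataset pipeline would keep.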