DataFrame query fails after upgrading to Spark 2.3.1
I have the following snippet of Scala, which mirrors what I used to do in Spark 2.1.1:

    // Assumes an active SparkContext (sc) and SQLContext (sqlContext), e.g. in spark-shell
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types.{StringType, StructField, StructType}

    val headers = Seq(StructField("A", StringType), StructField("B", StringType), StructField("C", StringType))
    val data = Seq(Seq("A1", "B1", "C1"), Seq("A2", "B2", "C2"), Seq("A3", "B3", "C3"))
    val rdd = sc.parallelize(data).map(Row.fromSeq)
    sqlContext.createDataFrame(rdd, StructType(headers)).registerTempTable("TEMP_DATA")

    val table = sqlContext.table("TEMP_DATA")
    table
      .select("A")
      .filter(table("B") === "B1")
      .show()

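For context, in Spark 2.1.1 this runs and prints the single matching row (expected output, given the data above):

    +---+
    |  A|
    +---+
    | A1|
    +---+
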

In Spark 2.3.1 this throws the following error:



org.apache.spark.sql.AnalysisException: Resolved attribute(s) B#1604 missing from A#1603 in operator !Filter (B#1604 = B1).;;
!Filter (B#1604 = B1)
+- AnalysisBarrier
      +- Project [A#1603]
         +- SubqueryAlias temp_data
            +- LogicalRDD [A#1603, B#1604, C#1605], false


I can fix this by swapping the select and the filter around. My question is: why has this behaviour changed? I need to explain why it happened, and ideally link to documentation supporting it.
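
For reference, this is the reordered version that works (a minimal sketch of the swap described above):

    // Filter first, then project: B is still an output attribute when the
    // filter is resolved, so analysis succeeds.
    table
      .filter(table("B") === "B1")
      .select("A")
      .show()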



My understanding is that the select returns a DataFrame that functionally only has column A in it, so you can't filter on B. I've tried to recreate this issue in PySpark, but it seems to work just fine there.
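
To illustrate that reading (a minimal sketch against the same table as above):

    val projected = table.select("A")
    projected.printSchema()  // only column A remains in the schema
    // projected.filter(table("B") === "B1")  // throws the AnalysisException above in my 2.3.1 environment
    projected.filter(projected("A") === "A1").show()  // filtering on a retained column works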



Here is the stack trace:



at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.failAnalysis(CheckAnalysis.scala:41)
at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:92)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:289)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:80)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:80)
at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:92)
at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:172)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:178)
at org.apache.spark.sql.Dataset$.apply(Dataset.scala:65)
at org.apache.spark.sql.Dataset.withTypedPlan(Dataset.scala:3301)
at org.apache.spark.sql.Dataset.filter(Dataset.scala:1458)

scala apache-spark pyspark

asked Nov 9 at 14:42 · edited Nov 9 at 16:13
Andy

  • Let me do some research and I'll get back to you on this particular query. If it's a regression, I'll file a Spark JIRA.
    – Joe Widen
    Nov 9 at 14:56










  • Ooh, that's worrying. I will take a further look then. My codebase has a lot of its own Catalyst rules, though I'm not expecting any to have been triggered by the above code.
    – Andy
    Nov 9 at 15:04






  • When I read the part of the code that generates this error, I'd say it's expected behaviour, but what I don't understand is why it was working in the first place...
    – eliasah
    Nov 9 at 15:11










  • Probably a subject for a JIRA.
    – eliasah
    Nov 9 at 15:15










  • It's working for me on Spark 2.3.1 with Scala and no additional Catalyst rules. The filter should get pushed before the select. What's the explain plan in 2.1.1 and in 2.3.1? Replace show() with .explain and let us know. Here's mine (comments won't let me format it correctly): == Physical Plan == *(1) Project [A#48] +- *(1) Filter (isnotnull(B#49) && (B#49 = B1)) +- *(1) Scan ExistingRDD[A#48,B#49,C#50]
    – Joe Widen
    Nov 9 at 15:48
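
Following that suggestion, a minimal sketch of the explain variant (note that on the failing 2.3.1 setup the exception is thrown by .filter itself, per the stack trace above, so this is mainly useful on 2.1.1 or a working setup):

    // Print the query plans instead of executing; explain(true) also
    // includes the parsed, analyzed, and optimized logical plans.
    table
      .select("A")
      .filter(table("B") === "B1")
      .explain(true)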