@@ -49,7 +49,7 @@ case class CachedRDDBuilder(
     storageLevel: StorageLevel,
     @transient cachedPlan: SparkPlan,
     tableName: Option[String])(
-    @transient private var _cachedColumnBuffers: RDD[CachedBatch] = null) {
+    @transient @volatile private var _cachedColumnBuffers: RDD[CachedBatch] = null) {

Member:
Can this just be a lazy val? I don't see any caller that specifies this argument.

Member:

IIUC we use a var here to support cache buffer clearance; see def clearCache(blocking: Boolean = true).
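
For context, a minimal sketch of the pattern under discussion, assuming it sits inside InMemoryRelation.scala where RDD, SparkPlan, StorageLevel and CachedBatch are already in scope; buildBuffers is a hypothetical stand-in for the real buffer-construction logic, not Spark's actual implementation.

case class CachedRDDBuilder(
    storageLevel: StorageLevel,
    @transient cachedPlan: SparkPlan,
    tableName: Option[String])(
    @transient @volatile private var _cachedColumnBuffers: RDD[CachedBatch] = null) {

  // Lazily build the buffers; double-checked locking is safe here
  // because the field is @volatile.
  def cachedColumnBuffers: RDD[CachedBatch] = {
    if (_cachedColumnBuffers == null) {
      synchronized {
        if (_cachedColumnBuffers == null) {
          _cachedColumnBuffers = buildBuffers()
        }
      }
    }
    _cachedColumnBuffers
  }

  // This is why a lazy val does not fit: clearCache must reset the
  // reference to null so the buffers can be rebuilt after an unpersist,
  // and a lazy val can never be reassigned once initialized.
  def clearCache(blocking: Boolean = true): Unit = {
    if (_cachedColumnBuffers != null) {
      synchronized {
        if (_cachedColumnBuffers != null) {
          _cachedColumnBuffers.unpersist(blocking)
          _cachedColumnBuffers = null
        }
      }
    }
  }

  private def buildBuffers(): RDD[CachedBatch] = ??? // hypothetical stub
}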

Member:
@bersprockets @cloud-fan Oops, I just noticed this causes the Scala 2.11 build to fail:

[error] /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7-ubuntu-scala-2.11/sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryRelation.scala:52: values cannot be volatile
[error]     @transient @volatile private var _cachedColumnBuffers: RDD[CachedBatch] = null) {

It looks like this might be a scalac bug that is only fixed in 2.12; I didn't look too hard, but ended up here:
scala/bug#8873
scala/scala#5294

It might be sufficient to move this to a private field, as I don't think any caller actually sets this value. Let me try.
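
A sketch of that workaround, under the same assumptions as the sketch above: Scala 2.11's compiler rejects @volatile on the case-class constructor parameter ("values cannot be volatile") but accepts it on an ordinary field in the class body, so the mutable state moves out of the parameter list.

case class CachedRDDBuilder(
    storageLevel: StorageLevel,
    @transient cachedPlan: SparkPlan,
    tableName: Option[String]) {

  // Declared in the body rather than the parameter list; this compiles
  // on both Scala 2.11 and 2.12.
  @transient @volatile private var _cachedColumnBuffers: RDD[CachedBatch] = null

  // cachedColumnBuffers and clearCache as in the sketch above
}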

Contributor Author:
Thanks @srowen. Going forward, I assume I (and other contributors) should be building at least once with -Pscala-2.11 before submitting or updating a PR?

Member:
I wouldn't go that far, as we have another job that checks 2.11 after the fact, and these failures are rare.

Member:
@bersprockets yeah that did it: #23864


val sizeInBytesStats: LongAccumulator = cachedPlan.sqlContext.sparkContext.longAccumulator
