
Conversation

viirya (Member) commented Feb 25, 2017

What changes were proposed in this pull request?

Catalog.refreshByPath can refresh the cache entry and the associated metadata for all DataFrames (if any) that contain the given data source path.

However, CacheManager.invalidateCachedPath doesn't clear all cached plans with the specified path, which causes the strange behaviors reported in SPARK-15678.
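
A minimal sketch of the symptom (adapted from the test suggested later in this review; path is any writable temp directory, and import spark.implicits._ is assumed):

    spark.range(10).write.mode("overwrite").parquet(path)
    spark.read.parquet(path).cache()
    spark.read.parquet(path).filter($"id" > 4).cache()

    spark.range(20).write.mode("overwrite").parquet(path)
    spark.catalog.refreshByPath(path)
    // Before this fix, the second cached plan was not refreshed and could
    // serve stale data, so this count might not reflect the 20-row rewrite.
    assert(spark.read.parquet(path).filter($"id" > 4).count() == 15)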

How was this patch tested?

Jenkins tests.


SparkQA commented Feb 25, 2017

Test build #73464 has finished for PR 17064 at commit dd6d8ca.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

kiszk (Member) commented Feb 25, 2017

Thanks, LGTM.

viirya (Member, Author) commented Feb 25, 2017

@kiszk Thanks.

viirya (Member, Author) commented Feb 27, 2017

cc @cloud-fan

Inline review thread on this CacheManager diff excerpt:

    }
    sparkSession.sharedState.cacheManager.cacheQuery(Dataset.ofRows(sparkSession, data.plan))
    case _ => // Do Nothing
    cachedData.filter {
Contributor:

Why doesn't the previous one work?

Member Author:

This kind of collection can't be modified while iterating over it. Some elements are skipped if we delete/add elements during iteration.

Contributor:

But we are still modifying it during iteration, after the filter. Can you be more specific about what the problem is?

Contributor:

Can we use a Java collection so that we can remove elements while iterating?
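
(For reference, a minimal sketch of the suggested pattern, assuming java.util.LinkedList; this is not code from this PR, though the follow-up in #17097 adopts a LinkedList:)

    import java.util.LinkedList

    val entries = new LinkedList[Int]()
    entries.add(1)
    entries.add(2)

    // java.util.Iterator.remove() is safe during iteration, unlike
    // structurally mutating a Scala ArrayBuffer inside foreach.
    val it = entries.iterator()
    while (it.hasNext) {
      if (it.next() > 0) it.remove()
    }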

Member Author:

After filter, we iterate over a different collection than cachedData, so there is no problem adding/deleting elements in cachedData.
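
(A minimal sketch of why this is safe; buf is a hypothetical stand-in for cachedData:)

    import scala.collection.mutable.ArrayBuffer

    val buf = ArrayBuffer(1, 2, 3)
    // filter materializes a new collection; foreach walks that snapshot,
    // so removing from (or adding to) buf cannot skip any element.
    buf.filter(_ > 0).foreach { i =>
      val idx = buf.indexWhere(_ == i)
      if (idx >= 0) buf.remove(idx)
      buf += (i + 10)
    }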

Member Author (viirya, Feb 28, 2017):

The problem can be shown clearly with an example code snippet:

val t = new scala.collection.mutable.ArrayBuffer[Int]

t += 1
t += 2

t.foreach { 
  case i if i > 0 =>
    println(s"i = $i")
    val index = t.indexWhere(_ == i)
    if (index >= 0) {
      t.remove(index)
    }
    println(s"t: $t")
    t += (i + 2)
    println(s"t: $t")
}

Output:

i = 1    // The first iteration, we get the first element "1"
t: ArrayBuffer(2)   // "1" has been removed from the array
t: ArrayBuffer(2, 3) // New element "3" has been inserted
i = 3   // In next iteration, element "2" is wrongly skipped
t: ArrayBuffer(2)     // "3" has been removed from the array
t: ArrayBuffer(2, 5) 

The element "2" is never iterated over.

viirya (Member, Author) commented Feb 28, 2017

@cloud-fan I noticed you opened #17097, so should I close this?

cloud-fan (Contributor):

No, you shouldn't. That's a refactoring PR that accidentally fixed the same bug.


test("refreshByPath should refresh all cached plans with the specified path") {
def f(path: String, spark: SparkSession, dataCount: Int): DataFrame = {
spark.catalog.refreshByPath(path)
Contributor:

We can put spark.range(dataCount).write.mode("overwrite").parquet(path) at the beginning of this method and name it testRefreshByPath instead of f.
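
(A sketch of the suggested restructure; it assumes df in the original helper reads from path, which is elided from the diff shown, and that the suite imports DataFrame and SparkSession:)

    def testRefreshByPath(path: String, spark: SparkSession, dataCount: Int): DataFrame = {
      // Write the data first, so each call exercises refresh against a fresh write.
      spark.range(dataCount).write.mode("overwrite").parquet(path)
      spark.catalog.refreshByPath(path)
      val df1 = spark.read.parquet(path).filter("id > 11")
      df1.cache()
      // range(dataCount) yields ids 0 until dataCount, so ids > 11 number dataCount - 12.
      assert(df1.count == dataCount - 12)
      df1
    }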

    val df1 = df.filter("id > 11")
    df1.cache
    assert(df1.count == dataCount - 12)
    df1
Contributor:

I don't get it. So we call refreshByPath before caching the query? Shouldn't we test the opposite order?

Member Author:

The function is called twice, so it is actually meant to refresh the cache on the first call. Since I will change the test to what you suggested in #17064 (comment), we can get rid of this confusion.

    }

    withTempDir { dir =>
      val path = dir.getPath()
Contributor:

We usually call dir.getCanonicalPath.

      assert(f(path, spark, 100).count == 88)

      spark.range(1000).write.mode("overwrite").parquet(path)
      assert(f(path, spark, 1000).count == 988)
Contributor:

We can make this test more explicit:

      spark.range(10).write.mode("overwrite").parquet(path)
      spark.read.parquet(path).cache()
      spark.read.parquet(path).filter($"id" > 4).cache()
      assert(spark.read.parquet(path).filter($"id" > 4).count() == 5)

      spark.range(20).write.mode("overwrite").parquet(path)
      spark.catalog.refreshByPath(path)
      assert(spark.read.parquet(path).filter($"id" > 4).count() == 15)

Member Author:

Ok. Looks simpler and more explicit.

viirya (Member, Author) commented Mar 1, 2017

@cloud-fan Thanks. I will address your comments soon.

SparkQA commented Mar 1, 2017

Test build #73641 has finished for PR 17064 at commit a9fe0be.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

cloud-fan (Contributor):

Thanks, merging to master!

asfgit closed this in 38e7835 on Mar 1, 2017
viirya (Member, Author) commented Mar 1, 2017

@cloud-fan Thank you!

asfgit pushed a commit that referenced this pull request on Mar 17, 2017:
…L] Backport Three Cache-related PRs to Spark 2.1

### What changes were proposed in this pull request?

Backport a few cache-related PRs:

---
[[SPARK-19093][SQL] Cached tables are not used in SubqueryExpression](#16493)

Consider the plans inside subquery expressions while looking up the cache manager to make use of cached data. Currently, CacheManager.useCachedData does not consider the subquery expressions in the plan.

---
[[SPARK-19736][SQL] refreshByPath should clear all cached plans with the specified path](#17064)

Catalog.refreshByPath can refresh the cache entry and the associated metadata for all DataFrames (if any) that contain the given data source path.

However, CacheManager.invalidateCachedPath doesn't clear all cached plans with the specified path, which causes the strange behaviors reported in SPARK-15678.

---
[[SPARK-19765][SPARK-18549][SQL] UNCACHE TABLE should un-cache all cached plans that refer to this table](#17097)

When un-caching a table, we should not only remove the cache entry for this table but also un-cache any other cached plans that refer to it. The following commands trigger table uncaching: `DropTableCommand`, `TruncateTableCommand`, `AlterTableRenameCommand`, `UncacheTableCommand`, `RefreshTable` and `InsertIntoHiveTable`.

This PR also includes some refactors:
- use java.util.LinkedList to store the cache entries, so that it's safer to remove elements while iterating
- rename invalidateCache to recacheByPlan, which makes what it does more obvious.

### How was this patch tested?
N/A

Author: Xiao Li <[email protected]>

Closes #17319 from gatorsmile/backport-17097.
viirya deleted the fix-refreshByPath branch on December 27, 2023.