Commit eda1ff4

hhbyyh authored and mengxr committed
[SPARK-11813][MLLIB] Avoid serialization of vocab in Word2Vec
jira: https://issues.apache.org/jira/browse/SPARK-11813

I found the problem while training a large corpus. Avoiding serialization of vocab in Word2Vec has 2 benefits:
1. Performance improvement from less serialization.
2. A large increase in the capacity of Word2Vec.

Currently, in the fit of Word2Vec, the closure mainly includes the serialization of Word2Vec itself and of the 2 global tables. The main part of Word2Vec is the vocab, of size vocab * 40 * 2 * 4 = 320 * vocab bytes; the 2 global tables take vocab * vectorSize * 8 bytes, which for vectorSize = 20 is 160 * vocab bytes. Their sum cannot exceed Int.MaxValue due to the restriction of ByteArrayOutputStream. In any case, avoiding serialization of vocab decreases the size of the serialized closure, especially when vectorSize is small, and thus allows a larger vocabulary.

There is another possible fix: make local copies of the fields to avoid including Word2Vec in the closure. Let me know if that is preferred.

Author: Yuhao Yang <[email protected]>

Closes #9803 from hhbyyh/w2vVocab.

(cherry picked from commit e391abd)
Signed-off-by: Xiangrui Meng <[email protected]>
1 parent e12fbd8 · commit eda1ff4
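To make the capture issue concrete, here is a minimal, self-contained sketch (illustrative class and field names, not the actual Spark source) of how referencing a field inside an RDD operation pulls the whole enclosing object into the task closure, and of the "local copy" alternative mentioned at the end of the commit message:

import org.apache.spark.rdd.RDD

// Sketch only: a simplified stand-in for the real Word2Vec class.
class Word2VecSketch extends Serializable {
  private var vectorSize: Int = 100
  private var vocab: Array[String] = null // large table built on the driver

  // Referencing `vectorSize` inside the lambda really means `this.vectorSize`,
  // so `this` (including the large `vocab` array) is serialized with the closure.
  def fitCapturingThis(words: RDD[String]): RDD[Int] =
    words.map(w => w.length * vectorSize)

  // The alternative fix mentioned in the commit message: copy the needed field
  // into a local val so only that small value is captured, not the whole object.
  def fitWithLocalCopy(words: RDD[String]): RDD[Int] = {
    val localVectorSize = vectorSize
    words.map(w => w.length * localVectorSize)
  }
}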

File tree: 1 file changed (+2 −2 lines)


mllib/src/main/scala/org/apache/spark/mllib/feature/Word2Vec.scala

Lines changed: 2 additions & 2 deletions
@@ -141,8 +141,8 @@ class Word2Vec extends Serializable with Logging {
 
   private var trainWordsCount = 0
   private var vocabSize = 0
-  private var vocab: Array[VocabWord] = null
-  private var vocabHash = mutable.HashMap.empty[String, Int]
+  @transient private var vocab: Array[VocabWord] = null
+  @transient private var vocabHash = mutable.HashMap.empty[String, Int]
 
   private def learnVocab(words: RDD[String]): Unit = {
     vocab = words.map(w => (w, 1))
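For readers less familiar with @transient: the annotation tells Java serialization (which Spark's default closure serializer uses) to skip the field, so the driver-side tables never ride along with the serialized closure and the deserialized copy simply sees null. A small, self-contained sketch of that behavior with illustrative names (not the Spark source):

import java.io.{ByteArrayOutputStream, ObjectOutputStream}

// Sketch only: illustrative class, not the Spark code.
class TableHolder extends Serializable {
  var vectorSize: Int = 20
  @transient var vocab: Array[Long] = Array.fill(1000000)(0L) // skipped by serialization
}

object TransientDemo {
  // Serialize an object with plain Java serialization into an in-memory buffer
  // and report the byte count, the same buffer type whose Int size limit the
  // commit message refers to.
  def serializedSize(obj: AnyRef): Int = {
    val buffer = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(buffer)
    out.writeObject(obj)
    out.close()
    buffer.size()
  }

  def main(args: Array[String]): Unit = {
    val holder = new TableHolder
    // With @transient on `vocab`, the serialized form stays tiny; after
    // deserialization `vocab` is null and must be rebuilt if it is needed.
    println(s"serialized bytes: ${serializedSize(holder)}")
  }
}

Because learnVocab (shown in the diff) repopulates vocab on the driver before training, marking these fields @transient only affects what is captured in serialized closures, not the data the algorithm works with.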
