This is a mathematical thesaurus. It takes a set of words with multipliers, forms their weighted sum, and returns the most similar words in the vocabulary.
word2vec maps words to vectors in a high-dimensional space (here, 300 dimensions) in a way consistent with semantic similarity: words with related meanings lie close together. By mapping each word to its vector and scaling by its multiplier, we can take linear combinations. We then compare this combination to every word vector in the vocabulary using a metric, typically cosine similarity or Euclidean distance, and return the nearest words. To speed up the search, the word vectors are stored in a space-partitioning data structure called a ball tree.
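The pipeline above can be sketched as follows. This is a minimal illustration, not the project's actual code: it uses random vectors as a stand-in for a trained word2vec model, and scikit-learn's `BallTree` for the nearest-neighbor index. The vocabulary, the `nearest` helper, and its signature are all hypothetical; with unit-normalized vectors, ranking by Euclidean distance agrees with ranking by cosine similarity.

```python
import numpy as np
from sklearn.neighbors import BallTree

# Hypothetical toy vocabulary; a real model would supply trained 300-dim
# word2vec vectors instead of random ones.
rng = np.random.default_rng(0)
vocab = ["king", "queen", "man", "woman", "apple"]
vectors = rng.normal(size=(len(vocab), 300))

# Normalize to unit length so Euclidean distance on the tree produces the
# same ranking as cosine similarity.
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

# Space-partitioning index over the vocabulary for fast neighbor queries.
tree = BallTree(vectors)

def nearest(terms, k=3):
    """terms: list of (word, multiplier) pairs.
    Forms the weighted sum of the corresponding vectors and returns the
    k nearest vocabulary words."""
    query = sum(m * vectors[vocab.index(w)] for w, m in terms)
    query = query / np.linalg.norm(query)
    _, idx = tree.query(query.reshape(1, -1), k=k)
    return [vocab[i] for i in idx[0]]

# Classic analogy-style query: king - man + woman.
print(nearest([("king", 1.0), ("man", -1.0), ("woman", 1.0)]))
```

With real word2vec embeddings, a query like `king - man + woman` famously lands near "queen"; with the random toy vectors here, the nearest neighbors are simply the words given large positive weights.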