Tutorial: How to build a Tokenizer in Spark and Scala | Knoldus
In our earlier blog, A Simple Application in Spark and Scala, we explained how to build Spark and create a simple application with it. In this blog, we will see how to build a fast Tokenizer in Spark and Scala using sbt.

Tokenization is the process of breaking a stream of text into words, phrases, symbols, or other meaningful elements called tokens. The list of tokens becomes input for further processing such as parsing or text mining. Tokenization can be a slow process, but with Spark we can speed it up by running it in parallel over chunks of the input. In the following example, we will see how to tokenize (segregate) the words in a text file and count the number of times each word occurs in it (i.e., its term frequency).

Before building this application, follow the instructions for building an application in Spark given here. After building the application, we can start building the Tokenizer. To build the Tokenizer, create a file TokenizerApp.scala.
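Below is a minimal sketch of what TokenizerApp.scala could contain. It assumes the classic SparkContext/RDD API; the input file name input.txt, the local master setting, and the split pattern are illustrative choices, not details from the original article.

import org.apache.spark.{SparkConf, SparkContext}

object TokenizerApp {
  def main(args: Array[String]): Unit = {
    // Configure Spark; "local[*]" uses all local cores, handy for testing.
    val conf = new SparkConf().setAppName("Tokenizer Application").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Read the text file; each element of the RDD is one line. The path is a placeholder.
    val lines = sc.textFile("input.txt")

    // Tokenize: split each line on non-word characters and drop empty strings.
    val tokens = lines.flatMap(_.split("\\W+")).filter(_.nonEmpty)

    // Term frequency: pair each token with 1 and sum the counts in parallel.
    val termFrequency = tokens.map(token => (token, 1)).reduceByKey(_ + _)

    // Print the result; collect() is fine for small demo inputs.
    termFrequency.collect().foreach { case (token, count) => println(s"$token : $count") }

    sc.stop()
  }
}

Running sbt run from the project root should then print each token alongside its frequency. The counting happens in reduceByKey, which aggregates partial counts per partition before shuffling, so the work is distributed across chunks of the file rather than done in a single pass on one machine.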