[Design] Real Time Top k - Woodstock Blog

Given a continuous Twitter feed, design an algorithm to return the 100 most frequent words used in the past minute, the past hour, and the past day.

Analysis

This is a common and practical problem at companies like Google and Twitter.

The first solution below is an approximation method that selects keywords occurring more often than a certain threshold.

The second solution is more accurate but RAM-intensive.

Lossy Count (used to get an approximate trend)

Solution 1 is a modified version of Lossy Count. The detailed steps are explained here:

Start with an empty map (red-black tree). The keys will be search terms, and the values will be counters for those terms.

  1. Look at each item in the stream.

  2. If the term exists in the map, increment the associated counter.

  3. Otherwise, if the map has fewer candidates than you're looking for, add it to the map with a count of one.

  4. However, if the map is "full", decrement the counter in each entry. If any counter reaches zero during this process, remove it from the map. If the removals open up a slot, insert the new term with a count of one (see the sketch after this list).
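Below is a minimal Java sketch of these steps, assuming a single-threaded stream of terms; the class and method names are mine, not from the original post.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

/** Modified lossy count over a stream, keeping at most `capacity` candidates. */
class CandidateCounter {
    private final int capacity;
    // The post suggests a red-black tree; Java's TreeMap is one.
    private final TreeMap<String, Integer> counts = new TreeMap<>();

    CandidateCounter(int capacity) { this.capacity = capacity; }

    void offer(String term) {
        Integer c = counts.get(term);
        if (c != null) {                       // step 2: known term, increment
            counts.put(term, c + 1);
        } else if (counts.size() < capacity) { // step 3: room left, insert
            counts.put(term, 1);
        } else {                               // step 4: map full, decrement all
            Iterator<Map.Entry<String, Integer>> it = counts.entrySet().iterator();
            while (it.hasNext()) {
                Map.Entry<String, Integer> e = it.next();
                if (e.getValue() == 1) it.remove();
                else e.setValue(e.getValue() - 1);
            }
            if (counts.size() < capacity) {
                counts.put(term, 1);           // insert if a slot opened up
            }
        }
    }

    Map<String, Integer> candidates() { return counts; }
}
```

Feeding the example stream "aaabcd" into a CandidateCounter of capacity 2 ends with {a=1, d=1}, which matches the walk-through below.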

This slide show explains the original Lossy Counting algorithm: divide the input stream into fixed-size chunks, count elements within each chunk, and decrement every counter by 1 at each chunk boundary, dropping any counter that reaches zero.

Note that the result is NOT the exact set of top-frequency items. The final counts are order-dependent, giving heavier weight to the items processed last. That may be helpful in some cases, because we often want to check the latest trend. However, if we want more accurate top keywords over all the data, we can do a second pass over the log data, as sketched below.
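A hedged sketch of that second pass, assuming the raw terms were logged somewhere; the names here are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/** Second pass: re-scan the logged terms and count only the candidates
 *  that survived the first streaming pass. */
class SecondPass {
    static Map<String, Long> exactCounts(Iterable<String> loggedTerms,
                                         Set<String> candidates) {
        Map<String, Long> exact = new HashMap<>();
        for (String term : loggedTerms) {
            if (candidates.contains(term)) {
                exact.merge(term, 1L, Long::sum); // exact count per candidate
            }
        }
        return exact; // sort by value to get the true top k among candidates
    }
}
```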

Now let's discuss the threshold. Take "aaabcd" with map size = 2 as an example. 'a' is inserted into the map with occurrence = 3. Then 'b' is inserted and later removed; 'c' is inserted and removed; 'd' is inserted. Since every full-map step decrements each counter by 1, 'a' ends with an occurrence count of only 1. As explained here:

If we limit the map to 99 entries, we are guaranteed to find any term that occurs more than 1/(1 + 99) (1%) of the time.

We change the size of the map to change the threshold. The reason the guarantee holds: with k map entries, each decrement step cancels one occurrence from each of the k entries plus the arriving item, i.e. k + 1 occurrences in total, so a term occurring more than 1/(k + 1) of the time can never be cancelled down to zero. The counter values that survive in the final map do not matter; only the surviving keys do.

Solution 2

Lossy counting does not produce the hourly, daily, and monthly results accurately. Solution 2 discusses how to retire old data in an exact way.

As suggested by this answer, we keep a 30-day FIFO list for each keyword that holds its daily occurrence counts. Each day we evict the oldest counter, insert a new one, and update the monthly total accordingly.
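A minimal sketch of this per-keyword window, assuming one roll-over call at each day boundary; the ring-buffer layout and names are my choice, not from the answer.

```java
/** 30-day sliding count for one keyword. */
class MonthlyCounter {
    private final int[] daily = new int[30]; // FIFO ring buffer of daily counts
    private int today = 0;                   // index of the current day's slot
    private long monthlyTotal = 0;

    /** Called once per occurrence of the keyword. */
    void increment() {
        daily[today]++;
        monthlyTotal++;
    }

    /** Called at each day boundary: retire the slot that is now 30 days old. */
    void rollOver() {
        today = (today + 1) % 30;
        monthlyTotal -= daily[today]; // subtract the expiring day's count
        daily[today] = 0;             // reuse the slot for the new day
    }

    long total() { return monthlyTotal; }
}
```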

Alternatively, this answer suggests keeping 1440 (24 * 60) HashMaps, each storing the information for one minute, plus two more HashMaps holding the rolling totals for the past hour and the past day.

You need an array of 1440 (24*60) word+count hash maps organized the way that you describe; these are your minute-by-minute counts. You need two additional hash maps – for the rolling total of the hour and the day.

Define two operations on hash maps – add and subtract, with the semantic of merging counts of identical words, and removing words when their count drops to zero.

Each minute you start a new hash map, and update counts from the feed. At the end of the minute, you place that hash map into the array for the current minute, add it to the rolling total for the hour and for the day, and then subtract the hash map of an hour ago from the hourly running total, and subtract the hash map of 24 hours ago from the daily running total.
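A sketch of the quoted design in Java; the class and method names are assumptions, and concurrency and query serving are left out.

```java
import java.util.HashMap;
import java.util.Map;

/** Minute-by-minute counts plus rolling hour/day totals, per the quote above. */
class RollingTopWords {
    private static final int MINUTES_PER_DAY = 24 * 60;
    @SuppressWarnings("unchecked")
    private final Map<String, Long>[] perMinute = new HashMap[MINUTES_PER_DAY];
    private final Map<String, Long> hourTotal = new HashMap<>();
    private final Map<String, Long> dayTotal = new HashMap<>();
    private Map<String, Long> current = new HashMap<>();
    private int minute = 0; // slot index for the minute being filled

    void record(String word) {
        current.merge(word, 1L, Long::sum);
    }

    /** Called at every minute boundary. */
    void closeMinute() {
        add(hourTotal, current);
        add(dayTotal, current);
        // Retire the map from 60 minutes ago and the one from 24 hours ago.
        subtract(hourTotal, perMinute[(minute + MINUTES_PER_DAY - 60) % MINUTES_PER_DAY]);
        subtract(dayTotal, perMinute[minute]); // this slot is exactly 24h old
        perMinute[minute] = current;
        minute = (minute + 1) % MINUTES_PER_DAY;
        current = new HashMap<>();
    }

    private static void add(Map<String, Long> total, Map<String, Long> delta) {
        delta.forEach((w, c) -> total.merge(w, c, Long::sum));
    }

    private static void subtract(Map<String, Long> total, Map<String, Long> delta) {
        if (delta == null) return; // first hour/day: nothing to retire yet
        // merge() removes the word when its count drops to zero.
        delta.forEach((w, c) -> total.merge(w, -c, (a, b) -> a + b == 0 ? null : a + b));
    }
}
```

To answer a query, sort the entries of hourTotal or dayTotal by count and take the first 100; a size-100 min-heap avoids a full sort.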

This is a very good solution, and I would recommend it as the standard approach to this "Real Time Top k" problem.

