LinkedIn Feed: Faster with Less JVM Garbage | LinkedIn Engineering
1. Be careful with Iterators
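Only the heading for this point survives in the excerpt. The usual concern it refers to is that every for-each loop over a Collection allocates a fresh Iterator object, which adds up to real garbage on a hot path; for random-access lists, an index-based loop avoids that allocation. A minimal sketch of that style of change (an illustration, not the article's code):

import java.util.List;

public class ScoreSum {
    // for (double s : scores) would allocate an Iterator on every call;
    // indexed access into an ArrayList allocates nothing extra.
    static double sum(List<Double> scores) {
        double total = 0.0;
        for (int i = 0; i < scores.size(); i++) {
            total += scores.get(i);
        }
        return total;
    }
}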
2. Estimate the size of a collection when initializing
Initializing the HashMap to the expected size avoided the resizing overhead. The initial capacity was set to the size of the input array divided by the default load factor, which is 0.75, as in the example below:
_map = new HashMap<String, Foo>((int) Math.ceil(input.size() / 0.75));
3. Defer expression evaluation
In Java, method arguments are always fully evaluated (left-to-right) before the method is called, which can lead to unnecessary work. Consider, for instance, the code block below, which compares two objects of type Foo using ComparisonChain. The point of such a comparison chain is to return a result as soon as one of the compareTo calls yields a non-zero value and to avoid unnecessary comparisons. In this example the items are compared first by their score, then by their position, and finally by the string representation of the _bar field:
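The code block referenced above did not survive the excerpt. The following is a sketch of what the text describes, assuming Guava's com.google.common.collect.ComparisonChain and placeholder field names (_score, _position, and the accessors are not from the article). Because Java evaluates all arguments before each call, _bar.toString() runs even when the score comparison alone decides the result; that eager evaluation is the unnecessary work this point targets.

import com.google.common.collect.ComparisonChain;

public class Foo implements Comparable<Foo> {
    private final double _score;   // placeholder fields
    private final long _position;
    private final Object _bar;

    Foo(double score, long position, Object bar) {
        _score = score;
        _position = position;
        _bar = bar;
    }

    @Override
    public int compareTo(Foo other) {
        // All arguments below are evaluated before each compare() call,
        // including both toString() invocations.
        return ComparisonChain.start()
            .compare(_score, other._score)
            .compare(_position, other._position)
            .compare(_bar.toString(), other._bar.toString())
            .result();
    }
}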
4. Compile the regex patterns in advance
Each call to the String.replaceAll() method in our example applies a constant pattern to the input value. Pre-compiling the pattern therefore saves both CPU and memory each time this transformation is applied.
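A minimal sketch of the change, assuming a hypothetical whitespace-normalization step (the actual pattern and replacement used in the article are not in the excerpt):

import java.util.regex.Pattern;

public class TextNormalizer {
    // Compiled once; input.replaceAll("\\s+", " ") would recompile the
    // pattern on every call.
    private static final Pattern WHITESPACE = Pattern.compile("\\s+");

    static String normalize(String input) {
        return WHITESPACE.matcher(input).replaceAll(" ");
    }
}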
5. Cache it if you can
6. String Interns are useful but dangerous
"A pool of strings, initially empty, is maintained privately by the class String. When the intern method is invoked, if the pool already contains a string equal to this String object as determined by the equals(Object) method, then the string from the pool is returned. Otherwise, this String object is added to the pool and a reference to this String object is returned".
The idea is very similar to having a cache, with the limitation that you can't set a maximum number of entries. Therefore, if the strings being interned are not bounded (e.g. they might contain unique member ids), interning will quickly increase your heap memory usage. We learned this lesson the hard way: we used string interning for some keys, and everything looked normal in our offline simulation since the input data was limited, but soon after deployment the memory usage started going up (due to the large number of unique strings being interned). Therefore, we chose LRU caches with a fixed maximum number of entries instead.
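The excerpt does not say which cache implementation was used. A minimal sketch of a fixed-size LRU map built on java.util.LinkedHashMap, standing in for the intern pool (the maximum size is an assumed value, not from the article):

import java.util.LinkedHashMap;
import java.util.Map;

// Deduplicates strings like intern(), but evicts the least recently used
// entry once MAX_ENTRIES is reached, so the heap footprint stays bounded.
public class StringLruCache extends LinkedHashMap<String, String> {
    private static final int MAX_ENTRIES = 10_000; // assumed limit

    public StringLruCache() {
        super(16, 0.75f, true); // accessOrder = true gives LRU eviction order
    }

    public String canonicalize(String s) {
        String existing = putIfAbsent(s, s);
        return existing != null ? existing : s;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
        return size() > MAX_ENTRIES;
    }
}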
Read full article from LinkedIn Feed: Faster with Less JVM Garbage | LinkedIn Engineering