java - Asynchronous Logging - Stack Overflow



java - Asynchronous Logging - Stack Overflow

Have a look at Logback's AsyncAppender. It already provides a separate thread pool, queue, etc., is easily configurable, and does almost the same as what you are doing, but saves you from re-inventing the wheel.
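For reference, wiring it up in logback.xml takes only a few lines. This is a minimal sketch; the FILE appender, its pattern, and the queue size are illustrative choices:

<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>app.log</file>
    <encoder>
      <pattern>%d %level %logger - %msg%n</pattern>
    </encoder>
  </appender>
  <!-- AsyncAppender queues events and hands them to FILE on a separate thread -->
  <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <queueSize>512</queueSize>
    <appender-ref ref="FILE"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="ASYNC"/>
  </root>
</configuration>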


Read full article from java - Asynchronous Logging - Stack Overflow


java - Order of the JSON objects, using Jackson's ObjectMapper - Stack Overflow



java - Order of the JSON objects, using Jackson's ObjectMapper - Stack Overflow

@JsonPropertyOrder({ "id", "label", "target", "source", "attributes" })
public class Relation { ... }
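With that annotation in place, a plain ObjectMapper emits the properties in the annotated order rather than the declaration order. A minimal runnable sketch (the field values are made up):

import com.fasterxml.jackson.annotation.JsonPropertyOrder;
import com.fasterxml.jackson.databind.ObjectMapper;

@JsonPropertyOrder({ "id", "label", "target", "source", "attributes" })
public class Relation {
    public int id = 1;
    public String label = "knows";
    public String target = "B";
    public String source = "A";
    public String attributes = "";

    public static void main(String[] args) throws Exception {
        // Prints {"id":...,"label":...,"target":...,"source":...,"attributes":...}
        System.out.println(new ObjectMapper().writeValueAsString(new Relation()));
    }
}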

Read full article from java - Order of the JSON objects, using Jackson's ObjectMapper - Stack Overflow


java - Why doesn't mvn dependency:purge-local-repository fetch the same dependencies as mvn install does? - Stack Overflow



java - Why doesn't mvn dependency:purge-local-repository fetch the same dependencies as mvn install does? - Stack Overflow

mvn dependency:purge-local-repository will remove the project's dependencies from the local repository, and optionally re-resolve them.

So in this case it will re-download all the dependencies the project needs after purging them from the local repository,

while mvn install will just update dependencies based on the policy specified in settings.xml.

Most of the time it will only download the dependencies that aren't available in your local repository (or that need an update based on your policy in settings.xml).


Read full article from java - Why doesn't mvn dependency:purge-local-repository fetch the same dependencies as mvn install does? - Stack Overflow


How do you clear Apache Maven's cache? - Stack Overflow



How do you clear Apache Maven's cache? - Stack Overflow

To clean the local cache, try using the dependency plugin:

mvn dependency:purge-local-repository  

or

mvn dependency:purge-local-repository -DreResolve=false  

or

mvn dependency:purge-local-repository -DactTransitively=false -DreResolve=false

Read full article from How do you clear Apache Maven's cache? - Stack Overflow


java - How can I get Maven to stop attempting to check for updates for artifacts from a certain group from maven-central-repo? - Stack Overflow



java - How can I get Maven to stop attempting to check for updates for artifacts from a certain group from maven-central-repo? - Stack Overflow

The updatePolicy tag didn't work for me. However, Rich Seller mentioned that snapshots should be disabled anyway, so I looked further and noticed that the extra repository I had added to my settings.xml was actually causing the problem. Adding the snapshots section to this repository in my settings.xml did the trick!

<repository>
    <id>jboss</id>
    <name>JBoss Repository</name>
    <url>http://repository.jboss.com/maven2</url>
    <snapshots>
        <enabled>false</enabled>
    </snapshots>
</repository>

Read full article from java - How can I get Maven to stop attempting to check for updates for artifacts from a certain group from maven-central-repo? - Stack Overflow


What is Maven dependency:purge-local-repository supposed to do? - Stack Overflow



What is Maven dependency:purge-local-repository supposed to do? - Stack Overflow

I'm trying to purge the local repository of a project's dependencies before launching its release, in order to make sure that every required dependency is on the central repository and is downloaded from it.

In the project folder (containing the pom.xml), I launch the following command:

mvn clean dependency:purge-local-repository -DreResolve=false -Dverbose=true  

The project's POM is very simple and just has a single declared dependency on junit:junit:3.8.1.
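For reference, such a POM looks roughly like this; the group/artifact coordinates are made up, only the junit dependency comes from the post:

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- hypothetical coordinates for illustration -->
  <groupId>com.example</groupId>
  <artifactId>purge-demo</artifactId>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
    </dependency>
  </dependencies>
</project>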


Read full article from What is Maven dependency:purge-local-repository supposed to do? - Stack Overflow


Fei Dong



Fei Dong

Inspiring Talks, Media


Read full article from Fei Dong


Classic Algorithms Research Series, Part 7: Genetic Algorithms Explained Simply - 结构之法 算法之道 - CSDN.NET blog channel



Classic Algorithms Research Series, Part 7: Genetic Algorithms Explained Simply - 结构之法 算法之道 - CSDN.NET blog channel

OK, first let's look at Wikipedia's explanation of genetic algorithms:

A genetic algorithm is a search algorithm used in computational mathematics to solve optimization problems, and it is one kind of evolutionary algorithm. Evolutionary algorithms originally developed by borrowing phenomena from evolutionary biology, including inheritance, mutation, natural selection, and crossover.

Genetic algorithms are usually implemented as a computer simulation. For an optimization problem, a population of abstract representations (called chromosomes) of a certain number of candidate solutions (called individuals) evolves toward better solutions. Traditionally, solutions are represented in binary (strings of 0s and 1s), but other encodings are also possible. Evolution starts from a population of completely random individuals and proceeds generation by generation. In each generation, the fitness of the whole population is evaluated; several individuals are randomly selected from the current population (based on their fitness) and, through natural selection and mutation, produce a new population, which becomes the current population in the next iteration of the algorithm.
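To make that loop concrete, here is a minimal one-max GA sketch in Java; the encoding, population size, rates, and fitness function are illustrative choices, not from the article:

import java.util.Random;

public class OneMaxGA {
    static final int LEN = 32, POP = 50, GENS = 200;
    static final double MUT_RATE = 0.01;
    static final Random RND = new Random();

    // Fitness: number of 1-bits (the "one-max" toy problem)
    static int fitness(boolean[] c) {
        int f = 0;
        for (boolean b : c) if (b) f++;
        return f;
    }

    // Tournament selection of size 2
    static boolean[] select(boolean[][] pop) {
        boolean[] a = pop[RND.nextInt(POP)], b = pop[RND.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        boolean[][] pop = new boolean[POP][LEN];
        for (boolean[] c : pop)                        // random initial population
            for (int i = 0; i < LEN; i++) c[i] = RND.nextBoolean();

        for (int g = 0; g < GENS; g++) {
            boolean[][] next = new boolean[POP][LEN];
            for (int k = 0; k < POP; k++) {
                boolean[] p1 = select(pop), p2 = select(pop);
                int cut = RND.nextInt(LEN);            // one-point crossover
                for (int i = 0; i < LEN; i++) {
                    next[k][i] = i < cut ? p1[i] : p2[i];
                    if (RND.nextDouble() < MUT_RATE) next[k][i] = !next[k][i]; // mutation
                }
            }
            pop = next;                                // next generation becomes current
        }
        int best = 0;
        for (boolean[] c : pop) best = Math.max(best, fitness(c));
        System.out.println("best fitness = " + best + " / " + LEN);
    }
}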


Read full article from Classic Algorithms Research Series, Part 7: Genetic Algorithms Explained Simply - 结构之法 算法之道 - CSDN.NET blog channel


Is there an equivalent of the "Cracking the Coding Interview" kind of book for system design / testing interviews? : cscareerquestions



Is there an equivalent of the "Cracking the Coding Interview" kind of book for system design / testing interviews? : cscareerquestions

This is my personal opinion here, but those don't sound like questions I would give an entry-level fresh graduate applying for a job at my company. Or, if I did give them, I would not be looking for very in-depth or insightful answers. Because, you're right -- design jobs/tasks are generally given after a person has a lot of experience in the professional world. No one in their right mind would plop down a new grad in a new job and tell them to go design a distributed archiving and indexing system.

I personally cannot recommend any really good design books. I haven't run across one. I'd actually be really curious if someone in this sub knows of one, but I also wouldn't be surprised if it didn't exist because design considerations are often very specific to the type of application, the expected results/features, the tech stack chosen, the environment, the industry, etc. I would expect a mid level or senior candidate to come in to an interview and be able to impress me with their design skills (giving me a robust, adaptable, resilient system design) without having read a book telling them exactly how to do it. In fact, a major difference marker between entry-level and higher-experience candidates is their design skills, because they generally need to be learned from experience.


Read full article from Is there an equivalent of the "Cracking the Coding Interview" kind of book for system design / testing interviews? : cscareerquestions


DataStax Enterprise Search Vs. SolrCloud : DataStax



DataStax Enterprise Search Vs. SolrCloud : DataStax

The Basics

Both DSE and SolrCloud are built on Apache Solr.  Apache Solr is, at its core, a service layer for Lucene.  It exposes indexes over HTTP and adds schema to validate and analyze complex types of fields.

DSE and SolrCloud build on this by taking care of things like durability, availability and scaling indexes to massive sizes across many different machines.  Both solutions use the Lucene index format to store the index data on disk and rely on Lucene for most of the index and query logic.

Since both are built on Solr, they both offer the same feature set, such as faceting, geo, near-real-time search, etc.


Read full article from DataStax Enterprise Search Vs. SolrCloud : DataStax


Geo Library for Amazon DynamoDB - Part 1: Table Structure - AWS Developer Blog - Mobile



Geo Library for Amazon DynamoDB - Part 1: Table Structure - AWS Developer Blog - Mobile

Geo Library for Amazon DynamoDB supports geospatial indexing on Amazon DynamoDB datasets. The library takes care of managing Geohash indexes. You can use these indexes for fast and efficient execution of location-based queries over DynamoDB items representing points of interest (latitude/longitude pairs). Some features of this library are:

  • Life Cycle Operations: Create, retrieve, update, and delete geospatial data items.
  • Query Support: Box queries return items that fall within a pair of geo points that define a rectangle as projected on a sphere. Radius queries return items that fall within a given distance from a geo point.
  • Easy Integration: This library extends the AWS SDK for Java, making it easy to use from your existing Java applications on AWS.

To help you get started, we have added an AWS Elastic Beanstalk application and a sample iOS project which you can get from GitHub. You can follow the Getting Started section of README.md and run the sample apps to find out what Geo Library for Amazon DynamoDB offers.

Geo Library for Amazon DynamoDB automatically generates values for Geohash, GeoJSON, Hash Key, and Range Key attributes in your table and uses them for querying. When you run the sample app, you will see those attributes in your table. In this post, I will briefly explain what they are and how the library uses them.
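For a feel of the API, a radius query looks roughly like the following. This is a sketch based on the dynamodb-geo project's documentation; the class and method names (GeoDataManager, GeoPoint, QueryRadiusRequest) are assumptions to verify against the version you use:

// Sketch only: names taken from the dynamodb-geo README, not verified here.
// dynamoDBClient is assumed to be an already-configured Amazon DynamoDB client.
GeoDataManagerConfiguration config =
        new GeoDataManagerConfiguration(dynamoDBClient, "my-geo-table");
GeoDataManager geoDataManager = new GeoDataManager(config);

GeoPoint center = new GeoPoint(47.6062, -122.3321);                 // latitude, longitude
QueryRadiusRequest request = new QueryRadiusRequest(center, 250.0); // radius in meters
QueryRadiusResult result = geoDataManager.queryRadius(request);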


Read full article from Geo Library for Amazon DynamoDB - Part 1: Table Structure - AWS Developer Blog - Mobile


Damn Cool Algorithms: Spatial indexing with Quadtrees and Hilbert Curves - Nick's Blog



Damn Cool Algorithms: Spatial indexing with Quadtrees and Hilbert Curves - Nick's Blog

Spatial indexing is increasingly important as more and more data and applications are geospatially-enabled. Efficiently querying geospatial data, however, is a considerable challenge: because the data is two-dimensional (or sometimes, more), you can't use standard indexing techniques to query on position. Spatial indexes solve this through a variety of techniques. In this post, we'll cover several - quadtrees, geohashes (not to be confused with geohashing), and space-filling curves - and reveal how they're all interrelated.

Quadtrees

Quadtrees are a very straightforward spatial indexing technique. In a Quadtree, each node represents a bounding box covering some part of the space being indexed, with the root node covering the entire area. Each node is either a leaf node - in which case it contains one or more indexed points, and no children, or it is an internal node, in which case it has exactly four children, one for each quadrant obtained by dividing the area covered in half along both axes - hence the name.
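A minimal point-quadtree insert in Java, just to make the structure concrete; the node capacity and the bounding-box representation are illustrative choices:

import java.util.ArrayList;
import java.util.List;

class QuadTree {
    static final int CAPACITY = 4;                 // max points per leaf before splitting
    final double x, y, w, h;                       // bounding box: center (x,y), half-extents (w,h)
    final List<double[]> points = new ArrayList<>();
    QuadTree[] children;                           // null while this node is a leaf

    QuadTree(double x, double y, double w, double h) { this.x = x; this.y = y; this.w = w; this.h = h; }

    boolean contains(double px, double py) {
        return px >= x - w && px < x + w && py >= y - h && py < y + h;
    }

    boolean insert(double px, double py) {
        if (!contains(px, py)) return false;
        if (children == null && points.size() < CAPACITY) {
            points.add(new double[] { px, py });
            return true;
        }
        if (children == null) subdivide();
        for (QuadTree c : children) if (c.insert(px, py)) return true;
        return false;
    }

    void subdivide() {                             // split into four quadrants, push points down
        double hw = w / 2, hh = h / 2;
        children = new QuadTree[] {
            new QuadTree(x - hw, y - hh, hw, hh), new QuadTree(x + hw, y - hh, hw, hh),
            new QuadTree(x - hw, y + hh, hw, hh), new QuadTree(x + hw, y + hh, hw, hh)
        };
        for (double[] p : points) for (QuadTree c : children) if (c.insert(p[0], p[1])) break;
        points.clear();
    }

    public static void main(String[] args) {
        QuadTree root = new QuadTree(0, 0, 100, 100); // covers [-100,100) x [-100,100)
        root.insert(10, 20);
        root.insert(-50, 60);
        System.out.println(root.contains(10, 20));    // true
    }
}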


Read full article from Damn Cool Algorithms: Spatial indexing with Quadtrees and Hilbert Curves - Nick's Blog


Use Eclipse as diff tool « Heiko Behrens (Blog)



Use Eclipse as diff tool « Heiko Behrens (Blog)

Coming from the Windows platform, I am used to tools like WinMerge or Araxis Merge (commercial) that offer a more comfortable way to compare the contents of files than diff --side-by-side. To my surprise, I did not find a single useful external diff tool for Mac OS.

More or less accidentally, while working with Eclipse and CVS and its function "Compare with latest from HEAD", I stumbled on a grayed-out menu item that says "…with each other".


Read full article from Use Eclipse as diff tool « Heiko Behrens (Blog)


fluent-builders-generator-eclipse-plugin - Fluent builders generator for Eclipse - Google Project Hosting



fluent-builders-generator-eclipse-plugin - Fluent builders generator for Eclipse - Google Project Hosting

A plugin that is going to change the way you create Java objects by leveraging the idea of fluent interfaces. By using it, you get (a usage sketch follows the list):

  • ease of creating objects in one clean and readable method-chain
  • clever collections support
  • creation of complex object trees empowered by IDE's code completion
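Hand-written, the style such generated builders enable looks like this (a sketch; the method names the plugin actually generates may differ):

import java.util.ArrayList;
import java.util.List;

class Person {
    String name;
    int age;
    List<String> hobbies = new ArrayList<>();

    static Builder builder() { return new Builder(); }

    static class Builder {
        private final Person p = new Person();
        Builder withName(String name) { p.name = name; return this; }
        Builder withAge(int age) { p.age = age; return this; }
        // "clever collections support": add elements one at a time
        Builder withAddedHobby(String hobby) { p.hobbies.add(hobby); return this; }
        Person build() { return p; }
    }

    public static void main(String[] args) {
        Person ada = Person.builder().withName("Ada").withAge(36)
                           .withAddedHobby("chess").withAddedHobby("gardening").build();
        System.out.println(ada.name + ", " + ada.age + ", " + ada.hobbies);
    }
}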

Read full article from fluent-builders-generator-eclipse-plugin - Fluent builders generator for Eclipse - Google Project Hosting


Thinking about IT: How to generate equals(), hashCode(), toString() and compareTo() using Apache Commons Lang in Eclipse



Thinking about IT: How to generate equals(), hashCode(), toString() and compareTo() using Apache Commons Lang in Eclipse

Apache Commons project represents a set of reusable Java components. Apache Commons Lang is one of the Apache Commons components. It provides much needed additions to the standard JDK's java.lang package. One of its packages, org.apache.commons.lang3.builder contains a set of very useful builders that helps in creating consistent equals(), hashCode(), toString() and compareTo() methods.

In this post, I will show how to generate equals(), hashCode(), toString() and compareTo() methods using Apache Commons Lang builders in Eclipse. For demonstrations I will use the latest stable release - Apache Commons Lang 3.0. Note that Lang 3.0 uses a different package (org.apache.commons.lang3) than its predecessors (org.apache.commons.lang). All the examples presented in this post will work also with the previous Apache Commons Lang releases; just change the package name from lang3 to lang.
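For example, the four builders typically come together in a class like this (a minimal sketch with a made-up Person class):

import org.apache.commons.lang3.builder.CompareToBuilder;
import org.apache.commons.lang3.builder.EqualsBuilder;
import org.apache.commons.lang3.builder.HashCodeBuilder;
import org.apache.commons.lang3.builder.ToStringBuilder;

public class Person implements Comparable<Person> {
    private String name;
    private int age;

    @Override
    public boolean equals(Object obj) {
        if (!(obj instanceof Person)) return false;
        Person other = (Person) obj;
        return new EqualsBuilder().append(name, other.name).append(age, other.age).isEquals();
    }

    @Override
    public int hashCode() {
        // Two arbitrary odd numbers seed the hash, per the HashCodeBuilder convention
        return new HashCodeBuilder(17, 37).append(name).append(age).toHashCode();
    }

    @Override
    public String toString() {
        return new ToStringBuilder(this).append("name", name).append("age", age).toString();
    }

    @Override
    public int compareTo(Person other) {
        return new CompareToBuilder().append(name, other.name).append(age, other.age).toComparison();
    }
}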


Read full article from Thinking about IT: How to generate equals(), hashCode(), toString() and compareTo() using Apache Commons Lang in Eclipse


Clean compareTo methods with Google Guava « EclipseSource Blog



Clean compareTo methods with Google Guava « EclipseSource Blog

Basically, a ComparisonChain is just a util that provides a fluent API for writing clean compareTo methods. And, as you might agree, clean means readable and maintainable. Let's convert the example above using Guava's ComparisonChain.

public class Fruit implements Comparable<Fruit> {

    private String name;
    private String family;
    private int calories;

    @Override
    public int compareTo( Fruit otherFruit ) {
        return ComparisonChain.start()
            .compare( name, otherFruit.name )
            .compare( family, otherFruit.family )
            .compare( calories, otherFruit.calories )
            .result();
    }
}

The code that performs the checking of the result is completely gone. It's done for us now by Guava's implementation. In addition to the nice interface, another cool thing about ComparisonChains is that they compare lazily: values will only be compared if the previous comparison was zero. From my point of view, the result of using this is much more readable code. As always, feel free to disagree in a comment.


Read full article from Clean compareTo methods with Google Guava « EclipseSource Blog


(2) What's the best way to learn how to scale web applications? - Quora



(2) What's the best way to learn how to scale web applications? - Quora

Simple solution:
1. Move the DB to a separate server.
2. Add more web servers.
3. Store your sessions in the DB; a cache server like memcached may be needed.
4. Put a server such as nginx/Apache in front.
5. Some transaction or data-synchronization problems may need to be solved.

Read full article from (2) What's the best way to learn how to scale web applications? - Quora


Intuitively Showing How To Scale a Web Application Using a Coffee Shop as an Example - High Scalability -



Intuitively Showing How To Scale a Web Application Using a Coffee Shop as an Example - High Scalability -

I own a small coffee shop.

My expense is proportional to resources
100 square feet of built up area with utilities, 1 barista, 1 cup coffee maker.

My shop's capacity
Serves 1 customer at a time, takes 3 minutes to brew a cup of coffee, a total of 5 minutes to serve a customer.

Since my barista works without breaks and the German-made coffee maker never breaks down,
my shop's maximum throughput = 12 customers per hour.

Read full article from Intuitively Showing How To Scale a Web Application Using a Coffee Shop as an Example - High Scalability -


Memcached as a Session-Sharing Solution for Tomcat Clusters - My Blog - ITeye



Memcached as a Session-Sharing Solution for Tomcat Clusters - My Blog - ITeye

Why a Tomcat cluster needs session sharing

When a client accesses the Tomcat cluster, all requests are intercepted by Nginx, which load-balances them and forwards them to the backend Tomcat instances. Under this flow a problem can appear: when the user refreshes the page or navigates, each request may be forwarded to a different Tomcat, and the sessions fall out of sync. A simple example: a user adds items to the shopping cart and happily heads to the checkout, only to find on the payment page that the cart is empty; this is the classic case of session loss. We therefore need session synchronization for the cluster environment.

Session-sharing approaches

In a server-cluster environment, session-sharing approaches fall into four main categories:

1. Keep the state in a cookie on the client
In this approach the web application writes the user's state into a cookie saved on the user's machine. But if the user's browser does not support cookies, or has them disabled, the approach fails. Cookies are also limited in how much data they can hold, and the data is exposed to the user's local browser, which is a security concern.

2. Save sessions in a database
Compared with local cookies, saving user information in a server-side database solves the data-security problem. However, this comes at a cost: every session access in the application must go through the database, which increases the database load and lowers overall system performance.

3. Proxy-server affinity
The idea of sharing sessions through the proxy server is very simple: whichever Tomcat holds the session data receives all of that user's subsequent requests. With Nginx, for example, this only requires changing the forwarding rule to ip_hash. But then, during some periods, a large number of users may keep hitting one Tomcat whose load becomes very high, which defeats the purpose of load balancing.

4. A dedicated cache server
This is the most widely used approach: set up a cache server and let a third-party tool take over Tomcat's session management.
The example in this article uses approach 4: the cache server is Memcached, and memcached-session-manager (msm for short) manages the sessions. Using msm is simple: download the jars matching your Tomcat version and serialization strategy, copy them into Tomcat's lib directory, and finally modify Tomcat's configuration file to swap in the new session-management module.
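A typical msm setup then adds a Manager element to Tomcat's conf/context.xml along these lines (a sketch; the node addresses are illustrative, and the attribute set must match the msm version you install):

<!-- Hands session management over to memcached-session-manager -->
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
         memcachedNodes="n1:192.168.1.10:11211,n2:192.168.1.11:11211"
         requestUriIgnorePattern=".*\.(png|gif|jpg|css|js)$"/>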


Read full article from Memcached as a Session-Sharing Solution for Tomcat Clusters - My Blog - ITeye


Related-topic reading: f design question summary - sigh1988's column - CSDN.NET blog channel



Related-topic reading: f design question summary - sigh1988's column - CSDN.NET blog channel

Facebook timeline: this isn't really an interview question either, just skim it
https://www.facebook.com/note.php?note_id=10150468255628920
http://highscalability.com/blog/2012/1/23/facebook-timeline-bro


Besides these, prepare the following topics:
implement memcache
http://www.adayinthelifeof.nl/2011/02/06/memcache-internals/

implement tinyurl (and how to distribute it across multiple servers)
http://stackoverflow.com/questions/742013/how-to-code-a-url-sho

determine trending topics (twitter)
http://www.americanscientist.org/issues/pub/the-britney-spears-
http://www.michael-noll.com/blog/2013/01/18/implementing-real-t

copy one file to multiple servers
http://vimeo.com/11280885

Also know a little about the Dynamo key-value store, and Google's GFS and Bigtable.


Some recommended sites as well:
http://highscalability.com/blog/category/facebook
High Scalability has a lot of material on system design, not just Facebook's; if you're short on
time, just read about the company you're interviewing with.
facebook engineering blog
http://www.quora.com/Facebook-Engineering/What-is-Facebooks-arc
http://stackoverflow.com/questions/3533948/facebook-architectur

Read full article from Related-topic reading: f design question summary - sigh1988's column - CSDN.NET blog channel


Preparing for Facebook System Design - My Blog - ITeye



Preparing for Facebook System Design - My Blog - ITeye

Also, FB likes to have you estimate numbers of machines and the like, doing some back-of-the-envelope calculations. So it's best to have a rough idea of some basic computing constants, FB's user numbers, and so on.

When preparing, I suggest looking at FB's high-frequency design questions. On the one hand, you may hit exactly these topics in the interview; on the other hand, many designs share the same ideas anyway.
There was an earlier post about this; the original has been deleted, but there is a backup here: http://blog.csdn.net/sigh1988/article/details/9790337

Here is some additional material I have collected.

a) First, get an overall view of Facebook's architecture:
http://www.quora.com/Facebook-Engineering/What-is-Facebooks-arc
http://www.ece.lsu.edu/hpca-18/files/HPCA2012_Facebook_Keynote.
http://www.quora.com/Facebook-Engineering/What-have-been-Facebo
Besides the links below, FB's engineering page also has a lot of good content:
https://www.facebook.com/Engineering

Read full article from Preparing for Facebook System Design - My Blog - ITeye


[Design] Find similar library books - Shuatiblog.com



[Design] Find similar library books - Shuatiblog.com

There is a very large digital library in which every page of every book is text produced by OCR, so each page has roughly 5% errors (conversion errors, wrongly split words, dropped characters, ...). Design a method to determine whether the library contains exact duplicates of a book, and to decide, when a new book arrives, whether the same book already exists.

Solution

Very large string matching: we must use hashing or a similar technique. Since the books contain errors, we need to do fuzzy search/matching. A Bloom filter is designed for this!

Refer to [Design] Big Data – Fuzzy Search Url (Bloom Filter).
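As a concrete illustration of the data structure, a minimal Bloom filter in Java; the size, hash count, and double-hashing scheme are arbitrary illustrative choices:

import java.util.BitSet;

class BloomFilter {
    private final BitSet bits;
    private final int size, hashes;

    BloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive k indexes from two base hashes (double hashing); force the second hash odd
    private int index(String s, int i) {
        int h1 = s.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) | 1;
        return Math.floorMod(h1 + i * h2, size);
    }

    void add(String s) {
        for (int i = 0; i < hashes; i++) bits.set(index(s, i));
    }

    boolean mightContain(String s) {
        for (int i = 0; i < hashes; i++) if (!bits.get(index(s, i))) return false;
        return true; // maybe present: false positives possible, no false negatives
    }

    public static void main(String[] args) {
        BloomFilter bf = new BloomFilter(1 << 20, 5);
        bf.add("a tale of two cities, page 1 shingle");
        System.out.println(bf.mightContain("a tale of two cities, page 1 shingle")); // true
        System.out.println(bf.mightContain("moby dick, page 1 shingle"));            // almost surely false
    }
}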


Read full article from [Design] Find similar library books - Shuatiblog.com


[Design] Design Cache System (`) - Shuatiblog.com



[Design] Design Cache System (`) - Shuatiblog.com

[Q] Design a layer in front of a system which caches the last n requests and the responses to them from the system.

Solution:

[a] When a request comes in, look it up in the cache; if it hits, return the response from the cache and do not pass the request on to the system.

[b] If the request is not found in the cache, pass it on to the system.

[c] Since the cache can only store the last n requests, insert the (n+1)th request into the cache and delete one of the older requests.

[d] Design the cache such that all operations can be done in O(1): lookup, delete, and insert. One classic structure that achieves this is sketched below.
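One classic way to get O(1) lookup, insert, and delete is a hash map combined with a doubly linked list; in Java, LinkedHashMap in access order packages exactly that (a minimal sketch):

import java.util.LinkedHashMap;
import java.util.Map;

class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true);   // accessOrder = true: get() moves an entry to the tail
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry beyond n
    }

    public static void main(String[] args) {
        LruCache<Integer, String> c = new LruCache<>(2);
        c.put(1, "a");
        c.put(2, "b");
        c.get(1);                       // touch 1, so 2 becomes the eldest entry
        c.put(3, "c");                  // evicts 2
        System.out.println(c.keySet()); // [1, 3]
    }
}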


Read full article from [Design] Design Cache System (`) - Shuatiblog.com


An Introduction to the 500 Lines or Less Project - Headlines - 伯乐在线



An Introduction to the 500 Lines or Less Project - Headlines - 伯乐在线

All architecture students learn, during their training, how homes, apartment buildings, schools, and other kinds of buildings are designed. In the same way, every programmer should know how a compiler compiles instructions, how a spreadsheet updates cells, and how a browser renders a page. The goal of this book is to help readers understand, from a big-picture perspective, the ways of thinking behind program design.

The book does not focus on algorithmic details or the languages used; instead it concentrates on how decisions are made while developing a program and what trade-offs are struck in the software architecture, for example:

  • Why is the program designed as these modules, and why does it provide these interfaces?
  • Why use inheritance here, and composition there?
  • Why use multithreading here, but not there?
  • When should a program rely on plugins, and how should plugins be configured and loaded?

Guidelines

Writing should be fun, so we streamline the process as much as possible; here is the minimal set of writing steps.

  • When you start writing, submit a pull request as early as possible, so that we can discover problems we were not aware of early on.
  • After the first submission, you can keep committing as you wish.
  • When your first draft is finished, note it in a commit, or just email us, and we will assign one or two reviewers to your work.

Read full article from An Introduction to the 500 Lines or Less Project - Headlines - 伯乐在线


Tomcat Study: The Acceptor - 绝情谷 - CSDN.NET blog channel



Tomcat Study: The Acceptor - 绝情谷 - CSDN.NET blog channel

The Acceptor, as its name suggests, is the receiver: it accepts user requests. This article mainly analyses how the Acceptor starts up and handles requests.

First, look at the Acceptor's class diagram (shown in the original post).

As the diagram shows, Acceptor implements the Runnable interface, so it can be started as a thread, and every Acceptor is an inner class of an Endpoint. Acceptor has three implementations, JIo, Apr, and Nio; this article will not go into their differences. The commonly used one is JIoEndpoint.Acceptor, and unless otherwise noted, the examples below refer to the acceptor inside the JIoEndpoint class.


Read full article from Tomcat Study: The Acceptor - 绝情谷 - CSDN.NET blog channel


Apache Tomcat 7 Configuration Reference (7.0.62) - The HTTP Connector



Apache Tomcat 7 Configuration Reference (7.0.62) - The HTTP Connector

acceptCount

The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 100.

acceptorThreadCount

The number of threads to be used to accept connections. Increase this value on a multi CPU machine, although you would never really need more than 2. Also, with a lot of non keep alive connections, you might want to increase this value as well. Default value is 1.

acceptorThreadPriority

The priority of the acceptor threads. The threads used to accept new connections. The default value is 5 (the value of the java.lang.Thread.NORM_PRIORITY constant). See the JavaDoc for the java.lang.Thread class for more details on what this priority means.
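In server.xml these attributes sit on the Connector element, for example (values are illustrative, not recommendations):

<Connector port="8080" protocol="HTTP/1.1"
           acceptCount="200"
           acceptorThreadCount="2"
           acceptorThreadPriority="5"/>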


Read full article from Apache Tomcat 7 Configuration Reference (7.0.62) - The HTTP Connector


High Performance Browser Networking



High Performance Browser Networking

 

HTTP/2 enables a more efficient use of network resources and a reduced perception of latency by introducing header field compression and allowing multiple concurrent exchanges on the same connection… Specifically, it allows interleaving of request and response messages on the same connection and uses an efficient coding for HTTP header fields. It also allows prioritization of requests, letting more important requests complete more quickly, further improving performance.

The resulting protocol is more friendly to the network, because fewer TCP connections can be used in comparison to HTTP/1.x. This means less competition with other flows, and longer-lived connections, which in turn leads to better utilization of available network capacity. Finally, HTTP/2 also enables more efficient processing of messages through use of binary message framing.


Read full article from High Performance Browser Networking


java - Prioritize between http requests with tomcat - Stack Overflow



java - Prioritize between http requests with tomcat - Stack Overflow

Priority seems like a strange concern in a multi-threaded environment. I don't believe there is anything in the servlet spec that would do what you're looking for.

The thread priority seems like the only solution to me. Tomcat allows you to specify the priority of each thread in an executor's thread pool: http://tomcat.apache.org/tomcat-7.0-doc/config/executor.html

You would need to somehow route requests for each interface to the appropriate connector. That would need to be done outside of Tomcat, perhaps with Apache or nginx. If you have a load balancer, that may be an interesting place to do that sort of routing.

You could also give a larger thread pool to the higher priority interface instead of messing with thread priorities. But I'm not quite sure about what you are trying to prioritize.
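In server.xml that idea could look like the following sketch: two executors with different priorities (or sizes), each backing its own connector, with the routing decision made in front of Tomcat. Names and values are illustrative:

<Executor name="highPriorityPool" namePrefix="high-" maxThreads="200" threadPriority="7"/>
<Executor name="lowPriorityPool"  namePrefix="low-"  maxThreads="50"  threadPriority="3"/>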


Read full article from java - Prioritize between http requests with tomcat - Stack Overflow


Tsuna's blog: How long does it take to make a context switch?



Tsuna's blog: How long does it take to make a context switch?

First idea: with syscalls (fail)

My first idea was to make a cheap system call many times in a row, time how long it took, and compute the average time spent per syscall. The cheapest system call on Linux these days seems to be gettid. Turns out, this was a naive approach since system calls don't actually cause a full context switch anymore nowadays, the kernel can get away with a "mode switch" (go from user mode to kernel mode, then back to user mode). That's why when I ran my first test program, vmstat wouldn't show a noticeable increase in number of context switches. But this test is interesting too, although it's not what I wanted originally.

Source code: timesyscall.c. Results:
  • Intel 5150: 105ns/syscall
  • Intel E5440: 87ns/syscall
  • Intel E5520: 58ns/syscall
  • Intel X5550: 52ns/syscall
  • Intel L5630: 58ns/syscall
  • Intel E5-2620: 67ns/syscall
Now that's nice, more expensive CPUs perform noticeably better (note however the slight increase in cost on Sandy Bridge). But that's not really what we wanted to know. So to test the cost of a context switch, we need to force the kernel to de-schedule the current process and schedule another one instead. And to benchmark the CPU, we need to get the kernel to do nothing but this in a tight loop. How would you do this?

Second idea: with futex

The way I did it was to abuse futex (RTFM). futex is the low-level Linux-specific primitive used by most threading libraries to implement blocking operations such as waiting on contended mutexes, semaphores that have run out of permits, condition variables, and friends. If you would like to know more, go read Futexes Are Tricky by Ulrich Drepper. Anyways, with a futex, it's easy to suspend and resume processes. What my test does is fork off a child process, and the parent and the child take turns waiting on the futex. When the parent waits, the child wakes it up and goes on to wait on the futex, until the parent wakes it and goes on to wait again. Some kind of ping-pong: "I wake you up, you wake me up...".

Read full article from Tsuna's blog: How long does it take to make a context switch?


How to enable Reader Mode in Chrome for Windows - CNET



How to enable Reader Mode in Chrome for Windows - CNET

In October of last year I wrote about a way to view Web pages in Chrome for Android without any of the distracting ads or other page elements. This feature, called Reader Mode, presents only the elements within the body of the story, so you can stay focused on the text and pertinent images.

Now this feature can be enabled on the desktop version of Chrome for Windows as well, according to SlashGear, but it carries a new name: Distill mode. Here's how to use it on your desktop:


Read full article from How to enable Reader Mode in Chrome for Windows - CNET


The Architecture of Open Source Applications



The Architecture of Open Source Applications

Architects look at thousands of buildings during their training, and study critiques of those buildings written by masters. In contrast, most software developers only ever get to know a handful of large programs well—usually programs they wrote themselves—and never study the great programs of history. As a result, they repeat one another's mistakes rather than building on one another's successes.

Our goal is to change that. In these two books, the authors of four dozen open source applications explain how their software is structured, and why. What are each program's major components? How do they interact? And what did their builders learn during their development? In answering these questions, the contributors to these books provide unique insights into how they think.

If you are a junior developer, and want to learn how your more experienced colleagues think, these books are the place to start. If you are an intermediate or senior developer, and want to see how your peers have solved hard design problems, these books can help you too.


Read full article from The Architecture of Open Source Applications


Minimum Spanning Trees (Prim, Kruskal, Boruvka) | Algorithm KnapSack



Minimum Spanning Trees (Prim, Kruskal, Boruvka) | Algorithm KnapSack

Spanning Tree: a subgraph that is a tree and connects all the vertices together in an undirected connected graph.

Minimum Spanning Tree (MST): a spanning tree with weight less than or equal to the weight of every other spanning tree.

Minimum Spanning Forest (MSF): in an undirected (possibly disconnected) graph, a spanning forest that consists of a minimum spanning tree for each connected component.

A few well-known algorithms, with my Java implementations, for finding an MST or MSF are:

Prim's Algorithm (Java Implementation)

Kruskal's Algorithm (Java Implementation)

Boruvka's Algorithm (Java Implementation)
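As a taste of these, a compact Kruskal's algorithm with union-find in Java (edge-list input; this is a generic sketch, not the linked implementation):

import java.util.Arrays;

public class Kruskal {
    static int[] parent;

    static int find(int v) { return parent[v] == v ? v : (parent[v] = find(parent[v])); }

    // edges: {u, v, weight}; returns the total MST/MSF weight
    static int mst(int n, int[][] edges) {
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        Arrays.sort(edges, (a, b) -> a[2] - b[2]);   // ascending by weight
        int total = 0;
        for (int[] e : edges) {
            int ru = find(e[0]), rv = find(e[1]);
            if (ru != rv) {                          // skip edges that would form a cycle
                parent[ru] = rv;
                total += e[2];
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] edges = { {0,1,4}, {1,2,2}, {0,2,5}, {2,3,1} };
        System.out.println(mst(4, edges)); // 7 (edges 2-3, 1-2, 0-1)
    }
}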

Happy Weekend everyone, enjoy the outdoors and forget the throbbing pain from an unfinished and not yet understood algorithm..


Read full article from Minimum Spanning Trees (Prim, Kruskal, Boruvka) | Algorithm KnapSack


Algorithm of the Week: A Look At Coursera's Design and Analysis of Algorithms: Part I | Javalobby



Algorithm of the Week: A Look At Coursera's Design and Analysis of Algorithms: Part I | Javalobby

The video lectures (about 2 hours per week) were very good and easy to follow, and Professor Roughgarden is quite good at explaining the different concepts and algorithms. My only wish is that I had the option of reading the material (as presented in a text book) instead of watching it. The slides used in the videos are available for download, but they don't have enough information to be read on their own.

For me it is faster to read than to listen to someone explaining something, it is easier to skim things I already know, and it is easier to go back and forth in a text when something isn't clear. Also, I like the visual structure of the material on the pages – something I don't get from a video. But the main reasons is that it would be faster. Taking the course is quite a big time commitment, especially when you work full time and have a family, and if things can be sped up it's a big plus.

This is of course a matter of personal taste, and I suspect many people prefer having the material presented in a lecture format. Also, it is possible to speed up the videos (1.25 or 1.5 times the normal speed), which I did make use of sometimes. But for me it still does not beat reading from a book.

The topics I already knew quite well were merge sort, big-O notation, quicksort, and hash tables (in particular hash tables – I use them almost daily when coding). I thought all of these topics were well presented.

I had never heard about the Master Method before, but found both the formula (for finding the big-O performance of recursive algorithms) and the derivation of it quite interesting. I also liked the graph algorithms, in particular the algorithm for finding strongly connected components in a directed graph (the algorithm uses the neat trick of reversing all the edges as one of its steps). The lecture on heaps was interesting – I studied heaps in my class at university, but I had completely forgotten about them. So there it really was a case of  needing a refresher (and heap sort now makes total sense too).


Read full article from Algorithm of the Week: A Look At Coursera's Design and Analysis of Algorithms: Part I | Javalobby


Las Vegas algorithm - Wikipedia, the free encyclopedia



Las Vegas algorithm - Wikipedia, the free encyclopedia

In computing, a Las Vegas algorithm is a randomized algorithm that always gives correct results; that is, it always produces the correct result or it informs about the failure. In other words, a Las Vegas algorithm does not gamble with the correctness of the result; it gambles only with the resources used for the computation. A simple example is randomized quicksort, where the pivot is chosen randomly, but the result is always sorted. The usual definition of a Las Vegas algorithm includes the restriction that the expected run time always be finite, when the expectation is carried out over the space of random information, or entropy, used in the algorithm. An alternative definition requires that a Las Vegas algorithm always terminate (be effective), but it may output a symbol not part of the solution space to indicate failure in finding a solution.[1]
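The randomized-quicksort example in Java: the pivots (and hence the running time) are random, but the output is always correctly sorted. A minimal sketch:

import java.util.Random;

public class RandomizedQuicksort {
    static final Random RND = new Random();

    static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int p = a[lo + RND.nextInt(hi - lo + 1)];   // random pivot: the gamble is time, not correctness
        int i = lo, j = hi;
        while (i <= j) {                            // Hoare-style partition around p
            while (a[i] < p) i++;
            while (a[j] > p) j--;
            if (i <= j) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; j--; }
        }
        sort(a, lo, j);
        sort(a, i, hi);
    }

    public static void main(String[] args) {
        int[] a = { 5, 3, 8, 1, 9, 2 };
        sort(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 5, 8, 9]
    }
}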


Read full article from Las Vegas algorithm - Wikipedia, the free encyclopedia


Monte Carlo algorithm - Wikipedia, the free encyclopedia



Monte Carlo algorithm - Wikipedia, the free encyclopedia

The related class of Las Vegas algorithms are also randomized, but in a different way: they take an amount of time that varies randomly, but always produce the correct answer. A Monte Carlo algorithm can be converted into a Las Vegas algorithm whenever there exists a procedure to verify that the output produced by the algorithm is indeed correct. If so, then the resulting Las Vegas algorithm is merely to repeatedly run the Monte Carlo algorithm until one of the runs produces an output that can be verified to be correct.


Read full article from Monte Carlo algorithm - Wikipedia, the free encyclopedia


Karger Randomized Contraction algorithm for finding Minimum Cut in undirected Graphs | M+ Blog



Karger Randomized Contraction algorithm for finding Minimum Cut in undirected Graphs | M+ Blog

Karger's algorithm is a randomized algorithm to compute a minimum cut of a connected Graph. It was invented by David Karger and first published in 1993.

A cut is a set of edges that, if removed, would disconnect the Graph; a minimum cut is the smallest possible set of edges that, when removed, produce a disconnected Graph. Every minimum cut corresponds to a partitioning of the Graph vertices into two non-empty subsets, such that the edges in the cut only have their endpoints in the two different subsets.

Karger's algorithm builds a cut of the graph by randomly creating these partitions: at each iteration it chooses a random edge and contracts the graph around it, merging the edge's two endpoints into a single vertex and updating the remaining edges so that the self-loops this introduces (like the chosen edge itself) are removed from the new graph, while parallel edges are kept (if the algorithm chooses an edge (u,v), and both u and v have edges to a third vertex w, then the new graph will have two edges between the new vertex z and w). After n-2 iterations only two macro-vertices are left, and the parallel edges between them form the cut.

The algorithm is a Monte Carlo algorithm: its running time is deterministic, but it is not guaranteed that each run finds the best solution.

Actually, the probability of finding the minimum cut in one run of the algorithm is pretty low, with a lower bound of 1/(n^2), where n is the number of vertices in the graph. Nonetheless, by running the algorithm multiple times and keeping the best result found, the probability that none of the runs finds the minimum cut becomes very small: 1/e for n^2 runs, and 1/n for n^2 * log(n) runs; for large values of n, i.e. for large graphs, that is a negligible probability.

The implementation provided is written in Python, assumes the graph is represented as an adjacency list (a dictionary), and is restricted to integer vertex labels (ideally the numbers from 0 to n-1): this limitation allows it to exploit the union-find implementation provided, and can easily be overcome by mapping the original labels onto the range [0..n-1].
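Not the article's Python code, but a rough Java sketch of the same idea: one contraction run over an edge list (connected graph assumed), repeated many times while keeping the best cut found:

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class KargerMinCut {
    static final Random RND = new Random();

    static int find(int[] parent, int v) {
        return parent[v] == v ? v : (parent[v] = find(parent, parent[v]));
    }

    // One random contraction run; returns the size of the cut it produces
    static int contractOnce(int n, List<int[]> edges) {
        int[] parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        int vertices = n;
        List<int[]> pool = new ArrayList<>(edges);
        while (vertices > 2) {
            int[] e = pool.remove(RND.nextInt(pool.size()));
            int ru = find(parent, e[0]), rv = find(parent, e[1]);
            if (ru == rv) continue;                // self-loop after earlier merges; skip it
            parent[ru] = rv;                       // contract the chosen edge
            vertices--;
        }
        int cut = 0;                               // edges still crossing the two macro-vertices
        for (int[] e : edges) if (find(parent, e[0]) != find(parent, e[1])) cut++;
        return cut;
    }

    public static void main(String[] args) {
        List<int[]> edges = List.of(new int[]{0,1}, new int[]{0,2}, new int[]{1,2},
                                    new int[]{2,3}, new int[]{3,4}, new int[]{2,4});
        int n = 5, best = Integer.MAX_VALUE;
        for (int run = 0; run < n * n; run++)      // repeat to boost the success probability
            best = Math.min(best, contractOnce(n, edges));
        System.out.println("min cut = " + best);   // 2: the two edges joining the triangles
    }
}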


Read full article from Karger Randomized Contraction algorithm for finding Minimum Cut in undirected Graphs | M+ Blog


Maximum-flow: Ford-Fulkerson algorithm



Maximum-flow: Ford-Fulkerson algorithm

  • Initially set all weights in the flow graph to 0. Then:
  • 1. Compute the edge weights in the residual graph Gr from the current flow graph Gf :
    • For each edge (u,v,c) in G, and corresponding edge (u,v,f) in Gf , do the following:
      • Update forward edge (u,v,x) in Gr to be (u,v, c-f)
      • If f>0, create or update backward edge (v,u,x) in Gr to be (v,u, f)
    • Consider 0-weight edges in the resulting Gr as nonexistent.
  • 2. Find the s-t augmenting path P in Gr with largest bottleneck edge weight b. If there is no s-t path in Gr , Done: Gf shows the maximum flow.
  • 3. Update the flow graph Gf with the flow along the augmenting path P:
    • For each edge (u,v,x) in P, update edge weights in the flow graph Gf :
      • If (u,v) is a forward edge in Gr , update edge (u,v,f) in Gf to be (u,v, f+b).
      • If (u,v) is a backward edge in Gr , update edge (v,u,f) in Gf to be (v,u, f-b).
  • 4. Go to 1.
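For comparison, here is a compact max-flow implementation in Java. Note that it augments along BFS shortest paths (Edmonds-Karp) rather than the largest-bottleneck rule described above; the residual-graph updates of steps 1 and 3 are the same:

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class MaxFlow {
    // cap[u][v] holds residual capacity; BFS finds an s-t augmenting path each round
    static int maxFlow(int[][] cap, int s, int t) {
        int n = cap.length, flow = 0;
        while (true) {
            int[] parent = new int[n];
            Arrays.fill(parent, -1);
            parent[s] = s;
            Queue<Integer> q = new ArrayDeque<>();
            q.add(s);
            while (!q.isEmpty() && parent[t] == -1) {
                int u = q.poll();
                for (int v = 0; v < n; v++)
                    if (parent[v] == -1 && cap[u][v] > 0) { parent[v] = u; q.add(v); }
            }
            if (parent[t] == -1) return flow;          // no augmenting path left: done
            int b = Integer.MAX_VALUE;                 // bottleneck along the path
            for (int v = t; v != s; v = parent[v]) b = Math.min(b, cap[parent[v]][v]);
            for (int v = t; v != s; v = parent[v]) {   // push b; update forward and backward edges
                cap[parent[v]][v] -= b;
                cap[v][parent[v]] += b;
            }
            flow += b;
        }
    }

    public static void main(String[] args) {
        int[][] cap = {
            { 0, 3, 2, 0 },
            { 0, 0, 1, 2 },
            { 0, 0, 0, 3 },
            { 0, 0, 0, 0 },
        };
        System.out.println(maxFlow(cap, 0, 3)); // 5
    }
}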

Read full article from Maximum-flow: Ford-Fulkerson algorithm


Bipartite Graphs Grand Lecture: Completely Mastering Maximum Matching (Minimum Cover), Maximum Independent Set, Minimum Path Cover, and Maximum-Weight Matching - One thing I know, that is I know nothing. (Socrates) - ITeye



Bipartite Graphs Grand Lecture: Completely Mastering Maximum Matching (Minimum Cover), Maximum Independent Set, Minimum Path Cover, and Maximum-Weight Matching - One thing I know, that is I know nothing. (Socrates) - ITeye

Bipartite Graphs Grand Lecture: Completely Mastering Maximum Matching (Minimum Cover), Maximum Independent Set, Minimum Path Cover, and Maximum-Weight Matching

Outline of the text:

§1 Concepts and properties of vertex sets, edge sets, and bipartite graphs

§2 Solving maximum matching in bipartite graphs

The Hungarian algorithm and the Hopcroft-Karp algorithm

§3 Constructing the minimum vertex cover and the maximum independent set of a bipartite graph

§4 Solving minimum path cover in bipartite graphs

§5 Solving maximum-weight matching in bipartite graphs

The Kuhn-Munkres algorithm

§6 Summary

Each section explains four things in detail: the problem, the principle and analysis of the algorithm, the algorithm's flow, and its implementation, aiming to settle each problem completely.

§1 Concepts and properties of vertex sets, edge sets, and bipartite graphs

Vertex cover, minimum vertex cover

A vertex cover is a set of vertices such that every edge has at least one endpoint in the set; in other words, the "vertices" cover all the "edges". A minimal vertex cover is a vertex cover of which no proper subset is still a cover; a minimum vertex cover is the vertex cover with the fewest vertices. The vertex covering number is the number of vertices in a minimum vertex cover.

Edge cover, minimal edge cover

An edge cover is a set of edges such that every vertex is incident to some edge in the set; in other words, the "edges" cover all the "vertices". A minimal edge cover is an edge cover of which no proper subset is still a cover; a minimum edge cover is the edge cover with the fewest edges. The edge covering number is the number of edges in a minimum edge cover.

Independent set, maximal independent set

An independent set is a set of vertices no two of which are adjacent; equivalently, the subgraph it induces has no edges. A maximal independent set is an independent set to which no vertex can be added; a maximum independent set is the independent set with the most vertices. The independence number is the size of a maximum independent set.

A clique is a set of vertices every two of which are adjacent; equivalently, the subgraph it induces is complete. A maximal clique is a clique to which no vertex can be added; a maximum clique is the clique with the most vertices. The clique number is the size of a maximum clique.

Edge independent set, maximal edge independent set

An edge independent set is a set of edges no two of which share an endpoint. A maximal edge independent set is an edge independent set to which no edge can be added; a maximum edge independent set is the edge independent set with the most edges. The edge independence number is the size of a maximum edge independent set.

An edge independent set is also called a matching; correspondingly there are maximal matchings, maximum matchings, and the matching number.

Dominating set, minimal dominating set

A dominating set is a set of vertices such that every other vertex has at least one neighbour in the set; in other words, some of the "vertices" dominate all the "vertices". A minimal dominating set is a dominating set of which no proper subset is still dominating; a minimum dominating set is the dominating set with the fewest vertices. The domination number is the size of a minimum dominating set.

Edge dominating set, minimal edge dominating set

An edge dominating set is a set of edges such that every edge has at least one adjacent edge in the set; in other words, some of the "edges" dominate all the "edges". A minimal edge dominating set is an edge dominating set of which no proper subset is still dominating; a minimum edge dominating set is the edge dominating set with the fewest edges. The edge domination number is the size of a minimum edge dominating set.

Minimum path cover

A minimum path cover covers "vertices" with "paths": use as few disjoint simple paths as possible to cover all vertices of a directed acyclic graph G, so that each vertex belongs to exactly one path. A path may have length 0 (a single vertex).

Minimum path cover number = number of vertices of G - number of edges in the minimum path cover. We want the cover to contain as many edges as possible, while no two of its edges may share a vertex. Vertex splitting: split every vertex i into two vertices Xi and Yi; then, following the edges of the original graph, draw edges from the X part to the Y part, all directed from X to Y. The maximum matching of the resulting bipartite graph equals the number of edges in a minimum path cover of the original graph G; hence the answer follows from: minimum path cover number = number of vertices of G - maximum matching of the bipartite graph. (A sketch of the matching algorithm follows.)
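A sketch of the Hungarian (augmenting-path) maximum bipartite matching in Java, generic rather than from the article (adjacency lists go from the left part to the right part):

import java.util.ArrayList;
import java.util.List;

public class HungarianMatching {
    static List<List<Integer>> adj;   // adj.get(u) = right vertices adjacent to left vertex u
    static int[] matchRight;          // matchRight[v] = left vertex matched to right v, or -1
    static boolean[] visited;

    // Try to find an augmenting path starting from left vertex u
    static boolean augment(int u) {
        for (int v : adj.get(u)) {
            if (visited[v]) continue;
            visited[v] = true;
            // v is free, or its current partner can be re-matched elsewhere
            if (matchRight[v] == -1 || augment(matchRight[v])) {
                matchRight[v] = u;
                return true;
            }
        }
        return false;
    }

    static int maxMatching(int left, int right) {
        matchRight = new int[right];
        java.util.Arrays.fill(matchRight, -1);
        int matching = 0;
        for (int u = 0; u < left; u++) {
            visited = new boolean[right];
            if (augment(u)) matching++;
        }
        return matching;
    }

    public static void main(String[] args) {
        adj = new ArrayList<>();
        adj.add(List.of(0, 1));   // left 0 can pair with right 0 or 1
        adj.add(List.of(0));      // left 1 only with right 0
        adj.add(List.of(1, 2));   // left 2 with right 1 or 2
        System.out.println(maxMatching(3, 3)); // 3
    }
}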


Read full article from Bipartite Graphs Grand Lecture: Completely Mastering Maximum Matching (Minimum Cover), Maximum Independent Set, Minimum Path Cover, and Maximum-Weight Matching - One thing I know, that is I know nothing. (Socrates) - ITeye


Java Programming Examples on Graph Problems & Algorithms - Sanfoundry



Java Programming Examples on Graph Problems & Algorithms - Sanfoundry

This section covers Java Programming Examples on Graph Problems & Algorithms. Every example program includes the description of the program, Java code as well as output of the program. Here is the listing of Java programming examples:

1. Java Programming examples on "Connected Components"

Java Program to Solve any Linear Equation in One Variable
Java Program to Check whether Undirected Graph is Connected using DFS
Java Program to Check whether Directed Graph is Connected using DFS
Java Program to Check whether Undirected Graph is Connected using BFS
Java Program to Check whether Directed Graph is Connected using BFS
Java Program to Check whether Graph is Biconnected
Java Program to Find Strongly Connected Components in Graphs
Java Program to Traverse a Graph using BFS
Java Program to Traverse Graph using DFS
Java Program to Check the Connectivity of Graph Using BFS
Java Program to Check the Connectivity of Graph Using DFS
Java Program to Test Using DFS Whether a Directed Graph is Weakly Connected or Not
Java Program to Check Whether a Graph is Strongly Connected or Not
Java Program to Check if an UnDirected Graph is a Tree or Not Using DFS
Java Program to Check if a Directed Graph is a Tree or Not Using DFS
Java Program to Find the Connected Components of an UnDirected Graph
Java Program to Create a Minimal Set of All Edges Whose Addition will Convert it to a Strongly Connected DAG
Java Program to Implement Kosaraju Algorithm
Java Program to Implement Tarjan Algorithm
Java Program to Implement Gabow Algorithm

2. Java Programming examples on "Topological Sorting"

Java Program for Topological Sorting in Graphs
Java Program to Check Cycle in a Graph using Topological Sort
Java Program to Apply DFS to Perform the Topological Sorting of a Directed Acyclic Graph
Java Program to Check Whether Topological Sorting can be Performed in a Graph
Java Program to Create a Random Linear Extension for a DAG
Java Program to Generate All the Possible Linear Extensions of a DAG
Java Program to Remove the Edges in a Given Cyclic Graph such that its Linear Extension can be Found

Read full article from Java Programming Examples on Graph Problems & Algorithms - Sanfoundry


What is a good algorithm for getting the minimum vertex cover of a tree? - Stack Overflow



What is a good algorithm for getting the minimum vertex cover of a tree? - Stack Overflow

I hope you can find more answers related to your question here.


I was thinking about my solution; you will probably need to polish it, but since dynamic programming is one of your tags, here is the idea:

1. For each vertex u, define S+(u) as the size with vertex u selected, and S-(u) as the size without vertex u.
2. S+(u) = 1 + Sum(S-(v)) for each child v of u.
3. S-(u) = Sum(max{S-(v), S+(v)}) for each child v of u.
4. The answer is max(S+(r), S-(r)), where r is the root of your tree.


After reading this, I changed the above algorithm to find the maximum independent set, since the wiki article states:

A set is independent if and only if its complement is a vertex cover.

So by changing min to max we can find the maximum independent set, and by taking the complement the minimum vertex cover, since the two problems are equivalent. (The sketch below implements the independent-set version.)
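That recurrence translates to Java roughly as follows (adjacency-list tree rooted at vertex 0; the arrays mirror S+ and S-):

import java.util.ArrayList;
import java.util.List;

public class TreeMaxIndependentSet {
    static List<List<Integer>> adj;
    static int[] withU, withoutU;   // S+(u) and S-(u)

    static void dfs(int u, int parent) {
        withU[u] = 1;               // u itself is in the set
        withoutU[u] = 0;
        for (int v : adj.get(u)) {
            if (v == parent) continue;
            dfs(v, u);
            withU[u] += withoutU[v];                          // S+(u) = 1 + Sum S-(v)
            withoutU[u] += Math.max(withU[v], withoutU[v]);   // S-(u) = Sum max{S+(v), S-(v)}
        }
    }

    public static void main(String[] args) {
        int n = 5;                   // path 0-1-2-3-4: independent set 3, cover 5 - 3 = 2
        adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int i = 0; i + 1 < n; i++) { adj.get(i).add(i + 1); adj.get(i + 1).add(i); }
        withU = new int[n];
        withoutU = new int[n];
        dfs(0, -1);
        int mis = Math.max(withU[0], withoutU[0]);
        System.out.println("max independent set = " + mis + ", min vertex cover = " + (n - mis));
    }
}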


Read full article from What is a good algorithm for getting the minimum vertex cover of a tree? - Stack Overflow


Tristan's Collection of Interview Questions: Find the Minimum Vertex Cover for a Tree



Tristan's Collection of Interview Questions: Find the Minimum Vertex Cover for a Tree

Problem: Given a tree, find its minimum vertex cover. Wait, what is a vertex cover? Given an undirected graph G(V,E), a vertex cover is a subset of V such that for any edge e in E, at least one of e's two endpoints is in this subset (the vertex cover).

Solution: Minimum vertex cover for a general graph is an NP-hard problem. However, for a tree there is a linear solution. The idea is to do a DFS plus post-order traversal. When we encounter a leaf node and the edge connecting it to its parent, we know that to construct a vertex cover we must include at least one of the two nodes (the leaf or its parent). Here we can use a greedy approach: selecting the leaf gives us no extra benefit, while selecting the parent can, since the parent is also connected to other nodes; by selecting the parent we can "cover" some extra edges. With this strategy in mind, the algorithm is as follows:

• Do a DFS. When a DFS call on a child node returns, check whether the child and the parent are both unselected. If yes, select the parent node.
• After the DFS finishes (we have traversed the tree), the selected nodes form the minimum vertex cover. The cost is O(N).

Read full article from Tristan's Collection of Interview Questions: Find the Minimum Vertex Cover for a Tree


Contemplation of my learning: 2's complement - The way computer counts.



Contemplation of my learning: 2's complement - The way computer counts.

2's complement - The way computer counts.

Let's start the first post with counting. Computers understand binary language, which consists of only two symbols, basically represented as '0' and '1'. I assume you know how to represent decimal numbers in binary. I'm telling a fictional story here to help you understand how a computer counts.

In the binary world there are two persons, Jedi and Vader. Jedi stands for good and Vader for evil. Jedi rules over the '+' (positive) domain and Vader rules over the '-' (negative) domain. Now both have to count, but since they are opposites they start from exactly opposite ends (as they don't like each other). Jedi starts with all 'zero' bits while counting in his positive domain, and Vader starts with all 'one' bits in his domain.

They have something in common (both are powerful), and that's why the pattern in which they count is the same; they just use opposite symbols. What is '0' to Jedi is '1' to Vader, and vice versa. Two lists are given below (shown in the original post). The numbers in the lists are 4 bits wide (1 sign bit, 3 value bits) for the demo. Entries are written as "s vvv", where 's' is the sign bit and 'v' a data bit; the space between them is just for clarity. The left list is Jedi's and the right list is Vader's. While counting, the sign bit is always '0' for Jedi and '1' for Vader. The bit pattern is exactly the same for the two lists, except that every bit is flipped. The number in parentheses is the value of the corresponding bit pattern.
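Java ints are stored in two's complement, so you can reproduce both lists directly (a quick check; the 0xF mask keeps the low 4 bits, matching the 4-bit demo above):

public class TwosComplement {
    public static void main(String[] args) {
        // Print the 4-bit patterns for -3..3 (1 sign bit + 3 value bits)
        for (int i = -3; i <= 3; i++) {
            String bits = String.format("%4s", Integer.toBinaryString(i & 0xF)).replace(' ', '0');
            System.out.println(i + " -> " + bits);
        }
        // The defining identity of two's complement: -x == ~x + 1
        System.out.println(-5 == ~5 + 1); // true
    }
}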

Read full article from Contemplation of my learning: 2's complement - The way computer counts.


Check if a given graph is tree or not - GeeksQuiz



Check if a given graph is tree or not - GeeksQuiz

Check if a given graph is tree or not

Write a function that returns true if a given undirected graph is a tree, and false otherwise. (The original post illustrates this with two figures: one graph that is a tree and one that is not.)

An undirected graph is a tree if it has the following properties:
1) There is no cycle.
2) The graph is connected.

For an undirected graph we can use either BFS or DFS to check both of these properties.
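A BFS version in Java: a connected graph with no cycle is a tree, and the sketch below checks both properties in one pass (undirected adjacency list assumed):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Queue;

public class GraphIsTree {
    static boolean isTree(int n, List<List<Integer>> adj) {
        int[] parent = new int[n];
        Arrays.fill(parent, -2);                   // -2 = not yet visited
        parent[0] = -1;
        Queue<Integer> q = new ArrayDeque<>();
        q.add(0);
        int seen = 1;
        while (!q.isEmpty()) {
            int u = q.poll();
            for (int v : adj.get(u)) {
                if (v == parent[u]) continue;      // skip the edge we came from
                if (parent[v] != -2) return false; // reached a visited vertex: cycle
                parent[v] = u;
                seen++;
                q.add(v);
            }
        }
        return seen == n;                          // connected iff every vertex was reached
    }

    public static void main(String[] args) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 4; i++) adj.add(new ArrayList<>());
        int[][] edges = { {0,1}, {0,2}, {2,3} };
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        System.out.println(isTree(4, adj));        // true: connected, no cycle
    }
}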


Read full article from Check if a given graph is tree or not - GeeksQuiz


The Vertex Cover Problem - CodeProject



The Vertex Cover Problem - CodeProject

This project discusses different techniques and algorithms used to solve the parameterized Vertex Cover problem. A vertex cover of a graph G(V,E) is a subset of the vertices V such that for every edge (u, v) ∈ E, at least one of the vertices u or v is in the vertex cover. The best algorithm for this problem is known to run in O(1.2852^k + kn). The optimal solution is intractable, thus optimization strategies for solving the vertex cover problem are brought off the shelf, including pre-processing, kernelization, and branching methodologies. A performance bound is considered for the approximation algorithms listed in this research.

Background

The vertex cover problem is a Fixed Parameter Tractable (FPT) problem, where an input k, usually an integer in our case, can be used to bound the computational cost of a problem instance x. For example, if we have a table with n records and a query of size k, then finding objects in the table that suit the query can be done in time O(nk). When the parameter k is small, this solution can be feasible. Sometimes it is possible to write an O(n^2 k) algorithm for special tasks; this will also be feasible when the parameter is small. Moreover, a problem with a parameter k is called Fixed Parameter Tractable (FPT) if it can be solved or decided by an algorithm within a running time O(f(k) · poly(n)), for some function f. That was the basic approach of Robert Downey and Michael Fellows; they took into consideration the Vertex Cover problem as follows:


Read full article from The Vertex Cover Problem - CodeProject


Graph Coloring | Graph Coloring Algorithm | Math@TutorCircle.com



Graph Coloring | Graph Coloring Algorithm | Math@TutorCircle.com

Graph Coloring Algorithm

There are many heuristic sequential techniques for coloring a graph. Given below are different graph coloring algorithms.

Greedy Graph Coloring: This algorithm focuses on carefully picking the next vertex to be colored. Once a vertex is colored, its color never changes.
Vertices are considered in a specific order v1, v2, ..., vn, and vi is assigned the smallest available color not used by vi's neighbours.
If the vertices are ordered according to their degrees, the resulting greedy coloring uses at most max_i min{d(x_i) + 1, i} colors, at most one more than the graph's maximum degree.
This heuristic is sometimes called the Welsh-Powell algorithm.
A coloring F of the vertices v0, v1, ..., v(n-1) of the graph G is tight with respect to the given order if F(v_i) <= colors(i - 1) + 1 for all i = 0, 1, ..., n - 1.
This is the backtracking sequential coloring algorithm, which returns the exact value of χ(G), first developed by Brown.
First Fit Algorithm: This is the easiest and fastest of all the greedy coloring heuristics. It sequentially assigns each vertex the lowest legal color. This algorithm has the advantage of being very simple and fast, and it can be implemented to run in O(n).
This algorithm simply picks each vertex from an arbitrary order; a sketch of this first-fit strategy follows.
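A first-fit greedy coloring in Java over a fixed vertex order (adjacency-list input; a generic sketch):

import java.util.Arrays;
import java.util.List;

public class GreedyColoring {
    // Assigns each vertex the lowest color not used by its already-colored neighbours
    static int[] color(int n, List<List<Integer>> adj) {
        int[] color = new int[n];
        Arrays.fill(color, -1);
        for (int v = 0; v < n; v++) {
            boolean[] used = new boolean[n];     // colors taken by v's neighbours
            for (int u : adj.get(v)) if (color[u] != -1) used[color[u]] = true;
            int c = 0;
            while (used[c]) c++;                 // first fit: lowest legal color
            color[v] = c;
        }
        return color;
    }

    public static void main(String[] args) {
        // Triangle plus a pendant vertex: needs 3 colors
        List<List<Integer>> adj = List.of(
            List.of(1, 2), List.of(0, 2), List.of(0, 1, 3), List.of(2));
        System.out.println(Arrays.toString(color(4, adj))); // [0, 1, 2, 0]
    }
}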

Vertex Coloring

Vertex coloring is a way of coloring the vertices of a graph such that no two adjacent vertices share the same color.
Edge and face coloring can be transformed into the vertex version.

In vertex coloring, given a graph, identify how many colors are required to color its vertices in such a way that no two adjacent vertices receive the same color. The required number of colors is called the chromatic number of G and is denoted by χ(G). A vertex coloring assigns colors to the vertices of the graph so that no edge connects two identically colored vertices, and it tries to minimize the number of colors for the given graph.

A vertex coloring of a graph with k or fewer colors is known as a k-coloring. A graph having a k-coloring, i.e. χ(G) <= k, is said to be a k-colorable graph, while a graph with chromatic number χ(G) = k is called a k-chromatic graph.

Edge Coloring

An edge coloring of a graph G is a coloring of the edges of G such that adjacent edges receive different colors. An edge coloring containing the smallest possible number of colors for a given graph is known as a minimum edge coloring. The question is whether it is possible to color the edges of a given graph using at most k different colors, for a given value of k, or with the fewest possible colors. The minimum required number of colors for the edges of a given graph is called the chromatic index of the graph.

Face Coloring

Face coloring of a planar graph assigns a color to each face or region so that no two faces that share a boundary have the same color. Faces that meet only at a vertex are allowed to be colored the same color. The (face) chromatic number of a map is the smallest number of colors that can be used to color the map subject to our rule that faces with an edge in common get different colors.

Read full article from Graph Coloring | Graph Coloring Algorithm | Math@TutorCircle.com

