Lucene by Example: Specifying Analyzers on a per-field-basis and writing a custom Analyzer/Tokenizer
Lucene Dependencies
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-core</artifactId>
    <version>${lucene.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-analyzers-common</artifactId>
    <version>${lucene.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-queries</artifactId>
    <version>${lucene.version}</version>
</dependency>
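The ${lucene.version} property referenced above needs to be defined in the POM. A minimal sketch, assuming Lucene 4.8.0 (matching the Version.LUCENE_48 constant used later and the Luke build in Appendix A):

<properties>
    <lucene.version>4.8.0</lucene.version>
</properties>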
Writing a custom Analyzer and Tokenizer
The end result should allow us to create multiple tokens from an input string by splitting it on the character “e”, case-insensitively; in addition, the character “e” itself should not be part of the tokens created.
Two simple examples:
123e456e789 -> the tokens “123”, “456” and “789” should be extracted
123Eabcexyz -> the tokens “123”, “abc” and “xyz” should be extracted
Character-based Tokenizer
Creating a tokenizer for the scenario above is easy, as Lucene already provides the CharTokenizer class that our custom tokenizer may extend.
We just need to implement one method that receives the codepoint of the parsed character as a parameter and returns whether that character belongs to a token; in our case, every character except “e” does.
Older Lucene versions: the CharTokenizer API changed in Lucene 3.1; in older versions we implement isTokenChar(char c) instead.
import java.io.Reader;
import org.apache.lucene.analysis.util.CharTokenizer;
import org.apache.lucene.util.Version;

public class ECharacterTokenizer extends CharTokenizer {

    public ECharacterTokenizer(final Version matchVersion, final Reader input) {
        super(matchVersion, input);
    }

    @Override
    protected boolean isTokenChar(final int character) {
        // every character except “e” belongs to a token; lower-casing the
        // codepoint first makes “E” act as a delimiter too, as required by
        // the second example above
        return 'e' != Character.toLowerCase(character);
    }
}
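To see the tokenizer in action, here is a minimal test sketch (not part of the original article) that runs the first example string through the tokenizer and prints the extracted tokens using the CharTermAttribute API:

import java.io.StringReader;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class ECharacterTokenizerDemo {
    public static void main(final String[] args) throws Exception {
        ECharacterTokenizer tokenizer = new ECharacterTokenizer(
                Version.LUCENE_48, new StringReader("123e456e789"));
        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
        tokenizer.reset(); // mandatory before consuming the stream
        while (tokenizer.incrementToken()) {
            System.out.println(term.toString()); // prints 123, 456, 789
        }
        tokenizer.end();
        tokenizer.close();
    }
}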
Analyzer using the custom Tokenizer
Now that we’ve got a simple tokenizer, we’d like to add an analyzer that uses it and makes our analysis case-insensitive.
This is really easy, as Lucene already provides a LowerCaseFilter, and we may assemble our solution with the following few lines of code:
import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.util.Version;

public class ECharacterAnalyser extends Analyzer {

    private final Version version;

    public ECharacterAnalyser(final Version version) {
        this.version = version;
    }

    // no-arg constructor, just for Luke ;)
    public ECharacterAnalyser() {
        version = Version.LUCENE_48;
    }

    @Override
    protected TokenStreamComponents createComponents(final String field,
            final Reader reader) {
        Tokenizer tokenizer = new ECharacterTokenizer(version, reader);
        TokenStream filter = new LowerCaseFilter(version, tokenizer);
        return new TokenStreamComponents(tokenizer, filter);
    }
}
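As a quick check, a similar sketch (again not part of the original article) runs the second, mixed-case example string through the complete analyzer chain; the field name "specials" is arbitrary here, as our analyzer ignores it:

import java.io.StringReader;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class ECharacterAnalyserDemo {
    public static void main(final String[] args) throws Exception {
        ECharacterAnalyser analyzer = new ECharacterAnalyser(Version.LUCENE_48);
        TokenStream stream = analyzer.tokenStream("specials",
                new StringReader("123Eabcexyz"));
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
            System.out.println(term.toString()); // prints 123, abc, xyz
        }
        stream.end();
        stream.close();
        analyzer.close();
    }
}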
Specifying Analyzers for each Document Field
An analyzer is used both when input is stored in the index and when input is processed in a search query.
Lucene’s PerFieldAnalyzerWrapper allows us to specify an analyzer for each field name and a default analyzer as a fallback.
In the following example, we’re assigning two analyzers to the fields named “email” and “specials”, and the StandardAnalyzer is used as a default for every other field not specified in the mapping.
Map<String, Analyzer> analyzerPerField = new HashMap<String, Analyzer>();
analyzerPerField.put("email", new KeywordAnalyzer());
analyzerPerField.put("specials", new ECharacterAnalyser(version));
PerFieldAnalyzerWrapper analyzer = new PerFieldAnalyzerWrapper(
new StandardAnalyzer(version), analyzerPerField);
IndexWriterConfig config = new IndexWriterConfig(version, analyzer)
.setOpenMode(OpenMode.CREATE);
IndexWriter writer = new IndexWriter(index, config);
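To illustrate the effect, here is a hedged sketch (the field values are invented for this example, and the usual org.apache.lucene.document imports are assumed) that adds a document through this writer; the “email” field is analyzed by the KeywordAnalyzer, “specials” by our custom analyzer, and any other field by the StandardAnalyzer fallback:

Document doc = new Document();
doc.add(new TextField("email", "john.doe@example.org", Field.Store.YES)); // kept as a single token
doc.add(new TextField("specials", "123e456e789", Field.Store.YES));       // indexed as 123, 456, 789
doc.add(new TextField("body", "some other text", Field.Store.NO));        // StandardAnalyzer fallback
writer.addDocument(doc);
writer.close();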
Appendix A: Installing/Running Luke – The Lucene Index Toolbox
There is a project maintained by Dmitry Kan on GitHub with releases ready for download.
Appendix C: Running Luke with custom Analyzers
We need to add our analyzer to the class-path when running Luke. As the command-line option -jar makes Java ignore class-paths set with -cp, we need to skip this option and specify the main class to run, as in the following example:
java -cp "luke-with-deps-4.8.0.jar:/path/to/lucene-per-field-analyzer-tutorial/target/lucene-perfield-analyzer-tutorial-1.0.0.jar" org.getopt.luke.Luke
This allows us to enter the fully qualified name of our analyzer class in Luke’s analyzer tool and run an analysis.