Mapping an entity-
1. Basic mapping: The following are the most commonly used annotations for mapping an entity.
@Indexed-
First of all, we must declare a persistent class as indexable. This is done by annotating the class with @Indexed (all entities not annotated with @Indexed are ignored by the indexing process).
@Entity
@Indexed
public class Book {
    ...
}
You can optionally specify the index attribute of the @Indexed annotation to change the default name of the index.
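For example, a minimal sketch (the index name "BookIndex" is an arbitrary choice for illustration):

@Entity
@Indexed(index = "BookIndex")   // documents for Book are written to the "BookIndex" index instead of the default name
public class Book {
    ...
}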
@Field-
For each property (or attribute) of your entity, you can describe how it will be indexed. The default (no annotation present) means that the property is ignored by the indexing process. @Field declares a property as indexed and lets you configure several aspects of the indexing process by setting one or more of its attributes, such as name, store and analyze.
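As a rough illustration of what these attributes look like in practice (the field name "titleSort" and the chosen values are illustrative, not defaults):

@Field(name = "titleSort",      // index under a custom field name instead of the property name
       store = Store.YES,       // also store the value in the index so it can be retrieved/projected
       analyze = Analyze.NO)    // keep the value as a single, un-analyzed term
private String title;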
@NumericField-
There is a companion annotation to @Field called @NumericField that can be specified in the same scope as @Field or @DocumentId. It can be specified for Integer, Long, Float and Double properties. At index time the value is indexed using a trie structure. When a property is indexed as a numeric field, it enables efficient range queries and sorting, orders of magnitude faster than running the same query on standard @Field properties. The @NumericField annotation accepts an optional precisionStep parameter, shown in the example below.
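To illustrate the benefit, a range query over the numeric grade property mapped in the example below could be built with the Hibernate Search query DSL roughly as follows; the session handling and the 3.0f to 5.0f bounds are assumptions of this sketch:

// assumes an open Hibernate Session named "session"
FullTextSession fullTextSession = Search.getFullTextSession(session);
QueryBuilder qb = fullTextSession.getSearchFactory()
        .buildQueryBuilder().forEntity(Book.class).get();

// efficient range query on the @NumericField-indexed "grade" property
org.apache.lucene.search.Query luceneQuery = qb.range()
        .onField("grade")
        .from(3.0f).to(5.0f)
        .createQuery();

List<Book> results = fullTextSession.createFullTextQuery(luceneQuery, Book.class).list();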
@Id-
Finally, the id property of an entity is a special property used by Hibernate Search to ensure index uniqueness for a given entity. By design, an id has to be stored and must not be tokenized. To mark a property as the index id, use the @DocumentId annotation. If you are using JPA and have specified @Id, you can omit @DocumentId. The chosen entity id will also be used as the document id.
@Entity
@Indexed
public class Book {
    ...

    @Id
    @DocumentId
    public Long getId() { return id; }

    @Field(name = "Abstract", store = Store.YES)
    public String getSummary() { return summary; }

    @Lob
    @Field
    public String getText() { return text; }

    @Field
    @NumericField(precisionStep = 6)
    public float getGrade() { return grade; }
}
2. Mapping properties multiple times: Sometimes one has to map a property multiple times per index, with slightly different indexing strategies. For example, sorting a query by a field requires that field to be un-analyzed. If you want to search by the words in this property and still sort on it, it needs to be indexed twice: once analyzed and once un-analyzed. @Fields allows you to achieve this goal.
@Entity
@Indexed(index = "Book")
public class Book {

    @Fields({
        @Field,
        @Field(name = "summary_forSort", analyze = Analyze.NO, store = Store.YES)
    })
    public String getSummary() { return summary; }
    ...
}
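With this mapping, a query could search the analyzed summary field while sorting on its un-analyzed copy, roughly like the following sketch (the keyword "hibernate" and the fullTextSession are assumptions; the exact SortField constructor depends on the Lucene version in use):

QueryBuilder qb = fullTextSession.getSearchFactory()
        .buildQueryBuilder().forEntity(Book.class).get();

// full-text match against the analyzed "summary" field
org.apache.lucene.search.Query luceneQuery =
        qb.keyword().onField("summary").matching("hibernate").createQuery();

FullTextQuery fullTextQuery = fullTextSession.createFullTextQuery(luceneQuery, Book.class);

// sort on the un-analyzed, stored "summary_forSort" copy
fullTextQuery.setSort(new Sort(new SortField("summary_forSort", SortField.Type.STRING)));
List<Book> results = fullTextQuery.list();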
@Field supports two attributes that are useful when @Fields is used: analyzer and bridge, which let you define an analyzer or a field bridge per field rather than per property.
3. Embedded and associated objects:
Associated objects as well as embedded objects can be indexed as part of the root entity index. This is useful if you expect to search a given entity based on properties of associated objects.
@Entity
@Indexed
public class Place {
    @Id
    @GeneratedValue
    @DocumentId
    private Long id;

    @Field
    private String name;

    @OneToOne(cascade = { CascadeType.PERSIST, CascadeType.REMOVE })
    @IndexedEmbedded
    private Address address;
    ....
}

@Entity
public class Address {
    @Id
    @GeneratedValue
    private Long id;

    @Field
    private String street;

    @Field
    private String city;

    @ContainedIn
    @OneToMany(mappedBy = "address")
    private Set<Place> places;
    ...
}
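Once Address is embedded this way, queries against Place can target the nested fields through a dotted path; a short sketch (the city value "Atlanta" and the fullTextSession are assumptions):

QueryBuilder qb = fullTextSession.getSearchFactory()
        .buildQueryBuilder().forEntity(Place.class).get();

// fields of the embedded Address are indexed under the "address." prefix
org.apache.lucene.search.Query luceneQuery =
        qb.keyword().onField("address.city").matching("Atlanta").createQuery();

List<Place> results = fullTextSession.createFullTextQuery(luceneQuery, Place.class).list();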
4. Analysis:
Analysis is the process of converting text into single terms (words) and can be considered one of the key features of a full-text search engine. Lucene uses the concept of Analyzers to control this process. In the following section we cover the multiple ways Hibernate Search offers to configure analyzers.
Default analyzer and analyzer by class-
The default analyzer class used to index tokenized fields is configurable through the hibernate.search.analyzer property. The default value for this property is org.apache.lucene.analysis.standard.StandardAnalyzer.
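For example, assuming a hibernate.cfg.xml or persistence.xml based configuration, the default could be overridden like this (any Analyzer implementation on the classpath can be used as the value):

<!-- override the default analyzer used for tokenized fields -->
<property name="hibernate.search.analyzer"
          value="org.apache.lucene.analysis.standard.StandardAnalyzer"/>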
You can also define the analyzer class per entity, property and even per @Field (useful when multiple fields are indexed from a single property).
@Entity
@Indexed
@Analyzer(impl = EntityAnalyzer.class)
public class MyEntity {
    @Id
    @GeneratedValue
    @DocumentId
    private Integer id;

    @Field
    private String name;

    @Field
    @Analyzer(impl = PropertyAnalyzer.class)
    private String summary;

    @Field(analyzer = @Analyzer(impl = FieldAnalyzer.class))
    private String body;
    ...
}
In this example, EntityAnalyzer is used to index all tokenized properties (e.g. name), except summary and body, which are indexed with PropertyAnalyzer and FieldAnalyzer respectively.
Named analyzers-
Analyzers can become quite complex to deal with. For this reason, Hibernate Search introduces the notion of analyzer definitions. An analyzer definition can be reused by many @Analyzer declarations and is composed of a name, an optional list of char filters, a tokenizer, and an optional list of token filters.
This separation of tasks (a list of char filters, then a tokenizer, followed by a list of token filters) allows for easy reuse of each individual component and lets you build your customized analyzer in a very flexible way (just like Lego). Generally speaking, the char filters do some pre-processing on the character input, then the tokenizer starts the tokenizing process by turning the character input into tokens, which are then further processed by the token filters. Hibernate Search supports this infrastructure by utilizing the Solr analyzer framework.
@Entity
@Indexed
@AnalyzerDef(name = "customanalyzer",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        @TokenFilterDef(factory = SnowballPorterFilterFactory.class,
            params = { @Parameter(name = "language", value = "English") })
    })
public class Book {
    ....
}
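Defining the analyzer does not apply it by itself; a field would typically reference the definition by name, roughly as follows (the title property is an illustrative addition to the Book entity):

@Field
@Analyzer(definition = "customanalyzer")   // use the named analyzer definition for this field
private String title;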