Simple analyzer #
The simple analyzer is a very basic analyzer: it splits text into terms at any non-letter character and lowercases the resulting terms. Unlike the standard analyzer, the simple analyzer treats everything other than letters as a delimiter, which means numbers, punctuation, and special characters are never part of a token.
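To see this behavior directly, you can run the simple analyzer against a sample string with the `_analyze` API (the sample text here is illustrative, not from the examples below):

```json
POST /_analyze
{
  "analyzer": "simple",
  "text": "The 2 QUICK Brown-Foxes!"
}
```

This should produce the tokens `[the, quick, brown, foxes]`: the number `2` is dropped, `Brown-Foxes` splits at the hyphen, and everything is lowercased.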
Example #
The following command creates an index named my_simple_index that uses the simple analyzer:
PUT /my_simple_index
{
  "mappings": {
    "properties": {
      "my_field": {
        "type": "text",
        "analyzer": "simple"
      }
    }
  }
}
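Once the index exists, you can verify that the field is analyzed as expected by passing a field name instead of an analyzer name to the `_analyze` API (the sample text is illustrative):

```json
GET /my_simple_index/_analyze
{
  "field": "my_field",
  "text": "Version 2.0 released!"
}
```

Because `my_field` is mapped to the simple analyzer, this should return only the letter-based tokens `[version, released]`.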
Configuring a custom analyzer #
The following command configures an index with a custom analyzer that is equivalent to the simple analyzer with an html_strip character filter added. (Note that the lowercase tokenizer already emits lowercase tokens, so the additional lowercase token filter is redundant but harmless.)
PUT /my_custom_simple_index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "html_strip": {
          "type": "html_strip"
        }
      },
      "tokenizer": {
        "my_lowercase_tokenizer": {
          "type": "lowercase"
        }
      },
      "analyzer": {
        "my_custom_simple_analyzer": {
          "type": "custom",
          "char_filter": ["html_strip"],
          "tokenizer": "my_lowercase_tokenizer",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "my_field": {
        "type": "text",
        "analyzer": "my_custom_simple_analyzer"
      }
    }
  }
}
Generated tokens #
Use the following request to examine the tokens generated by the analyzer:
POST /my_custom_simple_index/_analyze
{
  "analyzer": "my_custom_simple_analyzer",
  "text": "<p>The slow turtle swims over to dogs © 2024!</p>"
}
The response contains the generated tokens. The html_strip character filter removes the `<p>` tags, and the copyright symbol and the number 2024 are discarded because the lowercase tokenizer splits on non-letter characters:
{
  "tokens": [
    { "token": "the", "start_offset": 3, "end_offset": 6, "type": "word", "position": 0 },
    { "token": "slow", "start_offset": 7, "end_offset": 11, "type": "word", "position": 1 },
    { "token": "turtle", "start_offset": 12, "end_offset": 18, "type": "word", "position": 2 },
    { "token": "swims", "start_offset": 19, "end_offset": 24, "type": "word", "position": 3 },
    { "token": "over", "start_offset": 25, "end_offset": 29, "type": "word", "position": 4 },
    { "token": "to", "start_offset": 30, "end_offset": 32, "type": "word", "position": 5 },
    { "token": "dogs", "start_offset": 33, "end_offset": 37, "type": "word", "position": 6 }
  ]
}