Adding the IK Analyzer to Elasticsearch for Better Chinese Search (Part 2)

Preface

Installing the IK analyzer already makes Chinese handling fairly friendly, but testing showed that some words were still not segmented correctly. Fortunately there is a fix, and along the way I'll also show how to hot-reload trending vocabulary!

Before the adjustment

Let's see what we get for this sentence: 首先呢, 既然要使用那個模塊, 就必須先確保你的 Nginx 有編譯該模塊

root@ghost-elastic01:~# curl 'http://localhost:9200/ikhell/_analyze?pretty=true' -H 'Content-Type: application/json' -d '{ "field": "content", "text":"首先呢, 既然要使用那個模塊, 就必須先確保你的 Nginx 有編譯該模塊"}'
{
  "tokens" : [
    {
      "token" : "首先",
      "start_offset" : 0,
      "end_offset" : 2,
      "type" : "CN_WORD",
      "position" : 0
    },
    {
      "token" : "呢",
      "start_offset" : 2,
      "end_offset" : 3,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "既然",
      "start_offset" : 5,
      "end_offset" : 7,
      "type" : "CN_WORD",
      "position" : 2
    },
    {
      "token" : "要使",
      "start_offset" : 7,
      "end_offset" : 9,
      "type" : "CN_WORD",
      "position" : 3
    },
    {
      "token" : "使用",
      "start_offset" : 8,
      "end_offset" : 10,
      "type" : "CN_WORD",
      "position" : 4
    },
    {
      "token" : "那",
      "start_offset" : 10,
      "end_offset" : 11,
      "type" : "CN_CHAR",
      "position" : 5
    },
    {
      "token" : "個",
      "start_offset" : 11,
      "end_offset" : 12,
      "type" : "CN_CHAR",
      "position" : 6
    },
    {
      "token" : "模",
      "start_offset" : 12,
      "end_offset" : 13,
      "type" : "CN_CHAR",
      "position" : 7
    },
    {
      "token" : "塊",
      "start_offset" : 13,
      "end_offset" : 14,
      "type" : "CN_CHAR",
      "position" : 8
    },
    {
      "token" : "就",
      "start_offset" : 16,
      "end_offset" : 17,
      "type" : "CN_CHAR",
      "position" : 9
    },
    {
      "token" : "必",
      "start_offset" : 17,
      "end_offset" : 18,
      "type" : "CN_CHAR",
      "position" : 10
    },
    {
      "token" : "須",
      "start_offset" : 18,
      "end_offset" : 19,
      "type" : "CN_CHAR",
      "position" : 11
    },
    {
      "token" : "先",
      "start_offset" : 19,
      "end_offset" : 20,
      "type" : "CN_CHAR",
      "position" : 12
    },
    {
      "token" : "確",
      "start_offset" : 20,
      "end_offset" : 21,
      "type" : "CN_CHAR",
      "position" : 13
    },
    {
      "token" : "保",
      "start_offset" : 21,
      "end_offset" : 22,
      "type" : "CN_CHAR",
      "position" : 14
    },
    {
      "token" : "你",
      "start_offset" : 22,
      "end_offset" : 23,
      "type" : "CN_CHAR",
      "position" : 15
    },
    {
      "token" : "的",
      "start_offset" : 23,
      "end_offset" : 24,
      "type" : "CN_CHAR",
      "position" : 16
    },
    {
      "token" : "nginx",
      "start_offset" : 25,
      "end_offset" : 30,
      "type" : "ENGLISH",
      "position" : 17
    },
    {
      "token" : "有",
      "start_offset" : 31,
      "end_offset" : 32,
      "type" : "CN_CHAR",
      "position" : 18
    },
    {
      "token" : "編",
      "start_offset" : 32,
      "end_offset" : 33,
      "type" : "CN_CHAR",
      "position" : 19
    },
    {
      "token" : "譯",
      "start_offset" : 33,
      "end_offset" : 34,
      "type" : "CN_CHAR",
      "position" : 20
    },
    {
      "token" : "該",
      "start_offset" : 34,
      "end_offset" : 35,
      "type" : "CN_CHAR",
      "position" : 21
    },
    {
      "token" : "模",
      "start_offset" : 35,
      "end_offset" : 36,
      "type" : "CN_CHAR",
      "position" : 22
    },
    {
      "token" : "塊",
      "start_offset" : 36,
      "end_offset" : 37,
      "type" : "CN_CHAR",
      "position" : 23
    }
  ]
}

"首先呢, 既然要使用那個模塊, 就必須先確保你的 Nginx 有編譯該模塊" is segmented into:
首先, 呢, 既然, 要使, 使用, 那, 個, 模, 塊, 就, 必, 須, 先, 確, 保, 你, 的, nginx, 有, 編, 譯, 該, 模, 塊

Hmm... far from ideal.


Adding a custom dictionary

Edit the configuration file

According to the IK analyzer author's documentation, we can register custom dictionary files in the configuration file /etc/elasticsearch/analysis-ik/IKAnalyzer.cfg.xml.

Fill in the path to our dictionary under the ext_dict entry.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
        <comment>IK Analyzer extension configuration</comment>
        <!-- Configure your own extension dictionary here -->
        <entry key="ext_dict">custom/custom.dic</entry>
        <!-- Configure your own extension stopword dictionary here -->
        <entry key="ext_stopwords"></entry>
        <!-- Configure a remote extension dictionary here -->
        <entry key="remote_ext_dict"></entry>
        <!-- Configure a remote extension stopword dictionary here -->
        <!-- <entry key="remote_ext_stopwords">words_location</entry> -->
</properties>

Add the dictionary file

cd /etc/elasticsearch/analysis-ik
mkdir custom
wget https://raw.githubusercontent.com/samejack/sc-dictionary/master/main.txt -O custom/custom.dic

Here I'm using a very complete dictionary of about a million entries compiled by a fellow netizen.
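
An IK extension dictionary is just a plain-text file, UTF-8 encoded, with one term per line. As a quick sanity check on a downloaded file, a small sketch like the following (the helper names are my own, not part of the plugin) can count and deduplicate entries before you point IK at it:

```python
def load_terms(path):
    """Read an IK-style dictionary file: one UTF-8 term per line, blanks ignored."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def dedupe_terms(terms):
    """Drop duplicate terms while preserving first-seen order."""
    seen = set()
    out = []
    for t in terms:
        if t not in seen:
            seen.add(t)
            out.append(t)
    return out

# Usage (assuming the file downloaded above):
#   terms = load_terms("custom/custom.dic")
#   print(len(terms), "terms,", len(dedupe_terms(terms)), "unique")
```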

Restart Elasticsearch

systemctl restart elasticsearch

Verify the result

With the custom dictionary in place, let's see how the segmentation differs.

root@ghost-elastic01:/etc/elasticsearch/analysis-ik# curl 'http://localhost:9200/ikhell/_analyze?pretty=true' -H 'Content-Type: application/json' -d '{ "field": "content", "text":"首先呢, 既然要使用那個模塊, 就必須先確保你的 Nginx 有編譯該模塊"}'
{
  "tokens" : [
    {
      "token" : "首先",
      "start_offset" : 0,
      "end_offset" : 2,
      "type" : "CN_WORD",
      "position" : 0
    },
    {
      "token" : "呢",
      "start_offset" : 2,
      "end_offset" : 3,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "既然",
      "start_offset" : 5,
      "end_offset" : 7,
      "type" : "CN_WORD",
      "position" : 2
    },
    {
      "token" : "要使",
      "start_offset" : 7,
      "end_offset" : 9,
      "type" : "CN_WORD",
      "position" : 3
    },
    {
      "token" : "使用",
      "start_offset" : 8,
      "end_offset" : 10,
      "type" : "CN_WORD",
      "position" : 4
    },
    {
      "token" : "那個",
      "start_offset" : 10,
      "end_offset" : 12,
      "type" : "CN_WORD",
      "position" : 5
    },
    {
      "token" : "模塊",
      "start_offset" : 12,
      "end_offset" : 14,
      "type" : "CN_WORD",
      "position" : 6
    },
    {
      "token" : "就必須",
      "start_offset" : 16,
      "end_offset" : 19,
      "type" : "CN_WORD",
      "position" : 7
    },
    {
      "token" : "必須先",
      "start_offset" : 17,
      "end_offset" : 20,
      "type" : "CN_WORD",
      "position" : 8
    },
    {
      "token" : "必須",
      "start_offset" : 17,
      "end_offset" : 19,
      "type" : "CN_WORD",
      "position" : 9
    },
    {
      "token" : "先",
      "start_offset" : 19,
      "end_offset" : 20,
      "type" : "CN_CHAR",
      "position" : 10
    },
    {
      "token" : "確保",
      "start_offset" : 20,
      "end_offset" : 22,
      "type" : "CN_WORD",
      "position" : 11
    },
    {
      "token" : "你的",
      "start_offset" : 22,
      "end_offset" : 24,
      "type" : "CN_WORD",
      "position" : 12
    },
    {
      "token" : "nginx",
      "start_offset" : 25,
      "end_offset" : 30,
      "type" : "ENGLISH",
      "position" : 13
    },
    {
      "token" : "有",
      "start_offset" : 31,
      "end_offset" : 32,
      "type" : "CN_CHAR",
      "position" : 14
    },
    {
      "token" : "編譯",
      "start_offset" : 32,
      "end_offset" : 34,
      "type" : "CN_WORD",
      "position" : 15
    },
    {
      "token" : "該",
      "start_offset" : 34,
      "end_offset" : 35,
      "type" : "CN_CHAR",
      "position" : 16
    },
    {
      "token" : "模塊",
      "start_offset" : 35,
      "end_offset" : 37,
      "type" : "CN_WORD",
      "position" : 17
    }
  ]
}

"首先呢, 既然要使用那個模塊, 就必須先確保你的 Nginx 有編譯該模塊" is now segmented into:
首先, 呢, 既然, 要使, 使用, 那個, 模塊, 就必須, 必須先, 必須, 先, 確保, 你的, nginx, 有, 編譯, 該, 模塊

Quite an improvement, isn't it?
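
Rather than eyeballing the JSON by hand, the before/after comparison can be scripted. A minimal sketch using only the standard library, assuming Elasticsearch on localhost:9200 and the ikhell index with a content field as above (the function names are my own):

```python
import json
import urllib.request

def extract_tokens(analyze_response):
    """Pull just the token strings out of an _analyze response body."""
    return [t["token"] for t in analyze_response["tokens"]]

def analyze(text, index="ikhell", field="content", host="http://localhost:9200"):
    """Call the Elasticsearch _analyze API and return the token strings."""
    body = json.dumps({"field": field, "text": text}).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/{index}/_analyze",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_tokens(json.load(resp))

# Usage (with Elasticsearch running):
#   analyze("首先呢, 既然要使用那個模塊, 就必須先確保你的 Nginx 有編譯該模塊")
```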


Hot reloading

Online vocabulary changes day by day, so our dictionary must support hot reloading; here I'll walk through it following the documentation.
According to the docs, you can point the configuration file at an external endpoint that serves the dictionary; the plugin then uses the Last-Modified and ETag response headers to decide whether to fetch the dictionary file again.

If the file has been edited, Last-Modified and ETag change, which signals that the file was modified, and the dictionary is re-fetched.
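
We'll serve the file through Nginx below, which emits Last-Modified and ETag for static files automatically. To make the contract concrete, here is a minimal sketch of an equivalent endpoint in pure Python (DIC_PATH and the class names are my own; I'm assuming the plugin probes these two headers with a HEAD request before re-downloading, so the handler answers HEAD as well as GET):

```python
import hashlib
import os
from email.utils import formatdate
from http.server import BaseHTTPRequestHandler, HTTPServer

DIC_PATH = "/etc/nginx/elasticsearch/dic"  # assumed path, matching the Nginx setup below

def etag_of(data: bytes) -> str:
    """A content hash works as an ETag: it changes exactly when the bytes change."""
    return hashlib.md5(data).hexdigest()

class DicHandler(BaseHTTPRequestHandler):
    def _send_headers(self, data, mtime):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Last-Modified", formatdate(mtime, usegmt=True))
        self.send_header("ETag", etag_of(data))
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()

    def do_HEAD(self):
        # Headers only: enough for the client to detect a change.
        with open(DIC_PATH, "rb") as f:
            data = f.read()
        self._send_headers(data, os.path.getmtime(DIC_PATH))

    def do_GET(self):
        with open(DIC_PATH, "rb") as f:
            data = f.read()
        self._send_headers(data, os.path.getmtime(DIC_PATH))
        self.wfile.write(data)

# To actually serve (requires root for port 80):
#   HTTPServer(("127.0.0.1", 80), DicHandler).serve_forever()
```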

Edit the configuration file

Again, edit the IKAnalyzer.cfg.xml configuration file; this time locate the remote_ext_dict entry.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
        <comment>IK Analyzer extension configuration</comment>
        <!-- Configure your own extension dictionary here -->
        <entry key="ext_dict">custom/custom.dic</entry>
        <!-- Configure your own extension stopword dictionary here -->
        <entry key="ext_stopwords"></entry>
        <!-- Configure a remote extension dictionary here -->
        <entry key="remote_ext_dict">http://127.0.0.1/es/dic</entry>
        <!-- Configure a remote extension stopword dictionary here -->
        <!-- <entry key="remote_ext_stopwords">words_location</entry> -->
</properties>

Because the plugin's configuration has changed, restart Elasticsearch once more: systemctl restart elasticsearch

Configure Nginx

Add the following location to the server block:

    location /es {
      alias /etc/nginx/elasticsearch;
    }

Reload the configuration: nginx -s reload

Set up the dictionary file

mkdir /etc/nginx/elasticsearch
touch /etc/nginx/elasticsearch/dic

At this point, let's first test whether we can fetch the dictionary:

root@ubuntu-87:~# curl http://127.0.0.1/es/dic


Since the dictionary is currently empty, an empty response means it works.

Verify hot reloading

Before updating, run segmentation once on: 傻眼貓咪氣pupu

root@ubuntu-87:~# curl 'http://localhost:9200/ikhell/_analyze?pretty=true' -H 'Content-Type: application/json' -d '{ "field": "content", "text":"傻眼貓咪氣pupu"}'
{
  "tokens" : [
    {
      "token" : "傻眼",
      "start_offset" : 0,
      "end_offset" : 2,
      "type" : "CN_WORD",
      "position" : 0
    },
    {
      "token" : "貓咪",
      "start_offset" : 2,
      "end_offset" : 4,
      "type" : "CN_WORD",
      "position" : 1
    },
    {
      "token" : "氣",
      "start_offset" : 4,
      "end_offset" : 5,
      "type" : "CN_CHAR",
      "position" : 2
    },
    {
      "token" : "pupu",
      "start_offset" : 5,
      "end_offset" : 9,
      "type" : "ENGLISH",
      "position" : 3
    }
  ]
}

We can see it is segmented into: 傻眼, 貓咪, 氣, pupu.

Then let's write in the new words:
echo -e "傻眼貓咪\n氣pupu" > /etc/nginx/elasticsearch/dic

Then test it again:

root@ubuntu-87:~# curl 'http://localhost:9200/ikhell/_analyze?pretty=true' -H 'Content-Type: application/json' -d '{ "field": "content", "text":"傻眼貓咪氣pupu"}'
{
  "tokens" : [
    {
      "token" : "傻眼貓咪",
      "start_offset" : 0,
      "end_offset" : 4,
      "type" : "CN_WORD",
      "position" : 0
    },
    {
      "token" : "傻眼",
      "start_offset" : 0,
      "end_offset" : 2,
      "type" : "CN_WORD",
      "position" : 1
    },
    {
      "token" : "貓咪",
      "start_offset" : 2,
      "end_offset" : 4,
      "type" : "CN_WORD",
      "position" : 2
    },
    {
      "token" : "氣pupu",
      "start_offset" : 4,
      "end_offset" : 9,
      "type" : "CN_WORD",
      "position" : 3
    },
    {
      "token" : "pupu",
      "start_offset" : 5,
      "end_offset" : 9,
      "type" : "ENGLISH",
      "position" : 4
    }
  ]
}

You can see our trending words all made it in! The segmentation result is: 傻眼貓咪, 傻眼, 貓咪, 氣pupu, pupu.
Testing shows the update is not applied instantly; it can take a few minutes, so give it a little time. XD


Closing

That wraps up the basic optimization of Chinese search. Following this thread, I'll keep writing about pairing a Ghost blog with Elasticsearch. See you next time!