on 2018 Jul 23 10:03 AM
Hi Experts, We currently use CJKTokenizer for Chinese text, but the results are not accurate. I understand that the Smart Chinese analyzer segments text into actual words rather than fixed-length character n-grams.
I couldn't find much documentation on adding the analyzer: where to place the jar for Solr, and what additional configuration is needed on embedded or standalone Solr systems. Any pointers will help.
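For reference, here is a sketch of the two pieces of configuration involved, based on the Solr Reference Guide. The Smart Chinese analyzer ships as a contrib jar (`lucene-analyzers-smartcn`), which must be loaded before the field type can use it. The field type name `text_zh` and the `${solr.install.dir}` path are illustrative assumptions; adjust them for your install and Solr version.

```xml
<!-- solrconfig.xml: load the smartcn contrib jar.
     The dir path is an assumption for a standard Solr install layout;
     alternatively, copy the jar into a lib/ directory under your core or SOLR_HOME. -->
<lib dir="${solr.install.dir}/contrib/analysis-extras/lucene-libs/"
     regex="lucene-analyzers-smartcn-.*\.jar"/>
```

```xml
<!-- managed-schema / schema.xml: a field type backed by the Smart Chinese
     HMM-based word segmenter instead of CJK bigrams. "text_zh" is a
     hypothetical name; pick whatever fits your schema conventions. -->
<fieldType name="text_zh" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.HMMChineseTokenizerFactory"/>
    <!-- normalize full-width forms and halfwidth katakana variants -->
    <filter class="solr.CJKWidthFilterFactory"/>
    <!-- stopword list bundled inside the smartcn jar -->
    <filter class="solr.StopFilterFactory"
            ignoreCase="true"
            words="org/apache/lucene/analysis/cn/smart/stopwords.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

After adding the field type, point your Chinese fields at it (e.g. `<field name="title_zh" type="text_zh" .../>`) and reindex; the Analysis screen in the Solr Admin UI is a quick way to verify that text is being segmented into words rather than per-character bigrams.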
Thanks, Vijay