Tag Archives: Elasticsearch

[Solved] ElasticSearch Error: FORBIDDEN/12/index read-only/allow delete (api), read_only_allow_delete Setting on Windows

Elasticsearch reports the error FORBIDDEN/12/index read-only/allow delete (api).
This error means the indexes in Elasticsearch have been set to read-only and can no longer be written to or modified. According to the official documentation, the likely cause is that the disk Elasticsearch stores its data on is running low on space, so Elasticsearch automatically enables data protection and restricts the indexes to read-only.
To fix it, execute the following command. If curl is not installed on Windows, install curl first.

Note: when executing the following command on Windows, single quotation marks (') are not recognized by the shell, so replace them with double quotation marks and escape the inner double quotation marks.

On Windows, execute the following command:

curl -XPUT -H "Content-Type: application/json" http://127.0.0.1:9200/_all/_settings -d "{\"index.blocks.read_only_allow_delete\": null}"

On Linux, execute the following command:

curl -XPUT -H "Content-Type: application/json" http://127.0.0.1:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
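Before clearing the flag, it is worth confirming that disk space really is the cause; a quick sketch using the _cat allocation API (host and port assumed to be the defaults used above):

# Show how full each node's data disk is; a disk.percent above the
# flood-stage watermark (95% by default) is what triggers the read-only block
curl -s "http://127.0.0.1:9200/_cat/allocation?v"

If the disk really is nearly full, free up space first; otherwise Elasticsearch may simply re-apply the read-only block.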

[Solved] Docker Run ElasticSearch Error: docker: invalid reference format: repository name must be lowercase.

Problem encountered

When starting the Elasticsearch container with Docker, I entered:

docker run --name elasticsearch -p 9200:9200 -p 9300:9300\
-e "discovery.type=single-node"\
-e ES_JAVA_OPTS="-Xms64m -Xmx128m"\
-v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml\
-v /mydata/elasticsearch/data:/usr/share/elasticsearch/data\
-v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins\
-d elasticsearch:7.6.2

The following error was reported:

docker: invalid reference format: repository name must be lowercase.

Reason:

There is no space before each trailing \ in the shell command, so the line continuations run the arguments together and Docker receives a malformed, mixed-case image reference.

Solution:

Add a space before every \. After the fix, the command is as follows:

docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms64m -Xmx128m" \
-v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
-v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
-d elasticsearch:7.6.2

The container now runs successfully.
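To confirm the container really is up (a quick check, assuming the default port mapping above):

docker ps | grep elasticsearch   # the container should show as Up
docker logs elasticsearch        # look for a "started" message in the log
curl http://127.0.0.1:9200       # should return the cluster info JSON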

Elasticsearch Cluster + ELK Configuration: Error Summary and Solutions

When configuring ELK I keep running into the following errors, collected and sorted here (continuously updated):

1. Elasticsearch fails to start

# systemctl start elasticsearch
Job for elasticsearch.service failed because the control process exited with error code. See "systemctl status elasticsearch.service" and "journalctl -xe" for details.

# At this point, check the system log directly, since Elasticsearch has not yet produced its own log
tail -f /var/log/messages

The following errors are reported:

Dec 13 10:16:30 oldboy elasticsearch: ERROR: [1] bootstrap checks failed
Dec 13 10:16:30 oldboy elasticsearch: [1]: initial heap size [536870912] not equal to maximum heap size [775946240]; this can cause resize pauses and prevents mlockall from locking the entire heap

The message is actually explicit: the JVM's initial heap size (-Xms) does not match its maximum heap size (-Xmx), so we simply set them to the same value

# Modify the JVM heap size
# vim /etc/elasticsearch/jvm.options
-Xms1500m
-Xmx1500m
# I had previously shrunk the heap and left the two values mismatched; making -Xms and -Xmx equal again fixes the check

If you do not start via systemd, you can run bin/elasticsearch directly. A few points to note:

# 1. Elasticsearch refuses to run as root
useradd elk # create the user elk

# 2. Give the elk user ownership of the directories involved (see the sketch below)
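A minimal sketch of granting the permissions, assuming the package-style paths used elsewhere in this post; adjust to your installation layout:

chown -R elk:elk /etc/elasticsearch /var/lib/elasticsearch /var/log/elasticsearch
su - elk -c "/usr/share/elasticsearch/bin/elasticsearch"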

2. Kibana displays garbled Chinese characters

# First check the encoding of the log file to be collected
file file.txt # view it on Linux

# On Windows, open the log file in Notepad and choose Save As; if the encoding box shows ANSI, the file is GBK-encoded

# Configure the character set in filebeat

# vim /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - c:\work\CA*
  encoding: gbk   # declare the character encoding here; if the file is utf-8 this line is not needed

Generate some more test log entries and check in Kibana: the Chinese characters now display correctly, with no garbling.
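Alternatively, if you control how the log file is produced, you can convert it to UTF-8 up front instead of declaring the encoding in Filebeat (a sketch; the file names are illustrative):

iconv -f GBK -t UTF-8 CA_app.log > CA_app.utf8.log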

3. After enabling xpack on the ES cluster, password creation fails

[root@db01 elasticsearch]# bin/elasticsearch-setup-passwords interactive

Failed to determine the health of the cluster running at http://10.0.0.200:9200
Unexpected response code [503] from calling GET http://10.0.0.200:9200/_cluster/health?pretty
Cause: master_not_discovered_exception

It is recommended that you resolve the issues with your cluster before running elasticsearch-setup-passwords.
It is very likely that the password changes will fail when run against an unhealthy cluster.

Do you want to continue with the password setup process [y/N]y

Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


# The error: the cluster fails to form after enabling xpack, because of stale (dirty) data

# Last-resort fix (only for a brand-new cluster or a test environment - it destroys all data)

1. Stop the service
2. Delete the data directory
3. Configure only xpack.security.enabled: true on all three nodes, then start them (see the sketch below)
4. Set the passwords
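A sketch of steps 1-3 on each node, assuming the package paths from the configuration below; again, this wipes all cluster data:

systemctl stop elasticsearch        # step 1: stop the service
rm -rf /var/lib/elasticsearch/*     # step 2: delete the data directory (irreversible)
# step 3: make sure elasticsearch.yml contains xpack.security.enabled: true, then
systemctl start elasticsearch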

# Configuration file (identical on all three nodes apart from node.name and the IPs)
cluster.name: think
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 10.0.0.200,127.0.0.1
http.port: 9200
discovery.seed_hosts: ["10.0.0.200", "10.0.0.201"]
cluster.initial_master_nodes: ["10.0.0.200", "10.0.0.201","10.0.0.202"]
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.security.enabled: true


#Test results
[root@db01 elasticsearch]# bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]: 
Reenter password for [elastic]: 
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Enter password for [kibana]: 
Reenter password for [kibana]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Enter password for [remote_monitoring_user]: 
Reenter password for [remote_monitoring_user]: 
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

#success

 

4. The same situation as in section 3 occurred again at work the next day. The following solution was used:

# Go straight to CA certificate authentication and enable SSL on the transport layer

# Set the built-in user passwords
bin/elasticsearch-setup-passwords interactive # this step failed for me, but the passwords had already been created in section 3, so I skipped it

Add the following to elasticsearch.yml:
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate # certificate verification level
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12 # node certificate path
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

# Create the certificates
# Create the keystore file
# bin/elasticsearch-keystore create # skip this step if elasticsearch.keystore already exists in the config folder

# Generate the CA certificate; press Enter through the prompts
bin/elasticsearch-certutil ca (CA certificate: elastic-stack-ca.p12)

# Generate the certificate the nodes will use; press Enter through the prompts
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 (node certificate: elastic-certificates.p12)

# Create a directory for the certificate and move it under the config directory
mkdir -p /etc/elasticsearch/certs
mv elastic-certificates.p12 /etc/elasticsearch/certs
chmod 777 /etc/elasticsearch/certs # without access to the certificate the nodes cannot authenticate; 777 is overly permissive, so test what minimum works for your setup

# Restart each node
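Once every node is back up, a quick authenticated check (host from the configuration above; replace YOUR_PASSWORD with the elastic password you set):

curl -u elastic:YOUR_PASSWORD http://10.0.0.200:9200/_cluster/health?pretty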

Elasticsearch startup error: bootstrap checks failed

Modify the elasticsearch.yml configuration file to allow external network access:

vim config/elasticsearch.yml
# Add

network.host: 0.0.0.0

Startup then fails because the bootstrap checks do not pass:

[2018-05-18T17:44:59,658][INFO ][o.e.b.BootstrapChecks    ] [gFOuNlS] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

Edit /etc/security/limits.conf and append the following.

* soft nofile 65536
* hard nofile 65536

After modifying this file, log in again as the user for the change to take effect.

[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Edit /etc/sysctl.conf and append the following.

vm.max_map_count=655360

After saving, execute:

sysctl -p
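To verify both limits before restarting (the nofile value only shows up in a fresh login session):

sysctl vm.max_map_count # should now report 655360
ulimit -n               # should report 65536 in a new login session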

Restart Elasticsearch; it now starts successfully:

bin/elasticsearch


Expected <block end>, but found BlockMappingStart


Solution to the Elasticsearch error "Expected <block end>, but found BlockMappingStart".

Reference articles:

(1) Expected <block end>, but found BlockMappingStart: solution to the Elasticsearch error

(2) https://www.cnblogs.com/chenkeyu/p/6858342.html

Let me make a note of it: this exception comes from the YAML parser, and it almost always means elasticsearch.yml is malformed (typically wrong indentation or a missing space after a colon).
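As an illustration (the settings shown are just examples), an accidental leading space turns a top-level setting into what the parser reads as a nested mapping:

# Broken - the stray indent before http.port triggers the parse error
node.name: node-1
 http.port: 9200

# Fixed - top-level settings all start at column 0
node.name: node-1
http.port: 9200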

Possible reasons why bulk-importing data into Elasticsearch with curl returns a 400


curl -H 'Content-Type: application/x-ndjson'  -s -XPOST 10.0.3.73:9200/yyzj2019-04/trans/_bulk?pretty --data-binary @../test_6000.txt

The error reported:

{
  "error" : {
    "root_cause" : [ {
      "type" : "parse_exception",
      "reason" : "Failed to derive xcontent"
    } ],
    "type" : "parse_exception",
    "reason" : "Failed to derive xcontent"
  },
  "status" : 400
}

Possible causes:

1. Carefully check the IP, the port, and the file path after the @ in the command

2. Check the encoding of the TXT data file; use UTF-8

3. To be supplemented as I hit more of them; one more candidate is noted below
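With _bulk specifically, a 400 can also come from a malformed body: it must be newline-delimited JSON, with every action line and every document on its own line and a final newline at the end of the file. A minimal valid test file for the request above could look like this (the field names are illustrative):

{"index":{"_index":"yyzj2019-04","_type":"trans"}}
{"field1":"value1"}
{"index":{"_index":"yyzj2019-04","_type":"trans"}}
{"field1":"value2"}

Note that the file must end with a newline character, or Elasticsearch rejects the request.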

Elasticsearch error mapper_parsing_exception


Recently, when writing to ES through the Java API, the following error was reported:

{"took":150,"errors":true,"items":[{"index":{"_index":"test","_type":"type1","_id":"794719072","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"not_x_content_exception","reason":"Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"}}}}]}

Investigation showed that during indexing, the Java code mistakenly submitted the bare string "1" as a document body, i.e. the bulk data was in the following form:

{"index":{"_index":"test","_type":"type1"}}
"1"

This produces the error above: the document line is not a JSON object, so the mapper cannot parse it.
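The fix is to make sure every document line is a JSON object (the field name here is illustrative):

{"index":{"_index":"test","_type":"type1"}}
{"value":"1"}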

[Solved] Elasticsearch:exception [type=search_phase_execution_exception, reason=all shards failed]

 

Exception in thread "main" ElasticsearchStatusException[Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]]; nested: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=Fielddata is disabled on text fields by default. Set fielddata=true on [content_type] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.]]; nested: ElasticsearchException[Elasticsearch exception [type=illegal_argument_exception, reason=Fielddata is disabled on text fields by default. Set fielddata=true on [content_type] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.]];
	at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:177)
	at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1727)
	at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1704)
	at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1467)
	at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1424)
	at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1394)
	at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:930)
	at com.softsec.util.demoTime.main(demoTime.java:98)
	Suppressed: org.elasticsearch.client.ResponseException: method [POST], host [http://192.168.101.92:9200], URI [/news/_search?typed_keys=true&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&ignore_throttled=true&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrips=true], status line [HTTP/1.1 400 Bad Request]
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Fielddata is disabled on text fields by default. Set fielddata=true on [content_type] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"news","node":"8GuMfo5aRz2CCgl49bY0aQ","reason":{"type":"illegal_argument_exception","reason":"Fielddata is disabled on text fields by default. Set fielddata=true on [content_type] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."}}],"caused_by":{"type":"illegal_argument_exception","reason":"Fielddata is disabled on text fields by default. Set fielddata=true on [content_type] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.","caused_by":{"type":"illegal_argument_exception","reason":"Fielddata is disabled on text fields by default. Set fielddata=true on [content_type] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."}}},"status":400}
		at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:253)
		at org.elasticsearch.client.RestClient.performRequest(RestClient.java:231)
		at org.elasticsearch.client.RestClient.performRequest(RestClient.java:205)
		at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1454)
		... 4 more

The cause here is that the field my terms (group) aggregation runs on, content_type, is mapped as type text.

Cause analysis:

When using a term query, which is an exact match, the target field's mapping type in ES must be keyword, not text. For example, if your search condition is "name": "Cai Xukun", then the ES type of the name field must be keyword rather than text.

Likewise, only string fields of type keyword can be grouped and aggregated with AggregationBuilders.terms("agg name"). To group on a text field, target its keyword sub-field instead (see the sketch below).

How do we fix this in the Java code? Append ".keyword" to the field name.

Where the previous code raised the error, it becomes:
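A minimal sketch with the RestHighLevelClient (the aggregation name is illustrative; the index name news and the field content_type come from the error above, and client is assumed to be an existing RestHighLevelClient):

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
// Aggregate on the keyword sub-field instead of the raw text field
sourceBuilder.aggregation(
        AggregationBuilders.terms("content_type_agg")   // aggregation name is illustrative
                .field("content_type.keyword"));        // was "content_type", which failed
SearchRequest request = new SearchRequest("news");
request.source(sourceBuilder);
SearchResponse response = client.search(request, RequestOptions.DEFAULT);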