Run this on the host machine first (OpenSearch needs a higher vm.max_map_count than the default):
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
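The setting above only takes effect after a reload. A small sketch of making the step idempotent and applying it immediately; the persist_setting helper is purely illustrative, not part of OpenSearch:

```shell
# persist_setting FILE LINE: append LINE to FILE unless it is already there,
# so re-running the setup script does not duplicate the entry.
persist_setting() {
  grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

# On the real host (requires root):
# persist_setting /etc/sysctl.conf 'vm.max_map_count=262144'
# sysctl -w vm.max_map_count=262144   # apply to the running kernel, no reboot
```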
To disable SSL on the REST layer, add plugins.security.ssl.http.enabled=false to the environment key of each node in the compose file.
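For reference, a minimal sketch of how this can look in the compose file; the service name and image tag are illustrative, adjust to your setup:

```yaml
services:
  opensearch-node1:
    image: opensearchproject/opensearch:2.15.0   # illustrative tag
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-node1
      - plugins.security.ssl.http.enabled=false   # disable SSL on the REST layer
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_INITIAL_ADMIN_PASSWORD}
```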
Set the initial admin password before the first start (recent versions refuse to start without it and enforce a minimum password strength):
export OPENSEARCH_INITIAL_ADMIN_PASSWORD=mypwd_1
Run compose:
docker-compose up -d
If you don't have a certificate authority (CA) already, you can use the script here to generate all the necessary certs.
If you don't want a separate cert for each node, you can set
plugins.security.ssl.transport.enforce_hostname_verification: false
in the opensearch.yml config and use just the node1 certs from below.
If you already have a CA, create the new node and admin certs signed by it.
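A sketch of generating node and admin certs with openssl; file names and subjects are illustrative, and the self-signed root CA in the first step is only a stand-in for your existing CA key/cert:

```shell
# Stand-in root CA - skip this step if you already have root-ca.crt/root-ca.key
openssl req -x509 -newkey rsa:2048 -nodes -days 730 \
  -subj '/CN=root-ca/O=COMPANY' -keyout root-ca.key -out root-ca.crt

# Node key + CSR; the CN should match the node's DNS name
openssl req -newkey rsa:2048 -nodes \
  -subj '/CN=opensearch-node1/OU=IT/O=COMPANY' \
  -keyout opensearch-node1.key -out opensearch-node1.csr

# SAN extension - must match the name referenced in plugins.security.nodes_dn
echo 'subjectAltName=DNS:opensearch-node1' > node1.ext

# Sign the node cert with the CA
openssl x509 -req -in opensearch-node1.csr -CA root-ca.crt -CAkey root-ca.key \
  -CAcreateserial -extfile node1.ext -days 730 -out opensearch-node1.crt

# Admin cert (no SAN needed; its subject goes into plugins.security.authcz.admin_dn)
openssl req -newkey rsa:2048 -nodes \
  -subj '/CN=osadmin/OU=IT/O=COMPANY' -keyout admin.key -out admin.csr
openssl x509 -req -in admin.csr -CA root-ca.crt -CAkey root-ca.key \
  -CAcreateserial -days 730 -out admin.pem
```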
Node certs secure the transport-layer communication between the nodes. The admin.pem cert is used by the securityadmin.sh script, which applies the configuration from the config files into the OpenSearch security index.
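For reference, a typical securityadmin.sh invocation from inside a node container might look like the sketch below; it requires a running cluster, and the paths shown are the defaults in the official image, so verify them against your install:

```shell
docker exec -it opensearch-node1 \
  plugins/opensearch-security/tools/securityadmin.sh \
    -cd config/opensearch-security \
    -cacert config/root-ca.crt \
    -cert config/admin.pem \
    -key config/admin.key \
    -icl -nhnv
```

-icl ignores the cluster name and -nhnv skips hostname verification, which is convenient when the admin cert has no SAN for the node's hostname.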
When the cluster is running OK, you should see the following line in the logs:
opensearch-node2 | [2024-07-29T13:02:30,748][INFO ][o.o.c.r.a.AllocationService] [opensearch-node2] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.plugins-ml-config][0]]]).
Also check the API endpoint e.g.
root@debian12-12:[/opt/opensearch]: curl "https://localhost:9200/_cluster/health?pretty" -ku admin:admin
{
  "cluster_name" : "opensearch-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "discovered_master" : true,
  "discovered_cluster_manager" : true,
  "active_primary_shards" : 5,
  "active_shards" : 10,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
Example opensearch.yml
# Bind to all interfaces because we don't know what IP address Docker will assign to us.
network.host: 0.0.0.0
plugins.security.ssl.transport.pemcert_filepath: opensearch-node.crt
plugins.security.ssl.transport.pemkey_filepath: opensearch-node.key
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.crt
# use the same certificate on every node
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: opensearch-node.crt
plugins.security.ssl.http.pemkey_filepath: opensearch-node.key
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.crt
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
  - 'EMAILADDRESS=dev@example.com,CN=osadmin,OU=IT,O=COMPANY,L=ST,ST=SD,C=HR'
plugins.security.nodes_dn:
  - 'EMAILADDRESS=dev@example.com,CN=opensearch-node1,OU=IT,O=COMPANY,L=Split,ST=SD,C=HR'
plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
opendistro_security.audit.config.disabled_rest_categories: NONE
opendistro_security.audit.config.disabled_transport_categories: NONE
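The DN strings in admin_dn and nodes_dn must match the certificates exactly. Rather than typing them by hand, you can read them from the certs themselves; a sketch, where the throwaway cert only exists so the command is runnable end to end (point it at your real admin.pem):

```shell
# Throwaway cert standing in for a real admin.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/C=HR/ST=SD/L=ST/O=COMPANY/OU=IT/CN=osadmin' \
  -keyout demo-admin.key -out demo-admin.pem

# RFC 2253 output gives the comma-separated DN form the security plugin expects
openssl x509 -in demo-admin.pem -noout -subject -nameopt RFC2253
```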
The CN field in plugins.security.nodes_dn must match the DNS name used in the SAN extension file step, e.g.
echo 'subjectAltName=DNS:opensearch-node1' > node1.ext
in this example.
If the certs and DNs don't line up, the security configuration cannot be applied and the nodes may keep looping on messages like:
...
opensearch-node1 | [2024-07-29T12:23:32,528][INFO ][o.o.s.c.ConfigurationRepository] [opensearch-node1] Wait for cluster to be available ...
opensearch-node1 | [2024-07-29T12:23:33,533][INFO ][o.o.s.c.ConfigurationRepository] [opensearch-node1] Wait for cluster to be available ...
...
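To check what SAN a node cert actually carries, you can inspect it with openssl. A sketch; the self-signed cert generated here is only a stand-in so the commands are runnable, point the last command at your real node cert:

```shell
# Stand-in node cert with a SAN (-addext requires OpenSSL 1.1.1+)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=opensearch-node1' \
  -addext 'subjectAltName=DNS:opensearch-node1' \
  -keyout demo-node.key -out demo-node.crt

# Print just the SAN extension
openssl x509 -in demo-node.crt -noout -ext subjectAltName
```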
If running in Docker, make sure you have the line
network.host: 0.0.0.0
in opensearch.yml
If you are not accessing Dashboards over HTTPS, this option needs to be set:
opensearch_security.cookie.secure: false
Example opensearch_dashboards.yml with SSL disabled:
server.host: '0.0.0.0'
server.ssl.enabled: false
opensearch.hosts: ["https://localhost:9200"]
opensearch.username: "kibanaserver"
opensearch.password: "kibanaserver"
opensearch.ssl.verificationMode: none
opensearch.requestHeadersAllowlist: [ authorization,securitytenant ]
opensearch_security.multitenancy.enabled: true
opensearch_security.multitenancy.tenants.preferred: ["Private", "Global"]
opensearch_security.readonly_mode.roles: ["kibana_read_only"]
opensearch_security.cookie.secure: false
opensearch_security.auth.type: ["basicauth"]
# cosmetics
opensearchDashboards.branding:
  useExpandedHeader: false
If you get an authentication error from Dashboards, e.g.
{"statusCode":401,"error":"Unauthorized","message":"Authentication Exception"}
This might happen if you are proxying connections to OpenSearch Dashboards, for example through nginx with auth_basic authentication. The username/password set for auth_basic gets passed through to OpenSearch, and that user most likely does not exist in the internal user database.
There might be a solution for this here.
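One workaround worth trying (an assumption based on the behaviour described above, not a verified fix) is to stop nginx from forwarding the auth_basic credentials by clearing the Authorization header before proxying:

```nginx
location / {
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;   # illustrative path
    # Drop the Basic-auth header so it is not forwarded to Dashboards,
    # where that user does not exist in the internal database.
    proxy_set_header Authorization "";
    proxy_pass http://localhost:5601;            # illustrative upstream
}
```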