Module Installation
Installation Prerequisites
Installation requires the following files to be present:
- An archive kafkasm-installer-1.0.tar.gz containing the following kafkasm module component files:
  - installation script for OpenSearch
  - installation script for OpenSearch Dashboards
  - installation script for Search Anywhere Framework interface
  - installation script for Logstash
Installation Procedure
Extract the archive into a directory of your choice.
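For example, assuming the archive is in the current directory and /opt/kafkasm-installer is the chosen target directory (the path is illustrative):
mkdir -p /opt/kafkasm-installer
tar -xzf kafkasm-installer-1.0.tar.gz -C /opt/kafkasm-installer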
OpenSearch
To install the module component for the OpenSearch servers with the data role, launch the script in the data folder via the following terminal command:
sudo bash install.sh
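For example, if the archive was extracted to /opt/kafkasm-installer (the path is illustrative):
cd /opt/kafkasm-installer/data
sudo bash install.sh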
Before installation, set the following parameters:
- OpenSearch home directory - path to the OpenSearch root directory. Default value: /app/opensearch
- KafkaSM license key - special token key for module access. Mandatory; provided by the vendor. Without the token present, OpenSearch will not launch after the module installation is complete. No default value set
- All Kafka brokers - list of Apache Kafka brokers in ip:port format, separated with commas and without spaces. No default value set
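For example, the broker list might be entered as follows (addresses and ports are illustrative):
192.168.0.11:9092,192.168.0.12:9092,192.168.0.13:9092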
Every parameter must be confirmed by pressing y and Enter after input. Pressing n allows you to re-enter the parameter.
After installation, the token and broker parameters can be changed in the configuration file at {opensearch_root_directory}/config/nn2/kafka_plugin.yml.
During the installation, an additional confirmation will be necessary. To confirm, press y and then Enter.
OpenSearch Dashboards
To install the module component for opensearch-dashboards, launch the script in the web folder via the following terminal command:
bash install.sh
Before installation, set the following parameters:
- Opensearch-dashboards home directory - path to the opensearch-dashboards root directory. Default value: /app/opensearch-dashboards
During the installation, an additional confirmation will be necessary. To confirm, press y and then Enter.
Logstash
To install the module component for the Logstash servers, launch the script in the logstash/kraft or logstash/zookeeper folder (depending on the type of the Apache Kafka cluster used) via the following terminal command:
sudo bash install.sh
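For example, for a KRaft-based cluster, assuming the archive was extracted to /opt/kafkasm-installer (the path is illustrative):
cd /opt/kafkasm-installer/logstash/kraft
sudo bash install.sh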
Before installation, set the following parameters:
Output file parameters:
- Hosts - IP addresses of the OpenSearch servers in the https://[ip:port] format (see the example after this list). No default value set
- User - OpenSearch username for authorization. No default value set
- Password - OpenSearch password for authorization. It is possible to pull the value from the keystore (y to insert the keystore password, n for manual input). No default value set
- Set the desired SSL setting - y to turn it on, n to turn it off
- Set the CA certificate path - usually it can be found at /app/opensearch/config/ca-cert.pem. No default value set
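For example, the Hosts parameter might be entered as follows (the address and port are illustrative):
https://192.168.0.21:9200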
Every parameter must be confirmed by pressing y after input. Pressing n allows you to re-enter the parameter.
Cluster parameters:
- Logstash home directory - path to the Logstash root directory. Default value: /app/logstash
- Logstash pipelines configuration directory - path to the directory containing pipeline configuration files. Default value: /app/logstash/config/conf
- All Kafka brokers IP and aliases - list of Apache Kafka brokers in ip:alias format, separated with commas and without spaces (see the example after this list). No default value set
Only for the KRaft cluster:
- All Kafka controllers IP and aliases - list of Apache Kafka controllers in ip:alias format, separated with commas and without spaces. No default value set
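For example, the broker and controller lists might be entered as follows (addresses and aliases are illustrative):
192.168.0.31:kafka-broker-1,192.168.0.32:kafka-broker-2
192.168.0.41:kafka-controller-1,192.168.0.42:kafka-controller-2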
Every parameter must be confirmed by pressing y and Enter after input. Pressing n allows you to re-enter the parameter. After installation, the ip and alias parameters can be changed in the configuration files under {directory_containing_pipeline_configuration_files}/jmx_conf/.
Search Anywhere Framework Interface
To install the module's Search Anywhere Framework interface component for the server with the opensearch-dashboards installation, launch the script in the saf-interface folder via the following terminal command:
bash install.sh
Before installation, set the following parameters:
- User - username for authorization. No default value set
- Password - password for authorization. No default value set
Every parameter must be confirmed by pressing y and Enter after input. Pressing n allows you to re-enter the parameter.
After all parameters have been entered correctly and authorization succeeds, the script will begin the installation process. Otherwise, an authorization error will abort the installation and the script will need to be relaunched.
During the installation it will be necessary to input the type of the Apache Kafka cluster used: either kraft (for the newer KRaft clusters) or zookeeper (for older Apache Zookeeper-based clusters). The entry must be confirmed by pressing y and Enter after input. Pressing n allows you to re-enter the parameter.
Module Initialization for Apache Kafka Servers
Open the launch file for the Apache Kafka service with the following terminal command:
nano /etc/systemd/system/kafka.service
Add the following line into the file:
Environment=JMX_PORT=9989
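In a systemd unit file, Environment= directives belong in the [Service] section. A minimal sketch of the resulting fragment, assuming a typical unit layout (the ExecStart path is illustrative and depends on your Kafka installation):
[Service]
Environment=JMX_PORT=9989
ExecStart=/app/kafka/bin/kafka-server-start.sh /app/kafka/config/server.properties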
Refresh the service unit configuration and restart the service with the following commands:
systemctl daemon-reload
systemctl restart kafka
Use the following command to check the connection:
netstat -tunlp
The JMX port must be present in the displayed table.
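To narrow the output to the JMX port, the listing can be filtered, for example:
netstat -tunlp | grep 9989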