Updating Search Anywhere Framework
This instruction describes the process of updating Search Anywhere Framework from version 4.3.* to 5.0.*.
Information
Conventions:
- SAF_INSTALLER - the directory where the Search Anywhere Framework version 5.0 installation package is unpacked.
- USER - a system user with administrator rights, usually admin.
- OS_HOME - the OpenSearch home directory, usually /app/opensearch/.
- OS_DATA - the directory where indexed data is stored, usually /app/data/.
- OSD_HOME - the OpenSearch Dashboards home directory, usually /app/opensearch-dashboards/.
- PATH_SSL - the location of the certificate, the admin private key, and the ca-cert, usually /app/opensearch/config/.
The first step in updating is to determine the currently installed version of Search Anywhere Framework. This can be done by viewing the module versions on the main page or by running the following command in the command line:
curl https://127.0.0.1:9200/_cat/plugins -k -u $USER
After entering this command, you will need to enter the password for the $USER account. It is recommended to use the admin user.
A detailed list of new features can be found in the article What's New in SAF 5.0.
Let's consider the update procedure for each component. The 5.0 installer needs to be unpacked into a directory, for example, /app/distr/.
Before starting work, it is strongly recommended to back up the main configuration files and Security settings.
After completing the update, be sure to update SAF Beat Manager. The REST API has been updated without backward compatibility, so in the menu item Main Menu - Settings - SAF Beat Management the agent list will be empty. Use the SAF Beat Manager update instructions.
Recommended Actions
It is recommended to create a directory, for example, /app/backup, where you should save the following (a copy-command sketch is shown after the security backup command below):
- The config directory, usually $OS_HOME/config or $OSD_HOME/config.
- The systemd files, usually /etc/systemd/system/opensearch.service, /etc/systemd/system/opensearch-dashboards.service, and /etc/systemd/system/sme-re.service.
- The file /etc/sysctl.d/00-opensearch.conf.
- A copy of the Security settings. This needs to be done once and requires the certificate and private key of the admin user. The command below will create a directory with the current date containing the OpenSearch security settings.
chmod +x $OS_HOME/plugins/opensearch-security/tools/securityadmin.sh
JAVA_HOME=$OS_HOME/jdk/ $OS_HOME/plugins/opensearch-security/tools/securityadmin.sh -backup /app/backup/security_$(date +%Y%m%d) \
-icl \
-nhnv \
-cacert $OS_HOME/config/ca-cert.pem \
-cert $OS_HOME/config/admin-cert.pem \
-key $OS_HOME/config/admin-key.pem
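A minimal sketch of the file backups listed above, assuming the default paths from the Conventions section; the target subdirectory names are only illustrative, so adjust all paths to your installation:
mkdir -p /app/backup
# Configuration directories
cp -r $OS_HOME/config /app/backup/opensearch-config
cp -r $OSD_HOME/config /app/backup/opensearch-dashboards-config
# systemd unit files
cp /etc/systemd/system/opensearch.service /etc/systemd/system/opensearch-dashboards.service /etc/systemd/system/sme-re.service /app/backup/
# Kernel settings file
cp /etc/sysctl.d/00-opensearch.conf /app/backup/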
Disabling the Inventory Processor
If the Inventory module is not installed, proceed to the next step.
Starting from version 5.0, the functionality of Inventory Processor has been incorporated into the Inventory module. It is recommended to back up the Inventory Processor module and disable it in crontab.
The Inventory Processor must be disabled before performing the main update. These actions need to be performed only once.
Typically, Inventory Processor runs as a single instance on the first Smart Monitor Data Storage node with long-term data storage (routing mode: cold) via crond according to a schedule. You can view the list of crond jobs using the following command:
crontab -l
Comment out the execution of Inventory Processor and save the changes in crond.
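A hypothetical example of commenting out the job; the script path and schedule shown here are placeholders, and the actual line in your crontab will differ:
# Edit the crontab of the user that runs Inventory Processor
crontab -e
# Prefix the Inventory Processor line with '#', for example:
#   before:  */30 * * * * /app/inventory_processor/run.sh
#   after:   # */30 * * * * /app/inventory_processor/run.sh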
Updating OpenSearch
The Search Anywhere Framework 5.0 installer needs to be unpacked into a directory, for example, /app/distr/. The location where you unpack the archive contents will be referred to as $SAF_INSTALLER.
SAF_INSTALLER=/app/distr/saf_5.0
For clusters consisting of multiple nodes, it is recommended to disable allocation before upgrading. This can be done through the developer console (Main Menu - Settings - Dev Console) by executing the following command:
PUT _cluster/settings
{
"persistent": {
"cluster.routing.allocation.enable": "none"
}
}
The same can be done from the terminal with the following command:
curl -XPUT -k -u admin "https://127.0.0.1:9200/_cluster/settings?pretty" -H "Content-Type: application/json" -d '{"persistent":{"cluster.routing.allocation.enable": "none"}}'
When upgrading cluster nodes, do not use the update script to disable allocation. After upgrading all cluster nodes, enable allocation:
PUT _cluster/settings
{
"persistent": {
"cluster.routing.allocation.enable": "all"
}
}
The same can be done from the terminal with the following command:
curl -XPUT -k -u admin "https://127.0.0.1:9200/_cluster/settings?pretty" -H "Content-Type: application/json" -d '{"persistent":{"cluster.routing.allocation.enable": "all"}}'
Automatic Mode
The script requires the following pre-installed packages (an installation sketch follows the list):
- curl
- zip
- unzip
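A sketch of installing the prerequisites; the package manager depends on your distribution (dnf shown here, apt on Debian-based systems):
sudo dnf install -y curl zip unzip
# or, on Debian/Ubuntu:
# sudo apt-get install -y curl zip unzip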
If you do not see the message indicating that Search Anywhere Framework has been updated at the end, do not rerun the update script. Take a screenshot of where the script stopped and contact technical support.
The automatic update script is located at $SAF_INSTALLER/opensearch/update.sh. You can specify a configuration file, $SAF_INSTALLER/opensearch/example_config_opensearch.yaml, when calling the script. The file format is YAML and is similar to the configuration file used during installation.
The update script supports the following launch parameters:
- -c, --config <path_to_config_file_yaml> - specifies the configuration file for the update
- -h, --help - displays help information about available commands
Start the upgrade with nodes that do not have the master role. Data nodes can connect to older versions of master nodes, but not vice versa.
To start the update, run the script:
$SAF_INSTALLER/opensearch/update.sh
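If you prepared a configuration file, it can be passed with the -c option; for example, using the example file shipped with the installer (copy and edit it for your environment first):
$SAF_INSTALLER/opensearch/update.sh -c $SAF_INSTALLER/opensearch/example_config_opensearch.yaml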
After launching, the script automatically finds the paths to the main directories:
- OpenSearch Home Directory - the OpenSearch installation directory, usually /app/opensearch
- OpenSearch Conf Directory - the OpenSearch configuration files directory, usually /app/opensearch/config/
- OpenSearch Data Directory - the data directory, usually /app/data/
- OpenSearch Logs Directory - the logs directory, usually /app/logs/
The update script does not perform any actions with the data and logs directories. The configuration files directory and systemd files will be saved to a temporary directory, $SAF_INSTALLER/opensearch/staging/.
If you run the script again, the staging directory will be cleared.
================================================================================
SEARCH ANYWHERE FRAMEWORK UPDATE SCRIPT - OPENSEARCH
================================================================================
Current working directory: /app/distr/saf_5.0/opensearch
Current name of install's archive: opensearch-2.18.0-linux-x64.tar.gz
New version OpenSearch: 2.18.0
================================================================================
-- STEP 1. INSTALLATION DIRECTORIES
opensearch.service file found. Will get necessary paths from there
Final Opensearch home directory: /app/opensearch
Final Opensearch conf directory: /app/opensearch/config
Final Opensearch data directory: /app/data/opensearch
Final Opensearch logs directory: /app/logs/opensearch
Is this correct? [y/n]:
After the directories are displayed, confirm the automatically detected paths by pressing y, or enter your own directories manually by pressing n.
In the second step, you need to answer the question about allocation. If you enter y, the script will disable allocation before the update and enable it again at the end of the script.
-- STEP 2. CONFIGURE ALLOCATION
Do you want to disable allocation during update? [y/N]: n
You don't want to disable allocation: n
Is this correct? [y/n]:
In the third step, you will need to enter the password for the admin user. The password will not be displayed while typing.
-- STEP 3. GET ADMIN PASSWORD
Enter password for user "admin":
If you enter an incorrect password, the allocation will not be disabled even if selected in the previous step, and information about the current node will not be displayed. However, the update will not be interrupted.
Then, preparatory actions will be performed before the update. Before applying the update, you will be asked to continue. No changes are made to the system until this point. Some information about the current node and the cluster as a whole will also be displayed.
... (Example output omitted for brevity) ...
!!! AT THIS POINT WE START TO MAKE CHANGES IN OPERATING SYSTEM !!!
Do you want to continue? [y/N]:
Pressing Enter will interrupt the update; press y to continue.
Upon successful completion of the update, you should see the message SEARCH ANYWHERE FRAMEWORK SUCCESSFULLY UPDATED!. Information about the cluster and the current node will be displayed beforehand.
... (Example output omitted for brevity) ...
================================================================================
-- SEARCH ANYWHERE FRAMEWORK SUCCESSFULLY UPDATED!
================================================================================
If for some reason the update script was unable to update some plugins, it will display additional information about these plugins at the end (the text The following plugins cannot be installed).
Note that the update script takes into account the current list of installed plugins on OpenSearch nodes. If you need to install any additional plugins, you should do this manually at the end of the node update.
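A hedged sketch of installing an additional plugin manually with the standard opensearch-plugin tool; the archive name extra-plugin.zip is only an illustration:
# Run on the updated node as the opensearch user, then restart the service
sudo -u opensearch $OS_HOME/bin/opensearch-plugin install file:///app/distr/extra-plugin.zip
sudo systemctl restart opensearch.service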
Updating OpenSearch Dashboards
This script automates the OpenSearch Dashboards update process. It requires the following pre-installed packages:
- curl
- zip
- unzip
The automatic update script is located at $SAF_INSTALLER/opensearch-dashboards/update.sh. You can specify a configuration file using $SAF_INSTALLER/opensearch-dashboards/example_config_dashboards.yaml. The file format is YAML and is identical to the installation configuration file.
The update script supports the following parameters:
- -c, --config <path_to_config_file_yaml> - specifies the configuration file for the update
- -h, --help - displays help information about available commands
The update script does not modify the data and logs directories. During execution, it backs up the systemd service file, opensearch-dashboards.yml, and the configuration directory to a temporary directory, $SAF_INSTALLER/opensearch-dashboards/staging/.
Running the script again will clear the staging directory.
To update, run the script:
$SAF_INSTALLER/opensearch-dashboards/update.sh
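As with the OpenSearch update, a prepared configuration file can be passed with the -c option, for example:
$SAF_INSTALLER/opensearch-dashboards/update.sh -c $SAF_INSTALLER/opensearch-dashboards/example_config_dashboards.yaml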
The script automatically detects the main paths on the current server for the following directories:
- OpenSearch Dashboards Home Directory - the OpenSearch Dashboards installation directory, typically /app/opensearch-dashboards
- OpenSearch Dashboards Conf Directory - the OpenSearch Dashboards configuration directory, typically /app/opensearch-dashboards/config/
- OpenSearch Dashboards Data Directory - the data directory, typically /app/data/
- OpenSearch Dashboards Logs Directory - the logs directory, typically /app/logs/
Example output:
================================================================================
SEARCH ANYWHERE FRAMEWORK INSTALL SCRIPT - OPENSEARCH DASHBOARDS
================================================================================
Current working directory: /opt/saf_5.0/opensearch-dashboards
Current name of install's archive: opensearch-dashboards-2.18.0-linux-x64.tar.gz
Current version of OpenSearch-Dashboards: 2.18.0
================================================================================
-- STEP 1. INSTALLATION DIRECTORIES
opensearch-dashboards.service file found. Will get necessary paths from there
Final Opensearch Dashboards home directory: /app/opensearch-dashboards
Final Opensearch Dashboards conf directory: /app/opensearch-dashboards/config
Final Opensearch Dashboards data directory: /app/data/opensearch-dashboards
Final Opensearch Dashboards logs directory: /app/logs/opensearch-dashboards
Is this correct? [y/n]:
After the directories are displayed, confirm the information by pressing y, or enter your directories manually by pressing n.
The script then performs preparatory actions for the update. Before applying the update, it prompts for confirmation. Up to this point, no actions affecting the system's operability are performed. Information about the current node and the cluster will also be displayed.
Current list of plugins:
-- smartMonitor
-- smartMonitorColumnChart
-- smartMonitorCyberSecurity
-- smartMonitorDrawio
-- smartMonitorHeatmapChart
-- smartMonitorHtmlChart
-- smartMonitorIncidentManager
-- smartMonitorInventory
-- smartMonitorKnowledgeCenter
-- smartMonitorLineChart
-- smartMonitorLookupManager
-- smartMonitorMitreAttack
-- smartMonitorPDFExport
-- smartMonitorPieChart
-- smartMonitorSingleValue
-- smartMonitorTable
-- smartMonitorUserBehaviorAnalytics
Current version of OpenSearch-Dashboards: 2.18.0
!!! AT THIS POINT WE START TO MAKE CHANGES IN OPERATING SYSTEM !!!
Do you want to continue? [y/N]:
Upon successful completion of the update script, the following message is displayed: SEARCH ANYWHERE FRAMEWORK DASHBOARDS SUCCESSFULLY UPDATED.
Migration of Inventory Module Configurations
If the Inventory module is not installed, proceed to the next step.
In version 5.0, the Inventory Processor module was integrated into the Inventory module. To complete this process, open the Developer Console (Main Menu → System Settings → Developer Console) and execute the following command:
POST _reindex
{
"source": {
"index": ".sm_inv_config"
},
"dest": {
"index": ".sm_inv_configs"
},
"script": {
"source": """
Map field(def f, def b) {
return [
"name": f["name"],
"display_name": f["display_name"],
"weight": f["weight"] != null ? f["weight"] : 1,
"base": b
];
}
List fields(def b, def a) {
def fl = [];
b.forEach(f -> fl.add(field(f, true)));
a.forEach(f -> fl.add(field(f, false)));
return fl;
}
Map meta(def id) {
return ["id": id];
}
Map mapping_rule(def d, def s) {
return ["dest_field": d, "source_field": s];
}
Map period(def f, def i) {
return i != null ? ["field": f, "interval": i] : ["field": f];
}
Map source(def s) {
def mapping_rules = [];
for (entry in s["mapping_rules"].entrySet()) {
mapping_rules.add(mapping_rule(entry.getKey(), entry.getValue()));
}
return [
"id": s["source"],
"name": s["source"],
"index": s["index"],
"mapping_rules": mapping_rules,
"period": period("@timestamp", s["time_window"]),
"not_mapping_agg_fields": false
];
}
List sources(def is) {
def sl = [];
is.forEach(s -> sl.add(source(s)));
return sl;
}
List aggs_keys(def ak) {
def sl = [];
ak.forEach(s -> {
sl.add([
"sources": s["sources"],
"fields": s["keys"]
]);
});
return sl;
}
Map schedule_params() {
return [
"with_index": true,
"case_insensitive": false,
"join_with_null_value": false,
"fast_only": false,
"bulk_changes": true
]
}
Map schedule() {
return [
"cron": [
"expression": "* * * * *",
"timezone": "GMT"
]
];
}
Date now = new Date();
Instant instant = Instant.ofEpochMilli(now.getTime());
ZonedDateTime zdt = ZonedDateTime.ofInstant(instant, ZoneId.of('Z'));
if (ctx._source.get("_meta") == null) {
ctx._source.put("_meta", [
"id": ctx._id,
"created": zdt.format(DateTimeFormatter.ISO_INSTANT),
"updated": zdt.format(DateTimeFormatter.ISO_INSTANT),
"from_system": false,
"type": "user"
]);
}
if (ctx._source.get("_permissions") == null) {
ctx._source.put("_permissions", [
"read": [
"roles": [],
"users": []
],
"write": [
"roles": [],
"users": []
],
"owner": "admin"
]);
}
ctx._source.put("fields", fields(ctx._source.get("base"), ctx._source.get("advanced")));
ctx._source.remove("base");
ctx._source.remove("advanced");
ctx._source.put("aggregation_keys", aggs_keys(ctx._source.get("key")));
ctx._source.remove("key");
ctx._source.put("name", ctx._source.get("inventory_name"));
ctx._source.remove("inventory_name");
ctx._source.put("output_index", ctx._source.get("output"));
ctx._source.put("output_replica", ctx._source.get("output") + "_replica");
ctx._source.remove("output");
ctx._source.put("priorities", ctx._source.get("priorities"));
ctx._source.put("sources", sources(ctx._source.get("inventory_sources")));
ctx._source.remove("inventory_sources");
if (ctx._source.get("category") != null) {
ctx._source.put("category", ctx._source.get("category"));
}
ctx._source.put("asset_name", ctx._source.get("asset_name"));
if (ctx._source.get("ttl") != null) {
ctx._source.put("ttl", ctx._source.get("ttl"));
if (ctx._source.get("ttl") == "") {
ctx._source.remove("ttl");
}
}
ctx._source.put("schedule_params", schedule_params());
ctx._source.put("enabled", false);
ctx._source.put("schedule", schedule());
""",
"lang": "painless"
}
}
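As an optional check, a simple search against the destination index can be run from the terminal to confirm that documents were migrated; adjust the host and credentials to your environment:
curl -k -u admin "https://127.0.0.1:9200/.sm_inv_configs/_search?size=1&pretty"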
Incident-to-Inventory Relationship Migration
If the Inventory and Incident Manager modules are not installed, proceed to the next step.
Version 5.0 introduces modifications to the linkage structure between the Inventory and Incident Manager modules. The installer now includes a dedicated migration utility for these relationship updates. Utility location: $SAF_INSTALLER/utils/migrations_4.3-5.0/incident_inventory_connection/. You need:
- Python 3.8+
- opensearch-py package
The remaining packages are included in a standard Python installation. A more detailed list of packages is provided below:
- certifi==2023.7.22
- charset-normalizer==3.3.2
- idna==3.4
- opensearch-py==2.3.2
- python-dateutil==2.8.2
- requests==2.31.0
- six==1.16.0
- urllib3==2.0.7
The Search Anywhere Framework 5.0 installer includes Python 3.8 with the required set of packages.
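If you prefer a system Python 3.8+ over the bundled interpreter, the main package can be installed with pip (a sketch; the version is taken from the list above, and the remaining packages are pulled in as dependencies):
python3 -m pip install opensearch-py==2.3.2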
Configuration File
Before running the utility, configure the parameters in the file $SAF_INSTALLER/utils/migrations_4.3-5.0/incident_inventory_connection/default.ini. An example configuration file is shown below:
[server]
host = 127.0.0.1
port = 9200
[user]
name = admin
pass = password
The server.host parameter should specify the IP address of any OpenSearch node. It is recommended to specify a node with the data role and the routing_mode: hot attribute. If the user.pass parameter is omitted, the utility will prompt for the OpenSearch user password interactively.
Utility Launch Parameters
The utility has the following launch parameters:
- -c, --config - configuration file (optional); defaults to ./default.ini
- -h, --help - displays help information
Running the Utility
To perform the migration, run the utility with the following command:
$SAF_INSTALLER/utils/python/bin/python3 $SAF_INSTALLER/utils/migrations_4.3-5.0/incident_inventory_connection/main.py -c $SAF_INSTALLER/utils/migrations_4.3-5.0/incident_inventory_connection/default.ini
Inventory Configuration
If the Inventory module is not installed, proceed to the next step.
To integrate the Inventory module with Search Anywhere Framework, you must add the key inv.os.pass to the keystore on both the SAF Data Storage and SAF Master Node. If you need PostgreSQL integration, add the key inv.pg.pass as well.
Adding a Password to the Keystore via Request
Open the Main Menu - System Settings - Developer Console and execute the following query:
GET _core/keystore
The API allows modifying the keystore only for specific cluster nodes, for example, using a node name mask. For more details on the keystore management API, refer to the corresponding article.
The output will display the contents of the keystores on all cluster nodes. To add a key to all cluster nodes, edit and execute the following query, specifying the password for the admin user:
POST _core/keystore/inv.os.pass
{
"value" : "<PASSWORD_USER>"
}
Manually Adding a Password to the Keystore
On OpenSearch nodes with the Inventory plugin installed, the key inv.os.pass must be added to the keystore. This can be done using the following command:
sudo -u opensearch $OS_HOME/bin/opensearch-keystore add inv.os.pass
When executed, the command will prompt you to enter the key value; input the password for the admin user. After running the command, restart the OpenSearch node.
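Assuming OpenSearch runs as the systemd unit referenced earlier in this guide, the restart can be performed with:
sudo systemctl restart opensearch.service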
Initializing the Inventory Module
To initialize the module, navigate to Main Menu - System Settings - Module Configuration - Inventory - Initialization.
Modifying the Inventory Menu
If the Inventory module is not installed, proceed to the next step.
In version 5.0, the system name of Inventory has been changed. To access the new functionality, edit the navigation menu item by going to Main Menu - System Settings - Module Settings - Main - Menu Settings. Locate the Inventory menu module, expand it, then expand the Assets section inside. Change the System Name field to configs/list.
Click the Save Changes button.
Adding Multiline Comments
If the Incident Manager module is not installed, proceed to the next step.
Version 5.0 introduces support for multiline comments in Incident Manager. To enable this feature, open the Developer Console (Main Menu - System Settings - Developer Console) and execute the following command:
PUT _core/im_settings/incident-manager-settings
{
"editFields" : {
"comment" :
{ "type" : "textarea" }
}
}
Adding Sigma Rules to the Menu
If the Cyber Security module is not installed, proceed to the next step.
Version 5.0 introduces Sigma rules to the Cyber Security module. To access this new functionality, create a navigation menu item: go to Main Menu → System Settings → Module Settings → Main → Menu Settings and click the Add Module button.
Fill in the module fields as follows:
| Field Name | Content |
| --- | --- |
| Element Type | Group |
| System Name | sigma-rules |
| Title | Sigma Rules |
| Enable Display | Yes (flag must be enabled) |
Inside the Sigma Rules module, click the Add Section button.
Fill in the section fields as follows:
| Field Name | Content |
| --- | --- |
| Element Type | Page |
| System Name | |
| Title | Rules List |
| Enable Display | Yes (flag must be enabled) |
Click the Save Changes button. Configure user group permissions if necessary.
Alternatively, you can add the menu item via a JSON structure. Navigate to Main Menu - System Settings - Module Settings - Main - Menu Settings, open the JSON Structure tab, and add the following fragment to the top list (comma-separated):
{
"itemType": "group",
"name": "sigma-rules",
"_permissions": {
"owner": "admin",
"read": {
"roles": [],
"users": []
},
"write": {
"roles": [],
"users": []
}
},
"id": "iff6f40d1-e210-11ef-b57c-6bad33908cd9",
"title": "Sigma Rules",
"enabled": true,
"sections": [
{
"itemType": "page",
"name": "sigma-rules-name",
"_permissions": {
"owner": "admin",
"read": {
"roles": [],
"users": []
},
"write": {
"roles": [],
"users": []
}
},
"id": "i0ce9b921-e211-11ef-b57c-6bad33908cd9",
"title": "Rules List",
"enabled": true
}
]
}
You need to import Sigma rules from the official repository; the sigma_all_rules.zip file is recommended.
Web Interface Import
Follow the Sigma rules initialization guide.
Terminal Import
On all nodes with the sm-sigma module installed, create the directory $OS_HOME/utils/sigma (typically /app/opensearch/utils/sigma) and upload the sigma_all_rules.zip file there.
mkdir -p /app/opensearch/utils/sigma
cp ./sigma_all_rules.zip /app/opensearch/utils/sigma/
chown -R opensearch:opensearch /app/opensearch/utils/sigma
Disable allocation following the instructions above, restart the cluster nodes, and then re-enable allocation.
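A terminal sketch of that sequence, reusing the allocation commands shown earlier; restart the nodes one at a time:
# Disable shard allocation
curl -XPUT -k -u admin "https://127.0.0.1:9200/_cluster/settings?pretty" -H "Content-Type: application/json" -d '{"persistent":{"cluster.routing.allocation.enable": "none"}}'
# Restart the node(s) with the sm-sigma module installed
sudo systemctl restart opensearch.service
# Re-enable shard allocation once all nodes are back
curl -XPUT -k -u admin "https://127.0.0.1:9200/_cluster/settings?pretty" -H "Content-Type: application/json" -d '{"persistent":{"cluster.routing.allocation.enable": "all"}}'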
Open the Developer Console (Main Menu → System Settings → Developer Console) and execute:
POST _core/sigma/rule
{
"zipped_package_filename": "sigma_all_rules.zip"
}
MITRE ATT&CK Matrix Initialization
If the MITRE ATT&CK module is not installed, proceed to the next step.
After updating, reinitialize MITRE ATT&CK by following the relevant guide.
Adding Notes
Version 5.0 adds note list functionality to the Knowledge Center component. To enable it, open Main Menu - System Settings - Module Settings - Main - Menu Settings, locate and expand Knowledge Center, and click Add Section.
Fill in the fields as follows:
| Field Name | Content |
| --- | --- |
| Element Type | Page |
| System Name | notebooks/list |
| Title | Notes List |
| Enable Display | Yes (flag must be enabled) |
Click Save Changes and configure permissions as needed.
For JSON-based addition, open Main Menu - System Settings - Module Settings - Main - Menu Settings, open the JSON Structure tab, locate Knowledge Center, and add the following to its sections block:
{
"itemType": "page",
"name": "notebooks/list",
"_permissions": {
"owner": "admin",
"read": {
"roles": [],
"users": []
},
"write": {
"roles": [],
"users": []
}
},
"id": "ic1c91b81-145a-11f0-82e8-c104a1233526",
"title": "Notes List",
"enabled": true
}
Click the Save Changes button.
For file attachment functionality, see the configuration guide.
Adding RSM 2.0
Version 5.0 introduces Resource-Service Model v2.0. To enable it, open Main Menu - System Settings - Module Settings - Main - Menu Settings and click Add Module.
Fill in module fields:
| Field Name | Content |
| --- | --- |
| Element Type | Group |
| System Name | rsm-v2 |
| Title | Asset Service Model 2.0 |
| Enable Display | Yes (flag must be enabled) |
Inside the RSM 2.0 module, add two sections.
Fill in the fields of the first section as follows:
| Field Name | Content |
| --- | --- |
| Element Type | Page |
| System Name | tree |
| Title | Asset Service Model |
| Enable Display | Yes (flag must be enabled) |
Fill in the fields of the second section as follows:
| Field Name | Content |
| --- | --- |
| Element Type | Page |
| System Name | layers |
| Title | Layers |
| Enable Display | Yes (flag must be enabled) |
Click Save Changes and configure permissions as needed.
For the JSON implementation, add this structure:
{
"itemType": "group",
"name": "rsm-v2",
"_permissions": {
"owner": "admin",
"read": {
"roles": [],
"users": []
},
"write": {
"roles": [],
"users": []
}
},
"id": "i166ef2a1-13ba-11f0-a668-6560e9eb14d5",
"title": "Asset Service Model 2.0",
"enabled": true,
"sections": [
{
"itemType": "page",
"name": "tree",
"_permissions": {
"owner": "admin",
"read": {
"roles": [],
"users": []
},
"write": {
"roles": [],
"users": []
}
},
"id": "i46e0d571-13ba-11f0-a668-6560e9eb14d5",
"title": "Asset Service Model",
"enabled": true
},
{
"itemType": "page",
"name": "layers",
"_permissions": {
"owner": "admin",
"read": {
"roles": [],
"users": []
},
"write": {
"roles": [],
"users": []
}
},
"id": "i5de13351-13ba-11f0-a668-6560e9eb14d5",
"title": "Layers",
"enabled": true
}
]
}
Click the Save Changes button.
To start using RSM 2.0, initialize and migrate the data.