
❓ Frequently Asked Questions

Business

Is it possible to build a business providing services based on SAF?

Yes, SAF supports the MSSP (Managed Security Service Provider) mode. Partners can build their business by providing services based on this mode.


How can I learn more about the system?

You can request a system demonstration here.


Where can I find SAF use cases?

SAF use cases are available here.


Architecture

Is the system scalable?

Yes, the system can be deployed on a varying number of nodes of different capacities, organized into a single cluster, which provides both horizontal and vertical scalability.


How does the system behave when some nodes fail?

The degree of fault tolerance is determined by the system configuration: the number of nodes in each role and the data replication settings.


How is SAF different from OpenSearch / Elasticsearch?

SAF is a universal platform for collecting and analyzing machine data, designed to address tasks in information security, IT infrastructure monitoring, and business process analysis.

A distinctive feature of the platform is its ability to use various storage backends, such as OpenSearch, Elasticsearch, Hadoop, ClickHouse, and others. OpenSearch is just one of the possible storage options, used in the platform as the base.


Is SAF an on-premise or cloud-like solution?

The system is installed and runs on servers within your own IT infrastructure (on-premise). Cloud deployment can be implemented as part of project work.


Can SAF be installed on virtual infrastructure?

Yes, the installation process is the same as on physical servers.


Can SAF be installed on Astra Linux?

Yes. A full list of supported operating systems is available here.


Data Collection

What data can be collected in SAF?

Any machine data.


What collection methods can be used to receive events from the source?

Data can be collected using a variety of protocols and connectors. Frequently used ones include:

  • Agent-based collection, including:
    • Reading from files
    • Reading Windows logs
    • Reading Linux Audit logs
    • Remote script execution
  • Syslog / UDP
  • SNMP
  • HTTP
  • JDBC
  • Kafka

Is it mandatory to install an agent to collect data?

No, data can be sent to the collector host, for example, using Syslog, or collected by the collector itself by polling the source.
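As a hedged illustration of agentless collection, the sketch below sends an RFC 3164-style syslog datagram over UDP to a collector host. The hostname, application tag, and port are placeholder assumptions for the example, not SAF-specific values.

```python
import socket
from datetime import datetime

def send_syslog_udp(message: str, host: str, port: int = 514,
                    facility: int = 1, severity: int = 6) -> bytes:
    """Send an RFC 3164-style syslog datagram to a collector over UDP.

    facility=1 (user), severity=6 (informational) yield PRI <14>.
    The hostname "myhost" and tag "myapp" are illustrative placeholders.
    """
    pri = facility * 8 + severity
    timestamp = datetime.now().strftime("%b %d %H:%M:%S")
    payload = f"<{pri}>{timestamp} myhost myapp: {message}".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    return payload
```

Because UDP is connectionless, delivery is not acknowledged; for guaranteed delivery a TCP-based protocol or an agent with acknowledgements would be preferable.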


Can the SAF Beat agent run under a non-privileged account?

Yes, if it has sufficient privileges to collect the required information.


Is it possible to centrally install and manage agents for collecting data from hosts regardless of the OS?

Centralized agent installation on hosts can be performed using automation tools such as Ansible or Active Directory group policies. Centralized agent configuration management is handled by the SAF Beat agent manager.


How is data collected, filtered, and sent to the repository from files?

Events can be read and filtered from files either on the agent side, with the results sent to the collector, or on the collector itself, which then transfers the data to the repository.
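A minimal sketch of the file-reading path described above, assuming a plain-text log and a regular-expression filter. A production agent would additionally handle file rotation, read checkpoints, and batching before forwarding to the collector.

```python
import re
import time

def read_filtered(path: str, pattern: str, follow: bool = False,
                  poll: float = 0.5):
    """Yield lines from a log file that match `pattern`.

    With follow=True the function keeps polling for appended lines,
    similar to `tail -f`; with follow=False it stops at end of file.
    """
    regex = re.compile(pattern)
    with open(path) as f:
        while True:
            line = f.readline()
            if not line:
                if not follow:
                    return  # reached EOF in one-shot mode
                time.sleep(poll)  # wait for new data in follow mode
                continue
            if regex.search(line):
                yield line.rstrip("\n")
```

Filtering before forwarding, as shown here, reduces traffic between the agent and the collector at the cost of agent-side CPU.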


Can SAF collect and analyze server performance metrics?

Yes, performance metrics are a typical data source, and analytics can be performed using the query language.


Can ICS components be connected as sources? Are there correlation rule templates for ICS?

Yes, data can be collected from objects in the technological (ICS) segment, with or without agents. Correlation rules for ICS are not packaged as a separate solution module, but they can be developed as part of a project.


Security

Is SAML supported?

Yes, the security module includes SAML support.


Is it possible to mask sensitive data when collecting it from the source?

Yes, masking can be performed using data processing tools on the agent or collector side.
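As an illustration of masking at the processing stage, the sketch below replaces sensitive substrings with fixed placeholders before an event leaves the agent or collector. The patterns and placeholder strings are assumptions for the example; a real deployment would match its own sensitive-data formats.

```python
import re

# Illustrative patterns only: a 16-digit card number and an email address.
PATTERNS = [
    (re.compile(r"\b\d{16}\b"), "****CARD****"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "****EMAIL****"),
]

def mask_event(raw: str) -> str:
    """Return the event text with sensitive fragments replaced."""
    for pattern, replacement in PATTERNS:
        raw = pattern.sub(replacement, raw)
    return raw
```

Masking as early as possible (on the agent) keeps sensitive values out of transit and out of the repository entirely.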


Is SSO supported?

Yes, single sign-on (SSO) authentication can be configured using a reverse proxy or SAML.


Integration with External Systems

For searching data, does it have to be stored in the SAF base repository?

No, if the data is already in an external repository, there is no need to duplicate it in SAF, as the Search Anywhere technology can be used.


Which data repositories are compatible with SAF?

Full translation of SA Language search queries, taking full advantage of the storage engine, is available for OpenSearch, Elasticsearch, and ClickHouse. Partial translation is supported for any database connected via JDBC.


Is it possible to send alerts to external systems?

Yes. You can use the Email Action for email notifications or the Webhook Action for integration with any API, such as sending SMS messages, Telegram notifications, or alerts to other systems.
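As a hedged sketch of what a Webhook Action delivers, the code below builds a JSON alert payload and posts it to an endpoint. The field names ("rule", "severity", "text") and the URL are hypothetical; the actual schema is whatever the receiving API expects.

```python
import json
import urllib.request

def build_alert_payload(rule: str, severity: str, message: str) -> bytes:
    """Serialize an alert as JSON bytes. Field names are illustrative."""
    return json.dumps({"rule": rule, "severity": severity,
                       "text": message}).encode()

def post_alert(url: str, payload: bytes) -> int:
    """POST the payload to a webhook endpoint and return the HTTP status."""
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status
```

For example, pointing `post_alert` at a Telegram bot or SMS gateway endpoint (with the payload adapted to that API's schema) covers the integrations mentioned above.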


Is it possible to access external systems with APIs or databases in search queries?

Yes, using the Search Anywhere mechanism or the script command.


Incident Management

Is it possible to manually create incidents?

Yes, incidents can be created manually. Details are available in the article here.


Is it possible to change the values of main fields, such as criticality or assignee, for an existing incident?

Yes, the values of the main fields can be changed for any existing incident. Details are available in the article here.


Is bulk editing of incidents supported?

Yes, it is supported. You can select multiple incidents and perform bulk editing or use the incident aggregation mechanism and work with a group of incidents as a single entity.


Correlation Rules

Can I develop my own correlation rules?

Yes, you can use the SA Language to perform various search queries on data from connected sources. Once you have formulated a search query that includes the necessary logic, you can independently create a correlation rule based on it.


Are there pre-installed correlation rules in SAF?

Yes, correlation rules are provided as part of separate content modules.


How often are correlation rules updated in content modules?

Quarterly, with the release of a new version of SAF.


Is it possible to transfer developed correlation rules to another installation?

Yes, rules can be exported and imported to another installation via the web interface.


Is it possible to work with the task scheduler via the API?

Yes, the scheduler API allows you to perform basic operations on tasks: create, read, update, and delete. Details are available in the article here (job_scheduler_api/index.md).


Inventory

What information can be used in the inventory?

Any existing index with data can be used as a source of information.


Is it possible to automate the collection of asset information in the inventory?

Yes, the Inventory module operates on data from the sources specified in the asset configuration. Therefore, changes or new entries in the source are automatically reflected in the asset inventory.