General Feature Release 10.6.4

Note

For known issues with this version, refer to Known issues.

Prerequisites

Before you upgrade to this DataMiner version, make sure all upgrade prerequisites are met.

New features

New BPA test: Detect unsupported connector versions [ID 44607]

From now on, a new BPA test named Detect unsupported connector versions will run every day to check for elements that are using connector versions that have been removed from the Catalog.

When a connector version is removed from the Catalog, it is no longer supported by Skyline Communications. Using unsupported connector versions can lead to compatibility issues, lack of support, and potential security vulnerabilities. Regularly checking for unsupported connector versions and updating them to supported versions ensures optimal performance and security of the system.

Automation: Time zone of the client can now be passed to the automation script that is executed [ID 44742]

When an automation script is executed, it is now possible to pass the time zone of the client to that script.

In the ExecuteScriptMessage, you can add the time zone information to the string parameter array in the following format:

CLIENT_TIME_ZONE:<Serialized TimeZone String>

Example: CLIENT_TIME_ZONE:Tokyo Standard Time;540;(UTC+09:00) Osaka, Sapporo, Tokyo;Tokyo Standard Time;Tokyo Summer Time;;

In the automation script, the time zone will be available on the IEngine input argument:

engine.ClientInfo.TimeZone

Note
  • If the script was executed from a source other than a web app, or if the time zone information could not be parsed, the TimeZone property can be null.
  • When a subscript is executed, the ClientInfo of the parent script will also be available in the subscript.
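
For illustration, the serialized time zone string can be parsed by splitting on semicolons. The field layout below is inferred from the example above (it appears to follow the .NET TimeZoneInfo serialized-string layout: ID, base UTC offset in minutes, display name, standard display name, daylight display name, followed by optional adjustment rules). Python is used here only as a minimal sketch; the parser name is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

PREFIX = "CLIENT_TIME_ZONE:"

@dataclass
class ClientTimeZone:
    zone_id: str
    utc_offset_minutes: int
    display_name: str
    standard_name: str
    daylight_name: str

def parse_client_time_zone(param: str) -> Optional[ClientTimeZone]:
    """Parse a CLIENT_TIME_ZONE script parameter.

    Returns None when the value cannot be parsed, mirroring the
    null TimeZone behavior described in the note above.
    """
    if not param.startswith(PREFIX):
        return None
    fields = param[len(PREFIX):].split(";")
    if len(fields) < 5:
        return None
    try:
        offset = int(fields[1])  # base UTC offset in minutes (e.g., 540 for UTC+09:00)
    except ValueError:
        return None
    return ClientTimeZone(fields[0], offset, fields[2], fields[3], fields[4])
```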

Offloading data is now partially supported when Swarming is enabled [ID 44751]

Up to now, it was not possible to offload data on systems with Swarming enabled. From now on, provided the info and alarm tables have a compatible primary key definition, offloading data will be supported when Swarming is enabled, except for the following tables:

  • alarm_property
  • brainlink
  • interface_alarm
  • service_alarm

When Swarming is enabled, the alarm and info tables in the offload database need an updated primary key that includes the eid column next to the id and dmaid columns. The Swarming prerequisites check will report an issue if the primary key is incorrect.

The CentralTable*.* scripts in C:\Skyline DataMiner\tools\ have been updated to initialize any new offload database with the expected primary keys right from the start.

SLNet: Minimum number of worker threads and I/O threads is now configurable [ID 44843]

In the SLNet.exe.config file, it is now possible to configure the minimum number of worker threads and I/O threads for the SLNet process.

Configuring a specific minimum number of threads will be especially useful for systems that experience bursts of high message throughput, which can lead to thread starvation under the default .NET ThreadPool behavior. Examples of such systems include SRM systems on which a large number of bookings start simultaneously.

See the following example:

<configuration>
    ...
    <appSettings>
        ...
        <add key="ThreadPoolMinWorkerThreads" value="64" />
        <add key="ThreadPoolMinIOThreads" value="64" />
    </appSettings>
    ...
</configuration>

On startup, SLNet adds an entry mentioning the configured thread pool values in the SLNet.txt log file. See the following example:

2026-02-23 10:26:07.802|5|ConfigureMinThreadPoolThreads|Setting ThreadPool minimum worker threads to 64 and minimum IO threads to 64

If no value (or an invalid value) is configured, SLNet will fall back to the default behavior to avoid issues related to excessively high thread counts. By default, the minimum number of I/O (completion port) threads will be set to at least 16 if the default chosen by .NET would be lower.
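
The resolution order described above can be sketched as follows. This is a simplified illustration for the I/O thread minimum, not the actual SLNet implementation; the helper name is hypothetical:

```python
from typing import Optional

FALLBACK_FLOOR = 16  # applied when no (valid) value is configured

def effective_min_io_threads(configured: Optional[str], dotnet_default: int) -> int:
    """Resolve the minimum I/O thread count applied at startup (sketch).

    A valid configured value wins; otherwise fall back to the .NET
    default, raised to at least 16.
    """
    if configured is not None:
        try:
            value = int(configured)
            if value > 0:
                return value
        except ValueError:
            pass  # invalid value: use the default behavior instead
    return max(dotnet_default, FALLBACK_FLOOR)
```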

Changes

Breaking changes

SNMP trap binding values will now only display plain ASCII characters [ID 44527]

When the system receives a trap binding value of type OctetString, that value will either be automatically converted into characters (e.g., 0x41 will become "A") or remain in a hexadecimal string format (e.g., when the value contains a byte that is not printable like 0x02, which is an STX control character).

Up to now, hexadecimal values above the ASCII range (i.e., values >= 0x7F) were considered printable characters and were not converted into a hexadecimal string. This would cause issues with, for example, the Unicode control character 0x8C, which would be displayed as a question mark. In such cases, complex QAction code would be required to convert it back into a hexadecimal value.

Also, DataMiner cannot know whether a binding value actually contains text (e.g., it could be a MAC address consisting of raw octets) or, if it does contain text, how that text was encoded (e.g., Windows code page 1252, UTF-8, UTF-16, etc.).

From now on, hexadecimal values outside of the ASCII range will be considered non-printable characters, and will remain in hexadecimal string format.

This is a breaking change.

Up to now, text containing characters encoded in extended ASCII (i.e., Windows code page 1252) was converted from raw octets into string text. For example, the French word "hélicoptère" would be received correctly. From now on, that same word will be received as the hexadecimal string "68e96c69636f7074e87265", and a QAction will need to convert it back into a string using the correct encoding.
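
For illustration, a minimal sketch of converting such a hexadecimal string back into text, assuming the original encoding is known to be Windows code page 1252 (QActions themselves are written in C#; Python is used here only to show the conversion):

```python
def decode_trap_binding(hex_string: str, encoding: str = "cp1252") -> str:
    """Convert a trap binding received as a hexadecimal string back into text.

    The correct encoding must be known to the QAction author; cp1252
    (Windows extended ASCII) is used here because it matches the example.
    """
    return bytes.fromhex(hex_string).decode(encoding)
```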

Enhancements

SLWatchdog will now report SLNet/SLDataGateway TPL ThreadPool and 'time dilation' issues as run-time errors [ID 44186]

From now on, whenever the TPL ThreadPool of SLNet or SLDataGateway gets stuck or "time dilation" occurs on your system (for example, when a virtual machine freeze causes sleep actions to take longer than anticipated), SLWatchdog will report these issues as run-time errors.

New SMTP settings for OAuth authentication added to DataMiner.xml [ID 44478]

In order to allow SLNet to automatically update the OAuth token needed to access an SMTP mail server that requires authentication via XOAuth2, a number of OAuth settings have now been added to the DataMiner.xml file. However, these settings can only be configured via DataMiner Cube.

Setting Description
OAuthClientID The client ID that has been requested for DataMiner.
OAuthClientSecret The client secret corresponding to the client ID.
As this secret is treated as a password, it will not be visible in plain text here.
OAuthTokenEndpoint The URI of the OAuth token endpoint.
OAuthConfigData Placeholder for additional settings that can be stored here by the client application (for example, DataMiner Cube).

See also: System Center: Configuring outgoing email [ID 44594]

BPA test 'Large Alarm Trees' will now run on a daily basis [ID 44565]

From now on, the Large Alarm Trees BPA test will run on a daily basis and will generate an error or a warning in the following cases:

  • It will generate an error when there is at least one alarm tree that consists of 5000 or more alarms. Only the alarm trees that have reached this size will be returned in the detailed result.

  • It will generate a warning when there is at least one alarm tree that consists of 1000 or more alarms, but all alarm trees have less than 5000 alarms. Only the alarm trees that have reached this size will be returned in the detailed result.

Also, a notice will no longer be generated when alarm trees are getting large. As a result, in the AlarmSettings section of the MaintenanceSettings.xml file, the recurring attribute of the AlarmsPerParameter element is now obsolete.
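
The error/warning rules above can be summarized in a short sketch (hypothetical helper, for illustration only):

```python
ERROR_THRESHOLD = 5000    # alarms per tree: at least one such tree -> error
WARNING_THRESHOLD = 1000  # alarms per tree: at least one such tree -> warning

def large_alarm_trees_result(tree_sizes):
    """Classify a run of the Large Alarm Trees BPA test.

    Returns the severity and the tree sizes that would appear
    in the detailed result.
    """
    flagged_error = [s for s in tree_sizes if s >= ERROR_THRESHOLD]
    if flagged_error:
        return "error", flagged_error
    flagged_warning = [s for s in tree_sizes if s >= WARNING_THRESHOLD]
    if flagged_warning:
        return "warning", flagged_warning
    return "ok", []
```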

Security enhancements [ID 44579] [ID 44821]

A number of security enhancements have been made.

DataMiner Objects Models: Selected subset of fields from DomInstance objects will now be read from the repository API [ID 44600]

Since DataMiner 10.6.0/10.6.1, it is possible to read only a selected subset of fields from DomInstance objects. In order to further enhance performance, from now on, those subsets will be read from the repository API.

Currently, the repository API will still request the full objects from the database and extract the required values.

Note

When a field value is requested, the type defined in the field descriptor will be used. In order to determine that type, field descriptor IDs should be unique across section definitions in a DOM module.

SLDataGateway: StorageTypeNotFoundException will now always mention the StorageType that could not be found [ID 44603]

When SLDataGateway throws a StorageTypeNotFoundException, from now on, the message will always mention the StorageType that could not be found.

An updated parameter value will no longer be written to the database if it is equal to the old value [ID 44609]

When a user or a QAction updated a parameter value, up to now, the new value would always be written to the database, even when the new value was equal to the old value.

From now on, when the new value is equal to the old value, the value will no longer be written to the database. If any triggers or QActions are configured to be executed following a parameter update, these will still be executed.

Also, write parameters will no longer be saved, as saving them would cause unnecessary load.

NotifyMail.html has been updated in order to better support both classic Microsoft Outlook and new Microsoft Outlook [ID 44617]

The C:\Skyline DataMiner\NotifyMail.html file, i.e., the email report template, has been updated to better support both classic Microsoft Outlook and new Microsoft Outlook.

Enhanced distribution of SNMPv3 traps [ID 44626]

When a DMA receives an SNMPv3 trap that it cannot process (e.g., because the SNMPv3 user is unknown), and trap distribution is enabled, from now on, the trap will be distributed to the other DMAs in the cluster in an attempt to have it processed by one of those other DMAs.

Also, up to now, traps could in some cases be forwarded to the wrong elements because the SNMPv3 USM ID was not validated correctly. This validation has now been corrected.

SLDataGateway: Job queue updates will now be logged in SLJobQueues.txt [ID 44661]

Up to now, log entries regarding SLDataGateway job queue updates would be logged in the C:\Skyline DataMiner\Logging\SLDbConnection.txt file.

From now on, these log entries will be logged in the C:\Skyline DataMiner\Logging\SLDataGateway\SLJobQueues.txt file instead.

Enhanced performance when filtering history alarms using complex filters [ID 44664]

Because of a number of enhancements, overall performance has increased when filtering history alarms using complex filters.

Performance has especially increased when using filters that consist of multiple equality conditions involving the following types of objects:

  • Element
  • Function
  • Protocol
  • Service
  • View

Note
  • Non-equality and wildcard/regex filtering has not been altered.
  • If more than 1,000 elements are affected, filtering will revert to the legacy behavior.

SLLogCollector: Separate log file per instance [ID 44668]

Up to now, the logging of all SLLogCollector instances would end up in the following files, stored in the C:\ProgramData\Skyline\DataMiner\SL_LogCollector\Log folder:

  • SL_LogCollector_fulllog.log
  • SL_LogCollector_log.log

From now on, each SLLogCollector instance will have its own dedicated log file named log-[creation timestamp].txt, stored in the C:\ProgramData\Skyline Communications\SLLogCollector folder.

Up to 10 log files will be kept on disk, and the log file of the current instance will be added to the SLLogCollector package.

Enhanced performance when activating DaaS systems [ID 44737]

Because of a number of enhancements, overall performance has increased when activating DaaS systems.

Generating BrokerGateway client secrets [ID 44757] [ID 44778]

From now on, it is possible to generate BrokerGateway client secrets. These are designed for DxMs or other clients connecting to the DataMiner NATS bus from a server without a local DataMiner installation. The secrets enable secure authentication with BrokerGateway, which then provides the necessary connection details for the NATS bus.

Using internal BrokerGateway Administrator keys for these connections is discouraged, as these keys may be refreshed during cluster maintenance or because of other actions. By contrast, user-generated client secrets persist throughout the cluster's lifecycle and are immediately distributed to all BrokerGateway instances for cluster-wide availability.

Common examples of clients requiring this setup include the Data Aggregator DxM and Dashboard Gateway.

API calls are available to manage the BrokerGateway client secrets.

Method Route Description
POST /api/clientSecret/generate Generates a new random API key associated with a specific client name. The key is returned in the response body.
DELETE /api/clientSecret/delete Deletes the client secret associated with the specified clientName argument.
GET /api/clientSecret/list Retrieves a list of all existing client secrets with their respective names.
Note: The sensitive key values are redacted in the response (e.g., abcd****************) for security purposes.

In order to perform these API calls on a BrokerGateway instance, you will need the Administrator key. You can find this key in the file C:\Program Files\Skyline Communications\DataMiner BrokerGateway\appsettings.runtime.json. In the file, look for an entry in APIKeys with the name Administrator. The key property is the administrator key.

You can execute the API calls by calling the REST API via PowerShell.
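
As an illustration, the generate call could be prepared as follows. This is a sketch only: the base URL is hypothetical, and how the Administrator key and the client name are passed (header vs. body) is an assumption — verify this against your BrokerGateway setup before use:

```python
import json
from urllib import request

BASE_URL = "https://broker.example.com"  # hypothetical BrokerGateway address
ADMIN_KEY = "<Administrator key from appsettings.runtime.json>"

def build_generate_request(client_name: str) -> request.Request:
    """Build the POST /api/clientSecret/generate call (sketch).

    ASSUMPTIONS: the Authorization header and the JSON body field
    'clientName' are illustrative; the exact authentication scheme
    and payload shape are not documented here.
    """
    body = json.dumps({"clientName": client_name}).encode()
    return request.Request(
        url=f"{BASE_URL}/api/clientSecret/generate",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": ADMIN_KEY,  # assumption: header name/scheme may differ
        },
    )
```

The resulting request object can then be sent with urllib, or the same call can be made with Invoke-RestMethod in PowerShell.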

Important

Using client secrets prevents the root certificate authority from being cycled during DataMiner Agent removals or NATSRepair calls. This is done to ensure that external clients maintain stable connectivity with the cluster, without having to change credentials or trusted root certificates.

Enhanced performance when executing a full element update on STaaS systems with Swarming enabled [ID 44772]

Because of a number of enhancements, on STaaS systems with Swarming enabled, overall performance has increased when executing a full element update.

Fixes

Problem with SLNet when receiving a subscription with a large filter that contained wildcards [ID 44512]

When SLNet received a dynamic table subscription with a very large filter that contained wildcards, up to now, it would throw a stack overflow exception and stop working.

From now on, SLNet subscriptions will be blocked when they contain a filter that exceeds 140,000 characters.

SLNetClientTest tool: External authentication would not work when the Microsoft Edge (WebView2) browser engine was installed on a per user basis [ID 44583]

Up to now, when you connected to a DataMiner Agent, it was not possible to use external authentication from a client computer on which the Microsoft Edge (WebView2) browser engine was installed on a per-user basis.

Note

When the Microsoft WebView2 browser engine is installed on a per user basis, it will be automatically updated each time you open Microsoft Edge.

Caution

Always be extremely careful when using the SLNetClientTest tool, as it can have far-reaching consequences on the functionality of your DataMiner System.

Problem with SLDataMiner after sending an NT_READ_SAVED_PARAMETER_VALUE call [ID 44597]

When an NT_READ_SAVED_PARAMETER_VALUE call was sent to retrieve data from an element without a connector while that data was still present in SLDataGateway, up to now, SLDataMiner could stop working.

Data would not show up in DVE child elements due to a problem with foreign key linking to logger tables [ID 44651]

In some cases, a problem with foreign key linking to logger tables would cause data to not show up in DVE child elements.

Alarm properties passed along by Correlation or SLAnalytics could get lost when an alarm was created [ID 44669]

In some cases, alarm properties passed along by Correlation or SLAnalytics could get lost when an alarm was created.

API Gateway would incorrectly add multiple routes with the same basePath when multiple registration requests were received for the same route [ID 44676]

When multiple registration requests were received for the same route, in some cases, instead of updating the route, API Gateway would incorrectly add multiple routes with the same basePath. As a result, the proxy would not be able to route the HTTP request.

Failover: Two Agents in a Failover pair could get stuck during startup [ID 44680]

In some cases, the two Agents in a Failover pair could get stuck during startup.

Scheduler: Windows task will no longer be recreated when only the actions of a scheduled task were changed [ID 44691]

When a scheduled task was updated close to its execution time, in some cases, the task would incorrectly not be executed. It would miss its execution window because, during the update, the Windows task would be deleted and recreated.

From now on, when only the task actions are changed during an update of a scheduled task, the Windows task will no longer be recreated. It will only be recreated when the status, name, description, or timing of the scheduled task is changed.

History set trending would incorrectly show gaps in trend graphs

Up to now, history set trending would show gaps where no gaps were expected.

From now on, trend records with the following iStatus values will no longer cause gaps in trend graphs:

Value Description
-1 Element is starting up.
-2 Element is being paused.
-3 Element is being activated.
-4 Element is going into a timeout state.
-5 Element is coming out of a timeout state.
-6 Element is being stopped.
-9 Trending was started for the specified parameter.
-10 Trending was stopped for the specified parameter.
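
For reference, the values in the table above as a lookup set (hypothetical helper, for illustration only):

```python
# iStatus values from the table above that no longer cause trend-graph gaps
GAP_SUPPRESSED_ISTATUS = {-1, -2, -3, -4, -5, -6, -9, -10}

def is_gap_suppressed(istatus: int) -> bool:
    """Return True if a trend record with this iStatus is in the list above."""
    return istatus in GAP_SUPPRESSED_ISTATUS
```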

Problem with SLNet when rolling over log files [ID 44711]

In some cases, SLNet could stop working when rolling over from one log file to another (e.g., from SLNet.txt to SLNet0.txt).

From now on, when an issue occurs when rolling over log files, an error will be logged in the Windows Event Viewer.

Note

Some logging may get lost because of this fix.

BrokerGateway installation could fail when the nsc.exe file was locked by an antivirus application [ID 44721]

Up to now, a BrokerGateway installation could fail when the nsc.exe file was locked by an antivirus application.

From now on, a locked nsc.exe file will no longer cause a BrokerGateway installation to fail.

Problem with SLAnalytics during the storage initialization routine [ID 44745]

In some rare cases, the SLAnalytics process could stop working during the storage initialization routine.

Problem with SLAnalytics when trying to process an invalid database record [ID 44748]

In some cases, SLAnalytics would stop working when trying to process an invalid database record after having serialized it.

Problem when an alarm was updated while a hysteresis timer was scheduled [ID 44749]

When an alarm was updated while a hysteresis timer was scheduled, in some cases, the timestamp of the alarm update would be more recent than that of the alarm generated by the clear hysteresis. As a result, the state changes timeline would no longer be correct.

Problem with SLProtocol when multiple connections of the same element went into a timeout state simultaneously [ID 44752]

In some rare cases, SLProtocol could stop working when multiple connections of the same element went into a timeout state simultaneously.

BPA test 'Check Deprecated DLL Usage' would incorrectly flag the MySql.Data NuGet as deprecated [ID 44758]

Since DataMiner 10.5.12/10.6.0, the Check Deprecated DLL Usage BPA test would incorrectly flag the MySql.Data NuGet (MySql.Data.dll) as deprecated.

SLLogCollector: Problem when process dumps were triggered in parallel [ID 44780]

Up to now, when SLLogCollector tried to trigger process dumps in parallel, in some cases, certain dumps would not be added to the package.

From now on, in order to be able to include all dumps in the package, process dumps will no longer be triggered in parallel.

Incorrect error message would appear when a configuration mismatch prevented DataMiner Agents from being clustered [ID 44781]

When a configuration mismatch prevented DataMiner Agents from being clustered, up to now, the following incorrect error message would appear:

Cannot cluster Agents as remote Agent has an unsupported database type.

From now on, the following correct error message will appear instead:

Cannot cluster Agents as the agent configuration is incompatible. Please check SLNet logging for more information.

Problem when an element was updated immediately after having been swarmed [ID 44783]

When an element was updated immediately after having been swarmed from one host to another, in some cases, it would incorrectly re-appear on its former host.

STaaS: Retrieving the active alarms of an element would incorrectly be limited to 10,000 [ID 44793]

Up to now, on STaaS systems, if an element had more than 10,000 active alarms, only the first 10,000 would be retrieved.

From now on, all active alarms will be retrieved, even if the element in question has more than 10,000 active alarms.

Problem when a component in a dashboard or low-code app was unable to retrieve data from a remote DataMiner Agent [ID 44848]

On systems where each DMA has its own Cassandra database, up to now, when a component in a dashboard or low-code app was unable to retrieve data from a remote DataMiner Agent (for example, because that Agent was unavailable), an error would be thrown inside the UI of that dashboard or low-code app.

From now on, when a component in a dashboard or low-code app is not able to retrieve data from a remote DataMiner Agent, a "Nothing to show" message will appear in that component instead.

Cassandra Cluster: Automation scripts would incorrectly not be able to request history alarms using a property value filter with wildcards or regular expressions [ID 44873]

Up to now, it would incorrectly not be possible for automation scripts to request history alarms from a Cassandra Cluster database using a property value filter with wildcards or regular expressions.