107 |
Message
|
[server_address=%s]: Connection Terminated to ActiveMQ |
Severity
|
Informational
|
Type
|
Raise
|
Description
|
Informs that the JMS publisher connection to the ActiveMQ service is down. The JMS publisher has terminated or stopped.
|
Action
|
Ordinarily, no action is required.
However, if the connection to JMS (the ActiveMQ service) is down for an extended period of time, verify the ActiveMQ service status
and evaluate overall system health, i.e., verify services are up, disk partitions have free space, the system is not CPU starved,
not swapping, not I/O bound, etc.
|
108 |
Message
|
[server_address=%s]: Connection Established to ActiveMQ |
Severity
|
Informational
|
Type
|
Clear
|
Description
|
Informs that the JMS publisher connection to the ActiveMQ service is up. The JMS publisher started, or, if the connection was lost, it has been re-established.
|
Action
|
No action required.
Note, however, that if the JMS connection bounces often, this could indicate a system-wide issue; evaluate
overall system health, i.e., verify services are up, disk partitions have free space, the system is not CPU starved, not swapping,
not I/O bound, etc.
|
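For orientation, the sketch below shows how a JMS client typically observes the two states reported by messages 107 and 108: a connection established at startup, and an ExceptionListener firing when the broker connection is later lost. This is a minimal illustration using the standard JMS API and the ActiveMQ client library; the broker URL is a placeholder, not the product's actual configuration.

    import javax.jms.Connection;
    import javax.jms.ExceptionListener;
    import javax.jms.JMSException;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ActiveMqConnectionCheck {
        public static void main(String[] args) throws JMSException {
            // Hypothetical broker URL; substitute the address from the alert ([server_address=%s]).
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            // The listener fires when an established connection is later lost,
            // which is the situation message 107 reports for the JMS publisher.
            connection.setExceptionListener(new ExceptionListener() {
                @Override
                public void onException(JMSException e) {
                    System.err.println("Connection Terminated to ActiveMQ: " + e.getMessage());
                }
            });
            connection.start();
            System.out.println("Connection Established to ActiveMQ");
            connection.close();
        }
    }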
301 |
Message
|
[message_error=%s]: Error initializing JMX MBean |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Failure during creation of the JMX MBean for TIP statistical reporting. This can be linked to a failure of JMX services to initialize,
lack of system resources, or incorrect configuration of the JMX host/port.
|
Action
|
Verify overall system health, i.e., services are up, disk partitions have free space, the system is not CPU starved, not swapping,
not I/O bound, etc. Attempt a system restart; if the problem persists, contact tech. support.
|
303 |
Message
|
[jmx_bean_name=%s]: JMX MBean registration failed |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Failure during establishment of JMX bean for event spout (PG, Router, etc) operations. This means no operations to/from spouts
will be possible via JMX.
|
Action
|
Attempt a system restart; if the problem persists, contact tech. support.
|
304 |
Message
|
[jmx_bean_name=%s]: JMX MBean registration succeeded |
Severity
|
Error
|
Type
|
Clear
|
Description
|
Success in establishing JMX bean for event spout (PG, Router, etc) operations.
|
Action
|
No action required.
|
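For context on what messages 301, 303, and 304 refer to, the following is a minimal sketch of JMX MBean registration using the standard javax.management API. The MBean interface, implementation, and object name are hypothetical and exist only to illustrate where registration can fail (duplicate or malformed names, unavailable JMX services) or succeed.

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class JmxRegistrationSketch {
        // Hypothetical MBean interface and implementation, for illustration only.
        public interface DemoStatsMBean { long getMessageCount(); }
        public static class DemoStats implements DemoStatsMBean {
            public long getMessageCount() { return 0L; }
        }

        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // A failure here (duplicate name, malformed name, unavailable JMX services)
            // is the kind of condition messages 301/303 report; success corresponds to 304.
            ObjectName name = new ObjectName("com.example.demo:type=DemoStats");
            server.registerMBean(new DemoStats(), name);
            System.out.println("JMX MBean registration succeeded: " + name);
        }
    }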
405 |
Message
|
[connection_usage=%s][server_address=%s][state=%s][message_error=%s]: Connection error |
Severity
|
Warning
|
Type
|
Raise
|
Description
|
Error during attempt to communicate (or establish connection) to CCE server (PG, Router, etc). This can be caused by:
- Destination host/port not accepting connections.
- Too many messages (from server) pending processing by Live Data.
- Write to closed (by far-end) connection.
|
Action
|
As follows:
- Verify CCE server host/port configuration is correct, and the port is open on the host firewall (see the reachability sketch after this entry).
- Verify CPU availability to Live Data. If the problem persists, contact tech. support.
- No action required for an attempt to write to a closed (by far-end) connection.
|
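The first cause listed for message 405 (destination host/port not accepting connections) can be checked independently of Live Data with a plain TCP connect, as in the sketch below; the host, port, and timeout values are illustrative placeholders.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class CceReachabilityCheck {
        public static void main(String[] args) {
            // Hypothetical host/port; substitute the values from [server_address=%s].
            String host = args.length > 0 ? args[0] : "cce-server.example.com";
            int port = args.length > 1 ? Integer.parseInt(args[1]) : 12345;
            try (Socket socket = new Socket()) {
                // A timeout or refusal here corresponds to the first cause listed above:
                // destination host/port not accepting connections (check configuration/firewall).
                socket.connect(new InetSocketAddress(host, port), 5000);
                System.out.println("Port is reachable: " + host + ":" + port);
            } catch (IOException e) {
                System.err.println("Connection error: " + e.getMessage());
            }
        }
    }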
407 |
Message
|
[server_address=%s][tip_missed_heartbeats=%d]: Heatbeat Missed on TIP connection |
Severity
|
Warning
|
Type
|
Clear
|
Description
|
Missing one heartbeat to the CCE server is not uncommon, and it is typically linked to a busy system. Live Data uses heartbeats
to track the health of a CCE connection, and if too many (a configurable number of) heartbeats are lost it will close and attempt to
re-open the connection.
|
Action
|
No action required.
|
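To make the behavior above concrete, the following is a minimal sketch of the heartbeat-miss accounting that messages 407 and 521 describe: a single miss is only noted, and only when a configurable threshold is reached does the connection get closed and re-opened. The class and field names are illustrative, not the product's actual implementation.

    public class HeartbeatTracker {
        private final int maxMissedHeartbeats;   // configurable threshold
        private int missedHeartbeats = 0;

        public HeartbeatTracker(int maxMissedHeartbeats) {
            this.maxMissedHeartbeats = maxMissedHeartbeats;
        }

        /** Called when a heartbeat interval elapses without a response. */
        public boolean onHeartbeatMissed() {
            missedHeartbeats++;                                  // a single miss only warns (407)
            return missedHeartbeats >= maxMissedHeartbeats;      // too many misses: close and reconnect (521)
        }

        /** Called when a heartbeat response arrives. */
        public void onHeartbeatReceived() {
            missedHeartbeats = 0;                                // healthy connection resets the counter
        }

        public static void main(String[] args) {
            HeartbeatTracker tracker = new HeartbeatTracker(3);
            tracker.onHeartbeatMissed();                         // 1 miss: warning only
            tracker.onHeartbeatReceived();                       // response arrives: counter resets
            System.out.println(tracker.onHeartbeatMissed());     // false: still below threshold
        }
    }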
501 |
Message
|
[properties=%s]: TIP Controller Stopped |
Severity
|
Critical
|
Type
|
Raise
|
Description
|
Indicates the spout communication controller, responsible for CCE message processing, has terminated/stopped. This can happen
in the following scenarios:
|
Action
|
On failover: determine the failover root cause and correct it. Start with overall system health, i.e., services are up, disk partitions
have free space, the system is not CPU starved, not swapping, not I/O bound, etc. If no reason is found, or the reason cannot be corrected,
contact tech. support. For all other reasons: no action required.
|
502 |
Message
|
[tip_client_seggrp=%s][tip_client_app_seqnum=%s][properties=%s]: TIP Controller Started |
Severity
|
Critical
|
Type
|
Clear
|
Description
|
Indicates spout communication controller, responsible for CCE message processing, has started.
|
Action
|
No action required.
|
503 |
Message
|
[server_address=%s][message_error=%s]: Error in TIP Controller Processes |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Indicates a protocol error due to:
- Request/response timeout
- Unrecognized message format
- Failure in message decoding
- Failure in message processing
|
Action
|
Protocol request/response timeouts are not uncommon and are typically linked to communication failure due to far-end port
disconnect (i.e., attempt to write to a closed socket). For all other failures, contact tech. support.
|
505 |
Message
|
[message_error=Connection-%s]: Invalid TIP Configuration for |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Configuration to the CCE server is incomplete or invalid: the port is not a number in the range 1-65535, the host name contains invalid characters,
etc.
|
Action
|
Verify Live Data configuration and restart system.
|
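As a concrete illustration of the checks message 505 implies, the sketch below validates a numeric port in the range 1-65535 and a host name without invalid characters. The method names and rules shown are assumptions for illustration, not the product's actual validation code.

    public class TipConfigValidation {
        public static boolean isValidPort(String value) {
            try {
                int port = Integer.parseInt(value);
                return port >= 1 && port <= 65535;      // port must be numeric and in range
            } catch (NumberFormatException e) {
                return false;                           // not numeric at all
            }
        }

        public static boolean isValidHostName(String host) {
            // Letters, digits, hyphens, and dots only; no spaces or other invalid characters.
            return host != null && !host.isEmpty() && host.matches("[A-Za-z0-9.-]+");
        }

        public static void main(String[] args) {
            System.out.println(isValidPort("443"));            // true
            System.out.println(isValidPort("70000"));          // false: out of range
            System.out.println(isValidHostName("pg-a host"));  // false: contains a space
        }
    }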
507 |
Message
|
[server_address=%s][tip_client_seqgrp=%s][tip_server_seqgrp=%s]: TIP Sequence Group Mismatch |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Indicates data loss during the last CCE connection switch-over. The PG/Router was unavailable simultaneously with a communication loss to
the corresponding PG/Router side (A or B).
|
Action
|
No action required.
The system will issue a new sequence group, and request a snapshot to re-synch its internal state with that of the failed
component.
|
509 |
Message
|
[server_address=%s][tip_client_seqgrp=%s][tip_server_seqgrp=%s]: TIP Message Gap between client and server |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Indicates data loss during the last CCE communication. The PG/Router crashed simultaneously with a communication loss to the corresponding
PG/Router side (A or B).
|
Action
|
No action required.
The system will request a snapshot to re-synch its internal state with that of the failed component.
|
511 |
Message
|
[server_address=%s]: TIP Controller switching Active Side |
Severity
|
Warning
|
Type
|
Raise
|
Description
|
Indicates spout communication controller has detected an active connection (to CCE server) is down, and that the standby connection
to configured PG/Router will become active (switch-over). The PG/Router communication was severed, or the CCE server is no
longer running.
|
Action
|
Verify CCE server (PG, Router, etc) health.
|
512 |
Message
|
[server_address=%s][tip_client_seqgrp=%s][tip_server_seqgrp=%s]: TIP Message Synchronization Successful with TIP Server |
Severity
|
Error
|
Type
|
Clear
|
Description
|
Indicates that during a switch-over (from active connection to standby) the sequence number received by standby is in ascending
order, and that it can be released from CCE server side. PG/Router unavailable, but Live Data state is not affected by it.
|
Action
|
No action required.
|
515 |
Message
|
[server_address=%s][operation_type=%s]: TIP Protocol Request Failure |
Severity
|
Warning
|
Type
|
Raise
|
Description
|
Request to CCE server has failed, or timed out.
|
Action
|
No action required.
|
517 |
Message
|
[server_address=%s][message_error=%s][operation_type=%s]: TIP Protocol Errors |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Protocol error during response processing (of a request to the CCE server) due to a response that references an invalid or nonexistent pending
request ID. This can be linked to an expired request as well as a nonexistent one.
|
Action
|
No action required.
|
521 |
Message
|
[server_address=%s][message_error=%s]: TIP Heartbeat Failure |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Reached the allowed number of heartbeat losses to the CCE server. This will disconnect all CCE servers (PG, Router, etc) associated
with a given CCE side (A or B). A switch-over to the other CCE server side (A or B) will be initiated automatically.
|
Action
|
Determine network integrity between Live Data and CCE servers on failed side (A or B). Determine if CCE server (PG, Router,
etc) is up and running.
|
523 |
Message
|
[server_address=%s][message_error=%s]: Error in TOS Client |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Indicates TOS (related to Live Data Cluster node state) protocol error.
Possible error causes are:
|
Action
|
Verify Live Data configuration and restart the system.
|
601 |
Message
|
[server_address=%s][message_error=%s]: Warning in TOS Client |
Severity
|
Warning
|
Type
|
Raise
|
Description
|
Unable to send message to assess Live Data Cluster state on far end. Triggered by failure to receive cluster node state via
JMS (ActiveMQ/Netbridge in this case), which can be caused by cluster node crash, network failure, ActiveMQ crash, or Netbridge
crash.
|
Action
|
Verify network integrity between Live Data cluster nodes. Verify the ActiveMQ service is up, and that cluster nodes are configured for
fail-over.
|
603 |
Message
|
[server_address=%s][message_error=%s]: TOS REQUEST RESPONSE latency alert. |
Severity
|
Warning
|
Type
|
Raise
|
Description
|
Request/response round-trip time to cluster nodes is greater than the configured number of seconds. This can be linked to network latency,
or a sluggish response from the far-end cluster node.
|
Action
|
Verify network integrity between Live Data cluster nodes. Verify overall system health on all Live Data cluster nodes.
|
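As an illustration of the measurement behind message 603, the sketch below times a request/response round trip and flags it when it exceeds a threshold. The call being timed and the threshold value are hypothetical placeholders; only the timing logic is the point.

    public class RequestLatencyCheck {
        public static void measure(Runnable requestResponse, long thresholdMillis) {
            long start = System.nanoTime();
            requestResponse.run();                                  // send request, wait for response
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            if (elapsedMillis > thresholdMillis) {
                // Round trip exceeded the configured threshold: raise the latency alert.
                System.err.println("TOS REQUEST RESPONSE latency alert: " + elapsedMillis + " ms");
            }
        }

        public static void main(String[] args) {
            // Example: a simulated 50 ms "request/response" against a 2000 ms threshold.
            measure(() -> {
                try { Thread.sleep(50); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }, 2000);
        }
    }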
607 |
Message
|
[server_address=%s][missed_heartbeats]: Heatbeat Missed on TOS connection |
Severity
|
Warning
|
Type
|
Raise
|
Description
|
Missing one heartbeat to the CCE server is not uncommon, and it is typically linked to a busy system. Live Data uses heartbeats
to track the health of a CCE connection, and if too many (a configurable number of) heartbeats are lost it will close and attempt to
re-open the connection.
|
Action
|
Verify network integrity between CCE server and Live Data.
|
609 |
Message
|
[server_address=%s][message_error=%s]: TOS Heartbeat Failure |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Reached allowed number of heartbeat losses to CCE server (PG, Router, etc). This will cause a disconnection followed by reconnection
to CCE servers (PG, Router, etc).
|
Action
|
Verify network integrity from CCE server (PG, Router, etc) to Live Data. Verify CCE server (PG, Router, etc) is up and running.
|
703 |
Message
|
[operation_error_desc=%s]: Camel Service Error |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Failed to open a connection to JMS (ActiveMQ) or to publish to the indicated topic. The JMS (ActiveMQ) configuration is incorrect,
or the service is down.
|
Action
|
Verify Live Data configuration. Assess overall system health, i.e., services are up, disk partitions have free space, the system
is not CPU starved, not swapping, not I/O bound, etc.
Note that if the ActiveMQ service is down, the Live Data cluster will fail over to the standby node.
|
801 |
Message
|
[server_id=%s]: Spout failed to load UCCE Configuration |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Error loading Live Data configuration from AWDB. The dataset name pointing to the AWDB location is incorrectly configured in CUIC.
|
Action
|
Using the CUIC interface, ensure the data set name for CUIC is correctly configured with the address of the AWDB. Ensure the AWDB system is up
and running. If the problem persists, contact tech. support.
|
802 |
Message
|
[server_id=%s]: Spout loaded UCCE Configuration |
Severity
|
Informational
|
Type
|
Clear
|
Description
|
Live Data configuration, from AWDB, loaded to local memory.
|
Action
|
No action required.
|
805 |
Message
|
[message_error=%s]: JMS Command Spout failed to Initialize |
Severity
|
Error
|
Type
|
Raise
|
Description
|
JMS command spout failed to initialize. JMS (ActiveMQ) communication was not possible via Camel, and an exception
was thrown.
|
Action
|
Verify Live Data configuration, and restart system. If problem persists, contact tech. support.
|
807 |
Message
|
[message_error=%s]: JMS Command Spout failed to Close |
Severity
|
Error
|
Type
|
Raise
|
Description
|
JMS command spout failed to close JMS connection.
|
Action
|
No action required.
|
809 |
Message
|
[server_id=%s][tip_client_app_seqnum=%d][tip_server_app_seqnum=%d]: Invalid Application Sequence Number Received |
Severity
|
Error
|
Type
|
Raise
|
Description
|
During communication with CCE server, message sequence number was not in increasing order, and/or presented a gap. Possible
data loss.
|
Action
|
Gather logs and contact tech. support.
|
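To illustrate the condition message 809 describes, the sketch below checks that each received application sequence number is the expected next value and flags any gap or out-of-order value. The class and field names are illustrative only, not the product's actual implementation.

    public class AppSequenceCheck {
        private long lastSeqNum = -1;

        /** Returns true if the received sequence number is the expected next value. */
        public boolean accept(long receivedSeqNum) {
            boolean inOrder = (lastSeqNum < 0) || (receivedSeqNum == lastSeqNum + 1);
            if (!inOrder) {
                // Out-of-order or gapped sequence: possible data loss, the condition behind 809.
                System.err.println("Invalid Application Sequence Number Received: expected "
                        + (lastSeqNum + 1) + " but got " + receivedSeqNum);
            }
            lastSeqNum = receivedSeqNum;
            return inOrder;
        }

        public static void main(String[] args) {
            AppSequenceCheck check = new AppSequenceCheck();
            check.accept(1);   // first value accepted
            check.accept(2);   // in order
            check.accept(4);   // gap: reported as possible data loss
        }
    }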
813 |
Message
|
[server_id=%s][message_error=%s]: Spout runtime error |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Indicates event spout runtime error:
- Zookeeper connection object could not instantiate or connect.
- Establishment of JMS command queue listener failed.
- JMX configuration could not be written to Zookeeper.
|
Action
|
Verify Zookeeper service is up and running, and restart system. If problem persists, contact tech. support.
|
815 |
Message
|
[server_id=%s]: Spout lost connection to Zookeeper |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Connection to Zookeeper was severed, which might indicate that Zookeeper is down or was terminated. Note that a Zookeeper disconnection
will cause a Live Data cluster failover.
|
Action
|
Zookeeper is central to Storm and, as such, to Live Data; verify service is up and running. If disconnections are frequent,
contact tech. support.
|
816 |
Message
|
[server_id=%s]: Spout Connected Successfully to Zookeeper |
Severity
|
Informational
|
Type
|
Clear
|
Description
|
Connection to Zookeeper established.
|
Action
|
No action required.
|
819 |
Message
|
[server_id=%s]: Spout deactivated |
Severity
|
Critical
|
Type
|
Raise
|
Description
|
Event spout is deactivated due to a Live Data Cluster failover. One of the central components to Live Data is down. Central
components are: Zookeeper, ActiveMQ, and (pairs of) CCE servers (PG/Router).
|
Action
|
Verify overall system health, i.e., services are up, disk partitions have free space, the system is not CPU starved, not swapping,
not I/O bound, etc. Verify the network to PGs and Routers is healthy. Verify PGs and Routers are up and running. If the problem persists,
contact tech. support.
|
820 |
Message
|
[server_id=%s]: Spout activated |
Severity
|
Critical
|
Type
|
Clear
|
Description
|
Event spout transitioned to active state due to startup, or Live Data Cluster failover.
|
Action
|
No action required.
|
902 |
Message
|
[server_id=%s][value=%s]: Spout UCCE Configuration - Deployment Type Modified |
Severity
|
Informational
|
Type
|
Clear
|
Description
|
The deployment type changed from UCCE to PCCE, or vice versa. This WILL impact the feature set available in Live Data and requires
environment reconfiguration. The expectation is, in fact, that environment adjustments (UCCE to PCCE, or vice versa) took place
before the deployment type was changed.
|
Action
|
No action required.
|
905 |
Message
|
[server_id=%s][value=%s]: Spout UCCE Configuration - Spout end point configuration error |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Indicates Live Data EndPoint configuration is present, but data is inconsistent (values missing from tables, incomplete data,
etc.).
|
Action
|
Correct Live Data EndPoint Configuration.
|
907 |
Message
|
[server_id=%s][value=%s]: Spout UCCE Configuration - Spout TOS end point configuration error |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Live Data configuration is incomplete/incorrect and TOS protocol host/port is in error.
|
Action
|
Correct Live Data EndPoint Configuration.
|
1401 |
Message
|
[message_error=%s]: Error connecting to Zookeeper |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Zookeeper connection failure. Possibly Zookeeper service is down, or hung. This will cause Live Data to fail during startup,
or to failover if already up.
|
Action
|
Verify overall system health, i.e., services are up, disk partitions have free space, system is not CPU starved, not swapping,
not I/O bound, etc. If problem persists, contact tech. support.
|
1403 |
Message
|
[message_error=%s]: Error communicating with Zookeeper |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Attempt to write to, or read from, Zookeeper failed, but the error does not characterize Zookeeper to be down. The operation
is attempted a few times before the error is reported, so it is very likely Zookeeper connection will be restarted in short
order (which will trigger a Live Data Cluster failover).
|
Action
|
Verify overall system health, i.e., services are up, disk partitions have free space, system is not CPU starved, not swapping,
not I/O bound, etc. If problem persists, contact tech. support.
|
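Messages 1401 and 1403 both come down to whether a Zookeeper session can be established and used. The sketch below is a minimal connectivity check using the standard Apache ZooKeeper client; the connect string and timeouts are illustrative placeholders, not the deployment's actual values.

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ZookeeperConnectivityCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical connect string; substitute the Zookeeper address used by the deployment.
            String connectString = args.length > 0 ? args[0] : "localhost:2181";
            CountDownLatch connected = new CountDownLatch(1);
            ZooKeeper zk = new ZooKeeper(connectString, 15000, new Watcher() {
                @Override
                public void process(WatchedEvent event) {
                    if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                        connected.countDown();   // session established
                    }
                }
            });
            // If this times out, the symptoms match messages 1401/1403: the service is
            // down, hung, or unreachable from this node.
            if (connected.await(10, TimeUnit.SECONDS)) {
                System.out.println("Connected to Zookeeper at " + connectString);
            } else {
                System.err.println("Error connecting to Zookeeper at " + connectString);
            }
            zk.close();
        }
    }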
1011 |
Message
|
[operation_error_desc=%s]: Zookeeper Connection Error |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Reports an error/exception during an attempt to connect or operate against the Zookeeper instance. Depending on the error, a failed connection
to Zookeeper will be declared, and a Live Data Cluster failover will be triggered.
|
Action
|
Verify Zookeeper is up and running. Verify overall system health. If problem persists, contact tech. support.
|
1014 |
Message
|
[value=%s]: Cluster state update |
Severity
|
Informational
|
Type
|
Clear
|
Description
|
Indicates Live Data Cluster state.
|
Action
|
No action required.
|
1017 |
Message
|
[message_error=%s]: Error in cluster operations |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Indicates a failure during Cluster Peer spout initialization. Also reports failures during TOS request/response and cluster
spout stop.
|
Action
|
Verify overall system health. Verify network integrity between Live Data and CCE servers (PG, Router, etc). Verify CCE servers
(PG, Router, etc) are up and running. If problem persists, contact tech. support.
|
1023 |
Message
|
[operation_error_desc=%s]: Cluster Heartbeat subscriber start failed with cause |
Severity
|
Warning
|
Type
|
Raise
|
Description
|
Cluster Peer Spout failed to initialize the JMS (ActiveMQ) subscriber for exchanging cluster node state, which means cluster
failover will not be functional. This can only happen during system startup. The JMS (ActiveMQ) configuration is incorrect, or
the service is down.
|
Action
|
Verify Live Data configuration, verify ActiveMQ service is up and running, and check overall system health. If problem persists,
contact tech. support.
|
1024 |
Message
|
Cluster Heartbeat Subscriber started successfully |
Severity
|
Informational
|
Type
|
Clear
|
Description
|
Cluster Peer Spout successfully connected and subscribed to JMS (ActiveMQ) in order to exchange cluster node state.
|
Action
|
No action required.
|
1025 |
Message
|
[operation_error_desc=%s]: Cluster Publisher start failed with cause |
Severity
|
Warning
|
Type
|
Raise
|
Description
|
Cluster Peer Spout failed to initialize the JMS (ActiveMQ) publisher for exchanging cluster node state, which means cluster
failover will not be functional. This can only happen during system startup. The JMS (ActiveMQ) configuration is incorrect, or
the service is down.
|
Action
|
Verify Live Data configuration, verify ActiveMQ service is up and running, and check overall system health. If problem persists,
contact tech. support.
|
1026 |
Message
|
Cluster Publisher started successfully |
Severity
|
Informational
|
Type
|
Clear
|
Description
|
Cluster Peer Spout successfully connected and can publish to JMS (ActiveMQ) in order to exchange cluster node state.
|
Action
|
No action required.
|
1101 |
Message
|
[operation_error_desc=%s]: Cluster state machine encounters an error |
Severity
|
Error
|
Type
|
Raise
|
Description
|
The Cluster State Machine is in a state for which the received event is invalid.
|
Action
|
Gather logs and contact tech. support.
|
1107 |
Message
|
Cluster state machine activates RTR/PG spouts |
Severity
|
Informational
|
Type
|
Clear
|
Description
|
Indicates Cluster State Machine has satisfied all conditions to allow for event spouts to connect to CCE servers (PG, Router,
etc).
|
Action
|
No action required.
|
1109 |
Message
|
Cluster state machine deactivates RTR/PG spouts |
Severity
|
Critical
|
Type
|
Raise
|
Description
|
Indicates Cluster State Machine has detected a condition under which event spouts are not allowed to be connected to (or should
disconnect from) CCE servers (PG, Router, etc).
|
Action
|
No action required.
|
1201 |
Message
|
[descr=%s]: ActiveMQ connection state down |
Severity
|
Critical
|
Type
|
Raise
|
Description
|
Indicates ActiveMQ transitioned to down/disconnected state. This will trigger a cluster failover.
|
Action
|
Verify ActiveMQ service is up. Verify network integrity between cluster nodes. Verify overall system health.
|
1202 |
Message
|
[descr=%s]: ActiveMQ connection state up |
Severity
|
Critical
|
Type
|
Clear
|
Description
|
Indicates ActiveMQ transitioned to up/connected state.
|
Action
|
No action required.
|
1301 |
Message
|
[descr=Side-%s]: NetBridge connection state down |
Severity
|
Critical
|
Type
|
Raise
|
Description
|
Indicates ActiveMQ Netbridge transitioned to down/disconnected state. This might trigger a cluster failover depending on TOS
request/response to far-end cluster node.
|
Action
|
Verify ActiveMQ service is up. Verify network integrity between cluster nodes. Verify overall system health.
|
1302 |
Message
|
[descr=Side-%s]: NetBridge connection state up |
Severity
|
Critical
|
Type
|
Clear
|
Description
|
Indicates ActiveMQ Netbridge transitioned to up/connected state.
|
Action
|
No action required.
|
20101 |
Message
|
[db_object_type=%s][mesage_error=%s]: Error attempting operation with CCE Database |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Reports a database (typically AWDB) runtime error, identifying the object attempting DB access and a description of the failure cause,
including the SQL query (where appropriate). Examples include failure connecting to AWDB, failure loading tables into memory, failure to
access a Hibernate element, and failure creating a DBSession.
|
Action
|
Determine if AWDB is available, and verify Live Data configuration and CUIC, specifically where it pertains to AWDB connection
information on datasource tab in CUIC.
|
20103 |
Message
|
[message_error=%s]: Error attempting to connect to CCE Database |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Reports a database (typically AWDB) access error while retrieving/updating elements via Hibernate, executing an SQL query, or retrieving
a column value.
|
Action
|
Determine if AWDB is available, and verify Live Data configuration and CUIC, specifically where it pertains to AWDB connection
information on datasource tab in CUIC.
|
20107 |
Message
|
[message_error=%s]: Error attempting to read local address from CCE database for Cassandra connection |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Reports failure during retrieval of configuration element which would allow Live Data a connection to Cassandra DB.
|
Action
|
Verify Live Data configuration.
|
20201 |
Message
|
[db_ver_expected=%s][db_ver_read=%s]: CCE configuration database version mismatch |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Informs current version found in AWDB is not what Live Data expects to see. This indicates a schema mismatch. Live Data cannot
proceed since no data from AWDB can be retrieved. This condition is detected during Live Data startup only.
|
Action
|
Gather logs, and contact tech. support.
|
20304 |
Message
|
Error creating Cassandra Database connection |
Severity
|
Informational
|
Type
|
Raise
|
Description
|
Reports the failure, and its cause, during a connection attempt to the Cassandra DB, along with the Cassandra configuration values used
for the connection.
|
Action
|
Verify Live Data configuration, and ensure the Cassandra DB is up and running; otherwise, no action is required.
|
20305 |
Message
|
[message_error=%s]: Error interacting with the Cassandra Connection Pool |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Reports a failure during an attempt to obtain a connection from the Cassandra connection pool. Most likely the connection pool is depleted
and new connections to the Cassandra DB are not possible.
|
Action
|
Ensure the Cassandra DB is up. Verify that the Live Data configuration points to Cassandra correctly, and that the connection pool is large
enough.
|
20307 |
Message
|
[message_error=%s]: Error interacting Cassandra database |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Reports failure during attempt to read/write to Cassandra DB, and, if available, a description of the problem.
|
Action
|
Ensure Cassandra DB is up, verify Live Data configuration points to Cassandra correctly.
|
20309 |
Message
|
Error reading AWDB Configuration from Cassandra |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Reports failure, and cause, while reading AWDB Config from Cassandra.
|
Action
|
Verify Live Data configuration, ensure the Cassandra DB is up and running, and confirm that AWDBConfig is present using the CLI command
"show live-data aw-access".
|
20401 |
Message
|
[message_error=%s]: Error trapped on expected exception |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Live Data trapped an exception to which it has no recourse other than to report it and proceed.
|
Action
|
Gather logs and contact tech. support.
|
20402 |
Message
|
[message_error=%s]: Exception on CCMDB Connection Pool |
Severity
|
Error
|
Type
|
Raise
|
Description
|
Live Data exception while performing an operation on the CCM Database connection pool.
|
Action
|
Gather logs and contact tech. support.
|
10703 |
Message
|
Tick handler caught a runtime exception |
Severity
|
Error
|
Type
|
Clear
|
Description
|
The tick handler threw a runtime exception. The scenario is unknown.
|
Action
|
Contact tech. support.
|
10705 |
Message
|
interval processing threw an exception |
Severity
|
Error
|
Type
|
Clear
|
Description
|
The interval processor threw an exception. The scenario is unknown.
|
Action
|
Contact tech. support.
|