ServiceNow CIS-EM Practice Test
killexams.com
Certified Implementation Specialist - Event Management
https://killexams.com/pass4sure/exam-detail/Servicenow-CIS-EM
What is the role of the 'Processing Order' in event processing rules within ServiceNow Event Management?
A. It defines the sequence in which rules are applied
B. It determines the event source
C. It categorizes events for reporting
D. It sets the notification frequency
Answer: A
Explanation: The 'Processing Order' defines the sequence in which event processing rules are applied. This is important for ensuring that rules are executed in the correct order to achieve the desired outcomes.
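The ordering behavior can be illustrated with a plain-JavaScript sketch. The rule names and fields below are hypothetical and not ServiceNow API; the point is only that rules with a lower order value run first.

```javascript
// Hypothetical illustration: event processing rules run in ascending
// processing order, so a lower order value is evaluated first.
const rules = [
  { name: "Set severity", order: 200, apply: (e) => { e.severity = 3; } },
  { name: "Normalize source", order: 100, apply: (e) => { e.source = e.source.toLowerCase(); } },
];

function processEvent(event) {
  // Sort by processing order before applying, mirroring the engine's behavior.
  rules
    .slice()
    .sort((a, b) => a.order - b.order)
    .forEach((rule) => rule.apply(event));
  return event;
}

const result = processEvent({ source: "SOLARWINDS", severity: 5 });
```

Because "Normalize source" has order 100, it runs before "Set severity" (order 200), even though it is listed second.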
In a smart city initiative's ServiceNow instance, an Event-to-Incident rule auto-creates incidents for severity > 3 traffic events when source = "iot_traffic" and additional_info['congestion_index'] > 8 (parsed from JSON), bound to cmdb_ci_traffic_light, with a GlideAggregate count of em_event records > 5 in 10 minutes, excluding weekends via gs.dateGenerateInstance().getDayOfWeek() != 1 && != 7. Which criteria trigger the rule?
A. Bound to cmdb_ci_traffic_light
B. congestion_index > 8
C. Event count > 5 in 10 minutes
D. Not a weekend day
Answer: A,B,C,D
Explanation: The rule qualifies severity > 3 events from the "iot_traffic" source, parses congestion_index > 8 from the JSON payload, binds to cmdb_ci_traffic_light for the affected infrastructure, aggregates more than 5 em_event records in 10 minutes for pattern detection, and filters out weekend days with the day-of-week check to focus on operational hours.
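The qualification logic from this scenario can be sketched in plain JavaScript. The field names mirror the scenario; none of this is ServiceNow API, and the event count and day-of-week values are passed in as assumptions rather than queried.

```javascript
// Hypothetical sketch of the rule's qualification criteria.
function qualifies(event, recentEventCount, dayOfWeek) {
  const info = JSON.parse(event.additional_info);
  return (
    event.source === "iot_traffic" &&
    event.severity > 3 &&
    info.congestion_index > 8 &&
    recentEventCount > 5 &&            // em_event count in the last 10 minutes
    dayOfWeek !== 1 && dayOfWeek !== 7 // exclude weekend days
  );
}

const evt = { source: "iot_traffic", severity: 4, additional_info: '{"congestion_index": 9}' };
const shouldTrigger = qualifies(evt, 6, 3);   // weekday, burst of events
const weekendCase = qualifies(evt, 6, 7);     // same event on a weekend day
```

All four criteria must hold together, which is why the answer includes every option: dropping any one condition (here, running on day 7) makes the rule skip the event.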
When configuring event management in ServiceNow, which parameter in the JSON payload is essential for identifying the source of an event?
A. Severity
B. Metric
C. Node
D. Timestamp
Answer: C
Explanation: The "node" parameter in the JSON payload is essential for identifying the source of an event. It specifies which CI the event is related to and helps in pinpointing the issue.
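A minimal payload sketch shows where the node field sits. The field names follow the em_event table's common keys (source, node, type, severity, additional_info); the host and metric values are invented for illustration.

```javascript
// Sketch of an inbound event payload; "node" identifies the CI the
// event originated from, which drives CI binding.
const payload = {
  source: "SolarWinds",
  node: "webserver01.example.com", // the event's source CI
  type: "High CPU",
  severity: "1",
  additional_info: JSON.stringify({ cpu_pct: 97 }),
};

const cpuReading = JSON.parse(payload.additional_info).cpu_pct;
```

Without a usable node value, the event cannot be matched to a CI, so severity or timestamp alone would not localize the issue.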
Integrating with pre-built ServiceNow Store connector for Splunk via IntegrationHub, events flood due to unthrottled HEC tokens. What spoke actions and ETL maps control volume for Event Management?
A. Configure token expiry in HEC with rate limit 100 events/min in the spoke connection.
B. Use ETL map with filter condition additional_info.event_count > 50 to batch aggregate.
C. Set flow throttling to 500ms delay per event in designer properties.
D. Enable dedupe in spoke with key 'splunk_sid' + timestamp for HEC duplicates.
E. Log throttled events to custom table em_throttle_log for monitoring.
Answer: A,C,D
Explanation: HEC token with 100/min limit caps Splunk pushes, preventing floods. Flow delay of 500ms spaces ingestion sustainably. Dedupe on splunk_sid + timestamp eliminates HEC retries, optimizing for EM processing.
In the context of ServiceNow, what is a key advantage of using the MID Server for Discovery?
A. It facilitates secure data collection from external sources.
B. It allows for the creation of custom dashboards.
C. It automates the incident management process.
D. It provides a user-friendly interface for administrators.
Answer: A
Explanation: A key advantage of using the MID Server for Discovery is that it facilitates secure data collection from external sources. This capability is crucial for maintaining an accurate and up-to-date CMDB.
Nested JSON arrives from an API: {"cluster": {"nodes": [{"id": "N1", "health": "degraded", "metrics": {"cpu": 95}}]}}. Which scripting approaches flatten the metrics into key/value pairs with encryption?
A. var node = payload.cluster.nodes[0]; additional_info['cpu'] = node.metrics.cpu; var encHealth = encrypt(node.health);
B. Use JSONPath $..metrics.cpu for extraction
C. Validate nesting depth <5 to avoid parse errors
D. Set severity from health: degraded=3
E. Bulk event creation for multi-node arrays
Answer: A,C,D
Explanation: Direct access flattens nested metrics to KV. Depth validation prevents stack overflows. Health-to-severity maps cluster events.
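The flattening from option A can be sketched in plain JavaScript (the encryption call is omitted, since encrypt() in the option is not a standard function). The health-to-severity mapping from option D is included as an assumption from the scenario.

```javascript
// Sketch: flatten the nested cluster payload into flat key/value pairs
// by direct property access, as option A does.
const payload = { cluster: { nodes: [{ id: "N1", health: "degraded", metrics: { cpu: 95 } }] } };

const node = payload.cluster.nodes[0];
const additionalInfo = {
  node_id: node.id,
  health: node.health,
  cpu: node.metrics.cpu, // flattened from the nested metrics object
};

// Map health to a severity, as option D suggests (degraded => 3).
const severity = node.health === "degraded" ? 3 : 5;
```

Direct access like this is simple but assumes the nesting shape is known; that is why validating nesting depth (option C) matters before parsing arbitrary payloads.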
A company is preparing to implement ServiceNow Event Management. They need to activate the Event Management plugin. What is the correct command to activate the plugin in the ServiceNow instance?
A. plugin.activate('com.glide.itom.event_management')
B. activate.plugin('com.glide.itom.event_management')
C. glide.plugin.activate('com.glide.itom.event_management')
D. enable.plugin('com.glide.itom.event_management')
Answer: C
Explanation: The correct command to activate the Event Management plugin in ServiceNow is `glide.plugin.activate('com.glide.itom.event_management')`, which ensures that all necessary features for Event Management are available.
You are tasked with creating a business rule that logs the details of events when they are created. Which of the following methods should you use to access the event's `message` field in the business rule?
A. `current.message;`
B. `current.message.getValue();`
C. `current.getValue('message');`
D. `current.getMessage();`
Answer: A
Explanation: The correct way to access the event's `message` field is simply using `current.message;`, which directly retrieves the value of that field.
A healthcare system processes patient monitoring events with additional_info as {'device_id': 'MED-456', 'patient_id': 'PAT-789', 'metric': 'heart_rate', 'value': 120, 'timestamp': '2025-10-05T14:30:00Z', 'ci_link': 'cmdb_ci_medical_device:sys_id_xyz'}. Association to cmdb_ci_medical_device must trigger impact on cmdb_ci_service_auto for critical care services. What advanced features enable this?
A. Use Regex in CI Field Matcher: current.cmdb_ci = /cmdb_ci_medical_device:(\w+)/.exec(additional_info.ci_link)[1] to extract the sys_id
B. Enrich the event with gs.now() relative to additional_info.timestamp for time-based severity adjustment in the transform
C. Set up an Alert Management Rule with condition JSON.parse(additional_info).value > 100 to bind and propagate via 'Affects::Affected by' relations
D. Configure a custom property evt_mgmt.additional_info.parse_depth=3 to handle nested timestamps in JSON
Answer: A, C
Explanation: Regex in the CI Field Matcher extracts the sys_id from the ci_link string in additional_info, enabling direct association to the medical device CI. The Alert Management Rule condition parses the JSON value and promotes only high heart_rate events, binding them and propagating impact through 'Affects::Affected by' cmdb_rel_ci relations to critical care services.
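The regex extraction from option A and the promotion condition from option C can be exercised in plain JavaScript. The additional_info values come straight from the scenario.

```javascript
// Sketch: pull the sys_id portion out of the ci_link string, as the
// regex in option A does.
const additionalInfo = {
  device_id: "MED-456",
  value: 120,
  ci_link: "cmdb_ci_medical_device:sys_id_xyz",
};

const match = /cmdb_ci_medical_device:(\w+)/.exec(additionalInfo.ci_link);
const sysId = match ? match[1] : null; // "sys_id_xyz"

// The alert rule condition from option C: promote only high readings.
const promote = additionalInfo.value > 100;
```

Guarding with `match ? … : null` avoids the TypeError that option A's bare `[1]` access would throw if ci_link ever failed to match.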
In a scenario where you need to monitor AWS resources, which key metric should be monitored using AWS CloudWatch to ensure optimal performance?
A. CPU Utilization
B. Network Latency
C. Disk I/O
D. Memory Usage
Answer: A
Explanation: Monitoring CPU Utilization is critical for assessing the performance and health of AWS resources, allowing for proactive management.
During a merger, an organization merges two ServiceNow instances, requiring Discovery to repopulate the CMDB with reconciled CIs from both sources while maintaining event correlation integrity. Which advanced steps facilitate this without data loss?
A. Use the 'cmdb_ci_service.reconcile' transform map to merge business service CIs based on the 'correlates_with' attribute from pre-merger event rules.
B. Run the 'Identify Duplicate CIs' job with the 'duplicate_ci_threshold' set to 0.8 to flag and merge overlapping infrastructure CIs before re-running Discovery schedules.
C. Export event rules from the source instance via XML, importing them with updated CI identifiers using the 'sys_update_xml' loader, preserving mapping logic.
D. Configure the MID Server failover group with 'mid.server.failover.priority=1' for the primary cluster to handle reconciliation probes without interruption.
Answer: A, C
Explanation: The cmdb_ci_service.reconcile transform map uses correlates_with attributes derived from existing event rules to merge business services accurately, retaining correlation paths. Importing event rules via XML with sys_update_xml ensures mappings to CIs are preserved, avoiding recreation of complex identification logic tied to Discovery-populated data.
When handling payload data in ServiceNow, where is the raw data typically found in an event record?
A. event_type
B. payload_data
C. additional_info
D. event_source
Answer: C
Explanation: The raw data from events is typically found in the `additional_info` field, which contains key-value pairs relevant to the event.
A telecom's 5G base stations send events via MID Server with type=operational, severity=5, node=cellTowerID, and a description containing signal-strength JSON. A rule at the default order 140 sets strength_dbm from the JSON dbm value, but em_alert_aggregate groups alerts tower-wide during spectrum interference. Which configurations handle the signal data?
A. Dictionary strength_dbm integer, range -140 to 0 for validity.
B. Aggregation group_type='Text based', keywords 'interference' + towerID.
C. Order 130 rule: if strength_dbm < -100, set severity=2 for interference priority.
D. Bind to cmdb_ci_base_station, use 'Affects::Affected by' for spectrum CI.
Answer: B, D
Explanation: Text-based keyword grouping combined with the tower ID isolates interference events in em_alert_aggregate. Binding to the base station CI with 'Affects::Affected by' relationships correlates spectrum impact, optimizing the default rule without changing the dictionary range or priority logic.
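The range check from option A and the interference-priority rule from option C can be sketched together in plain JavaScript. The field names come from the scenario; the -112 dBm reading is an invented example, and none of this is ServiceNow API.

```javascript
// Sketch: parse the signal-strength JSON from the event description and
// map a weak reading to a higher-priority severity.
const description = '{"dbm": -112}';
const strengthDbm = JSON.parse(description).dbm;

// Validate against the dictionary range from option A (-140 to 0).
const valid = strengthDbm >= -140 && strengthDbm <= 0;

// Option C: below -100 dBm indicates interference, escalate to severity 2.
const severity = valid && strengthDbm < -100 ? 2 : 5;
```

Running the escalation at order 130, before the default order-140 rule, is what lets the interference severity take precedence.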
A system administrator is reviewing the configuration of the Event Management engine and notices that the event processing time is increasing. What could be a potential cause?
A. The engine is processing too few events.
B. The correlation rules are too simplistic.
C. The MID Server is underutilized.
D. There are too many events being processed simultaneously.
Answer: D
Explanation: An increase in event processing time can be caused by too many events being processed simultaneously, leading to performance bottlenecks in the Event Management engine.
Events with correlation_id 'net-uuid-333' are not linking in the Impact Tree due to a missing rel_type in cmdb_rel_ci. You need to force UUID generation and refresh the tree. Which commands apply?
A. Event Mapper: Set additional_info.net_uuid = gs.generateGUID() for net source
B. Fix Script: Update cmdb_rel_ci set rel_type_id = 'depends on' where parent=net_ci
C. gs.eventQueue('impact.refresh', {id: 'net-uuid-333', full: true})
D. Property em.impact.auto_refresh = true
Answer: B,C
Explanation: Updating rel_type fixes the broken dependency relationships, and queuing the impact.refresh event rebuilds the tree for the UUID. The mapper would only ensure future linking, and the auto-refresh property is less precise than a targeted manual refresh.
What is a common challenge faced when using Discovery data for event management?
A. Too many events being processed at once
B. Insufficient documentation on event rules
C. Lack of user access to event dashboards
D. Inconsistent naming conventions for configuration items
Answer: D
Explanation: A common challenge when using Discovery data for event management is inconsistent naming conventions for configuration items. This inconsistency can lead to difficulties in correlating events with the correct CIs.
A company wants to extract specific fields from a JSON payload received from a third-party application. Which transform script function should be used to parse the JSON and retrieve the "status" field?
A. getFieldValue()
B. transformJSON()
C. parseJSON()
D. JSON.parse()
Answer: D
Explanation: The `JSON.parse()` function is used to convert a JSON string into a JavaScript object, allowing you to access specific fields like "status" in the payload.
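A short runnable example of the parse-and-extract step; the payload contents are invented for illustration.

```javascript
// JSON.parse() converts the raw payload string into an object so a
// transform script can read individual fields such as "status".
const raw = '{"status": "degraded", "host": "app01", "code": 503}';
const parsed = JSON.parse(raw);
const status = parsed.status;
```

JSON.parse() is standard JavaScript, which is why it works inside ServiceNow server-side scripts, whereas the other options are not built-in functions.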
Azure Sentinel events received via API carry encrypted claims in a token field. Rules decrypt them with an Azure Key Vault integration script, filter for 'threat_level' == 'high', and map the result to alert.threat. Which options ensure secure handling?
A. Script: var claims = azureDecrypt(current.token, vault_url, client_id); alert.threat = claims.threat_level == 'high' ? claims.details : null;
B. Filter: current.source == 'sentinel' AND token.length > 0; use vault_call with timeout 5s.
C. Property 'em.azure.vault_timeout' = 5000ms; fallback claims = {threat: 'medium'}.
D. Audit rule: logDecryptAttempt(current.sys_id, success); if fail, quarantine event.
Answer: B, D
Explanation: The Filter validates token presence and sets vault call timeout to prevent hangs. The audit rule logs attempts, quarantining failures to track security issues without alerting on invalid data.
What is the role of the "Event Correlation" feature in ServiceNow Event Management?
A. To automatically close incidents based on event status
B. To assign events to specific teams for resolution
C. To generate reports on historical event data
D. To group related events into a single alert for easier management
Answer: D
Explanation: The Event Correlation feature groups related events into a single alert, simplifying management and allowing teams to focus on resolving the underlying issues.