6) NETCONF edit workflow (candidate vs running)
You’re automating configuration changes on an IOS-XE device that advertises the :candidate capability. You want to make multiple changes atomically and only apply them when validated. What is the correct NETCONF sequence?
A. edit-config (target=running) → commit
B. edit-config (target=candidate) → discard-changes → commit
C. edit-config (target=candidate) → validate → commit
D. lock running → edit-config (target=running) → unlock
Detailed explanation:
When a device supports the candidate datastore, best practice for multi-change transactions is to stage edits in candidate, optionally validate them, and then commit to make them active. This gives you an atomic, safe workflow: if validation fails, nothing touches running. Applying edit-config directly to running (A or D) takes effect immediately and reduces safety. discard-changes (B) removes pending candidate edits and would not be followed by commit in a normal success path. Many platforms also advertise :validate, so you can catch schema or semantic errors before committing.
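Sketch (Python ncclient, candidate → validate → commit; the host, credentials, and the example hostname payload are placeholders, not part of the question):
from ncclient import manager

# Illustrative payload only -- substitute your actual YANG-modeled change set
config = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
    <hostname>edge-router-1</hostname>
  </native>
</config>
"""

with manager.connect(host="10.0.0.1", port=830, username="admin",
                     password="secret", hostkey_verify=False) as m:
    m.edit_config(target="candidate", config=config)  # stage in candidate
    m.validate(source="candidate")                    # requires :validate
    m.commit()                                        # apply atomically to running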
7) gNMI subscription choice for near-real-time counters
You need continuous, low-latency interface counter updates (e.g., octets, errors) from IOS-XE using OpenConfig. Which gNMI subscription is best?
A. ONCE to /interfaces/interface/state/counters
B. POLL every 60 seconds to /interfaces/interface/state/counters
C. STREAM (ON_CHANGE) to /interfaces/interface/state/counters
D. STREAM (SAMPLE) every 10 seconds to /interfaces/interface/state/counters
Detailed explanation:
gNMI supports ONCE, POLL, and STREAM subscription modes. For time-series telemetry you want STREAM. Interface counters are monotonically increasing numeric values that do not necessarily trigger a change event per increment; ON_CHANGE can miss intermediate granularity or behave inconsistently depending on platform semantics. STREAM (SAMPLE) with a defined sample interval (e.g., 10 s) provides a predictable cadence and is widely used for NMS/TSDB ingestion. ONCE (A) is a single snapshot. POLL (B) reintroduces client-driven polling and higher request overhead compared with a server-pushed stream.
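Sketch (Python, assuming the pygnmi library; the target address, credentials, and insecure transport are placeholders for a lab setup, and sample_interval is expressed in nanoseconds):
from pygnmi.client import gNMIclient

# Stream interface counters every 10 seconds (SAMPLE mode)
subscribe = {
    "subscription": [
        {
            "path": "openconfig-interfaces:interfaces/interface/state/counters",
            "mode": "sample",
            "sample_interval": 10_000_000_000,  # nanoseconds
        }
    ],
    "mode": "stream",
    "encoding": "json",
}

with gNMIclient(target=("10.0.0.1", 57400), username="admin",
                password="secret", insecure=True) as gc:
    for update in gc.subscribe2(subscribe=subscribe):
        print(update)  # hand each notification to your TSDB/collector pipeline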
8) Cisco DNA Center task follow-up flow
You POST an operation to Cisco DNA Center’s intent API that returns a taskId. Minutes later, you want the final result and any detailed log. What’s the correct follow-up sequence?
A. GET /dna/intent/api/v1/task/{taskId} → if progress shows a fileId, then GET /dna/intent/api/v1/file/{fileId}
B. GET /dna/intent/api/v1/network-device/task/{taskId} → GET /dna/intent/api/v1/logs/{taskId}
C. GET /dna/system/api/v1/task/{taskId} only; results are embedded in resultSet
D. Re-POST the same request body with taskId to retrieve output
Detailed explanation:
In DNAC, long-running operations are asynchronous. The initial POST returns a taskId, which you poll via GET /dna/intent/api/v1/task/{taskId} to obtain status. When complete, the task’s progress often includes a fileId that points to detailed output or artifacts. You then retrieve that with GET /dna/intent/api/v1/file/{fileId}.
Other paths in B/C/D don’t represent the standard intent/task pattern. Re-POSTing (D) would trigger a new task, not fetch results.
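Polling sketch (Python requests; base_url, token, and task_id are assumed to exist already, and the shape of the progress field varies by operation):
import json, requests, time

headers = {"X-Auth-Token": token}   # token obtained from POST /dna/system/api/v1/auth/token
task_url = f"{base_url}/dna/intent/api/v1/task/{task_id}"

while True:
    task = requests.get(task_url, headers=headers).json()["response"]
    if task.get("endTime"):         # populated once the task has finished
        break
    time.sleep(5)                   # simple poll interval

if task.get("isError"):
    raise RuntimeError(task.get("failureReason") or task.get("progress"))

# For many operations, progress is a JSON string that carries the fileId
file_id = json.loads(task["progress"]).get("fileId")
result = requests.get(f"{base_url}/dna/intent/api/v1/file/{file_id}", headers=headers)
print(result.text)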
9) Handling rate limits & pagination (Meraki-style REST)
Your script lists organization clients and occasionally hits API rate limits. You also need to fetch all pages. Which strategy is most robust?
A. Ignore rate-limit headers; just sleep a fixed 1 second and loop until success
B. Read Retry-After (or backoff headers), exponentially back off, then continue; for pagination follow the Link header (rel="next") until absent
C. Increase page size to the maximum and assume one request will always succeed
D. Add parallel threads to reduce overall time and overwhelm the limit quickly
Detailed explanation:
A resilient client must honor server-provided backoff signals. Many Cisco APIs (e.g., Meraki) expose headers that indicate remaining requests and when to retry. Respecting Retry-After (when present) and using exponential backoff with jitter reduces thundering-herd effects and avoids bans.
For pagination, standards-based APIs include an HTTP Link header with rel="next". Keep following the next URL until it disappears to guarantee complete enumeration. Fixed sleeps (A) are brittle; maximizing page size (C) doesn’t remove rate limits; parallelizing (D) typically makes limits worse.
Tip (Python requests sketch):
import requests, time, random

url = first_page_url
while url:
    r = requests.get(url, headers=auth_headers, timeout=30)
    if r.status_code == 429:  # rate limited
        retry = int(r.headers.get("Retry-After", "1"))
        time.sleep(retry + random.uniform(0, 0.5))  # backoff with jitter
        continue
    r.raise_for_status()
    process(r.json())
    # follow RFC5988-style Link header
    next_link = None
    link = r.headers.get("Link", "")
    for part in link.split(","):
        if 'rel="next"' in part:
            next_link = part[part.find("<") + 1:part.find(">")]
            break
    url = next_link
10) Choosing the right OAuth 2.0 grant (Webex integration)
You’re building a backend service (no user present) that posts operational alerts into a Webex space. Which OAuth 2.0 flow fits best?
A. Authorization Code Grant with refresh tokens
B. Implicit Grant from a single-page app
C. Client Credentials Grant
D. Device Authorization Grant
Detailed explanation:
For machine-to-machine (server-to-server) APIs where no resource owner is interacting at runtime, the Client Credentials grant is designed for the job. Your service authenticates with its client_id/client_secret to obtain a Bearer token and uses “application” permissions granted to the integration.
Authorization Code (A) is ideal when acting on behalf of a user after interactive consent. Implicit (B) is deprecated for new designs and targeted in-browser clients. Device Authorization (D) helps TVs/IoT devices with limited input, but still involves a user. For headless alert posting, Client Credentials is the clean, least-privilege choice.
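Sketch (Python requests, generic client-credentials token request; the token URL, scope, CLIENT_ID/CLIENT_SECRET, and ROOM_ID are placeholders -- check the Webex documentation for the exact endpoint and scopes your integration needs):
import requests

# Placeholder token endpoint; substitute your provider's OAuth 2.0 token URL
token_url = "https://idp.example.com/oauth2/token"

resp = requests.post(
    token_url,
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "spark:messages_write",  # illustrative scope name
    },
    timeout=30,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# Use the Bearer token to post an alert into a Webex space (room)
requests.post(
    "https://webexapis.com/v1/messages",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"roomId": ROOM_ID, "text": "Disk usage above 90% on core-sw-1"},
    timeout=30,
)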