Self-Serve AI Support
Turned a legacy migration into an AI support tool, saving Cisco $15M annually.
Cisco's customer support was drowning in repetitive and trivial tickets that customers could theoretically resolve themselves. Rather than migrating the broken legacy system, I made the case to rebuild it as a purpose-built AI self-serve tool.
The result deflected 3% of all support cases, saving $15M annually in operational costs.
Pivoting a broken enterprise migration
Cisco's support model had almost no functional self-service. What little existed was too cumbersome to be useful, so customers defaulted to live agents, driving ticket volumes and operational costs steadily upward. When Cisco launched its $1B CX Cloud platform, the default plan was to lift and shift the decade-old case manager straight into it, preserving all the same friction on newer infrastructure.
I walked leadership through the full self-service experience, mapping where it broke down and how long it took before users gave up for live support. Business analysts projected $10–25M in annual savings if we rebuilt from scratch. Scrapping the migration meant discarding existing work and reversing a settled plan. I made the case it was worth it and secured SVP funding to do it.

Smaller scope, faster progress
The default ask was a "Generalist" AI assistant spanning the entire platform, which meant aligning PMs and engineers across 10 distinct modules. That's years of political gridlock before a single pixel ships.
I scoped the project down to a "Specialist" tool covering only the support-intake flow. Narrow enough to move fast, valuable enough to prove the concept. The generalist vision wasn't abandoned. A specialist tool integrates into a broader AI assistant via routing, making this an additive first step rather than a competing one.

Solving the "garbage-in" generative AI trap
My first assumption was that we needed a better chat interface. Research quickly killed that: support engineers, the receiving end of every customer issue, cited "lack of customer context" as their number one bottleneck. The interface wasn't the problem. The inputs were.
No matter how capable the LLM, vague inputs produce vague answers. I scrapped the chat concept entirely and reframed the design problem: the interface needed to extract accurate context from users before the AI could help them.

Rigid checklists over open-ended flexibility
Open-ended chat puts the burden of context entirely on the user, and enterprise admins under pressure give minimal input. I replaced freeform input with a structured checklist capped at the top 3 suggestions by LLM confidence. Users lost the ability to describe their situation in their own words. The LLM gained structured context it could actually use.
Each checked item was silently appended to the session context. When the user hit "Regenerate," they were unknowingly building the LLM a progressively richer prompt, producing more specific answers each round without any extra effort on their part.
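The mechanics of that loop can be sketched roughly like this. Everything here is illustrative, not production code: `SupportSession`, `Suggestion`, and the data shapes are assumptions made to show the pattern of capped, confidence-ranked suggestions feeding a progressively richer prompt.

```python
from dataclasses import dataclass, field


@dataclass
class Suggestion:
    text: str
    confidence: float  # model-reported confidence, 0.0-1.0


@dataclass
class SupportSession:
    issue: str
    checked_items: list[str] = field(default_factory=list)

    def check(self, item: str) -> None:
        # Each checked item is silently appended to the session context.
        self.checked_items.append(item)

    def build_prompt(self) -> str:
        # Every "Regenerate" sees a richer prompt than the last one,
        # with no extra typing from the user.
        context = "\n".join(f"- {c}" for c in self.checked_items)
        return f"Issue: {self.issue}\nConfirmed context:\n{context}"


def top_suggestions(candidates: list[Suggestion], cap: int = 3) -> list[Suggestion]:
    # Cap the checklist at the top 3 suggestions by LLM confidence.
    return sorted(candidates, key=lambda s: s.confidence, reverse=True)[:cap]
```

The point of the cap is the trade described above: the user loses freeform expression, but every interaction deterministically adds structured context the model can use.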

Closing the AI liability gap
Cisco's internal LLM, "Sherlock," was already highly capable, but locked behind engineers who manually approved every output to prevent network-breaking hallucinations. It couldn't reach customers without a safety layer.
I designed two UI circuit breakers: a hard-stop banner that routed users to a human ticket when hardware replacement was detected, and a dead-end fallback that gracefully terminated generation when suggestions ran out. Both states ended the AI experience before the user got a resolution. That was the cost. The alternative was hallucinations reaching a production network.
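The two breakers reduce to a small state decision, sketched below. The enum names and the keyword check are assumptions for illustration; the real hardware-replacement detection was not a string match.

```python
from enum import Enum, auto


class Outcome(Enum):
    CONTINUE = auto()
    HARD_STOP = auto()   # banner routes the user to a human ticket
    DEAD_END = auto()    # fallback gracefully terminates generation


# Illustrative stand-in for the hardware-replacement detector.
HARDWARE_KEYWORDS = ("rma", "replace the unit", "hardware replacement")


def next_state(llm_output: str, remaining_suggestions: int) -> Outcome:
    text = llm_output.lower()
    if any(k in text for k in HARDWARE_KEYWORDS):
        # Hard stop: hardware issues must never be AI-resolved.
        return Outcome.HARD_STOP
    if remaining_suggestions == 0:
        # Dead end: stop generating rather than risk a hallucination.
        return Outcome.DEAD_END
    return Outcome.CONTINUE
```

Both terminal states fire before the user gets a resolution, which is exactly the cost the design accepts.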

Transparent citations bought emotional trust
Enterprise admins are skeptical by default. Accuracy alone doesn't move them. Even a correct answer gets dismissed without visible proof of where it came from.
The tool was restricted to Cisco's verified internal documentation. Rather than hiding this, I surfaced it as bi-directional, Perplexity-style footnotes that mapped every suggestion back to its source. Making the sourcing visible proved the system wasn't hallucinating. Median trust scores improved from 2/5 to 4/5 in moderated usability studies run with the UX research team.
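"Bi-directional" here just means the mapping works both ways: each suggestion lists its source documents, and each footnote lists every suggestion that cites it. A minimal sketch, assuming each suggestion already carries the IDs of the verified docs it was grounded in (the names are hypothetical):

```python
def build_footnotes(suggestions: dict[str, list[str]]) -> dict[str, list[str]]:
    # Invert suggestion -> sources into source -> suggestions, so a
    # footnote can link back to every suggestion that cites it.
    by_source: dict[str, list[str]] = {}
    for suggestion_id, sources in suggestions.items():
        for doc in sources:
            by_source.setdefault(doc, []).append(suggestion_id)
    return by_source
```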

Impact
Saving Cisco $15M/year
Business analysts conservatively projected $15M in annual operational savings, driven by deflecting 3% of support cases through the AI self-serve tool. Rebuilding from scratch instead of migrating meant discarding existing work and reversing a settled plan. But it let us intercept the problem at the source: getting users to resolve issues themselves before a ticket was ever opened.