What Building ACES Taught Me About GenAI
Aroon Jham, Founding Member & Co-chair
Published on: 13-March-2026

If your sellers need a training manual to use your AI tool, the AI tool may be the problem.

One thing I’ve learned from building GenAI workflows is this: the best AI is invisible inside the workflow.

A lot of GenAI tools are already smart enough. That is not the problem. The problem is that too many of them still expect the user to know how to prompt them, steer them, interpret them, and stitch the output into actual work. What actually works is designing AI so the user barely has to think about the AI at all. That is the difference between a clever demo and a usable system.

I saw this firsthand while building ACES (Agentic Customer Enhanced Signals).

The original problem was straightforward, but painful: sales reps had to manually research a company, catch up on recent news, connect those signals to the right Thomson Reuters products, come up with smart door-openers, and prepare for objections. It was slow, inconsistent, and a poor use of seller time. But the burden was not felt equally. A seasoned rep could often lean on experience. A newer rep had no such luxury. That is part of what made ACES powerful. It did not just save time. It helped level the playing field.

ACES was built to orchestrate that entire workflow. It automated search, established product fit with evidence, generated smart outreach questions, and surfaced likely objections based on patterns from prior customer conversations. But the most important design choice was not the prompts or even the models. It was the experience.
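The orchestrated workflow described above can be sketched as a sequence of stages, each enriching the output of the one before it. Everything below is illustrative: the function names, return shapes, and sample data are hypothetical stand-ins, not the actual ACES implementation (each stage would really invoke an LLM agent).

```python
# Hypothetical sketch of a staged research pipeline like the one described.
# Each stage would normally call an LLM agent; here they are stubbed.

def search_company(name: str, website: str) -> dict:
    """Stage 1: gather recent news and company facts."""
    return {"company": name, "website": website, "signals": ["expanded legal team"]}

def match_products(brief: dict) -> dict:
    """Stage 2: map signals to products, keeping the supporting evidence."""
    brief["product_fit"] = [
        {"product": "Example Product", "evidence": brief["signals"][0]}
    ]
    return brief

def draft_questions(brief: dict) -> dict:
    """Stage 3: turn product fit into outreach door-openers."""
    brief["questions"] = [
        f"How are you handling your {fit['evidence']}?" for fit in brief["product_fit"]
    ]
    return brief

def surface_objections(brief: dict) -> dict:
    """Stage 4: attach likely objections seen in past conversations."""
    brief["objections"] = ["We already have a vendor for this."]
    return brief

def run_pipeline(name: str, website: str) -> dict:
    """Run all four stages in order, threading one brief through them."""
    brief = search_company(name, website)
    for stage in (match_products, draft_questions, surface_objections):
        brief = stage(brief)
    return brief
```

The point of the shape is that the seller only supplies the two inputs to `run_pipeline`; every intermediate hand-off stays inside the system.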

For reps, ACES lived inside Microsoft Teams. They entered the company name and website. Done. For SDRs, ACES surfaced smart questions directly inside the Salesforce lead object within a Salesforce campaign. No extra tool. No context switching. No “go learn this new AI interface.”

That mattered.

In version 1, we built ACES with CrewAI. It gave us a fast start, but real-time research on a single company could take up to 20 minutes. That is a lifetime for a busy seller. In version 2, we rebuilt the orchestration with LangGraph. With 30+ agents working in sequence, tighter flow control and state management became essential. We also reduced token consumption by 40% and enabled batch processing, so sellers could submit multiple companies at once and simply get an email when the research was done.
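The batch-processing change can be sketched as a delegate-and-notify loop: queue several companies, run the full pipeline on each, then fire one notification at the end. This is a minimal plain-Python sketch of that shape only; the function names are hypothetical, `research_one` stands in for the full multi-agent run, and `notify_by_email` stands in for a real mail integration.

```python
# Hypothetical batch-submission sketch: a seller queues several companies,
# the pipeline processes them all, and one notification fires when done.

def research_one(company: str) -> dict:
    """Stand-in for the full multi-agent research run on one company."""
    return {"company": company, "status": "done"}

def notify_by_email(seller: str, results: list[dict]) -> str:
    """Stand-in for a real email integration; returns the message it would send."""
    return f"{seller}: {len(results)} research briefs ready"

def run_batch(seller: str, companies: list[str]) -> str:
    """Research every queued company, then notify the seller once."""
    results = [research_one(c) for c in companies]  # could run concurrently
    return notify_by_email(seller, results)
```

Because the seller's interaction ends at submission, long per-company runtimes stop being a waiting problem; the notification, not the progress bar, becomes the interface.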

That changed the experience from “wait for AI” to “delegate the work.”

And that is the bigger lesson: the value of GenAI is not just in generating content. It is in orchestrating work behind the scenes so well that the user experiences progress, not technology.

Results speak louder than architecture diagrams: ~90% reduction in research time, 4,000+ companies researched in 3 months, and stronger outreach quality based on seller feedback.

CrewAI helped us start fast. LangGraph helped us scale. But the real unlock was designing the system so the AI disappeared into the workflow.

That, to me, is what good GenAI looks like.