Case Study

AI Shopping Assistant

Designing conversational AI that serves forty million users. Navigating the tension between guidance and autonomy, using machine learning to refine personas, and establishing trust principles for AI-mediated commerce.

Timeline 2024-Present
Role Design Director
Company Digikala
Focus AI Product Design, Research Strategy
AI Shopping Assistant
40M

Monthly users served

47→6

Persona refinement via AI

Mixed-methods

Research

20+

Product teams coordinated

The Challenge

E-commerce promised unlimited choice. Instead, it created decision paralysis. At Digikala, Iran's largest online retailer serving 40 million monthly users, we watched customers spend excessive time browsing yet struggle to commit to purchases. The friction wasn't technical. It was cognitive.

Traditional product discovery (filters, categories, search) works when users know what they want. But increasingly, customers needed guidance, not just navigation. They wanted recommendations that understood context, preferences, constraints. They wanted conversation, not taxonomy.

The opportunity was clear: use AI to help users navigate complexity. The challenge: do it without manipulating choice, eroding trust, or creating dependency. We needed AI that enhanced human judgment rather than replaced it.

Rethinking Personas

We started with what we thought we knew. The existing persona framework had 47 distinct customer segments. Years of research accumulation without consolidation. Not actionable. You can't design for 47 mental models. You can barely design for four.

Rather than arbitrary consolidation, I proposed using AI to solve an AI product design problem. We applied clustering algorithms to behavioral data from millions of user sessions: browsing patterns, search queries, purchase timing, category preferences, price sensitivity, support interactions.

The algorithm identified six primary archetypes representing over 90% of behavioral variance. Smart Delegator (trusts AI, values efficiency). Context-Driven Shopper (needs situational guidance). Hesitant Analyst (requires detailed info, fears regret). Visual Explorer (browses by aesthetic). Digital Immigrant (needs simplified assistance). Curated Explorer (seeks editorial recommendations).
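The segmentation step can be sketched as a plain k-means pass over per-session feature vectors. This is a minimal illustration, not the actual pipeline: the feature names, blob parameters, and deterministic initialization below are all invented for the example.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means: assign each point to its nearest centroid, then re-average."""
    # Deterministic init for the sketch: evenly spaced rows (copy to avoid mutating X)
    centroids = X[:: max(1, len(X) // k)][:k].copy()
    for _ in range(iters):
        # Distance from every session vector to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Illustrative feature vectors per session:
# [browse_depth, search_specificity, price_sensitivity, review_reads]
rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal(loc=c, scale=0.1, size=(40, 4))
    for c in ([0.2, 0.9, 0.3, 0.1],   # e.g. "Smart Delegator"-like behavior
              [0.8, 0.2, 0.7, 0.9])   # e.g. "Hesitant Analyst"-like behavior
])
labels, _ = kmeans(X, k=2)
```

The real work, of course, was not the clustering itself but choosing behavioral features and validating that the resulting clusters mapped to interpretable archetypes.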

These weren't demographic segments. They were behavioral patterns that could shift contextually. The same user might be Smart Delegator for groceries but Hesitant Analyst for electronics. This shaped our design: the AI needed to adapt to mode, not just user.

Designing the Conversation

The technical challenge was substantial. Integrating language models with our product catalog, managing context across multi-turn conversations, ensuring accuracy at scale. But the design challenge was harder: how do we create conversation that feels helpful rather than manipulative?

Three core principles. Transparency over persuasion (AI explains reasoning, doesn't just present conclusions). User agency over optimization (final decisions stay with users; AI suggests but never insists). Graceful uncertainty (when AI doesn't know, it says so clearly).

These manifested in specific patterns. Every recommendation included visible reasoning ("Based on your interest in outdoor cameras, highly rated for night vision"). Every suggestion enabled drill-down (users could see which features drove the recommendation, adjust those parameters). Every interaction preserved exit paths (users could return to traditional browse/search anytime).
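The pattern of pairing every recommendation with inspectable reasoning can be expressed as a simple data contract. All names and values here are illustrative, not the production schema:

```python
from dataclasses import dataclass

@dataclass
class Reason:
    feature: str   # which attribute drove the suggestion
    evidence: str  # what the user can inspect on drill-down
    weight: float  # contribution, exposed so users can adjust parameters

@dataclass
class Recommendation:
    product_id: str
    reasons: list               # every suggestion carries visible reasoning
    adjustable: bool = True     # user can tweak the driving parameters
    exit_path: str = "/search"  # traditional browse/search is always reachable

    def explain(self) -> str:
        # Surface the strongest reason as the visible one-line rationale
        top = max(self.reasons, key=lambda r: r.weight)
        return f"Based on {top.feature}: {top.evidence}"

rec = Recommendation(
    product_id="cam-2041",
    reasons=[Reason("interest in outdoor cameras", "matches 12 recent views", 0.7),
             Reason("night vision", "rated 4.8/5 in reviews", 0.9)],
)
```

The point of the contract is that reasoning, adjustability, and an exit path are required fields: a recommendation without them simply cannot be constructed.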

We confronted a fundamental tension: how much initiative should AI take? Too passive becomes a glorified search box. Too aggressive feels pushy. We resolved this through mode detection. AI inferred intent from context (exploratory vs. focused shopping) and adjusted conversational stance. Exploratory mode: gentle suggestions, broad recommendations. Focused mode: direct assistance, specific alternatives.
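Mode detection can be sketched as a small heuristic over session signals. The thresholds and signals below are assumptions for illustration; a production system would learn these from data:

```python
def detect_mode(query: str, session_views: int, category_switches: int) -> str:
    """Infer exploratory vs. focused intent from context (illustrative thresholds)."""
    tokens = query.split()
    # Long or model-number-like queries suggest the user knows what they want
    specific = any(any(ch.isdigit() for ch in t) for t in tokens) or len(tokens) >= 4
    if specific and category_switches <= 1:
        return "focused"       # direct assistance, specific alternatives
    if session_views > 10 or category_switches > 3:
        return "exploratory"   # gentle suggestions, broad recommendations
    return "focused" if specific else "exploratory"

# The conversational stance the assistant adopts per mode
stance = {"exploratory": "suggest broadly", "focused": "answer directly"}
```

Usage: `detect_mode("sony a7 iv body only", 3, 0)` yields `"focused"`, while a vague query across many categories yields `"exploratory"`.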

Building Trust

Trust became our primary constraint. In traditional e-commerce, users trust the platform to represent products accurately. In AI-mediated commerce, they must also trust the AI's recommendations. Different design approach required.

Layered transparency mechanisms. Confidence indicators (when uncertain, AI said so explicitly). Source attribution (recommendations linked to specific features, reviews, purchasing patterns, not algorithmic black boxes). Correction paths (easy ways to tell AI it misunderstood, with visible learning). Privacy controls (clear data usage, granular opt-outs).
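The confidence-indicator principle (when uncertain, say so explicitly) reduces to mapping model confidence into honest language rather than hiding it. Thresholds and wording below are illustrative:

```python
def confidence_label(score: float) -> str:
    """Map model confidence to explicit language; the AI says when it's unsure."""
    if score >= 0.85:
        return "Strong match for what you described."
    if score >= 0.6:
        return "Likely a fit, but double-check the specs."
    # Below the floor, hand control back to the user instead of bluffing
    return "I'm not confident here; you may want to browse this category yourself."
```

The key design choice is the bottom branch: low confidence routes users back to traditional browsing rather than padding a weak recommendation with persuasive copy.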

The most controversial decision: pricing and inventory visibility. Product teams wanted AI to optimize for margin and inventory clearance. Subtly steer users toward high-margin or overstocked items. I pushed back hard. The moment users perceive AI serving business interests over theirs, trust collapses irreparably. Bright line established: AI optimizes for user goals, full stop. Business optimization happens through other mechanisms.

Accessibility in Dynamic Interfaces

Conversational interfaces present unique accessibility challenges. Screen readers handle traditional UI well but struggle with dynamic, streaming responses. We developed custom ARIA patterns ensuring the assistant worked seamlessly with assistive technologies.

All features met WCAG 2.1 AA. Users could navigate entirely via keyboard. Visual feedback paired with text alternatives. Loading states communicated progress clearly, not just visually. Not compliance theater. Recognition that AI interfaces, precisely because they're conversational, must work for users who can't see visual cues or use pointing devices.

Deeper accessibility question: does AI assistance itself create barriers? If power users become dependent on recommendations, do we reduce their platform fluency? If occasional users only interact through AI, do they lose ability to browse independently? Questions without clear answers, but acknowledging them shaped our design. We always preserved traditional navigation alongside AI assistance.

Measuring What Matters

We measure success through multiple lenses. Task completion rates (functional effectiveness). Sentiment scores (satisfaction). But also unintended consequences. Are users becoming overly dependent? Are we reducing product discovery diversity? Are certain segments systematically under-served?

Early results suggest AI helps without constraining. Users who engage with recommendations still browse 30% of shown products before deciding. Indicates consideration rather than blind acceptance. Product discovery diversity remained stable. Users aren't collapsing into filter bubbles. Satisfaction highest among users combining AI assistance with traditional browsing. Lowest among those using either exclusively.

These patterns reinforce our philosophy: AI should augment exploration, not replace it. The goal isn't efficiency maximization. It's judgment support.

Organizational Challenges

Launching AI products at scale requires orchestrating multiple teams: ML engineering, product, design, legal, support, marketing. I established cross-functional rituals. Weekly design critiques with ML engineers reviewing generated responses. Biweekly alignment with product leads prioritizing features. Monthly reviews with legal and trust teams assessing risk.

Hardest challenge: establishing design authority over AI behavior. Engineers naturally gravitated toward optimizing model performance metrics (accuracy, response time, coherence). But user experience depends on subtler qualities: tone, pacing, knowing when not to respond. I continuously advocated for design judgment having equal weight with technical metrics. Required speaking ML engineering's language (understanding precision/recall tradeoffs, latency constraints, model architecture) while translating into user experience terms.

Looking Forward

This project continues evolving. We're exploring multimodal interactions (voice, text, visual search). Developing better support for complex decisions spanning multiple sessions (gift-giving, large purchases). Investigating cultural fluency (understanding not just language but social context).

The fundamental question remains: how do we use AI to enhance human judgment rather than replace it? This shapes not just product features but the entire relationship between technology and commerce.

We're designing for a future where AI mediates many digital interactions. Getting power dynamics right (ensuring users retain agency, maintaining transparency, building sustainable trust) matters far more than technical sophistication. Companies understanding this will build products people choose to use. Those that don't will build products people tolerate until better alternatives emerge.