Leading human-centred design, empowered by AI

Overview
As Design Lead for Capgemini Invent's Innovation Lab, I led our exploration into how Generative AI (GenAI) and Agentic AI can empower human-centred design.
This involved exploring (1) how to design AI tools and (2) how to integrate them into design workflows.
Key highlights:
- Created a prototype for AURA (Automatic Resource Assistant), an AI tool that turns long, complex documents into clear, concise content.
- Launched a Designathon for over 40 designers, encouraging safe, hands-on experimentation with AI tools like v0, ChatGPT and Copilot.
- Published an official Capgemini article on Agentic AI’s potential to transform global citizen services.
Key Facts
- Project duration: 1.5 years
- Team size: 8 designers and researchers
- Role: Design Lead – GenAI prototyping, designathons, and thought leadership
Challenges
- Communicating the value of speculative AI proofs-of-concept (PoCs)
- Learning whilst doing, as AI is a fast-moving field
- Cross-functional collaboration with engineering whilst balancing other projects
Approach
- Working with proxy users to define AI use-cases
- Designing AI interfaces with a focus on transparency, trust, and usability
- Running immersive AI designathons to rapidly upskill multidisciplinary teams
Problem
This case study focuses on AURA. Change management consultants needed a faster, more engaging way to communicate complex HR policies. AURA sought to do just that.
User pain points
- Lengthy, jargon-heavy HR policies slowed delivery
- Manual distillation of key messages consumed hours
- Inconsistent outputs reduced engagement across channels
Business challenges
- Slow turnaround for critical change communications
- Difficulty maintaining tone and clarity across formats
- Low visibility of key updates among employees
Opportunities
- Automate summarisation of complex documents
- Standardise outputs to improve clarity and reach
- Free up consultant time for higher-value work
Persona: Rani – the Change Management Consultant
Rani helps organisations handle change. She often turns long HR and policy documents into engaging formats, like emails, intranet posts, and posters. She needs a way of speeding this up without losing accuracy.
Receiving
Rani receives lengthy workplace policy documents from clients. They can be hundreds of pages long, packed with complex language and legal terms.
Extracting
She reads through each document in detail, manually highlighting and copying relevant sections into a separate working file for later rewriting.
Rewriting
Rani rewrites the extracted content in plain language, adjusting tone, structure, and length to match the intended audience, often rechecking for compliance.
Reviewing
She proofreads, formats, and finalises the summaries before sharing them with colleagues or clients. The whole process can take several days per document.
"These policies are too long for busy employees to digest."
"I need summaries that are both accurate and engaging."
"I wish tailoring content for different audiences were effortless."
"It's finally done. That took longer than it needed to."
Process
The AURA prototype was developed through an iterative, evidence-based approach. We began with model selection, testing Claude Sonnet against alternative LLMs on criteria such as context length, summarisation accuracy, tone control, and processing speed. This ensured we chose the most reliable model for distilling lengthy HR and policy documents into clear, engaging formats.
Scoping & research
- Mapped the challenges consultants face when summarising lengthy HR and policy documents, through interviews and workflow analysis.
Model evaluation
- Tested Claude Sonnet and alternative LLMs against criteria such as summarisation accuracy, tone control, and processing speed (see the sketch after these steps).
System prompt
- Primed the model with sample documents and a tailored system prompt to ensure accuracy, avoid hallucinations, and adapt tone to different formats.
Prototyping
- Explored both chat-style and agentic document-pane interfaces using Figma and API integrations to assess usability and efficiency.
Testing & iteration
- Gathered feedback from consultants and clients, refining UI, tone handling, and summarisation quality for real-world adoption.
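To make the model evaluation concrete, below is a minimal sketch of how such a comparison could be scripted. It assumes each candidate model sits behind a simple summarise(document) function and that accuracy and tone are judged by placeholder scoring functions; the actual harness, rubrics, and model wrappers from the project are not reproduced here.

```python
import time
from dataclasses import dataclass
from typing import Callable

# Hypothetical wrapper: each candidate is a function that takes a document
# and returns a summary. In practice these would call Claude Sonnet or an
# alternative LLM behind the same signature.
Candidate = Callable[[str], str]

@dataclass
class EvalResult:
    model_name: str
    accuracy: float         # proxy for summarisation accuracy (0-1)
    tone_score: float       # proxy for tone control (0-1)
    seconds_per_doc: float  # proxy for processing speed

def evaluate(model_name: str,
             summarise: Candidate,
             documents: list[str],
             score_accuracy: Callable[[str, str], float],
             score_tone: Callable[[str], float]) -> EvalResult:
    """Run one candidate over the sample documents and average its scores."""
    accuracies, tones, timings = [], [], []
    for doc in documents:
        start = time.perf_counter()
        summary = summarise(doc)
        timings.append(time.perf_counter() - start)
        accuracies.append(score_accuracy(doc, summary))
        tones.append(score_tone(summary))
    n = len(documents)
    return EvalResult(model_name, sum(accuracies) / n,
                      sum(tones) / n, sum(timings) / n)

if __name__ == "__main__":
    # Stub candidate and naive scorers purely for illustration.
    stub = lambda doc: doc[:200]
    naive_accuracy = lambda doc, summary: 1.0 if summary in doc else 0.0
    naive_tone = lambda summary: 0.5
    print(evaluate("stub-model", stub,
                   ["A long HR policy document about flexible working..."],
                   naive_accuracy, naive_tone))
```

Because every candidate sits behind the same Candidate signature, one loop can compare models on identical documents and surface the trade-offs between accuracy, tone, and speed.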

The architecture diagram shows how AURA connects to MongoDB, routes prompts through model services, handles authentication, and logs activity for audit and safety. We primed the model with sample documents and applied a carefully crafted system prompt to maintain accuracy, avoid hallucinations, and tailor outputs to channels such as emails, posts, and posters.
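As an illustration only, the sketch below shows one way a single summarisation request could move through that architecture: fetch the stored document from MongoDB, call the model with the system prompt, and write an audit entry. The database and collection names, prompt wording, and model identifier are placeholders rather than the production configuration.

```python
from datetime import datetime, timezone

import anthropic                 # official Anthropic SDK
from pymongo import MongoClient

# Placeholder connection details and collection names for illustration.
db = MongoClient("mongodb://localhost:27017")["aura"]
client = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "You summarise HR and policy documents for busy employees. "
    "Use only the supplied document; if something is not stated, say so "
    "rather than guessing. Match the tone and length of the requested "
    "channel (email, intranet post, or poster)."
)

def summarise_document(doc_id: str, channel: str, user_id: str) -> str:
    """Fetch a stored document, summarise it for a channel, and log the call."""
    doc = db.documents.find_one({"_id": doc_id})

    response = client.messages.create(
        model="claude-sonnet",   # placeholder id; the deployed model is configured elsewhere
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[{
            "role": "user",
            "content": f"Channel: {channel}\n\nDocument:\n{doc['text']}",
        }],
    )
    summary = response.content[0].text

    # Audit trail: who asked for what, when, and which model answered.
    db.audit_log.insert_one({
        "user_id": user_id,
        "doc_id": doc_id,
        "channel": channel,
        "model": response.model,
        "timestamp": datetime.now(timezone.utc),
    })
    return summary
```

The instruction to stay within the supplied document is the part of the system prompt doing the hallucination-avoidance work described above.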

I explored two interface concepts: a chat-style interface for conversational refinement and an agentic design with a document viewing pane for side-by-side reading, annotation, and summary generation. Rapid Figma prototypes and AI API integrations allowed us to test both paradigms quickly and make evidence-based design decisions.

The prototype screens show prompt presets, draft output, and lightweight edits with guardrails, designed to keep users in control while speeding up first-draft creation.

Refinements prioritised clarity and control: tighter labels, better empty states, simplified layouts, and consistent patterns across the generate, review, and publish steps.

Testing sessions surfaced pain points around trust and explainability. We added clearer consent, source attribution, and reversible actions to build confidence.
Solution
We built a focused MVP that proved value without added UI complexity. We selected Claude Sonnet for its long context window and reliable summarisation, primed it with representative HR and policy documents, and used a targeted system prompt to control tone and format.

The document viewing pane and chat interface were removed from scope due to technical complexity and time constraints. Summarisation ran behind the scenes, and users simply downloaded channel-ready outputs such as one-pagers, intranet posts, or poster copy. Neither fine-tuning nor in-app editing was included in the MVP.
This validated the core outcome quickly: accurate, consistent summaries that reduced manual effort and improved clarity, ready for future iterations to add a viewer, highlights, and conversational refinement.
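A minimal sketch of how those channel-ready downloads could be produced behind the scenes is below; the channel names, word limits, styles, and file format are illustrative assumptions, and summarise stands in for whichever model call the MVP made.

```python
from pathlib import Path
from typing import Callable

# Hypothetical channel specifications; the real formats were shaped with
# change management consultants rather than hard-coded like this.
CHANNELS = {
    "one_pager":     {"max_words": 400, "style": "structured headings and bullet points"},
    "intranet_post": {"max_words": 200, "style": "friendly, plain language"},
    "poster_copy":   {"max_words": 60,  "style": "short, punchy key messages"},
}

def build_channel_prompt(channel: str, document_text: str) -> str:
    """Compose the user prompt for one channel-ready output."""
    spec = CHANNELS[channel]
    return (
        f"Summarise the document below as {channel.replace('_', ' ')} copy.\n"
        f"Keep it under {spec['max_words']} words, in a {spec['style']} style.\n\n"
        f"Document:\n{document_text}"
    )

def export_outputs(document_text: str,
                   summarise: Callable[[str], str],
                   out_dir: str = "outputs") -> list[Path]:
    """Generate and save one downloadable file per channel."""
    Path(out_dir).mkdir(exist_ok=True)
    paths = []
    for channel in CHANNELS:
        summary = summarise(build_channel_prompt(channel, document_text))
        path = Path(out_dir) / f"{channel}.md"
        path.write_text(summary, encoding="utf-8")
        paths.append(path)
    return paths
```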
Results
The AURA proof of concept explored how RAG-powered AI could automatically summarise lengthy documents for public and private sector users. Through research, prototyping, and stakeholder testing, we validated the core value: AI could deliver accurate summaries behind the scenes, removing the need for a complex in-app viewer or editing tools at MVP stage.
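To show the retrieve-then-summarise pattern behind such a proof of concept, here is a simplified sketch that uses TF-IDF retrieval as a stand-in for the embedding search a production RAG pipeline would rely on; the chunk size, retrieval depth, and prompt wording are assumptions for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def chunk(text: str, size: int = 800) -> list[str]:
    """Split a long document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def retrieve(chunks: list[str], query: str, k: int = 4) -> list[str]:
    """Return the k chunks most relevant to the query (TF-IDF stand-in for embeddings)."""
    vectoriser = TfidfVectorizer().fit(chunks + [query])
    chunk_vectors = vectoriser.transform(chunks)
    query_vector = vectoriser.transform([query])
    scores = cosine_similarity(query_vector, chunk_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [chunks[i] for i in sorted(top)]   # keep original document order

def build_rag_prompt(document: str, query: str) -> str:
    """Retrieve the most relevant passages and wrap them in a summarisation prompt."""
    excerpts = "\n---\n".join(retrieve(chunk(document), query))
    return (
        "Using only the excerpts below, summarise the policy in answer to: "
        f"{query}\n\nExcerpts:\n{excerpts}"
    )
```

Only the retrieved excerpts reach the model, which is what keeps very long documents within the model's context window.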
While the project was paused before launch due to shifting priorities, it delivered clear learning on technical feasibility, user expectations, and MVP scoping. These insights have since informed other AI initiatives and strengthened our approach to designing responsible, human-centred AI tools.
Conclusion
From concept to proof-of-concept, AURA showed how Retrieval-Augmented Generation could transform dense HR and policy documents into concise, engaging formats.
While the MVP was simplified and never shipped, the process delivered valuable internal learnings, reusable design patterns, and a clearer view of the technical and ethical considerations for deploying AI in client contexts.
Choosing the right model for the job
AURA began with a broad exploration of LLMs and embeddings, testing both open-source and proprietary models for accuracy, speed, and cost. We learned that model selection must be driven not just by technical benchmarks, but by the complexity of the documents, the domain language, and the required factual precision. This is equally relevant in a manufacturing or 'Design & Make' context, where the assistant may need to interpret lengthy technical manuals, engineering specs, or safety standards without losing nuance.
Designing components for AI-powered experiences
Unlike conventional products where content is static, AI outputs are probabilistic. This meant designing for uncertainty — with clear loading states, confidence indicators, and mechanisms for users to refine queries. In AURA, this took the form of persistent query history, expandable answer sections, and inline citations. These patterns are directly applicable to manufacturing-focused tools like Autodesk Assistant, where iterative questioning and quick validation of source material are essential.
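As a sketch of what designing for uncertainty can look like at the data level, the hypothetical structures below show how an answer object might carry the status, confidence, citations, and history those UI patterns depend on; the field names are illustrative, not AURA's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    source_id: str   # which document the passage came from
    excerpt: str     # the quoted passage shown inline
    location: str    # e.g. "Section 4.2, page 37"

@dataclass
class AssistantAnswer:
    query: str
    summary: str                                  # collapsed view shown by default
    detail: str = ""                              # revealed when the answer is expanded
    citations: list[Citation] = field(default_factory=list)
    confidence: float = 0.0                       # drives the confidence indicator
    status: str = "pending"                       # "pending" | "streaming" | "complete" | "failed"

@dataclass
class QueryHistory:
    answers: list[AssistantAnswer] = field(default_factory=list)

    def add(self, answer: AssistantAnswer) -> None:
        self.answers.append(answer)               # persisted so users can revisit and refine queries
```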
Balancing automation with human oversight
In designing AURA, we found that AI is most valuable when it accelerates human decision-making, not replaces it. For summarising complex policy or technical documentation, we built in transparency features like source linking, full-document context views, and export options so users could verify outputs. The same principle applies in manufacturing workflows — whether checking compliance data or interpreting CAD-related standards, the human-in-the-loop remains essential for quality and accountability.