AI Studio
Empowering Enterprise Search with Configurable AI Document Classification

Overview
Modern enterprise search isn’t just about retrieving documents — it’s about understanding and organising knowledge at scale. At Workplace AI, we identified a growing need for customers to manage how their data is classified and surfaced according to tailored business logic. AI Studio was conceived as a centralised platform to help AI developers and subject matter experts configure, manage, and refine document classification models and the resulting insights that power smarter search and discovery.
In essence, AI Studio is the classification backbone of Workplace AI’s enterprise search experience — enabling organisations like Anglian Water and Aiimi to extract structure from complex document sets and surface more relevant results.
Product: Workplace AI (Enterprise AI/SaaS)
Role: Lead UX/UI Designer (Research, Design, Prototyping)
Duration: 8 Sprints
Team: Product leadership, engineering leads, stakeholders, SMEs
Strategic Problem
Enterprise clients rarely view search as a standalone feature — they see it as a gateway to organisational knowledge. To build trust and deliver value, Workplace AI needed a way to let customers control how AI understands and categorises their data. This is where document classification becomes strategic:
- Customers needed certainty and visibility into how documents are labelled.
- AI models needed configuration and tuning controls rather than opaque outputs.
- Classification results had to be meaningful, controllable, and continuously improvable.
However, this ambition collided with reality: the scope and complexity of the application were significant — it was larger than most features we had designed to date — with deep technical dependencies and a broad range of user needs (from technical AI developers to business SMEs).
This required a deliberate and phased strategic design and delivery plan.
Scope
Given the scale and risk involved, we broke the project into three strategic phases:
Phase 1:
- Core functionality for configuring taxonomies and classification models
- Surfacing classified results within Enterprise Search
- Basic management of classification rules (add/edit/remove)
Phase 2:
- Advanced controls for model enhancement
- Cluster labelling features
- Analytics dashboards for performance monitoring
Phase 3:
- A correction area where subject matter experts (SMEs) could review and validate classification changes suggested by users
This phased roadmap enabled us to focus design and development effort on maximum-value features first while managing complexity incrementally.
Research & Discovery
The project started with deep discovery work — a critical decision given the domain complexity.
Discovery Workshops
I ran multiple 3-hour discovery sessions with users, stakeholders, and engineers to:
- Map out all functional requirements
- Uncover technical dependencies
- Understand customer workflows and personas
- Identify risks and unknowns
We then grouped and prioritised requirements using functional clusters and voting exercises, establishing a clear delivery sequence that informed our phased plan.


Competitive & Standards Research
Parallel to workshops, we explored industry tools and standards to derive best practices around classification UIs — especially in complex admin applications where clarity and control are non-negotiable.
Lo-Fidelity & Prototyping
Ideation & Wireframes
With prioritised requirements, I translated user stories and “How Might We” statements into sketch concepts and low-fidelity wireframes. This included:
- Initial structure of classification configuration screens
- Taxonomy builders
- Model parameter controls
I worked iteratively with the UX team using techniques like Crazy 8s to explore variations and avoid locking in premature visual assumptions.

User Testing
We conducted regular user testing sessions — one-to-one interviews, focus groups, and surveys — tailored to different personas. These sessions helped validate assumptions and reveal usability gaps.
Key early insights included:
- Users needed clear segmentation between model controls and result outputs
- Terminology had to reflect real job tasks rather than pure AI jargon
- Feedback loops (e.g., if a classification was wrong) needed to be actionable and rewarding
Iterative Refinement
Feedback was categorised by persona and function, enabling targeted adjustments that kept design aligned with real workflows rather than aesthetic priorities alone.

Design & Delivery Collaboration
I thrive on teamwork and value my co-workers' input and feedback. I hosted multiple reviews and regroups at each stage of design, especially during this final stage. These sessions included stakeholders, developers, and the UX team. Constructive criticism was encouraged, and feedback was always measured against the original user story and requirement.
Once wireframes were validated, I moved into responsive hi-fidelity design using our existing design system — adding components where necessary. These designs captured real patterns for:
- Desktop, tablet, and mobile resolutions
- Visual hierarchy for complex configuration screens
- Interactive behaviours like hover states and transitions
I hosted frequent design reviews with stakeholders, developers, and UX peers to stress-test assumptions and ensure alignment with technical realities and sprint priorities.

Handover
Before development, we hosted Three Amigos sessions (product manager, lead developer, and me) to walk through flows, anticipate implementation constraints, and shape dev tickets. I then authored detailed Jira stories with flow diagrams and interaction specs.

Impact & Outcome
Business Impact
AI Studio became a cornerstone feature of Workplace AI, enabling clients to:
- Tailor document classification logic
- Surface richer search insights
- Improve search relevance over time based on feedback mechanisms
Since launch, multiple new subscriptions have closed with clients selecting Workplace AI specifically for its AI Studio capabilities — tangible proof of product relevance.
User Impact
Users — especially those in knowledge management roles — reported:
- Faster identification and categorisation of documents
- Increased confidence in search results
- More efficient workflows for data governance
Enabling users to correct classifications not only improved accuracy but also created a feedback loop that enhanced long-term AI performance.
Reflection & Lessons Learned
Strategic Planning is a Force Multiplier
Breaking the project into phases was critical — it reduced risk, prioritised value, and prevented feature bloat from overwhelming users and developers alike.
Discovery is Not Optional
Given how domain-specific enterprise classification is, early stakeholder alignment and technical understanding defined what was feasible, not just desirable.
Think in Terms of People, Not Screens
Designing for SMEs and technical users is as much about language, trust, and workflow as it is about interface. Aligning taxonomy and terminology with user mental models made a huge difference.
Collaboration Drives Quality
Regular cross-functional sessions and feedback loops ensured alignment and identified edge cases early — reducing costly rework.