PROJECT CLAP:
Supporting students from
classroom to career
Creating a common tool to foster transparent collaboration between ecosystem actors, allowing them to support and engage youth in their transition from school to work.

Overview
CLAP is a charity program that aims to help Hong Kong’s youth discover their passions and abilities by exploring multiple pathways to a fulfilling adulthood. By engaging students, non-engaged youths, teachers, social workers, parents, and employers, CLAP widens the discourse about the definitions of “success”, “work” and “talent”.

Problem
How might we improve and enable collaboration among CLAP's user types (students, staff, social workers, and researchers) to accomplish their respective goals?

Solution
Design and build a sustainable, cross-disciplinary ecosystem to support the needs of CLD program users.
――――
Discovery
Receiving the baton
Most planning work was complete, including product vision alignment with stakeholders and epic timelines. Thanks to existing relationships, client communication was smooth from the start and was maintained thereafter. The calendar below highlights regular client meetings to showcase work in progress and receive ongoing feedback.
project timeline

Rough week-by-week calendar of staff onboarding, epics, and main activities. This was the design-team-only version and does not include the simultaneous work from the BA and Dev teams, which followed an agile workflow.

Playing catch-up
As a project a decade in the making, the ecosystem is robust and content-heavy. I trusted my teammates to supply me with the right information at the right time, and learned the details naturally along the way (i.e. client guidance documents, brand guidelines, glossaries, etc.). For now, I focused on understanding the stakeholders' visions and the purpose of the project.
As you can see from the Information Architecture below, it's a very complex system.
sample IA zoomed out

Sample IA, zoomed out

sample IA zoomed in

Sample IA, zoomed in

To ensure we didn't lose sight of the bigger picture, we set some design principles for ourselves. This way, we would know how to prioritize and make design decisions if blockers arose later.
design principles for CLAP

When it comes to accessibility, we set a standard for ourselves of meeting at least WCAG 2 Level AA, knowing that designers have a responsibility to be as inclusive as possible, giving users equal opportunities instead of creating barriers that restrict them. At the same time, implementing these best practices would also improve the usability of the site overall.

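One concrete AA requirement is a contrast ratio of at least 4.5:1 for normal-sized text; the hedged TypeScript helper below (illustrative only, not tooling from the project) applies the WCAG luminance and contrast formulas to check it:

```typescript
// Illustrative helper (not from the project) checking the WCAG 2 Level AA
// contrast requirement of 4.5:1 for normal-sized text.
type Rgb = [number, number, number];

// sRGB channel (0-255) -> linear light, per the WCAG relative luminance definition.
function relativeLuminance([r, g, b]: Rgb): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(fg: Rgb, bg: Rgb): number {
  const [lighter, darker] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// Example: dark grey (#333333) text on a white background passes AA for body text.
const passesAA = contrastRatio([51, 51, 51], [255, 255, 255]) >= 4.5; // true
```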
――――
WIREFRAMING
Building blocks
Using the IA and user task requirements, we started low-fidelity wireframing, epic by epic. The order of epics was decided according to priority and efficient execution (for example, it made sense to complete the home page last, as its purpose was to be a dashboard for the website's other features), while taking into account the ease of tech implementation. These decisions were also informed by the expertise of the dev team and solution architect, then finally set into motion by our PM.
brainstorming activity/event states

Not only did events have varying types (one-time/recurring, private/public, offline/online, capped/open, etc.), but the status in which an organizer would view them inside their account also varied (draft, published, live, ended, completed).

Our design lead quickly executed our first epic, the login and account setup flows, which are standard for web portals of this kind. Immediately after, we tackled one of the biggest epics, CLD Activities. This epic was the core of the program, from creating career learning activities and counselling sessions with students to executing all related event management tasks for admins.

Before jumping into screen-level layouts, we wrote out all event attributes to think through the structure of the information first.
sticky notes of decided event status codes
Eventually, after rounds of brainstorming, discussion, and deliberation, we came to a consensus about what these event states would be, based on what made the most sense for the primary users: teachers and students.
This would later become a guide for how we created event cards and what information, filters, and order might show up on them.
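To make the outcome concrete, here is a hedged TypeScript sketch (illustrative only, not the team's actual data model) of the event attributes and organizer-facing states described above, along with one way a card's available actions could follow from its status:

```typescript
// Illustrative types for the event attributes brainstormed above (not the shipped model).
type EventOccurrence = 'one-time' | 'recurring';
type EventVisibility = 'private' | 'public';
type EventMode = 'offline' | 'online';
type EventCapacity = 'capped' | 'open';

// Organizer-facing lifecycle states settled on during the brainstorm.
type EventStatus = 'draft' | 'published' | 'live' | 'ended' | 'completed';

interface CldEvent {
  id: string;
  title: string;
  occurrence: EventOccurrence;
  visibility: EventVisibility;
  mode: EventMode;
  capacity: EventCapacity;
  status: EventStatus;
  startsAt: Date;
  endsAt: Date;
}

// Hypothetical example of how an event card's actions could key off the status.
const cardActions: Record<EventStatus, string[]> = {
  draft:     ['Edit', 'Publish', 'Delete'],
  published: ['Edit', 'Unpublish', 'View sign-ups'],
  live:      ['Take attendance'],
  ended:     ['Record outcomes'],
  completed: ['View report'],
};
```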
events landing page wireframes

Wireframing process - in which organized chaos and rough drafts are a must. Using Miro, we could comment, leave virtual sticky notes, and annotate each other's sketches, which was great for collaboration.

Why start low-fi
The value of being non-precious with sketched ideas is that we can rapidly dump out all our raw thoughts without judgement or overthinking, which massively fuels our creativity.
The next step is to look through the sketches with a critical lens to identify what's working and what isn't. The screen we end up with is often a collage of the best parts from different team members. After more iteration, it becomes the refined, intuitive screen we strive for.
wireframes draft three

Wireframes for CLD activities - final draft.

To create higher fidelity screens, we set up grids with appropriate margins and gutters, fit them to different screen sizes, and set some general rules for our design elements: for example, all component sizes used multiples of 8, and rounded corners followed similar mathematical rules. This way, the objects on our frames would all snap perfectly to the grid we created. As standard, we of course applied principles of design such as proximity, balance, contrast, and white space.
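As a rough illustration of those rules (the token names and values below are hypothetical, not our actual setup), the spacing and radius scale might be expressed as design tokens so every component stays on the grid:

```typescript
// Hypothetical design tokens (not our actual values) illustrating the
// "multiples of 8" rule, so every element snaps to the layout grid.
const GRID_UNIT = 8;

const spacing = {
  sm: GRID_UNIT,      // 8
  md: GRID_UNIT * 2,  // 16
  lg: GRID_UNIT * 3,  // 24
  xl: GRID_UNIT * 4,  // 32
} as const;

// Corner radii follow the same arithmetic so cards, inputs, and buttons feel related.
const radius = {
  sm: GRID_UNIT,      // 8
  lg: GRID_UNIT * 2,  // 16
} as const;

// Component sizes are built from the same unit, e.g. a 40px-tall button.
const controlHeight = GRID_UNIT * 5; // 40
```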
We didn't yet have a design system in place for our completely new product, so we had to build and define one as we went. Consistency was therefore always the top priority: component code can be easily reused for engineering efficiency, and it is easier for users to recognize design patterns and digest information.
events landing page high fidelity screens
Working with design systems
We used Ant Design as our design system for the project, extracting basic modular pieces and customizing components as needed. While using pre-set components can come with many advantages (e.g. a much faster start, without tediously building from scratch), we realized as we got to more complex epics that the system's existing components did not always suit the needs of our screen designs. Mimicking the art direction of an existing system while creating a new, visually consistent component takes some thinking and crafting.
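As a rough example of what that customization can look like (a minimal sketch assuming Ant Design v5's ConfigProvider theme API; the values and components here are illustrative, not the project's actual setup), global tokens can be overridden once so that stock components inherit the new art direction:

```tsx
import React from 'react';
import { ConfigProvider, Card, Button } from 'antd';

// Minimal sketch: overriding a few global theme tokens so extracted antd
// components pick up a custom art direction. Values are illustrative only.
const App: React.FC = () => (
  <ConfigProvider
    theme={{
      token: {
        colorPrimary: '#2f6fed',   // hypothetical brand colour
        borderRadius: 8,           // matches the multiples-of-8 rounding rule
        fontFamily: "'Noto Sans', sans-serif",
      },
    }}
  >
    <Card title="CLD Activities">
      <Button type="primary">Create activity</Button>
    </Card>
  </ConfigProvider>
);

export default App;
```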
――――
HIGH FIDELITY SCREEN DESIGN
Getting unstuck: team brains > one brain
Reaching high fidelity required lots of thinking, experimentation, and communication. It was a great opportunity to fully utilize Figma's collaboration tools, such as cursor chat, audio calls, and comment threads. This made it easier for our team - which was distributed across different continents and timezones - to still feel like we were working together regardless of time and space. It was important for us to be flexible and accommodate each other as much as possible.
Figma cursors for collaborative chatting
An extra shot of brain juice, please!
Many of our screens had complex requirements and flows, so the benefit of multiple brains on a team really came through. At times we would hone in on the interaction of a single component to figure out how to best execute an action, or have productive debates over how a user would react to the elements on a page.

Here's an interesting UX problem:
UX problem: adding participants checkbox dropdown

When creating an event, organizers can add participants. But since these events can vary in size and scale, we explored ways to add participants in bulk (i.e. by form/grade, class, or individual students), via individual search, or through a way to merge both. At the more detailed interaction level, we had to consider: what happens when "All" is checked? How do the selected participants show up afterward? Would there be a separate list you can click and edit immediately, or do the names show up in the search bar itself? What happens if there are too many names: would the interaction be a scroll or another popup? Are already-selected names removed from the search list or styled to disable clicking? Does the invite list update immediately? And so on.
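To pin the debate down, here is a hypothetical TypeScript sketch of the selection model we were weighing up (names and structure are illustrative, not the shipped design): bulk picks by class and individually searched students merge into one invite list, and "All" is simply a group-level toggle.

```typescript
// Hypothetical selection model for the "add participants" interaction (not the shipped design).
interface Student {
  id: string;
  name: string;
  classId: string; // the class/form the student belongs to
}

interface SelectionState {
  selectedClassIds: Set<string>;   // bulk selections (whole class / form)
  selectedStudentIds: Set<string>; // individually searched students
}

// Checking "All" for a class is a group-level toggle over its members.
function toggleClass(state: SelectionState, classId: string): SelectionState {
  const selectedClassIds = new Set(state.selectedClassIds);
  if (selectedClassIds.has(classId)) {
    selectedClassIds.delete(classId);
  } else {
    selectedClassIds.add(classId);
  }
  return { ...state, selectedClassIds };
}

// The invite list merges both paths, so a student picked twice appears only once.
function inviteList(state: SelectionState, roster: Student[]): Student[] {
  return roster.filter(
    (s) => state.selectedClassIds.has(s.classId) || state.selectedStudentIds.has(s.id),
  );
}

// Already-selected names could stay in search results but be disabled rather than removed.
function isDisabledInSearch(state: SelectionState, s: Student): boolean {
  return state.selectedClassIds.has(s.classId) || state.selectedStudentIds.has(s.id);
}
```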

Some situations forced us to return to the drawing board: in other words, to zoom back out and figure things out before diving back into high fidelity. For instance, we had to rewrite the user task flows in one of our epics, Hong Kong Benchmark*, because we gained new information after a client meeting.
Once we clearly outlined the new task flow, it became much easier to think about which user tasks needed to be completed, where there were dependencies, where each user's role came into play, and how we might indicate offline/online parts of the evaluation process. 
Being able to more clearly articulate user goals allowed us to transform them into logical, ordered user flows that we could more easily design user experiences for.
user task flows

*Hong Kong Benchmark is essentially a regulatory system that committee members of the CLAP program devised to measure progress, capture data for research, and ultimately continually improve the program on an annual basis.

――――
REFLECTION
Choose your battles. You can't win them all.
As much as we're taught to empathize with the user, we need to do the same for our teammates and colleagues. Everyone has their own roles and responsibilities to fulfil, and as a team, we need to care about each other to make the magic happen.
fun screen designs and illustrations

These are some examples of fun features and colourful additions that I played around with while drafting screens. Since our users would be young students, I knew they would respond well to these types of graphics, so I advocated as much as possible for a more exciting experience for them. Unfortunately, you can't win them all.

Change is the only constant
After several epics, we developed a smooth workflow and learned each other's skill sets, creating great team dynamics... but we still hit a bump in the road: miscommunications with the Dev and BA teams led us to backtrack on approved screens, as we received feedback that many of our UI elements were impossible for our overstretched development team to implement.
We first flagged this as a process issue, and our leads addressed it in the next sprint retro. While it is of course frustrating when designs are rejected after so much hard work, this became a key learning moment: as designers, we empathize with users, but it's equally important to empathize with teammates. Everyone has roles and constraints, and understanding each other builds stronger collaboration.
Our friction and differences ultimately made us stronger because of our shared goal. Each sprint, we worked better together than the last.

Balancing trade-offs
Personally, there were a lot of fun and friendly UI designs that I thought would make the platform much more exciting for our student users, but I had to scale back and focus on which functions were most important. As one of my mentors asked: is it really that important to have a nice-to-have weather widget on the student's dashboard, versus fighting for a resume maker that can save students the learning curve of crafting one from scratch? The answer seems clear.

At the end of the day, as long as we make design decisions built on user-first principles, we can rationalize the trade-offs that result, because we are always advocating for the best user experience.

Refine, reflect, repeat. One of our weekly "shake-outs" - design team reflections where we shared positives and areas to improve, and gave each other suggestions on how to do better. Overall, just a lot of encouragement and support :)

The retroactive changes we had to make, weeks after screens were initially approved, made it clear that there had been internal communication gaps throughout the project.
While some factors, such as staffing changes, can't be controlled, we knew there were many areas we could improve. For example, despite hosting frequent all-team sync-ups, we should have recognized that the unusual silence from our colleagues was perhaps a sign of confusion or of being overwhelmed.
It was valuable to hear from tech team members about what they care about most and what could make each other's lives easier in the future (e.g. going into the right level of detail to help dev teams understand the flows and interactions on a screen). In the next project I get to lead, I now know to focus on lifting up my entire team to achieve our common goal.
I'm already looking forward to it.

What's next?
We plan to test the prototype with real users and get their feedback. We want to understand what is working, what is not, and how we can keep iterating on the product to refine it to users' needs and minimize their pain points. Getting real voices and seeing users interact with our prototype will be revealing and will help improve the product even more!
I'm looking forward to seeing how the project will unfold, but for now, here's a reflection from me about my learnings:
project learnings
