UX DESIGN CASE STUDY
Google CS First:
Learning Modality Overhaul
The lesson instruction modality is being updated from standalone videos to instructions embedded directly within the Scratch for CS First project editor. This “in-editor” modality uses multimedia and interactive elements for a more engaging learning experience.
Context
CS First (CSF) is a program within Google.org that provides a project-based learning computer science (CS) curriculum to elementary and middle school students. Their mission is to provide equitable CS education opportunities to classrooms in under-served communities. Millions of students across 100+ countries have enrolled.
Scratch is designed and developed by the non-profit Scratch Foundation. They champion free block-based programming editors to help kids express their ideas and creativity through coding. They currently have 134M+ users with 163M+ projects.
Google is a partner of Scratch, and uses a custom version of their code editor, called Scratch for CS First. Future mentions of this variant will be shortened to just “Scratch”.
Problem space
The majority of CS First’s curriculum is video-based. Students watch instructional videos in one browser tab while keeping Scratch open in another, which means constant tab switching and holding steps in short-term memory. To alleviate this, a new modality (nicknamed the “drawer”) was explored in conjunction with an introductory unit, presenting the instructions as text within the Scratch UI.
Previous usability and classroom tests revealed that students struggled in a different way: the drawer had low discoverability, some students were overwhelmed by and lost interest in the amount of text, and engagement decreased. The CS First team formulated a plan and assembled a working group to tackle the next series of prototypes for a Back-to-School launch.
Team & role
CS First is led by program managers (PgM) and is supported by learning and development (L&D) designers, and a horizontal platforms development team (product managers (PM), software engineers (Eng), and user experience (UX) designers (UXD) & researchers (UXR)). The working group assembled for this project consisted of 1 PgM (Karen Parker), L&D (Josh Caldwell, Sean Narcisse-Spence), 1 PM (Dara Rogoff), 2 Eng (Aaron Dodson, Ben Henning), 1 UXD (myself, Dan Ostrowski), and 2 UXR (Jamie Benario, Monica Chan).
My role was primary UX and visual designer, splitting my time with another product. The team had a challenging goal ahead: author a new lesson while simultaneously building and testing the features to support it. We were excited, as it was atypical for us to have more than one of each job function!
Requirements gathering
The project kicked off with a white paper that defined the problem space, compiled previous research, and defined new goals and metrics. The UX team contributed an accelerated design sprint: starting with team icebreakers and how-might-we questions, reviewing user goals and journeys, and concluding with affinity mapping to explore potential solutions. The L&D team drafted a plan to adapt the Use-Modify-Create (UMC) learning framework within this project’s context.
Meanwhile, the researchers drafted a plan to do multiple rounds of usability testing and a classroom test. Last but not least, the managers and engineers planned out the resourcing needed to build out fully functional prototypes for use during the evaluative process.
User definition
This next version of our in-editor instruction model focuses primarily on the learning experience of 5th graders with a 3rd grade reading level. They would be taught in the classroom and use lower-resolution Chromebooks. Ideally, students should be able to:
📚 Understand the lesson’s expectations and instructions
✋ Have the right amount of support that is helpful and not frustrating
👨🎨 Express their creativity, have fun, and feel positive about engaging with the lesson
Meanwhile, the teachers should be able to:
👨🏫 Teach the lesson without CS knowledge
👩💻 Have students complete the lesson mostly independently
✋ Provide support to students who are stuck
Design process
Before starting the iteration cycles, I worked on defining the high-level agnostic aspects of the design process.
Principles
At the start of a project, excitement can cause any designer to spread themselves too thin across too many ideas that are out of scope. To counteract that, we established principles to ground and guide us:
🎊 Celebrate and recognize success
🎮 Feel like a game, not homework
🟦 Aim for visual simplicity
💬 Use short, simple language
“Fun” and “whimsical” are qualities we aspire to achieve with CS First. Design concepts were up-leveled toward these principles, each paired with hypotheses for our researchers to test.
Design systems
Scratch’s ethos and aesthetics are very scrappy, colorful, and DIY. They want kids to be boundless in their creativity and experimentation. Meanwhile, CS First is a Google product, consisting of lessons that are guided and linear to directly teach coding concepts.
One of the first hurdles was striking a balance between two different aesthetics and philosophies. CS First is built on the legacy Google Material 2 (GM2) design system. Unlike Material 3 (GM2’s cool 😎, younger sibling with Material You in its DNA), GM2 is less flexible when it comes to customizability. Nonetheless, as the saying goes, limitations breed creativity, and we used these differences to create appropriately intentional visual contrast while still aligning rudimentary UI interaction patterns between the two design systems.
Building off legacy systems is an inevitability in the working world. My design partner (Dan) and I scoured old files for GM2 components, forking and updating them to use newer Figma (our primary design tool) features. We also rebuilt the Scratch UI (from scratch 😁) to have more versatility than using cropped .PNG files. This required negotiating for more time up-front, with the value proposition that reusable Figma components would increase the agility of iterating in the upcoming steps.
In the past, I worked on CS First as a solo designer. This collaborative effort required me to learn Dan’s organizational and mental model within Figma. Was it always perfectly synchronized between us? Nope. But we learned and shared our best practices. Despite my affinity for them, the pragmatic objective here was not to create a new design system. Ultimately, it was about being on the same page and quickly delivering mockups with consistent quality.
Information architecture
DRAWER TO SIDE PANEL LAYOUT
The Scratch UI is very busy. Powerful, but busy. We knew students struggled with the existing instruction “drawer”: they either didn’t notice it or intentionally moved it out of the way. From the design sprint, the team consensus was to follow the precedent set by Duet AI (now Gemini) in Google Workspace.
The instructions were relocated to a persistent side panel, and elevated higher in the visual hierarchy. Feature parity was also achieved, with the functionality now adapted to the new layout. Although improved, Scratch’s colorful UI still dominated the informal squint tests.
CONTENT CONSIDERATIONS
At this point, L&D were still drafting the lesson copy, and our content management system (CMS) was limited to plain text in the panel. This meant putting off mapping out granular details at the step level. Some things were certain: the lesson would include expositional text, tasks, and evaluative questions (multiple choice, free response). The types of multimedia content (text, video, GIFs, etc.) were also not yet finalized, but the panel had variants that accounted for larger media and graphics, plus dynamic button text to provide wayfinding context clues.
With the first research test coming up, the designers and engineers collaborated on implementing progressive disclosure to hide unrelated parts of Scratch’s UI depending on the lesson step. Not all UI elements were easily hidden, so the UXDs prioritized which ones mattered most and we met engineering in the middle.
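To illustrate the idea, here is a minimal TypeScript sketch of step-based progressive disclosure. The region names, the LessonStepConfig shape, and the applyDisclosure function are hypothetical and only approximate the behavior described above; they are not the actual CS First implementation.

```typescript
// Hypothetical sketch of per-step progressive disclosure of Scratch UI regions.
type ScratchUiRegion = 'blockPalette' | 'stage' | 'spriteList' | 'costumesTab' | 'soundsTab';

interface LessonStepConfig {
  stepId: string;
  visibleRegions: ScratchUiRegion[]; // everything not listed is hidden for this step
}

// Example configuration: early steps expose only the stage, later steps
// progressively reveal more of the Scratch UI.
const stepConfigs: LessonStepConfig[] = [
  { stepId: 'intro-1', visibleRegions: ['stage'] },
  { stepId: 'use-1', visibleRegions: ['stage', 'blockPalette'] },
  { stepId: 'create-1', visibleRegions: ['stage', 'blockPalette', 'spriteList'] },
];

// Returns a visibility map for the given step; unknown steps show everything.
function applyDisclosure(stepId: string): Record<ScratchUiRegion, boolean> {
  const allRegions: ScratchUiRegion[] = ['blockPalette', 'stage', 'spriteList', 'costumesTab', 'soundsTab'];
  const config = stepConfigs.find((c) => c.stepId === stepId);
  const visible = new Set(config ? config.visibleRegions : allRegions);
  return Object.fromEntries(allRegions.map((r) => [r, visible.has(r)])) as Record<ScratchUiRegion, boolean>;
}

console.log(applyDisclosure('intro-1')); // only the stage remains visible
```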
Research plan
This project had fantastic resourcing and planning for UX research, allowing the team to run four rounds of usability testing and one round of classroom testing. The researchers had the added challenge of needing to evaluate both the features and the content at the same time.
The one-on-one, remote usability tests were conducted by both an external vendor and our UXRs. The classroom test was to be in-person at a Title I school. The teacher had the agency to teach the lesson their way, while some of the team acted as observers.
Prototype 1 & 2
The central theme for these prototypes was Attention.
Objectives
🧠 Reduce cognitive load of students
⏱️ Reduce time on non-coding activities
👀 Nudge back to the instruction panel
Findings
👀 Better focus than old “drawer” layout
💻 Smaller viewport struggles
Design iterations
A-TTEN-TION!
The central design theme of the first two prototypes was directing student attention. The first prototype test demonstrated that students had a much better time with the new “panel” layout. However, despite designing for lower resolutions, we found that different browsers and zoom levels were disruptive, leading to the dreaded scrollbars within scrollbars and cutting off the visibility of UI elements! Additionally, even with the initial simplification of the Scratch UI, too many elements still competed for student attention. This left some students unsure how to move to the next step.
SOLUTIONS
I updated the styling to elevate the panel in the visual hierarchy. This was a fine line to balance: instructions are necessary, but once students’ muscle memory develops, the added emphasis contributes to the overall visual noise. Alongside this change was a more subtle addition: a transitional swipe animation (after clicking Next) to cue that the text-heavy panel had advanced to the following step. Many concepts were ideated, and I classified them along these dimensions to help communicate their different aspects to the team:
💥 Disruption: how much will it affect attentional focus in the UI?
1️⃣ Exclusivity: can we combine this with another concept?
📬 Messaging: how do we present the validation messages?
🗺️ Wayfinding: how do students get directed to the right place?
📰 Feedback: how do students find out that they didn’t finish?
Prototype 3 & 4
The central theme of these prototypes was Validation.
Objectives
☑️ Add answer validation feedback from tasks
🖼️ Improve instruction panel fidelity
📖 Have instructions be read and understood
Findings
👀 Panel attention was no longer an issue
😁 Students showed visible joy and excitement!
📕 They only read the tasks and not expositional text
🚧 Validation implementation needs more polish
🆕 More scaffolding needed to introduce new Scratch UI
Design iterations
Prototype 3 introduced a major element: task validation. Before this point, the instruction panel felt inert with its lack of interactivity and plain text. The addition of evaluative questions (multiple choice and free response) helped, but it was impractical at the time to account for every potential free-response answer. For free responses, students received the same feedback “toast” message regardless of what they wrote, with a positive reception when the toast fit their response but a confused reaction when it didn’t.
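A minimal sketch of how that validation and toast behavior might look, assuming a TypeScript-style model; the Task shape and message strings are illustrative, including the single generic free-response toast that caused the confusion described above.

```typescript
// Illustrative sketch only; not the shipped CS First validation code.
type Task =
  | { kind: 'multipleChoice'; prompt: string; correctChoice: number; studentChoice: number }
  | { kind: 'freeResponse'; prompt: string; studentAnswer: string };

function validate(task: Task): { passed: boolean; toast: string } {
  switch (task.kind) {
    case 'multipleChoice':
      // Exact answers can be checked, so the feedback can be specific.
      return task.studentChoice === task.correctChoice
        ? { passed: true, toast: 'Nice work! That is correct.' }
        : { passed: false, toast: 'Not quite. Take another look at the step.' };
    case 'freeResponse':
      // Every possible answer cannot be anticipated, so any non-empty response
      // receives the same generic toast, which is what confused some students.
      return {
        passed: task.studentAnswer.trim().length > 0,
        toast: 'Thanks for sharing your idea!',
      };
  }
}
```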
OPEN EXPLORATIONS
The third round of testing was before the winter holiday code freeze, so the top priority was getting validation properly functioning on the engineering side. This allowed Dan and me to spend time freely ideating and openly sharing concepts with the team to see what stuck. We compiled all our ideas in a comprehensive vision deck that detailed the mockups and rationale. Some exploratory features addressed pain points but were deemed out of scope; these were put in the backlog for a rainy day (e.g., modular tutorial, shareable badges/achievements, in-lesson assistant, expansion of the user journey beyond the Scratch UI).
Before Prototype 4, we were able to push out a CMS update that allowed richer visuals. This was absolutely huge. It addressed the issue of not being able to architect and style at the step level, and allowed me to really get into the nitty-gritty of the panel, which we had more developmental agency over (as opposed to the alterations in the Scratch fork).
CONTENT TYPES
One brewing idea was clearly delineating the content. We all know the reality: no one wants to “read the manual”. Seeing plain text next to a playground of color-coded code blocks? I know where my interest would remain! (The shiny stuff.) The content was classified into these types from both an informational and a visual point of view:
Read this: context-establishing and expositional text
Do this: validated actionable tasks (and self-reflection tasks)
Answer this: validated multiple choice and free response questions
This really brought the panel to life, with the added benefit of implicitly drawing attention. The consistent styling of each “content type” should help students anticipate and intuit what they’d have to do next.
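As a rough illustration, the three content types could be modeled as a discriminated union. This is a hypothetical TypeScript sketch; the field names are assumptions rather than the actual CMS schema.

```typescript
// Hypothetical content model for the instruction panel; not the real CMS schema.
type PanelContent =
  | { type: 'readThis'; body: string }                            // expositional, context-establishing text
  | { type: 'doThis'; task: string; autoValidated: boolean }      // actionable (or self-reflection) task
  | { type: 'answerThis'; question: string; choices?: string[] }; // multiple choice (choices) or free response

// A lesson step is then just an ordered list of content blocks that the
// panel renders with consistent, type-specific styling.
const exampleStep: PanelContent[] = [
  { type: 'readThis', body: 'A bug is a mistake in code that causes unexpected behavior.' },
  { type: 'doThis', task: 'Click the green flag and watch what the sprite does.', autoValidated: true },
  { type: 'answerThis', question: 'What did you expect the sprite to do?' },
];
```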
ANIMATIONS FOR MICRO-INTERACTIONS
Animations worked well as a way to communicate intent and draw attention without adding visual noise. It was also a way to improve user delight via being fun and whimsical!
Within the “Do this” section of the instructions, we changed the tasks from a numbered list to a checklist. These automatically validated tasks gained a pulsing glow and a checkmark that would “pop” to indicate completion. This transient animation was visible enough in peripheral vision that it drew eyes back to the next task.
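A React-flavored TypeScript sketch of that checklist behavior, under the assumption that CSS classes drive the animations; the class names and the TaskItem shape are illustrative, not the production component.

```tsx
import React from 'react';

// Illustrative only: completed tasks get a "pop" checkmark animation,
// and the next unfinished task gets a pulsing glow to draw the eye.
interface TaskItem {
  id: string;
  label: string;
  completed: boolean;
}

export function TaskChecklist({ tasks }: { tasks: TaskItem[] }) {
  const nextIndex = tasks.findIndex((t) => !t.completed); // first unfinished task

  return (
    <ol className="task-checklist">
      {tasks.map((task, i) => (
        <li
          key={task.id}
          className={task.completed ? 'checkmark-pop' : i === nextIndex ? 'pulse-glow' : ''}
        >
          {task.label}
        </li>
      ))}
    </ol>
  );
}
```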
A flippable, skeuomorphic representation of vocabulary cards was added to help signal the beginning of a new activity within the lesson. This used the fullscreen variant of the panel, giving the student time to focus without being distracted by the Scratch UI.
So far, the animations used in the prototypes favored function over form. In the future, CS First’s suite of fun character designs could be utilized to embrace the principle of making this feel more like a game than homework!
FINAL PREPARATIONS
Approaching the final test, L&D had now seen all the features and drafted a near-final version of the lesson, which was about debugging code. For collaboration, we used a spreadsheet to align on the goals, content, and interactivity of each step. Using this info, I created an end-to-end interactive Figma prototype complete with animations, branching paths, and high-fidelity UI. This provided a unified feature and content source of truth for our final review and bug bash.
Classroom testing
The lesson was taught in a live classroom to replicate a real-world scenario.
Objectives
🏫 Evaluate feature and content readiness in real-world context
✔️ Measure lesson completion and participation
😍 Document student emotions and reactions
Findings
🎉 No major usability issues
📈 80% lesson completion, up from 50% in Prototype 1
👩💻 Students were engaged and excited, discussed the lesson amongst themselves, and completed the learning objectives
👩🏫 Content type delineation very useful for teachers to support their students
Design iterations
See the next section, “Final design”.
Final design
After dozens of concepts, explorations, and iterations, we finally brought to life a considerable update to the in-editor learning modality in CS First. I was thrilled to see no major usability issues blocking the lesson’s completion and enjoyment; the smaller issues were then ironed out. The increased panel fidelity, animations, and validation superseded the original goals of the stop-gap design solutions, so those were removed.
Meanwhile, L&D continued to reduce the amount of text and streamline the overall narrative and storytelling of the lesson. Lastly, illustrations, visual aids, and graphics were created and finalized to add the final bit of polish and achieve product excellence.
The Debugging lesson was complete! Subsequently, it was added to CS First’s Preview mode. This mode acts as a way for teachers to beta test new material before officially bringing it to their classrooms.
Lessons learned
I was very fortunate to work on this project with a clear strategy of multiple UX research rounds, and with dedicated engineering time to build a fully functional prototype! It granted me cycles to purely ideate and explore rather than figuring out improvised techniques to work around Figma’s prototyping limitations. I also got to learn and use a new skill, motion graphics and animation, which was very fun to pull off! Being able to communicate that behavior and its specs to engineers will be another handy tool for the toolkit in the future.
Throughout the project, it remained challenging to balance both the styling and information architecture of a legacy Google Material UI and Scratch UI. Clear communication in the product triad (PM, Eng, UX) was critical! Clear documentation of the components and what we learned will be a key part of the foundation of future work. It was apparent that alignment and clarity grew in importance with a larger team, both within and between functions. Having a consistent presentation style and deliverable format helps the team understand, reference, and retrospectively review the pros and cons of different designs. This was important, as an integrated product team is still relatively new to the CS First program.
Lastly, I’ve learned more about communicating stop-gap, “prototype”-grade solutions while other teammates’ work was still in progress. We may all have the same vision, but the amount of work for a particular function is rarely equivalent within a task. The early attention design work was over-indexed and over-designed; once we added answer validation, many of the explored concepts were nullified. However, it was still valuable practice, and those solutions can be saved for posterity.
I had a fantastic time working on this project and learning from the experts on my team. Seeing and working on the lesson every day, it can feel very normalized, “just another day at the office”. However, when I saw the students get excited from coding and solving problems, exclaiming that they can now build games for their friends… it made all the struggles and discarded wireframes worth it! -🌳