
Designers needed a faster, lower-friction way to involve customers in product decisions. I created Lighthouse as a lightweight, repeatable model for ongoing discovery and evaluation: short remote studies, direct customer sign-up, a dedicated Knowledge Center, newsletters back to participants, and a visible loop between product questions and customer input. The goal was not just to run more studies, but to make customer learning more regular, more accessible, and easier for teams to act on.

The challenge
Customer insight was valuable, but too dependent on one-off opportunities and individual effort.
The real problem was not lack of interest in research. It was the lack of a standing mechanism for getting timely user input into product and design work.
There were only so many occasions to engage directly with end users, and that meant teams were often working without a reliable feedback loop. The question became: how do we make customer insight continuous enough to matter, without turning it into a heavy, slow research program? Lighthouse was built to solve that exact problem by creating a regular cadence of evaluations and discovery opportunities that customers could join remotely, quickly, and repeatedly.

My approach
Design the system around participation, not just method.
If the experience of contributing is clumsy, the insight model never scales. I shaped Lighthouse as a research operating model built on a few key principles:
Studies had to be short, remote, and worth doing.
Sign-up had to be simple.
Findings had to be fed back to participants.
Outputs had to be visible enough internally that teams could actually use them.
The initiative offered everything from brief opinion surveys to end-to-end clickable prototype evaluations designed to fit a 5–15 minute window, alongside longer moderated usability sessions (60–90 minutes) for deeper evaluation. I also made sure the model created continuity: participants could sign up once, receive future studies directly, access the Knowledge Center, and review newsletters from completed work.
Lighthouse studies were designed to show the range clearly and quickly. Lighthouse 1 focused on mental models around hierarchy and navigation. Lighthouse 2 tested how much information belonged in a next-generation feed at a glance. Lighthouse 3 explored understanding of professional icons and the value of immediate visual responses in communication. The point was not just variety; it was proving that teams could answer both large structural questions and smaller, high-leverage UI questions through the same lightweight system.


The impact
Lighthouse made customer research easier to access, easier to repeat, and easier to use. That changed the quality of product learning.
Instead of waiting for occasional large studies, teams now had a more durable route to user input. Customers could participate on their own time, development teams got direct and relevant input, and participants received feedback on the studies they contributed to. The result was not just more research activity; it was a clearer system for turning customer voice into ongoing design and product improvement. In practical terms, Lighthouse helped Zebra move closer to a model where the voice of the customer could be present at more decision points, not just the biggest ones.