Business goal: Refine Lexmark’s printer UI to meet new technology and consumer expectations and increase consumer confidence in the product.

Design goal: Understand and redefine current user experience to meet new and existing user needs.

Problem: Long product life cycles and lapsed usability practices meant that users were stuck with interaction issues for years. We also didn’t know what users expected from touchscreen print interfaces.

Contributions: I created wireframes, UX flows, testing scenarios, low to mid-fidelity prototypes, and a high-fidelity (fully functioning) Flash™ prototype. My prototypes defined interactions, microinteractions, flows, and layouts. The write-up and images below will focus on the designs I contributed to and the interactions I influenced. (I have since removed the hi-fi flash prototype.) The video walkthrough is at the end of this post.

Contributors: Four other team members contributed to the UX, along with a graphic designer who produced the high-fidelity designs.

Lexmark HomeScreen

Lexmark’s Printer UX/UI Update


Figure 1 shows an earlier generation of the Copy function. Note: The previous generation of products used a larger screen size—similar to a 7-inch tablet; the version in this write-up was closer to a 4-inch phone. Lexmark hired me as a prototyper during the update to the next generation.

figure 1

I didn’t influence the initial UI design of the Copy feature (figure 2). The design team had already planned some visual designs before my hiring. These are the sections that I influenced: 

    • Defined how the user would get back home—A/B tested the use of a back arrow icon versus the home icon; 
    • Designed the increment/decrement micro-interaction feedback—prototyped, tested, and tweaked based on input; 
    • Defined the drop list animation and interaction—prototyped, tested, and tweaked based on input;
    • Designed the “Copy From” and “Copy To” interaction—prototyped, tested, and tweaked based on input;
    • Edge Erase (see below);
    • I also had additional, less quantifiable, input throughout the Copy feature.

figure 2
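The increment/decrement feedback mentioned above boils down to a small piece of stepper logic. The original prototype was built in Flash; purely as an illustration, here is a TypeScript sketch of the kind of clamped, hold-accelerated stepper behavior we tested. The limits, names, and timing thresholds are my assumptions, not Lexmark’s actual values.

```typescript
const MIN_COPIES = 1;
const MAX_COPIES = 999; // assumed device limit, not a Lexmark spec

// One press (or one auto-repeat tick) moves the count by `step`,
// clamped so the user can never leave the valid range.
function stepCopies(current: number, direction: 1 | -1, step: number = 1): number {
  const next = current + direction * step;
  return Math.min(MAX_COPIES, Math.max(MIN_COPIES, next));
}

// Hold-to-repeat acceleration: longer holds take bigger steps.
// These thresholds are illustrative; real values would come out of testing.
function stepForHoldDuration(heldMs: number): number {
  if (heldMs > 3000) return 100;
  if (heldMs > 1500) return 10;
  return 1;
}
```

Clamping at the edges is what makes the feedback honest: the button can visually “press” but the value never leaves the printable range.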

Edge Erase

Below (figure 3) is the original Edge Erase screen. We discovered that users were perplexed by the layout, the interaction, and, more generally, what Edge Erase did. The primary feedback was that the right-side controls felt disconnected from the image on the left, and the component below the picture—which related to the right-side affordances—meant nothing to users.

figure 3

For Edge Erase, I created wireframes, built multiple low-fidelity prototypes (for team interaction purposes), and later made two actionable prototypes (for testing purposes). The two prototypes were team agreed-upon design explorations. (Note: The lead graphic designer and I have a friendly disagreement about who created the new Edge Erase design—settling it here, we both did. As is typical in collaboration, files pass back and forth, and riffs happen.)

Early on, I noted that user feedback showed users needed an overt cause and effect between the right-side controls and the left-side image. They needed to see the relationship: whenever the user changed something on the right, a corresponding effect had to appear on the left.

For example: if a user selected the “Top” section, then the top part of the image also needed to be somehow altered. (Implementation note: The user should have been able to choose either right or left and effect a change, but we didn’t implement it that way.)

    • After initial user testing, I inserted the two designs (figures 4 & 5) into a Hi-Fi prototype (see video). 
    • These prototype designs were pitted against each other in an A/B test—using discreet controls in the prototype to toggle between the two versions.
    • Note: I created the Hi-Fi prototype for a more extensive end-to-end usability test.

figure 4

figure 5
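The “discreet controls” mentioned above were essentially a hidden switch between the two designs. As a purely illustrative sketch (not the actual Flash implementation), a hidden-hotspot toggle might look like this in TypeScript; the class name, the hotspot idea, and the five-tap threshold are all my assumptions.

```typescript
// Flips the prototype between the two Edge Erase variants mid-session.
type Variant = "A" | "B"; // e.g., the figure 4 vs. figure 5 designs

class VariantToggle {
  private variant: Variant = "A";
  private taps = 0;
  private static readonly SECRET_TAPS = 5; // hidden corner tapped 5x flips

  // Called when the moderator taps the invisible hotspot.
  tapHotspot(): Variant {
    this.taps += 1;
    if (this.taps >= VariantToggle.SECRET_TAPS) {
      this.taps = 0;
      this.variant = this.variant === "A" ? "B" : "A";
    }
    return this.variant;
  }

  current(): Variant {
    return this.variant;
  }
}
```

The point of a control like this is that the moderator can switch conditions without the participant noticing a mode change.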

After we synthesized the data, we confirmed what we had seen during the tests: the “lock” design (figure 5) worked best. However, there was one caveat: the lock confused people. They thought a closed lock meant the setting was unchangeable, rather than reading it as a constraint indicator. Tech lingo of this kind is something we have to test. It’s shorthand, and we must test our words, prototypes, and imagery. Users can help us not get locked into our designer’s head and can unlock other ways of thinking.

    • Though we had the foundation of the design figured out (figure 5 with alterations), we still had questions: Which icon best conveys constraint? Do users need labels (figures 6 & 7—we had alt text for screen readers)? Is there a better way to show top, right, left, and bottom? And do users interpret the controls under the image correctly?
      • We learned that with the arrow indicators on the image’s selected side, we could exclude the labels;
      • That a + shape pattern best connected the user to the image’s regions;
      • That a chain was more recognizable as a constraining metaphor;
      • And finally, the arrow and the increased white space helped the user see what would happen to their copied image/document.

figure 6

figure 7


The original fax design (figure 8) had several issues. Namely, it was designed first for the 10–12% of the population who are left-handed, the layout had some confusing patterns, and it was a pretty bland design. The first issue became a significant problem when users (primarily righties) needed to see the input number while using their dominant hand. (Typical behavior is to hold the faxable items in the non-dominant hand while inputting with the dominant hand.)

The second issue became a problem because of the proximity of the “next number” button to the input buttons. The user needed to input in one place, see the input number in another place, then choose to add another number from somewhere completely different. It was discombobulating for the user.

figure 8

In our initial exploration, our team attempted to maintain the original design to save time—figure 9 shows a semi-Hi-Fi layout. Based on usability studies and A/B testing, we concluded several iterations later that we needed to swap the interaction (figure 11).

figure 9

Fax went through several revisions and prototypes before we landed on the final version. The video below shows a low-fi prototype (figure 10) used to test our team’s theory about moving the number entry to an entirely new screen. (NOTE: I created this prototype to present the concept rather than for use by an actual user.) Below are the steps a user would have had to follow:

    1. Start at “New Number”;
    2. Press “Enter Number”; 
    3. Tap the “Enter” button; 
    4. Tap the drop list; 
    5. Tap “Edit List”;
    6. Tap “Done”;
    7. Tap the greyed (5434597) with the blue highlight;
    8. Tap the “X”;
    9. And finally, tap “Done” again.

figure 10

The result of this prototype was that the team agreed it was clunky and required too many steps for a user whose primary goal was to send a simple fax. This rapid prototype saved us considerable time, cost, and energy, enabling us to focus on a better design. Through rapid and mid-fi prototypes like the one above, we could explore and vet multiple ideas and iterate on them quickly. Figure 11 shows the final design.

figure 11


Regarding updating the Email feature, we opted not to deviate widely from the previous version (figure 12). The differences here are primarily in display size—figure 12 is a 10-inch screen, whereas figure 13 is a 4.3-inch. The color, content, darkness, etc., options sit on a second screen, reached by tapping the right arrow. The changes we made include: 

    • adding more contrast to the colors;
    • reworking the email input interaction.

The rest remained much the same as the previous version.

figure 12

figure 13

From the beginning of the Email update, we had a pretty good idea of our direction. I created some wireframes, but we mostly hashed it out in design discussions. Our principal focus was to update the address input interaction (figure 14). Once the user clicked “Recipient” on the main Email screen, the interface changed to a near-fullscreen keyboard.

figure 14

To input an address, the user needed to use the keyboard (figure 15—this scenario focused on non-address-book entry) and tap the Return button. Several early concepts placed the Return button within the keyboard, following typical keyboard layouts, so we assumed it was the logical placement; however, testing proved it unacceptable—users struggled to locate it. Observation and questioning showed that the Return button sat too far from the input field for users to connect the two.

figure 15

After exploring where to place the Return button, we needed to determine which iconography users would best connect to “Return.” Due to limited space, we couldn’t use a text string—localization made it untenable on the 4.3- and 2.4-inch screen sizes. Several concepts and user inquiries later, we settled on the green return arrow. With a microinteraction I defined, we helped users understand that their address was saved. Several designers thought it was overkill, but A/B testing proved that users needed the extra hints. Once the user tapped Return, the address rolled up, leaving a ghost hint of the address entered (figure 16); then we displayed a success notification (figure 17).

figure 16

figure 17
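The save-address microinteraction described above is, at its core, a tiny state sequence. The sketch below is a hypothetical TypeScript reconstruction (the state and event names are mine; the original was a Flash prototype):

```typescript
// Three states: typing the address, the rolled-up ghost hint,
// and the success notification.
type SaveState = "typing" | "ghostHint" | "successNotice";
type SaveEvent = "tapReturn" | "animationDone";

// On Return, the typed address rolls up into a ghost hint; once that
// animation finishes, the success notification is shown.
function nextState(state: SaveState, event: SaveEvent): SaveState {
  if (state === "typing" && event === "tapReturn") return "ghostHint";
  if (state === "ghostHint" && event === "animationDone") return "successNotice";
  return state; // any other combination leaves the state unchanged
}
```

Modeling the microinteraction this way makes the “extra hints” cheap to A/B test: each hint is just a state you can keep or drop.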

To see their list (figure 18), users needed to know to tap the blue button area. The blue color alone didn’t work; however, once I added the white drop arrow and we added the glass effect, the success rate rose dramatically. Afterward, we tested whether users required an explicit close button—we explored an “X” close button, but users thought it would remove their list items. This confusion made sense because there was a grey “X” in the drop-list. Ultimately, we opted for the green checkmark—usability testing showed that users agreed with this approach.

figure 18

When users finished editing their list, they needed to accept the addresses and close the keyboard. With the keyboard closed, users returned to the main Email screen (figure 19), where they could confirm their recipient list and finish their email configuration.

figure 19


If we are creating interaction design, then we must interact with the design. I believe in rapid prototyping early and often. Through this process, I proved the validity of rapid prototyping to my team, and we saw an increase in successful usability tests. This project showed that it’s better to prototype one interaction and learn from it—than to design 100 wireframes. The reason is that prototyping, and the subsequent interaction with the prototype, removes UI and interaction designers from their imaginations and places us in the material world, where we can physically experience what is right and wrong with our designs.

Prototypes don’t need to be great and glorious to be effective; they just need to function well. They can be as simple as wireframes or paper prototypes. However, they need to inspire interaction—preferably within an environment similar to the final product (e.g., a tablet prototype for tablet interaction). In prototypes, designers become more like users and less like Imagineers.

Until Flash was killed off, I had the prototype available for your interaction; however, it is no longer safe to maintain. Once we got far enough along and the developers could implement the new designs, we abandoned the prototype and focused on development-grade user tests on actual devices—as is fitting for a prototype.

Prototyping & Interaction Design