Lexmark’s Printer UX/UI Redesign

Lexmark’s printer UX/UI was poised for a significant transformation. With specific business and design goals in mind, the stage was set for a comprehensive redesign.

Business goal:

Refine Lexmark’s printer UX/UI for future-ready development, utilize the newest technology, design for current and future user experience expectations, and increase reliability.

Design goal:

To understand and refine the current user experience, gain insight to user expectations around printer touch interfaces, and create a flexible design system.


The challenges were clear: long product life cycles, an outdated code base, failing usability practices, and unknown user expectations for touchable printer screens.


My contributions:

  • Competitive analysis
  • Interaction pattern analysis 
  • UX flows
  • Wireframes
  • Low to mid-fidelity prototypes
  • High-fidelity (fully functioning) Flash™ prototype
  • Some testing scenarios
  • Some test moderation


The team:

  • Four other team members
  • Two graphic designers (high-fidelity designs)
  • Intern oversight
  • Junior UX designer oversight
  • Dev leads (for tech direction)




My prototypes defined interactions, microinteractions, flows, and layouts. The write-up and images below focus on the designs I contributed to and the interactions I influenced. (I have since removed the hi-fi Flash prototype.) The video walkthrough is at the end of this post.

Lexmark HomeScreen

Project summary

The Journey:

Research & Analysis: The initial steps involved a deep dive into competitive analysis and interaction pattern analysis. This groundwork laid the foundation for the subsequent redesign process.

Prototyping & Testing: Emphasis on rapid prototyping was evident. Prototypes, ranging from low-fidelity to high-fidelity, were developed and tested, ensuring that design concepts were validated and refined based on user feedback.


The Transformation:

Design Evolution: The redesign process was marked by several iterations, especially for features like “Copy”, “Edge Erase”, “Fax”, and “Email”. Each iteration was guided by user feedback and usability testing, leading to more intuitive and user-friendly interfaces.

Visual Highlights: To see the visual progress of the project, review the “in-depth” section below.

The Trouble:

Overview: The primary issue we ran into was users’ low expectation that the display screen would be touchable. Beyond that, each section (Edge Erase, Email, Fax) had weak areas that perplexed users and made it difficult for them to complete their tasks.

Edge Erase: The components didn’t align with users’ mental models. Users needed a clear cause and effect between the right-side controls and the left-side image.
Fax: The design required users to input in one view, see the input number in another, and then choose to add another number from a different location—the disconnect was palpable.
Email: Users needed to know to click on the blue button area to see their list. The blue color alone didn’t work. Users thought the “X” would remove their list items, leading to confusion.

Collaboration: Designers have opinions they believe are right because they are right within their own experience. This meant we often had to create prototypes to appease a point of view even when the majority disagreed. While it’s good practice to explore multiple ideas and test variants, it can get out of hand. We grew through this.

The Impact:

Results: During the redesign of the Edge Erase feature, a key insight from user feedback led to a design change that ensured a clear cause-and-effect relationship between controls and image. This adjustment resulted in a significant 90% error reduction, highlighting the impact of user-centric design decisions.
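For readers unfamiliar with how an error-reduction percentage is derived, here is a minimal sketch. The error counts below are invented purely for illustration; only the 90% figure comes from the write-up.

```typescript
// Compute the percentage reduction in observed task errors between
// two rounds of usability testing.
function errorReduction(errorsBefore: number, errorsAfter: number): number {
  return ((errorsBefore - errorsAfter) / errorsBefore) * 100;
}

// Hypothetical counts: 20 task errors before the redesign, 2 after,
// which would correspond to a 90% reduction.
const reduction = errorReduction(20, 2);
```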

This project also led to more rapid prototypes, which meant quicker failures and greater successes in half the time of the more full-blown Flash prototypes.

Reflection & Future Directions:

Lessons Learned: The journey underscored the importance of user feedback and the iterative design process. The success of the redesign was a testament to the value of placing the user at the center of the design process.


Through collaboration, user feedback, and a focus on usability, the Lexmark printer UX/UI redesign was more than just an update; it was a step forward in user-centric design, setting the stage for future projects.

In-depth exploration:


Figure 1 shows an earlier generation of the Copy function. Note: The previous generation of products used a larger screen size—similar to a 7-inch tablet. The new version in this write-up was closer to a 4-inch phone. As a prototyper, I was hired by Lexmark during the update to the next generation.

figure 1

figure 2

I didn’t influence the initial UI design of the Copy feature (figure 2). The design team had already planned some visual designs before my hiring. These are the sections that I influenced: 

    • Defined how the user would get back home—A/B tested the use of a back arrow icon versus the home icon; 
    • Designed the increment/decrement micro-interaction feedback—prototyped, tested, and tweaked based on input; 
    • Defined the drop list animation and interaction—prototyped, tested, and tweaked based on input;
    • Designed the “Copy From” and “Copy To” interaction—prototyped, tested, and tweaked based on input;
    • Edge Erase (see below);
    • I also had additional, less quantifiable, input throughout the Copy feature.

Edge Erase

Below (figure 3) is the original Edge Erase screen. We discovered that users were perplexed by the layout, the interaction, and, more generally, what Edge Erase did. The primary feedback was that the right-side controls felt disconnected from the image on the left, and the component below the picture, which related to the right-side affordances, meant nothing to users.

figure 3

For Edge Erase, I created wireframes, built multiple low-fidelity prototypes (for team interaction purposes), and later made two actionable prototypes (for testing purposes). The two prototypes were team agreed-upon design explorations. (Note: The lead graphic designer and I have a friendly disagreement about who created the new Edge Erase design—settling it here, we both did. As is typical in collaboration, files pass back and forth, and riffs happen.)

Early on, I noted that user feedback showed users needed an overt cause and effect between the right-side controls and the left-side image. They required seeing the relationship: whenever the user changed something on the right, a corresponding effect had to be shown on the left.

For example: if a user selected the “Top” section, then the top part of the image also needed to be visibly altered. (Implementation note: The user should have been able to choose either right or left and effect a change, but we didn’t implement it that way.)

    • After initial user testing, I inserted the two designs (figures 4 & 5) into a Hi-Fi prototype (see video). 
    • These prototype designs were pitted against each other in an A/B test, using discreet controls in the prototype to toggle between the two versions.
    • Note: I created the Hi-Fi prototype for a more extensive end-to-end usability test.
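The toggling mechanic mentioned above can be sketched in a few lines. This is a hypothetical illustration, not the Flash implementation; the variant names and the moderator gesture are invented for the example.

```typescript
// The two Edge Erase design variants under A/B comparison.
type Variant = "figure4" | "figure5";

// A hidden, moderator-only switch that swaps the active design
// mid-session (e.g., triggered by a concealed tap target) so the
// same prototype can serve both test conditions.
class VariantToggle {
  private current: Variant;

  constructor(initial: Variant) {
    this.current = initial;
  }

  // Flip to the other variant and return the now-active one.
  flip(): Variant {
    this.current = this.current === "figure4" ? "figure5" : "figure4";
    return this.current;
  }

  get active(): Variant {
    return this.current;
  }
}
```

Keeping both variants behind one switch avoids restarting the session between conditions, which keeps the test flow consistent for each participant.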

figure 4

figure 5

After we synthesized the data, we discovered (something we saw in the tests) that the “lock” design (figure 5) worked best—however, with one caveat: the lock confused people. They thought it meant it was unchangeable—when it was closed—rather than seeing it as a constraining indicator. Tech lingo of this kind is something we have to test. It’s shorthand, and we must test our words, prototypes, and imagery. Users can help us not get locked into our designer’s head and can unlock other ways of thinking.

    • Though we had the foundation of the design figured out (figure 5 with alterations), we still had more questions: Which icon best conveys constraint? Do users need labels (figures 6 & 7; we had alt text for screen readers)? Is there a better way to show top, right, left, and bottom? And do users read the controls under the image correctly?
      • We learned that with the arrow indicators on the image’s selected side, we could exclude the labels;
      • That a + shape pattern best connected the user to the image’s regions;
      • That a chain was more recognizable as a constraining metaphor;
      • And finally, the arrow and the increased white space helped the user see what would happen to their copied image/document.

figure 6

figure 7


Fax

figure 8

The original fax design (figure 8) had several issues. Namely, it was designed first for the 10–12% of the population who are left-handed, the layout had some confusing patterns, and it was a pretty bland design. The first issue becomes significant when users (primarily righties) need to see the input number while using their dominant hand. (Typical behavior is to hold the faxable items in the non-dominant hand while inputting with the dominant hand.)

The second issue became a problem because of the proximity of the “next number” button to the input buttons. The user needed to input in one place, see the input number in another place, then choose to add another number from somewhere completely different. It was discombobulating for the user.

In our initial exploration, our team attempted to maintain the original design to save time; figure 9 shows a semi Hi-Fi layout. Several iterations later, based on usability studies and A/B testing, we concluded that we needed to rework the interaction (figure 11).

figure 9

figure 10

Fax went through several revisions and prototypes before we landed on the final version. The video below shows a low-fi prototype (figure 10) used to test our team’s theory about moving the number entry to an entirely new screen. (Note: I created this prototype to present the concept rather than for use by an actual user.) Below are the steps a user would have had to follow:

    1. Start at “New Number”;
    2. Press “Enter Number”; 
    3. Tap the “Enter” button; 
    4. Tap the drop list; 
    5. Tap “Edit List”;
    6. Tap “Done”;
    7. Tap the greyed (5434597) with the blue highlight;
    8. Tap the “X”;
    9. And finally, tap “Done” again.

The result of this prototype was that the team agreed it was clunky: too many steps for a user whose primary goal was to send a simple fax. This rapid prototype saved us considerable time, cost, and energy, enabling us to focus on a better design. Through rapid and mid-fi prototypes like the one above, we could explore and vet multiple ideas and quickly iterate on them. Figure 11 shows the final design.
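The "too many steps" judgment can be made concrete by modeling the flow as an ordered list of user actions and applying a step budget. The step names paraphrase the numbered list above; the five-step budget is an invented heuristic for illustration, not a team standard.

```typescript
// The rejected fax flow, expressed as the discrete actions a user
// had to perform to add and then remove a number.
const rejectedFaxFlow: string[] = [
  "start at New Number",
  "press Enter Number",
  "tap Enter",
  "tap drop list",
  "tap Edit List",
  "tap Done",
  "tap highlighted number",
  "tap X",
  "tap Done again",
];

// Flag any single-goal flow that exceeds a chosen step budget.
function isClunky(flow: string[], maxSteps: number = 5): boolean {
  return flow.length > maxSteps;
}
```

Counting actions this way gives the team a simple, shared number to compare design variants against, rather than arguing from impressions alone.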

figure 11


Email

When updating the Email feature, we opted not to deviate widely from the previous version (figure 12). The differences here are primarily in display size: figure 12 is a 10-inch screen, whereas figure 13 is a 4.3-inch. The color, content, darkness, etc., options sit on a second screen reached by clicking the right arrow. The changes we made include: 

    • adding more contrast to the colors;
    • reworking the email input interaction.

The rest remained much the same as the previous version.

figure 12

figure 13

figure 14

From the beginning of the Email update, we had a pretty good idea of our direction. I created some wireframes, but we mostly hashed it out in design discussions. Our principal focus was to update the address input interaction (figure 14). Once the user clicked “Recipient” on the main Email screen, the interface changed to a near-fullscreen keyboard.

To input an address, the user needed to use the keyboard (figure 15; this scenario focused on non-address-book entry) and tap the Return button. Several early concepts placed the Return button within the keyboard, mirroring a typical keyboard layout. We thought that was the logical placement; however, testing proved it unacceptable: users struggled to locate it. Observation and questioning showed that the Return button sat too far from the input visual for users to connect the two.

figure 15

After exploring where to place the Return button, we needed to determine which iconography users would best connect to “Return.” Due to limited space, we couldn’t use a text string; localization made it untenable for the 4.3- and 2.4-inch screen sizes. Several concepts and user inquiries later, we settled on the green return arrow. By adding a microinteraction that I defined, we helped the user understand that their address was saved. Several designers thought it was overkill, but A/B testing proved that users needed the extra hints. Once the user tapped Return, the address rolled up, leaving a ghost hint of the address entered (figure 16), and then we displayed a success notification (figure 17).

figure 16

figure 17
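The save-address microinteraction can be sketched as a small state sequence: entered address, roll-up, ghost hint, success notification. This is an illustrative model only; the state names are invented, and the real prototype was built in Flash.

```typescript
// Visual states the address field moves through after the user taps Return.
type MicroState = "entered" | "rolled-up" | "ghost-hint" | "success";

// The fixed order of the save-confirmation animation.
const saveSequence: MicroState[] = ["entered", "rolled-up", "ghost-hint", "success"];

// Advance the animation by one state; hold at the terminal state.
function nextState(current: MicroState): MicroState {
  const i = saveSequence.indexOf(current);
  return i < saveSequence.length - 1 ? saveSequence[i + 1] : current;
}
```

Modeling the animation as an explicit sequence makes each hint (roll-up, ghost, notification) a discrete, testable step rather than a single opaque transition.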

figure 18

To see their list (figure 18), the users needed to know to click on the blue button area. The blue color alone didn’t work; however, once I added the white drop arrow and we added the glass effect, the success frequency rose dramatically. Afterward, we tested if the users required an explicit close button—we explored the “X” close button, but users thought it would remove their list items. This confusion made sense because there was a grey “X” in the drop-list. Ultimately, we opted for the green checkmark—usability testing showed that the users agreed with this approach.

When users finished editing their list, they needed to accept the addresses and close the keyboard. With the keyboard closed, users returned to the main Email screen (figure 19), where they could further confirm their recipient list and finish their email configuration.

figure 19


Prototyping & Interaction Design

If we are creating interaction design, then we must interact with the design. I believe in rapid prototyping early and often. Through this process, I proved the validity of rapid prototyping to my team, and we saw an increase in successful usability tests. This process and project showed that it’s better to prototype one interaction and learn from it than to design 100 wireframes. Prototyping, and the interaction it invites, removes UI and interaction designers from their imaginations and places us in the material world, where we can physically experience what is right and wrong with our designs.

Prototypes don’t need to be great and glorious to be effective; they just need to function well. They can be as simple as wireframes or paper prototypes. However, they need to inspire interaction, preferably within an environment similar to the final product (e.g., a tablet prototype for tablet interaction). In prototypes, designers become more like users and less like Imagineers.

Until Flash was killed off, I had the prototype available for your interaction; however, it is no longer safe to maintain. Once we got far enough with the prototype and the developers could implement the new designs, we abandoned the prototype and focused on development-grade user tests on actual devices, as is fitting for a prototype.