(Usability) Adventures in Job Hunting: When Minutes Count

Searching for work opened my eyes to an area of user experience I’d never thought much about before: the online experience of finding and applying for jobs.

While searching one day, I found two jobs at XYZ Company* that looked like good opportunities. So I clicked the career link on their website to apply.

XYZ Company’s job application process invites applicants to create a profile which hiring managers can view in connection with their job application. The profile is essentially a résumé, including contact information and work experience. Because I’d applied for a different job previously, my profile was there but outdated.

So I started by updating my profile. That’s when things got interesting. I’d made a few changes, and then noticed that there was a button to attach a LinkedIn profile. Had I done that before? I didn’t remember for sure, and there was no indication on the page as to whether the LinkedIn profile was actually attached. So I clicked the button just to be safe. When I finished attaching (or reattaching?) my LinkedIn profile and returned to my profile page, all my recent changes had been lost.

After retyping my changes, I reviewed the work experience section. A recent job was missing, but the only place to add it was at the end of the work experience list, which put it out of order (work experience was listed most recent first). There was no way to reorder the items, so I had two choices: put the job out of order, or insert a blank entry at the bottom and then copy and paste everything down one item so I could insert the latest job at the top. Because I wanted my profile to be accurate and well-organized, I opted for the latter, which took considerable time and required double-checking to make sure I hadn’t made errors as I laboriously copied and pasted several dozen fields.

The next task was updating the cover letter, which, oddly, was part of the profile. My old one was there, but it wasn’t relevant to the new job. Then I realized I could only have one cover letter, and I was considering applying for two jobs. My only option was to write a generic cover letter, which was definitely not ideal.

The final usability glitch was that the button at the bottom of the profile screen was labeled Submit. Did it mean I was submitting the application for a certain job, or just saving my changes? It wasn’t clear, and I never got the chance to find out. After spending several hours wrestling with the online form, I returned to the job listings and found that between the time I started the online application and the time I checked, the job had been removed from the list of open positions.

Did usability issues make a difference here? Well, possibly XYZ Company missed out on an employee who would have been a perfect fit for their job. I definitely lost out on the opportunity to apply. Usability made the difference here, where minutes counted.

*The names of the guilty have been changed to protect possible future job opportunities.

Apples vs. Apples

Many web forms have two buttons at the bottom of the page: Submit and Cancel (or their equivalents). A user who accidentally clicks Cancel when she meant to click Submit could unintentionally lose all her form data—frustrating for any user.

In an online discussion on this topic, a poster suggested that the primary action should still have a button, but the secondary should be changed to something completely different, like a link, to avoid this type of confusion.

Would this reduce the number of times users click something they don’t mean to click? Possibly, although as mentioned in The Problem of Proximity, I have still accidentally clicked the option I didn’t want in a button-and-link pair because the two were close together.

The trouble is that changing one button to a link introduces a new set of problems. When we mentally process elements on a page, we tend to see things in terms of their function or category. Buttons at the bottom of a form fall into the category of “possible actions I can take on this page.” When one option appears as a button but the other appears as a link, it looks like the two options are not in the same category. In addition, hyperlinks have a semantic meaning: they are supposed to link to related information. The fact that they can be scripted to behave like buttons doesn’t mean that doing so makes sense, or that it provides a good user experience.

As an analogy, consider the confusion that would result at a traffic intersection if the stop signal were a red light but the go signal were a green sign. Stop and Go are the possible actions at an intersection, and drivers expect the signals for them to be variations on the same type of control: a red light or a green light.

And there are times I do want to take the secondary action. In that case, I don’t want the option to look drastically different. I want to be able to tell at a glance that it’s one of the possible actions I can take on this page.
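The “both actions are buttons” idea can be sketched in a few lines. This is a minimal illustration, not markup from any real site: the function names and CSS classes are hypothetical, and the point is simply that primary and secondary actions stay in the same control category (buttons), differing only in styling emphasis.

```python
def form_footer(primary_label: str, secondary_label: str) -> str:
    """Render both form actions as buttons (hypothetical markup).

    The primary and secondary actions remain the same type of
    control -- a button -- and differ only in a styling class,
    so neither is disguised as a hyperlink.
    """
    return (
        f'<button type="submit" class="btn btn-primary">{primary_label}</button>\n'
        f'<button type="button" class="btn btn-secondary">{secondary_label}</button>'
    )

print(form_footer("Submit", "Cancel"))
```

Both options are visually distinguishable (so a misclick is less likely), yet both still read at a glance as “possible actions I can take on this page.”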

Here’s a real-life example from familysearch.org. When I create a source, I see a bright blue button labeled Save at the bottom of the page. Next to it are two other options presented as links. For quite a while, I didn’t even realize those options were there because my eye was drawn to the large blue Save button and I subconsciously ignored the things-that-are-not-buttons when considering actions to take on this page.

But even later, when I realized that the Save and Attach option was available and would save me time, I still found myself forgetting to click it because it didn’t grab my attention and didn’t look like a button. I had to force myself to create the habit of clicking something that didn’t look like it should be clicked to process the form—definitely not an optimal user experience.

Google did a good job with their Insert Image dialog box. The primary button is clearly highlighted, but the secondary option is still the same type of control: a button.

In other words, they’re both apples. There’s no need to make one look like an orange, especially when it’s confusing to the user.

Design Bloopers #2: Tablet Power Management

I love my Evo tablet—it has revolutionized the way I manage my life. Like most tablet owners, I try to maximize battery life. One of the main ways I do so is by keeping the screen on its dimmest setting when I’m indoors.

But my tablet has an odd quirk: when it alerts me that the battery level has reached critical, it simultaneously adjusts the screen brightness to the highest setting! At the very point when my tablet most needs to conserve battery power, the tablet changes my setting to drain the battery even more quickly.

Was this intentional? I doubt it. I can’t imagine a developer coding a battery alert to include a power drain, so it was probably caused by an unexpected line of code somewhere. But once again, it points to the importance of testing designs in realistic scenarios to make sure they work as expected and are free of glitches. User testing adds to the cost of a project, but failing to do user testing costs even more.
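One plausible way such a bug could creep in—purely a guess, sketched here with hypothetical names for illustration—is a shared alert routine that maxes the screen brightness so alerts are visible, and a battery-critical handler that reuses it without overriding that behavior. A simple state check in testing would catch it:

```python
class Tablet:
    """Toy model of the tablet's alert and brightness state."""

    def __init__(self):
        self.brightness = 0.1  # user keeps the screen on its dimmest setting
        self.alerts = []

    def show_alert(self, message: str, full_brightness: bool = True):
        # A shared alert routine that "helpfully" brightens the screen
        # so the alert is visible -- the hidden power drain.
        if full_brightness:
            self.brightness = 1.0
        self.alerts.append(message)

    def on_battery_critical(self):
        # The fix: at critical battery, preserve the user's dim setting
        # instead of inheriting the alert routine's default.
        self.show_alert("Battery critically low", full_brightness=False)

tablet = Tablet()
tablet.on_battery_critical()
assert tablet.brightness == 0.1  # the dim setting survives the alert
```

A realistic-scenario test (dim screen, low battery, alert fires) is exactly the kind of check that would have flagged the original behavior before release.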

Ambiguity—the Enemy of Clear Design

Design can send confusing signals when elements on the screen seem to contradict each other, or when they don’t follow expected standards.

When I go to turn on wireless on my laptop, I get this screen:

At first glance, it appears my wireless is already on: the most prominent feature on the screen (next to the bright red icon, which turns out not to be an icon but only a graphic) is the glowing green button labeled Radio On. But were I to assume the wireless is on, I’d be wrong: the current status is indicated by the text under the word Status, immediately to the left.

Furthermore, when I click the icon labeled Radio On, I get this screen:

This one’s even more confusing: I’d intended to turn the wireless on, and after doing so, I get a button that says Radio Off. However, it’s still glowing green rather than grey. I can’t immediately tell whether the state hasn’t changed, or whether the wireless really is on even though the button says Radio Off.

To add to the confusion, if I accidentally hover my mouse over one of the other icons at the bottom of the screen, the green glow moves. I haven’t changed wireless status, but now the button in the main section looks even more like wireless is off.

It turns out the green glow simply indicates which button is active when one presses the Enter key on the keyboard. And that brings up the final confusion: the green wireless tower on the far left looks like a graphic because of its placement on the screen and its two-dimensional style. But it turns out that it also indicates wireless state: green if it’s on, grey if it’s off (see the first image above).

Is this screen impossible to figure out? Of course not. After using it a few times and experimenting a little, I know what the buttons do and how to read the current state. Essentially, I filter out everything on the screen as noise except the simple text under the word Status. If I want to change the status, I click the button to the right, ignoring its color and text.

But should it have been that hard? Should I have had to puzzle over the elements on the screen or mentally filter out confusion? Probably not.

The examples below show how the screens could be simplified to eliminate much of the confusion. If wireless is on, the following screen could be displayed:

And if it’s off, this screen could be displayed:

Key improvements:

  • The new screen title is clearer.
  • The icon-that-isn’t-an-icon has been removed.
  • The state is unambiguously indicated with the tower icon and accompanying text.
  • The possible action to take is clearly indicated by adding a verb: e.g., “Turn Radio Off” instead of just “Radio Off.”
  • The confusing green glow on the buttons has been eliminated. If I were actually going to put this screen into production, I’d take the time to make one final improvement: a white glow on the buttons to indicate which is active for the Enter key, so it would be clear the glow had nothing to do with wireless state.
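The “add a verb” improvement from the list above can be sketched as a pair of tiny helpers (the function names and strings are hypothetical): the status text reports the current state, while the button label names the action that would change it, so the two can never be mistaken for each other.

```python
def radio_status(radio_on: bool) -> str:
    """Text shown under Status: reports the current state."""
    return "Wireless is on" if radio_on else "Wireless is off"

def radio_button_label(radio_on: bool) -> str:
    """Button label: names the action, not the state."""
    return "Turn Radio Off" if radio_on else "Turn Radio On"

# When the radio is on, the screen reads:
print(radio_status(True))        # state
print(radio_button_label(True))  # action
```

Deriving both strings from the same boolean also guarantees the status text and the button label can never contradict each other, which is precisely the ambiguity the original screen suffered from.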

Removing ambiguity can go a long way toward making a design more usable.