
The Responsibility of Web Design

[Header image: accept-cookie screen, desert and camels, alternate universe]

We are creating the world we will live in. This is as true in the digital space as it is in the wider world of which it is a part. There are many aspects to the world we access via our phones, tablets, and computers (an order of preference reversed from a decade ago). For one thing, we need to make the online world as accessible as the physical world. Just as we install braille signage in buildings and similar physical spaces to help those with sight impairments, we need to provide in web spaces logical keyboard navigation, alt text for images, textual transcripts for videos, and sufficient contrast between text and background colors, to name just a few of the things required to meet the Web Content Accessibility Guidelines (WCAG).
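
By way of illustration, here is a minimal sketch of what a few of these practices can look like in markup; the file names, wording, and colors are assumptions invented for the example, not prescriptions.

```html
<!-- Illustrative markup only; file names and colors are invented for this sketch -->

<!-- Alt text describes what the image conveys, not merely that it exists -->
<img src="q3-sales.png"
     alt="Bar chart showing third-quarter sales rising 12% over the second quarter">

<!-- A real button is keyboard-focusable by default; a styled div is not -->
<button type="submit">Subscribe</button>

<!-- Captions plus a linked transcript for video content -->
<video controls>
  <source src="intro.mp4" type="video/mp4">
  <track kind="captions" src="intro-captions.vtt" srclang="en" label="English">
</video>
<p><a href="intro-transcript.html">Read the transcript</a></p>

<style>
  /* Dark text on a near-white background: roughly 15:1 contrast,
     well above the WCAG AA minimum of 4.5:1 for body text */
  body { color: #222222; background-color: #fafafa; }
</style>
```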

This online world is shaped by marketing, SEO, user tracking and data collection, and many other factors that have little to do with good design (though they can lead to bad design). But while UX and UI designers may not have a hand in the programmatic or business decisions that shape a digital product, UX determines how users will move through an environment, and UI provides the options they will have there. As designers we exert a measure of power over our users and, as Uncle Ben pointed out to Peter Parker, with power comes responsibility. No, that will not be the last nerd reference. In fact, here comes another.

In the beginning …

In the beginning Tim Berners-Lee created the World Wide Web and, by extension, the website. Now the website was formless and without design, darkness was over the surface of the WWW, and the Spirit of the W3C was hovering over the face of the internet. People discovered that HTML tables could be used for formatting (rather than for, say, tabular data) and ugliness ensued. And Håkon Wium Lie said, “Let there be CSS,” and there was web design. Designers saw that CSS was good, and they separated form from content. Designers called the user’s experience of a website “UX,” and the elements provided by the designer they called “UI.” And there were web standards, and there was browser adoption—the first day.

Perhaps our trip is getting off to a bad start. “Blasphemy is a bad beginning for such a journey,” as King Feisal (Alec Guinness) says to T. E. Lawrence (Peter O’Toole) in David Lean’s 1962 film Lawrence of Arabia, after Lawrence has said he is taking fifty of Feisal’s men “to work your miracle” (that is, conquering Aqaba by crossing the Nefud desert, which “cannot be crossed” because it is “the worst place that God created”). Suffice it to say that Berners-Lee wanted to shift the print-based paradigm, which forced users to look up a text’s references in footnotes and bibliographies and then go to other books or periodicals to read them, often requiring a trip to a library. He envisioned an environment of computer-accessible texts linked to each other—the new paradigm. All the reader needed to do was click (or, eventually, tap) to follow a chain of relation. Thus, the web as envisioned would build paths from people to content, from content to other content, and—via content—from people to each other.

The paths joining people were not always direct. You still needed to find a contact form or email link to reach out to someone. Though we didn’t realize it at the time, this was Web “1.0”. Web 2.0 offered a more direct path: social media. You could air your opinions via tweets and share posts and pictures with “friends.” The next paradigm was born, borders would topple, and kumbaya would follow. Unfortunately, things didn’t work out that way. Human nature threw pits of quicksand into those paths and created ecosystems that were not so much marketplaces of ideas as cage matches of vitriol. Mentally conjure a version of the Star Trek (TOS) episode “The Alternative Factor”: “Trapped forever with a raging madman at your throat until time itself came to a stop? For eternity. How would it be?” Sounds like as good a definition of Twitter as any. Humorist James Lileks once called Twitter “a portable box of imaginary friends,” but that was back in 2013, just before the rot really set in.

Thankfully the web is not limited to social media. In any case, the number of consequential social media platforms is small enough that most of us will never design for them. We will, however, design for contexts that are consequential to user experience, and those contexts demand decisions from us that either enhance or degrade it. One such decision is how we deal with the ubiquitous cookie consent window—consent being the operative word. Hence, our first rule: don’t trick your visitors. The power of design to leverage human perception can be used to draw attention to a particular control. If we are ethical designers, we will use that power to direct our visitors toward what is in their own best interests, not the interests of our advertisers; we will not sacrifice the goodwill of our users for the sake of supporting ad data collection.

Making cookie options (or any options) explicit is the sine qua non for providing users with the means to give consent. In point of fact, you do not have your visitors’ consent if you give them a false choice or no choice at all. If your cookie “consent” window hides options (not for purposes of clean design but to deliberately obfuscate) or tricks people into accepting all cookies, then it may fulfill the letter of providing a choice but completely subverts the spirit of that choice. (This is not the place for a deep dive into consent window design; for that see Nicat Manafov’s article on unethical cookie consent windows.) It is not ethical to obscure cookie consent controls, and it is questionable at best to have each cookie control turned on by default; leave them off (except for necessary cookies) and let the visitor turn them on. (As a visitor I always turn on the analytics or “performance” control to help my counterparts at other websites.) And it is blatantly unethical to purposefully draw attention—through color, typography, or other tricks of design—to the “Accept All Cookies” button and away from the “Confirm My Choices” alternative. Yes, if the user is careful he may avoid this trap, but a trap it still is.
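
To make this concrete, here is one way such a consent dialog might be marked up and styled so that it plays fair; the labels, control names, and CSS below are illustrative assumptions, not a reference implementation.

```html
<!-- Sketch of a consent dialog that plays fair; wording and styles are illustrative only -->
<form class="cookie-consent" aria-label="Cookie preferences">
  <p>We use cookies. Choose which kinds you are willing to accept.</p>

  <!-- Only strictly necessary cookies are pre-checked, and they are disabled because they cannot be refused -->
  <label><input type="checkbox" checked disabled> Strictly necessary</label>
  <label><input type="checkbox" name="analytics"> Analytics / performance</label>
  <label><input type="checkbox" name="ads"> Advertising</label>

  <!-- Both actions use the same element and the same styling: no visual thumb on the scale -->
  <button type="submit" name="choice" value="confirm">Confirm my choices</button>
  <button type="submit" name="choice" value="accept-all">Accept all cookies</button>
</form>

<style>
  /* One shared rule for both buttons, so neither is louder than the other */
  .cookie-consent button {
    font: inherit;
    padding: 0.5em 1em;
    color: #222222;
    background-color: #fafafa;
    border: 1px solid #222222;
  }
</style>
```

The point is not the particular markup but the defaults: opt-in switches start off, and neither button is given more visual weight than the other.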

A less noxious but still obnoxious trick is tagging a story with a headline that promises one thing but links to something completely different—usually a story broken up into innumerable short pages, connected by “next” buttons. If the visitor is really interested, she may power through numerous pages to find a match for the headline. Sometimes this pays off, but only at the expense of wasted time, and most users will give up long before hitting pay dirt. This may be tangentially related to what Jared Spool meant by “information scent,” but the scent in this case is flatulent. Thus, our next rule: engage your visitors but don’t ensnare them. In addition to eschewing the endless “next” button, avoid trapping your users by having the “back” control send them to additional content instead of letting them leave. UX design should be about helping people find what they need, not leading them down rabbit holes; UI design should assist this, not distract users from the goal that led them to your site.

How do we turn this around?

If you work for an employer that mistreats its website visitors, perhaps you need to work elsewhere—which, admittedly, may be easier said than done. If you have some clout with your employer, advocate against linking to promoted stories at sites that use questionable tactics; these sites may not be part of your website but linking to them reflects badly on you. But first and foremost, user-centered design means empathizing with your users, doing unto them as you would want a designer to do unto you, giving them real options (cookie-related and otherwise), and guiding them to make choices that are in their own best interests. It does not mean getting into their heads to distract and deceive them.
