The Design of Everyday Things

I’ll note down the key ideas of “The Design of Everyday Things” by Don Norman as I read them, along with my own views. You can procure a copy at Amazon or other bookstores. (No commissions received, btw.)

Rule 1: Machines should be designed for people

…and not people designed for machines.

Designers may insist that users follow a set of rules - logical to the designer, but perhaps bizarre and opaque to users. If those rules are not followed perfectly, the product fails. Confusion, frustration, and perhaps even death may result. Users are then typically blamed for not RTFM.

In reality, perhaps the designers should be blamed for not designing products for people, rather than the people themselves:

“The Sabbath was made for man, not man for the Sabbath.” (Mark 2:27)

Case Study 1: TV Remotes

Do you know how to use every function on your washing machine, or your microwave oven? They seem overcomplicated.

Got a new TV. It came with a remote control. I wanted to turn up the volume. The “UP” arrow actually did something else. I tried again and again, repeating the same thing, or perhaps slightly different things: pressing random buttons like a monkey trying to operate a Boeing 747 - all to no avail. Finally, I slammed the remote down in frustration.

Sure, some exec at LG could say, “Ben, just read the manual.” But is it my fault that I’m too much of a monkey to work it all out? Or is it the designer’s fault for not making that functionality intuitive enough?

Update: Found out how to use the remote: the button is actually a new type of “button” - it is part switch, and part button. Increasing the volume involves you “flicking” the button “up”.

Case Study 2: Manager’s Insistence

My manager thought a computer program I was working on should be designed in a particular way, and I objected. His caprice, his penchant for not listening to problems or solutions, his irascible temper, and his tyrannical approach to solving problems could only result in situations akin to the Titanic steaming headlong into an iceberg:

“I’m paying your wages! You work for me!”

Yes sir. Three bags full, sir. We must obey orders without question. Whether it be filling the Hindenburg with hydrogen, or assailing Pearl Harbour, or invading Iraq on the flimsiest of pretexts, or cruising across the Atlantic at full speed, through an ocean littered with icebergs - all must be acquiesced to. It was the same here:

“Build the computer program like this,” says the boss.

“Sir,” I replied, “that might make perfect sense to you. But for a junior detailer, these rules are too complicated. It might not work as expected. We both know - even our most senior staff do not read documentation. In the end, I will receive a barrage of support calls due to supposed defects, people will lose confidence in the program, and it will not be used. I recommend you change the design, so that it is simpler.”

The boss then suggested that we emulate CATIA, which would simply explode without warning, like a grenade still in your hand, if users did not follow the rules.

I reluctantly acceded to his demands: the program was a complicated one, even for me to understand. All seemed well…until the support calls started rolling in:

“There’s a bug here.”

I routinely forgot how the program worked, so I was forced to read my own documentation. Unfortunately, it did not make as much sense to me as when I originally wrote it. And 98% of the time, the program would in fact be functioning as designed. The culprit usually was the user not following instructions. Instructions? LOL. It’s much easier to ask for support. Precious time was wasted supporting a bad original design.

The key point: machines should be designed for humans, having regard to:

(i) their abilities, 
(ii) their needs, and 
(iii) their psychology / behaviour.

Case Study: Designing for Behaviour

Nobody reads car / iPhone manuals. Any designer who presumes that a human being is going to wade through reams of a poorly written manual ought to be shot. Consequently, we have to create products which do not require someone to read a manual.

Or we must render the entire machinery inoperable unless the manual is read. Take, for example, the Bash shell, the i3 tiling window manager, or git. These are complex tools which necessitate reading the manual. The audience is different…and even then, the manual will not be read.

Case Study: Designing for Errors

We all make mistakes. Consequently, product design must foresee and allow for this. Bad designs do not.

The Fundamental Principles of Interaction

(a) Create Great Experiences

It’s gotta be a great experience. Human beings are emotional. If you get the design wrong, they may feel frustrated, angry, confused. If you get it right, they should feel satisfaction and a sense of mastery and control. Pander to those emotions.

(b) Discoverability

  • (i) what does it do?
  • (ii) how does it work?
  • (iii) what is possible?

This comes from applying six psychological principles:

  • (1) Affordances,
  • (2) Signifiers
  • (3) Constraints
  • (4) Mappings
  • (5) Feedback
  • (6) The Conceptual Model

(a) Affordances

This is the relationship between a “thing” and a “user”.

Case Study: Doors. Doors afford:

(i) opening / closing (ii) the ability to sequester people from the outside / inside.

They describe what is possible.

The perceived affordances of objects are signifiers.

(b) Signifiers

Signifiers show what actions are possible and how they should be done. If they cannot be perceived, they are useless.

Examples:

  • A metal plate on a door (to show which side can be pushed.)
  • Or dumped furniture on a nature strip (it might show others that this is a valid dumping ground.)

(c) Mapping

We have a conceptual understanding of the world. Use these conventions / conceptual understandings to create tools which are instantly understandable, and whose operation can be naturally mapped. e.g.:

  • steering wheel: moving right - turns the car right.
  • pressing “up” increases the volume.

Try to map things according to what people feel is natural.

(d) Feedback

  • Without feedback, people won’t know what’s happening.
  • People must perceive the feedback (i.e. don’t hide the important ones, and don’t rely on visual cues alone for blind users, etc.). It must be apposite.
  • Prioritize feedback: unimportant feedback must not be obtrusive; important feedback must stand out.

(e) Conceptual Models

So how does everything work? The designer has one understanding; the user has another, which may derive from previous concepts / products, from user manuals (never read), or from sales pamphlets. Everything the product and its materials convey forms the system image.

Key points:

  • A product may be great, but without a good conceptual model, it will fail.
  • The conceptual model must allow for the easy rectification of mistakes (whether due to a user’s misunderstanding of the design model, or a genuine human error) - and communication/feedback is the key to this.

When there is poor feedback, it is hard to correct mistakes.

The Design Challenge

The more complex the product, the greater the design challenges you will face. The conceptual model becomes harder to master.

Often, there are many competing interests in the design of a product:

(i) it must be simple, yet powerful (marketing)
(ii) it must be reliable (customer support)
(iii) perhaps it must be cheap (marketing)
(iv) the design must be good, to avoid support calls (support)
(v) we must be able to actually manufacture it, reliably and cheaply (engineering)
(vi) it must look / feel aesthetic - otherwise it won’t sell (sales)

Customers may, when purchasing a product:

(i) consider how it makes them feel on the sales floor,
(ii) but when they take it home, they might be much more concerned with its utility.

There are competing interests that often pull against each other. Good design makes trade-offs, or solves one or more of the above.

The Psychology of Everyday Actions

Product usage requires two problems to be solved:

  • figuring out how it works (the gulf of execution), and
  • figuring out what they’ve done, and what happened as a result (the gulf of evaluation).

(Interestingly, when people can’t get something to work, e.g. a filing cabinet, they blame themselves: “oh, I’m no good with mechanical things”. In reality, maybe it’s a bad design? Especially if the design has no provision for when something goes wrong (e.g. the cabinet does not open)).

The Seven Stages of Action

Actions require:

(a) Execution, and (b) Evaluation (both entail understanding (how it works) and expectations (i.e. if the alarm works correctly, it should go off)).

The seven stages of an action:

  1. Goal (form the goal)
  2. Plan (the action)
  3. Specify (an action sequence)
  4. Perform (the action sequence)
  5. Perceive (the state of the world)
  6. Interpret (the perception)
  7. Compare (the outcome with the goal)

These action cycles may be goal-driven, or triggered by events in the world (event-driven).

Don makes a side note: (i) most innovation is derived from incremental enhancements of existing products; (ii) radically new products come from rethinking existing goals (tweaking them into slightly new goals).

The Human Mind

  • We have memories: declarative memory (e.g. what is your phone number?) vs procedural memory (e.g. the house you lived in three years ago - did the front door have a door knob? Was it on the right or the left?)

  • We have emotions and cognitive ability. The two are intertwined. There is considerable research to suggest that our cognitive mind is used to justify our actions after the fact, not before. The human mind also has sub-conscious processes at work.

  • Positive states of mind: good for creative thought, but not particularly good at getting things done. Negative emotional state: good for getting things done, finishing tasks: but too much of it and we get tunnel vision.

Visceral Brain

Base-level. Like when you see a snake about to strike your heel: you immediately feel fear, and react instinctually. It’s not something you feel you particularly control. Aversion and repulsion are a part of this. Make your products emotionally pleasing, and people will like them.

Behavioural Brain

Key takeaway: every action comes with an expectation (positive or negative). The key is to give feedback in response to an action - especially if something goes wrong.

Reflective Level

This is where deep assessments / contemplation and deep learning or understanding occur. Causal elements lead to different emotions: guilt, praise, rejoicing etc.

Design products understanding all aspects of the human mind

  • All three work in tandem.
  • Sometimes, a positive reflective experience, can outweigh a cumbersome behavioural experience. Similarly, an excellent behavioural experience, when combined with a bad reflective experience, could make your product a dud.

The Seven Stages of Action and the Three Levels of Processing

People as story tellers

  • People are naturally predisposed to look for causes and effects, and to form explanations and stories. These conceptual models need not be correct. (An example is given of people turning the thermostat to the max because they think it will heat the room quicker, or tapping the pedestrian crossing button repeatedly because they think it will change the lights faster.)
  • Usually the causes are complex - perhaps a series of unfortunate events happened (for whatever reason), and the assignment of blame in a particular instance is not necessarily justified - but that doesn’t stop us from trying.

Blaming the wrong things

  • If something doesn’t work, people usually try again. Or they try to fix things according to their own conceptual understanding, and eventually, if they don’t get good feedback, they assign blame, based on the “perceived causal relationship” of the perceived action, and the perceived result, according to their understanding.

Blaming themselves

For example, my car started rusting. I thought it was my own fault: I parked it outside, so naturally it would rust. This was incorrect. My father, on seeing the car, asked me:

Ben, isn’t this a new car? Why is it rusting like this? It shouldn’t rust so quickly, even though it is parked outside.

I said, I thought so too, now that he mentioned it. Then he asked: “when did you notice it starting to rust?” I said some years before. He said I should put a warranty claim in.

I responded: “I am outside the warranty period”. Here again was a false understanding: warranty claims can be made anytime for systemic design flaws.

If people blame themselves, though the product is at fault, nobody reports it. It’s a conspiracy of silence.

Blaming due to environment vs person

  • When we make a mistake, we blame our environment. But when others make a mistake - we blame them directly.

Here is the perfect example:

When I was a kid, I was very good at baseball. I tried out for some high-level teams. The trials were held at night. This was the final try-out. The team had basically been selected. I had performed well enough to get through. But not this one night. I picked up the ball and attempted to throw it: naturally it missed the mark, and was potentially even dangerous.

The coach called a team huddle:

“I can’t risk my players getting injured”

It was a thinly veiled jibe, castigating and humiliating me publicly. From his perspective: I was a poor and dangerous prospect, who couldn’t throw. From my perspective: my poor throwing ability on this occasion was due to the bitterly cold temperatures. It was so cold, I could barely hold the ball. I couldn’t stretch my fingers. It actually hurt to move my fingers: how could I be expected to throw accurately? I remembered finding a tap nearby, filled with cold water - perhaps a little above freezing. And while taking a drink, a few drops fell on my hand. And I remarked that it was actually soothing; it felt warm on my hands. I proceeded to hold my hands under the tap, though I knew it was bitterly cold. Perhaps around 4 degrees or so, perhaps less. I attributed the blame to my environment. The coach attributed it to the player.

Naturally I was not selected. Perhaps I should have been?

When things go wrong, the environment is blamed. When things go well, the cause is attributed to the brilliance of the individual.

Learned Helplessness

  • If people fail repeatedly, they rationalize that they can’t do it, and that it’s impossible. They give up, and may feel depressed.

Design Philosophy:

1. Don’t blame users because they didn’t use your product properly.
2. Take user difficulties as a prompt to improve product design.
3. Eliminate all error messages - instead, guide users as to what they should do (a small sketch follows this list):

  • Don’t impede progress; don’t make them start over again.
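A minimal sketch of the “guide, don’t blame” idea, in Python (the date formats and messages are my own illustration, not from the book): accept the formats people actually type, and when parsing still fails, say what would work instead of a bare “invalid input” that forces them to start over.

```python
from datetime import datetime

# Hypothetical date field that guides rather than blames.
ACCEPTED_FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%d %b %Y"]

def parse_date(text: str):
    """Accept the common formats; on failure, explain what would work."""
    for fmt in ACCEPTED_FORMATS:
        try:
            return datetime.strptime(text.strip(), fmt).date()
        except ValueError:
            continue
    raise ValueError(
        f"Couldn't read '{text}' as a date. "
        "Try something like 2021-09-22, 22/09/2021 or 22 Sep 2021."
    )

print(parse_date("22/09/2021"))  # 2021-09-22
```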

Blaming yourself - falsely

Watch out! When there are problems in a design, people blame themselves. Consequently, they do not report anything.

Seven Fundamental Principles of Design

(Corollary of the 7 stages of action):

  1. Discoverability: can you easily discover what actions are possible?
  2. Feedback: Full / continuous information about the results of actions and the current state of a product or service.
  3. Conceptual Model: The design must project a good conceptual model; which allows for easy discoverability, and evaluation of results. This gives users a feeling of being in control, and understanding.
  4. Affordances: Make the desired actions possible.
  5. Signifiers: Allow discoverability, and good feedback.
  6. Mappings: Good relationship between controls, layout etc.
  7. Constraints: Providing physical, logical, semantic, and cultural constraints guides actions and eases interpretation.

Knowledge in the Head and in the World

  • Do you know exactly what a $0.50 coin looks like? Most people don’t, but you don’t need to know the exact details. This is possible due to four reasons:
  1. Knowledge is both in the world, and in the head: what you can do, and how, results from what you perceive, and what you already know - a combination. i.e. You see a button, and you already know that buttons are meant to be pushed - now you can perform a complex task.
  2. Great precision is not required: 50% is good enough. The button need not be perfect.
  3. Natural constraints exist in the world: it is very hard to erect level 2 of a building, before level 1 is installed. This restricts behaviour and actions.
  4. Cultural and personal constraints exist. e.g. Asian users will be averse to using their left hand, especially with food. So if you are designing a food-based product which requires two hands - you might have some trouble in Asia. Constraints can also be personal.

Case Study: The Introduction of Coins

In France, a coin was introduced, which was similar to an existing coin. This confused people and caused outrage. Why? Because existing consumers had pre-learned behaviours on how they recognised coins.

Constraints Simplify Memory

If you only need to choose two things, out of a possible universe of 500 things, then you don’t need to remember much. This philosophy lies at the heart of the adage:

If you tell the truth, you don’t need to have a good memory.

The truth is ingrained naturally. Physical and memory constraints enable us to recount the truth with significantly greater accuracy than to concoct a fabrication.

Also consider itinerant performers: they memorize thousands of words, and recite them seemingly word-for-word. How do they do this? By natural constraints: rhyme, meter, and the structure of the story limit the possible choices.

e.g. Bolts need threaded holes with the appropriate dimensions - you needn’t actually worry about it till you need to. If it’s wrong, you will know. Fundamentally, the philosophy of Rails is that there is a sensible set of default configurations which obviates the need to explicitly specify everything. This allows you, the developer, to be more productive, at the cost of having to build everything the “Rails Way”.
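A rough sketch of the “sensible defaults” idea in Python (the keys and the pluralisation convention are invented for illustration; this is not Rails code or behaviour): you only specify what differs from convention, so there is far less to remember.

```python
# Sensible defaults (convention over configuration) - invented example.
DEFAULTS = {
    "database": "sqlite3",   # works out of the box
    "table_name": None,      # derived from the model name by convention
    "timestamps": True,      # created_at / updated_at added automatically
}

def configure(model_name: str, **overrides):
    config = {**DEFAULTS, **overrides}
    if config["table_name"] is None:
        # Convention: lower-case, pluralised model name.
        config["table_name"] = model_name.lower() + "s"
    return config

print(configure("Invoice"))
# {'database': 'sqlite3', 'table_name': 'invoices', 'timestamps': True}
print(configure("Invoice", database="postgresql"))  # override only what differs
```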

Memory is Knowledge in the head:

  • People don’t have great short-term memories: especially if they are forced to remember many things, especially if complicated and without a mnemonic. Examples were given from the security industry: when security measures impeded everyday activity, people found workarounds which disregarded security entirely.

Long term memory:

  • Is inaccurate and may change over time; retrieval involves reconstruction (which can itself change the memory).
  • Arbitrary memory is difficult (i.e. remembering a list of things), but organised / structured memory is easier: a story, a rhyme, music - adopting good conceptual models can help.
  • For most things, approximations will suffice. e.g. approximate temperature.
  • Pilots are able to easily remember complex information: (i) by writing things down as required, (ii) by entering it into equipment (so they don’t need to remember), or (iii) by using meaningful phrases.

Trade off between: knowledge in the world, and knowledge in the mind

  • Knowledge in the world needs to be interpreted - too much of it, and we lose the ability to interpret it (i.e. too much clutter etc.). If you move something, or its meaning is no longer clear, then people won’t know what to do.
  • Knowledge in the mind needs to be maintained, or we will forget.

We often combine the two. You don’t need to remember mum’s phone number: because your phone will tell you. All you have to do is remember that your phone will give you access to that. You don’t need to spell, because a spell checker will help you. Is this good or bad? You don’t need to hunt for food, because that’s why Macca’s exists.

Natural Mappings

  • It’s easy to turn on the wrong stove-top burner. Likely because of a design flaw. Mappings should be so commonsensical that they do not require elaboration.

Watch out, some conceptualisations / metaphors vary depending on how you look at something. i.e. should the camera be through your eyes, or from above? What makes more sense, might vary between people and cultures. Or plane displays showing the tilt of the plane. Should the plane bank left, when it turns left, or should the horizon rotate, while the plane remains horizontal?

Knowing what to do: Constraints, Discoverability and Feedback

  • If there are no instructions, how is someone meant to use something? A good design will provide: (i) a good conceptual model, based on similar things in the world, and (ii) good constraints to limit behaviour (before anything is even tried).

An example is given of a lego set. People are meant to put it together, with no knowledge of what it is. There are 15 parts. Most of the parts can only go in one place. A good design makes it obvious which part goes where - preventing people from trying every possible permutation. Other parts make it obvious, due to cultural conventions, which way a part should go: i.e. a head should go on the lego man’s shoulders, and it should face forwards, rather than backwards: good signifiers will allow this to occur almost naturally, even if there are no logical constraints. The entire set looks somewhat like a motorcycle - based on likely observations from being around traffic. I suppose, if the same model were given to an Amazonian tribe, with no knowledge of vehicles, they might struggle to conceive what it is they are trying to put together.

Four Kinds of Constraints: Physical, Cultural, Semantic, and Logical

  • e.g. think about batteries: they have physical constraints, but the batteries commonly in use (AA / AAA) still allow you to place them in the wrong direction. Why does this happen? Because, mostly, it is not obvious which direction is the “right” one, and which direction is the “wrong” one.

  • Cultural norms provide strong constraints.

  • Semantic: e.g. Helmets go on people’s heads. Where else would they go?

  • Logical: if there is only one piece left in a jigsaw, even though you originally did not know where to put it, by force of elimination, the only logical place is the last one.

Applying Affordances, Signifiers, and Constraints to Everyday Objects

A discussion of everyday items, and how they could be improved, follows in this section.

Important principle: (i) identify what people want to do, and (ii) make it super easy for them to do it (with minimal work), (iii) if they want to do something slightly different, do not constrain them. e.g. have a simple search functionality, while also allowing for an advanced search functionality, for more pressing needs.

Constraints that Force the Desired Behaviour

(i) Physical Limitations

e.g. starting a car with a key - it cannot be done without a key.

(ii) Interlocks

e.g. you cannot open a microwave oven without first turning it off.

(iii) Lockins

e.g. Union memberships. Have you tried quitting the CFMEU? (Yeah, I said it. Go ahead and kneecap me :P) They make it very difficult.

(iv) Lockout

e.g. babies cannot open dangerous bottles. You should not be able to delete critical files without clear warnings.
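Two rough sketches of the above in Python (the microwave, the file path, and the parameter names are all invented for illustration): an interlock makes the unsafe sequence impossible, while a lockout makes a destructive action require a deliberate, explicit step.

```python
# Invented examples of an interlock and a lockout.

class Microwave:
    """Interlock: the magnetron cannot run while the door is open."""
    def __init__(self):
        self.door_open = False
        self.running = False

    def open_door(self):
        self.door_open = True
        self.running = False   # opening the door cuts the power

    def start(self):
        if self.door_open:
            raise RuntimeError("Close the door before starting.")
        self.running = True


PROTECTED = {"/etc/passwd"}

def delete_file(path: str, confirmed: bool = False):
    """Lockout: deleting a protected file needs an explicit opt-in."""
    if path in PROTECTED and not confirmed:
        raise PermissionError(f"{path} is protected; pass confirmed=True to delete it.")
    print(f"(pretend) deleting {path}")
```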

Human Error? No, Bad Design

An industrial accident happens. Perhaps a bridge collapses, or a concrete panel falls 50 floors onto someone’s head. An inquiry or Royal Commission is held. With much fanfare, after 10,000s of man-hours of testimony, sobbing families, and expert opinions (putative ones), a scapegoat is found. An accident has happened, and someone must be lynched. Human error is the culprit, and the person bearing the wrath of the state-run kangaroo court is usually the unfortunate victim who happens to be the most proximate to the cause of the accident. In this case, probably the carer of poor Ann Marie Smith. Someone underpaid, undertrained, who might have had the misfortune of assuming that the DHS was actually taking care of Ann Marie. The state-run health bureaucracy, i.e. the DHS, is usually exonerated of any wrongdoing, despite it happening, again and again, on its watch.

After the blame is placed, the solution is found: (i) more training - perhaps a licensing scheme administered by the state, or a training regime etc., (ii) more regulation, (iii) more compliance (or compulsory insurance), and lastly (iv) fines and criminal punishment imposed on a nearby perpetrator. Justice is served. Poor Ann Marie is avenged, the world rejoices - and moves on. Blame is placed everywhere, except where it is due. If criminal negligence is to be alleged, why aren’t the ministers and senior bureaucrats hauled off to jail?

The real culprit is usually bad design. A poorly designed health services scheme: badly incentivised, burdensome compliance, zero monitoring, and worst of all: services and prices determined by the state. Who would have thought that bad outcomes would result?

Human beings are known to sometimes accidentally press the wrong switch. They are unable to concentrate for 12 hours straight, especially if they are routinely interrupted by their surroundings. They are known to fall asleep. They are expected to remember complex procedures, never used before, in a pressing emergency situation. And perhaps, to secure their own positions, they would want it complicated, where extensive training is needed, in order to make themselves indispensable. Why would you build a nuclear reactor, where the consequences are fatal and long-lived (perhaps 100,000s of years), which requires superhuman abilities to operate?

Root cause analysis

  • When you find out something went wrong, try to find out why it went wrong.
  • Try to prove and disprove your theory.

Also watch out, when something goes wrong, it is usually due to many causes. And if the problem is human error, try to find out what caused that human error.

Follow the 5 whys, as promulgated by Toyota, when investigating faults. If the error was human error, try to redesign the system to mitigate it, or to minimize the negative consequences. Far too many investigations stop when they realise it was human error. The investigation ought to continue.

I will quote an anecdote, directly from the book:

The tendency to stop seeking reasons as soon as a human error has been found is widespread. I once reviewed a number of accidents in which highly trained workers at an electric utility company had been electrocuted when they contacted or came too close to the high-voltage lines they were servicing. All the investigating committees found the workers to be at fault, something even the workers (those who had survived) did not dispute. But when the committees were investigating the complex causes of the incidents, why did they stop once they found a human error? Why didn’t they keep going to find out why the error had occurred, what circumstances had led to it, and then, why those circumstances had happened? The committees never went far enough to find the deeper, root causes of the accidents. Nor did they consider redesigning the systems and procedures to make the incidents either impossible or far less likely. When people err, change the system so that type of error will be reduced or eliminated. When complete elimination is not possible, redesign to reduce the impact.

It wasn’t difficult for me to suggest simple changes to procedures that would have prevented most of the incidents at the utility company. It had never occurred to the committee to think of this. The problem is that to have followed my recommendations would have meant changing the culture from an attitude among the field workers that “We are supermen: we can solve any problem, repair the most complex outage. We do not make errors.” It is not possible to eliminate human error if it is thought of as a personal failure rather than as a sign of poor design of procedures or equipment. My report to the company executives was received politely. I was even thanked. Several years later I contacted a friend at the company and asked what changes they had made. “No changes,” he said. “And we are still injuring people.”

As with most things, the problem is a “people problem”.

  • A common cause is time stress, and fatigue.
  • Sometimes people deliberately break procedures. Why? Likely business exigencies require them to do so. The chance of error is small, and they are usually rewarded for delivering on time, and punished for delivering late.

Two Types of Errors: Slips and Mistakes

A taxonomy of errors. An error is only discovered after the fact. Nobody cares that you didn’t know about it beforehand. You’re still gonna get the blame. Secondly, sometimes errors are merely perceived - the accusation need not be true. Sometimes it is outright dishonest.

  • A slip: is when you intend to do something, but for some reason, you do something else. For example, I intend to hit the pin 200 m away with my 4-iron with a piercing 3-yard draw. Usually, I flub it side-wise, the ball rolls along the ground, perhaps no more than 50 yards. This is a slip. The intent was well formulated, but the action was not executed in a manner conducive to the results hoped for. In the case of most golfers, they do not actually know what action is required in order to produce the said results. Their efforts are necessarily futile. There are two types of “slips”: (i) memory slips and (ii) action slips.

  • A mistake: this is when your intentions / plans are actually incorrect. e.g. there is a fire. And you say to yourself: “the way to put out a fire is to douse it with petrol”. The intention is wrong, although the execution might be flawless.

Memory slips

I have often gone out to dinner with friends, and sometimes my sister, and noticed that they routinely forgot their wallets. I had often wondered: “how can you forget your wallet at the exact moment it is required?” After reading Don Norman, I have realised that this is an example of a “memory-based slip”. The intention - that of bringing your wallet - was there. Except, before leaving their homes, they simply forgot. In these cases, the consequences are relatively innocuous (I would pay for the dinner and nobody dies etc.), but when designing something on an industrial scale, the results could be fatal. Norman seems to be advocating that systems should be designed to minimize these types of slips.

Action Based Slips

I wanted to delete a file. Except I deleted the wrong one. I had wanted to delete the other one. Something stuffed up. Not sure why, or how.

Case Study: Errors in Building and Construction

The following can be safely skipped. It is largely a polemic about Australian builders.

In our industry, builders look for reasons not to pay: “the drawings are wrong,” they might claim. In reality, the drawings might be correct, but the installation, not so. There was one occasion where a builder refused to pay - unless I came out on site, so he said. So I made a 3-4 hour excursion to the site, and discovered that the drawings were in fact correct, but the builder, due to sheer incompetence, had flipped the beam the wrong way. Of course it didn’t install! (I do expect basic minimum levels of competence when dealing with professional tradespersons. In this case, the mistake was inexcusable.) I rectified the mistake (by flipping the beam), and then charged the builder $500 for my troubles - and even this he refused to pay. Though, to my eyes, the refusal was the worst a builder with the least bit of common decency could come up with - even amongst a trade where common fraud is de rigueur. Truly, an Australian builder (who is usually also a union man) will not be permitted to practice his trade unless he is white-tagged with an official certificate of fraud, legally entitling him to practice his chosen trades of: (i) incompetence, (ii) deceit, (iii) shoddy workmanship, (iv) phoenixing, and lastly (v) income tax evasion. When he lies, he speaks his native tongue…what could be more Australian?

Capture Slips

  • e.g. You start counting: 8, 9, 10… Jack, Queen, King. Capture slips happen because you are doing a familiar activity and do not realise at the point where a less familiar one diverges from it. When designing a system, either the point of divergence has to be made very clear, or the action sequences must be made completely different from the start.

Description Similarities Slip

  • (This has almost happened to me: throwing sweaty clothes into the toilet, instead of the laundry.)

Memory Lapse Slip

  • e.g. at an ATM: forgetting your card and walking off with your money. Nowadays, the ATM forces you to retrieve your card first, and only then will you get your cash.

  • Many cock-ups can occur: failing to do all steps of a routine, repeating steps (unnecessarily), forgetting what you are doing, or meant to be doing, forgetting the outcome of an action etc.

Mode Errors

  • When you have common controls used for different “modes”. e.g. Airbus had the same controls used to set: (i) angle of descent, and (ii) rate of descent. If the mode is not clear, and you make a mistake thinking you are setting the angle instead of the rate, the consequences could be fatal. Norman suggests you avoid modes, but if you cannot do so, make the mode being used extremely obvious (a small sketch follows).
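A small sketch of the “avoid modes, or make them explicit” advice in Python (the autopilot names and values are invented; this is not the real Airbus interface): instead of one mode-dependent setter whose meaning depends on hidden state, expose two distinct, differently-typed operations that cannot be confused.

```python
from dataclasses import dataclass

# Invented autopilot sketch. A single, mode-dependent entry point like
# set_descent(3.3) is ambiguous: degrees, or feet per minute, depending on
# the current mode. Two explicit operations remove the ambiguity.

@dataclass
class DescentAngle:
    degrees: float

@dataclass
class DescentRate:
    feet_per_minute: float

def set_descent_angle(target: DescentAngle) -> None:
    print(f"Descending at {target.degrees} degrees")

def set_descent_rate(target: DescentRate) -> None:
    print(f"Descending at {target.feet_per_minute} ft/min")

set_descent_angle(DescentAngle(3.3))     # unambiguous
set_descent_rate(DescentRate(3300.0))    # cannot silently mean "angle"
```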

Classification of Mistakes

(i) skill based, (ii) rule based, and (iii) knowledge based mistakes exist.

Skill based: is when you know your job inside-out, but you make a slip.

Rule based: when something outside the ordinary happens, but it is a known phenomenon. i.e. If the lights are not working, do XYZ.

Knowledge based: a situation is novel. You have to come up with a solution.

Diagnosis is critical. Otherwise the wrong problem is solved.

Examples of Rule Based Problems:

  1. The situation is wrongly interpreted, triggering the application of the wrong rule. i.e. “Amadou Diallo reached into his pocket to pull out his wallet to identify himself” is interpreted by the NYPD as “a black guy, likely a serial rapist, reaches into his pocket, (obviously) to draw a gun”. Rather than questioning their assumptions, the NYPD quickly unloaded 41 SHOTS into Diallo. A perfect example of a falsely interpreted situation. If Diallo had drawn a gun, nobody would question the NYPD applying lethal force - a rule that is reinforced by rigorous training.

  2. The correct rule was applied, but the rule itself was faulty. e.g. When a building was burning, the exits were barred. This was because people would routinely leave before paying for their drinks. The rule possibly needed amending to deal with emergency situations.

  3. The correct rule is invoked, but the outcome is incorrectly evaluated. Consider the RBA: they love printing that sweet moolah. At the end of a printing spree, they might point to their flawed inflation records and say: “see – no inflation here!”. Or, as was the case with Powell’s printing spree at the Fed, “Oh, price rises are not due to inflation - they’re due to supply-chain problems”. Or perhaps we can talk of Bush’s war in Iraq: it is common knowledge that the intelligence was flawed. The RBA proceeds with its “quantitative easing” program, and a needless, expensive war is entered into. Incorrect evaluations are especially common in government bodies: the evaluation is deliberately manipulated for political reasons, or for exculpatory or blaming reasons.

When a committee is investigating a problem, it has the luxury of knowing that something bad happened. On the flip side, a worker might see hundreds of small issues, and leave them unattended. Of course, one of them might blow up - and then all the obvious signs pointing to the problem will serve to hang the worker. If a war-like footing were adopted for every indication, nothing would get done.

Memory lapse mistakes: you’re checking the status of something, but you forget what you were doing, and the initial task never gets done (perhaps you were interrupted). The only solution is a system designed so that, even if you forget or get interrupted, it is easy enough to recover.

Social, institutional and economic pressures: i.e. a junior pilot will be loath to correct a senior pilot, even if he knows that what the senior pilot is doing is dangerous; pressures on people to deliver results in the face of extreme time pressure, or where there is a great chance of loss. Overcoming social pressures is very difficult.

Solving Mistakes: memory lapse errors, or possibly the wrong steps being followed, can be effectively reduced with a humble, well-designed checklist.

Error Reporting: there is considerable social and economic pressure against this, in addition to the sheer laziness of people in documenting errors - yet reporting is what enables systems to be redesigned. Toyota allows workers to halt the entire production line if an error is discovered. Experts converge and keep asking “Why? Why? Why?” until the true cause is found. If it is discovered that errors were not reported, everyone involved gets punished. Another Toyota principle, poka-yoke, involves creating tools (or jigs) to make errors very difficult.

Explaining away mistakes: e.g. was that loud bang a gun-shot, or a car back-firing? Is Freddie careless, or was he overworked? Is this breakdown a serious problem, or just an isolated incident? This seemed the case just yesterday: a student was waiting for his instructor, but he was waiting a long time. Why was this the case? I (mistakenly) assumed that it was because the instructor was delayed, and instructors almost always are delayed. In actuality, there was a mistake in the booking (or, more likely, a slip). Cock-ups might compound cock-ups, creating a perfect storm.

Solutions

(A) Design for error: Systems should be designed with error in mind. Some general ideas (a small sketch follows the list):

  • Sensibility checks.
  • Make it possible to reverse actions—to “undo” them—or make it harder to do what cannot be reversed.
  • Make it easier to discover errors, and to correct.
  • Don’t treat the action as an error; rather, try to help the person complete the action properly.
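A minimal sketch of the “make actions reversible” point in Python (the editor and its method names are my own illustration): every destructive operation records how to undo itself, so discovering an error does not mean starting over.

```python
# Minimal undo sketch: each action records its inverse.
class Editor:
    def __init__(self):
        self.text = ""
        self._undo_stack = []

    def insert(self, s: str):
        self.text += s
        self._undo_stack.append(lambda n=len(s): self._truncate(n))

    def _truncate(self, n: int):
        self.text = self.text[:-n]

    def undo(self):
        if self._undo_stack:
            self._undo_stack.pop()()

e = Editor()
e.insert("hello ")
e.insert("wolrd")   # a slip!
e.undo()            # easily reversed - no need to start over
e.insert("world")
print(e.text)       # hello world
```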

Mistakes arise from ambiguous or unclear information about the state of a system, the lack of a good conceptual model, and inappropriate procedures - which in turn cause the wrong goal or plan to be chosen, or an erroneous evaluation and interpretation.

(B) Design for interruptions: where was I again? When I resume, I might resume the wrong thing, or pick up at the wrong place.

Adding Constraints to Block Errors:

  • Don’t make it easy for people to delete files.
  • In cars: it’s not easy to put the car in reverse when you’re driving down the freeway at 100 km/h. Also, by design, it’s not particularly easy to put the wrong fluid in the wrong place: the liquids and their containers are designed differently, and placed in different parts of the car.

Confirmation and Error Messages: Watch out: people can still make slips and mistakes. For example, if I’m expecting a warning message prior to deleting a file, I might simply override the message, when in reality I want to delete a file - just not that one. Too many warning messages, and ignoring them becomes a habit: you dismiss them out of routine, without even consciously thinking about it. It will be like the boy who cried wolf: when a nuclear accident occurs, people ignore all the warning signs, and the committee of bureaucratic morons will decry the ineptitude of the scientists who “ignored” them. Secondly, once you delete the file, if you realise that you’ve made a mistake, what can now be done? Possible solutions:

  • Make things prominent, even to a dummy.
  • Make operations reversible.

Sensibility Checks: What if someone makes a mistake and tries to do something patently absurd? e.g. some X-ray machines allow operators to apply a dose 1,000 times the normal radiation, and the machine blindly follows instructions. Why should that even be permitted? At the very least, it should be difficult to do, and it should not be easily overridable.
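A tiny sketch of a sensibility check in Python, using the X-ray example above (the dose limits and parameter names are made up for illustration): absurd values are refused outright, and merely unusual ones require an explicit override rather than silent acceptance.

```python
# Invented numbers, for illustration only.
TYPICAL_DOSE_MGY = 10.0
HARD_LIMIT_MGY = 50.0       # the machine will never exceed this

def set_dose(dose_mgy: float, override: bool = False) -> float:
    if dose_mgy > HARD_LIMIT_MGY:
        raise ValueError(f"{dose_mgy} mGy exceeds the hard limit; refusing.")
    if dose_mgy > 2 * TYPICAL_DOSE_MGY and not override:
        raise ValueError(
            f"{dose_mgy} mGy is unusually high; a second operator must confirm (override=True)."
        )
    return dose_mgy

set_dose(8.0)                    # fine
set_dose(30.0, override=True)    # unusual, but explicitly confirmed
# set_dose(5000.0)               # patently absurd - always refused
```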

Minimizing Slips:

  • if you have identical controls, you’re gonna get description-similarity errors.
  • if you have modes that are not particularly clear, you’re gonna get mode errors. (This is where the same controls are used for different modes.)
  • if your equipment requires intense concentration, it’s gonna be a huge problem, because someone will stuff this up.
  • if you don’t have reminders for infrequent procedures: you’ll get capture errors.

Swiss Cheese Model: How Errors Lead to Accidents:

  • There is usually more than one cause: the holes in all the slices have to line up in order for an accident to occur. To minimise the chances: (i) add more slices of cheese, (ii) reduce the diameter of the holes: i.e. checklists and better design (a back-of-the-envelope sketch follows).
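A back-of-the-envelope sketch of why more slices and smaller holes help (the probabilities are invented, and the layers are assumed to be independent): the chance of an accident is roughly the product of each layer’s chance of failing.

```python
# Invented, independent failure probabilities for each "slice" of cheese.
def accident_probability(hole_sizes):
    p = 1.0
    for hole in hole_sizes:
        p *= hole
    return p

print(round(accident_probability([0.1, 0.1, 0.1]), 6))        # 3 slices      -> 0.001
print(round(accident_probability([0.1, 0.1, 0.1, 0.1]), 6))   # add a slice   -> 0.0001
print(round(accident_probability([0.05, 0.05, 0.05]), 6))     # smaller holes -> 0.000125
```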

Human limitations:

  • i.e. requiring people to continuously monitor something (they will forget).
  • i.e. requiring people to do all the steps / processes perfectly (they miss one, or forget, or do the wrong one at the wrong time)
  • i.e. to do boring / mundane tasks with precision. (they’ll fall asleep, or they’ll miss)
  • i.e. put the knowledge needed to use the technology in the world, not in people’s heads. Non-experts should be able to use it, probably just as well as experts.
  • Use constraints: natural and artificial, physical / logical, semantic, cultural. Map things sensibly. Use forcing functions (i.e. validations) to prevent you from proceeding, but also allow for a back-door if that might be required.
  • Consider: execution and evaluation gulfs. Visibility is important for both. (i) Feed-forward is important for execution (i.e. what might happen), and (ii) when evaluating: provide feedback that is clear.

Design Thinking

Rule 1: Design products for human beings (who have strengths and weaknesses: i.e. they are forgetful etc.)

Rule 2: Never solve the problem you are asked to solve. Why? Because that is usually never the actual problem, but a symptom. Keep asking questions until you find the true cause. Why? x5. Find the right problem, and then find the right solution. The double diamond approach: expand the problem space, then converge on the true problem; then expand the solution space, and converge on an effective solution.

Human Centred Design involves the following four stages:

  1. Observation
  2. Idea Generation (ideation)
  3. Prototyping
  4. Testing

(1) Observation

Applied ethnography - deeply understand the goals that people have, and the problems that they face. Do not take shortcuts. Go there and see for yourself. Do not rely on international students from Korea when you are trying to understand the Korean market. Observe people in their natural environment.

Nothing beats human understanding. You might have sexy charts, and big data, but never forget the real needs and desires of people.

(1a) Requirement Generation

  • Theoretical requirements are usually wrong.
  • Requirements generated by asking people - are usually wrong. i.e. “I want faster horses”. They usually solve the wrong problem, with the wrong solution. They don’t question anything. And for most things in their routine, there are “special cases”. And if your system cannot handle special cases, it will definitely fail.
  • Requirements generated by watching people in their natural environment - and then creating and iterating accordingly - are much better.

(2) Idea Generation

  • Rule 1: Generate numerous ideas. You don’t want to be fixated upon just one or two ideas.

  • Rule 2: Be creative without constraints. Never shoot down someone’s idea - it will likely spur on the creative process, and useful things can be extracted from it.

  • Rule 3: question everything. Stupid questions reveal fundamental assumptions that have become “common sense”. They might not be.

(3) Prototype it

  • Does not have to be full-featured. But the second you start with something, even the most basic version, you will uncover REAL needs, and REAL problems. Consider various prototypes - because different prototypes will uncover different needs. e.g. An airline system was being tested - except it was not a real system: it had a graduate student behind the scenes, typing the responses. The person using the system did not know this. Because the system responded by typing, the human being adapted accordingly, asking questions like: “I want to be back at San Fran before my class at 9:00 am”. Of course, if instead of a graduate student typing responses like some type of AI bot, something different was being presented, the user would surely adapt accordingly.

(4) Testing

  • You need to understand what people are actually thinking when they are using the product. Norman suggests using pairs: one person to act as the hands, and another to guide and interpret the results out loud. The two will usually bicker / argue / get confused. Their insights when they do this are extremely valuable for uncovering the biases / assumptions they have, and for remedying shortfalls in design. It is especially useful if this is recorded on video.

  • Study 5 people (Jakob Nielsen). Then iterate, then study 5 more.

(5) Iteration

“Fail fast, and fail often” (David Kelley, Stanford, co-founder of the design firm IDEO).

Activity Based Design

Focus on people’s overall goals, and strive to meet them.

  • People will learn complex things, if they are appropriate to an activity.
  • Design for high level activities. And allow for seamless integration of low level tasks, done in support of a high-level activity or goal. e.g. Apple’s high level activity: that of listening to music, is supported and integrated across a bunch of low-level tasks: (i) finding music, (ii) storing music, in playlists, and (iii) playing the music.

Products that aggregate activities or services, especially those that are easy to use and seamless, are especially valuable.

In reality, activity centred design is difficult. Features are added, simply because competition has added them (and we need to match them), or perhaps features are added because the engineering team wants to utilize a particular form of technology. e.g. OCR, or machine learning, or management might want to say to investors that they are investing in “block-chain” or crypto-currency. Or maybe, time and budget affect our ability to focus on human centered design.

Norman’s law of product development: it’s already behind and over-budget.

Many teams have goals / issues that make sense when viewed individually, but are often contradictory. Marketing wants x, while engineering wants product simplicity, while management wants immediate free cash flow, while others want a design that is carefully specced out.

The Design Challenge: Products have many (often conflicting) requirements

  • e.g. Items that go into rental properties: they are bought by landlords. They must be cheap. Who cares about the functionality/maintenance aspect? Purchasing departments: they typically only consider price, and not usability. Consequently, manufacturers focus on selling to these “users” who are not actually end-users.

  • Different groups might change the product, in a piece-meal fashion. e.g. marketing might add a new cover-piece to the phone, without engineering input. The engineers might complain about loss of functionality. Manufacturers might complain about defects, and fabrication issues. Watch out: all parties must come together: pros and cons must be resolved.

Design for Special People: Design with a particular person in mind. Above all: make it better / easier. If it’s easier for visually impaired people, then it will likely be easier for everyone else. My view: design a product for use. i.e. it generally pays to make a product simple.

Examples of making things difficult:

  • Hide identifiers.
  • Use unnatural mappings.
  • Require precision (e.g. complex memory, complex tasks), with no feedback.
  • Make it difficult for humans to perform.

Design in the World of Business

The Golden Rule: Design for the end-user, first. And solve their problems first. You needn’t worry about what competition is doing, provided you handle this.

Featuritis: Don’t do it. Don’t burden your product with features simply because the competition does so. Remember, you are trying to solve problems, not create them:

  • Customers like a feature, but they might want a small change.
  • Competitors add features - so should you do the same too? Don’t do it! Focus on users. If you match features, then there is no obvious distinguishing feature between you and the competition. New features tend to get added, but old, irrelevant features tend to stay. Youngme Moon (Harvard professor, author of Different) argues you should invest even more into your strengths, and your weaknesses need only be good enough. The marketing department will beg for each feature under the sun. Don’t blindly add them. Step back, and look at it afresh from the point of view of users. Like Bezos says, focus on:

  • What does the customer want?
  • How can their needs be met?
  • What can be done to enhance service and value?

… rather than focusing on profit, at the expense of customers.

Introducing new products may take time. e.g. multi-touch was around for three decades, but cost, risk, design problems, and legacy systems held it back. Early ideas may fail: e.g. early touch screens, or early cameras such as the Apple QuickTake: (i) the technology was limited, (ii) the price was high, and (iii) the alternative (film) was still viable. Similar can be argued of the first American automobile (the Duryea), the first typewriters, digital cameras, home computers etc.


Written on September 22, 2021