A Crash Course in Designing Stuff
It goes without saying that we all want the things in our lives to be the best they can be, and we just can’t seem to resist the urge to continually improve stuff. In some ways, we’re all engineers, striving to improve the world around us.
Well, when it comes to designing software and systems, we’re no different! And here’s the important bit – software systems are often designed to be used by people! This means that the success of these systems will often depend to a very large extent on an understanding of the people who will actually use them, how they will use them in the real world, their businesses and a whole host of other ‘human factors.’ And so, to satisfy the urge to ensure that people experience these systems in the best possible way, User Experience Design (UXD) obsesses over applying a finite set of design principles that take human behavioural patterns into account.
So let’s begin with an exploration of some of UXD’s guiding principles!
Instant gratification
For the most part, humans are compelled to act on pleasure. This ranges from satisfying very immediate physiological needs, such as the pleasure we take from eating, breathing and drinking, to the pleasure we take from more complex acts, such as learning or a sense of social belonging (family, friends and the like).
Instant gratification is our desire to take pleasure, immediately, from our actions. It is the happiness we experience when we achieve our goals in the immediate present. This immediate pleasure acts as a motivating force and drives many of our actions, but it can also be a major cause of frustration when we don’t get what we want.
As creators of software, we need to be aware that we’re expected to deliver systems that provide users with instant gratification: systems that respond immediately to users’ actions and inputs, are designed for predictability and speed, keep users informed at all times, and provide useful, rewarding and actionable feedback.
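To make this concrete, here’s a minimal TypeScript sketch of one common tactic, the ‘optimistic update’ (the `Item` type and `saveToServer` stub are illustrative assumptions, not any particular framework’s API): the interface responds to the user’s action immediately, and reconciles with the server afterwards.

```typescript
type Item = { id: string; starred: boolean };

// Stand-in for a real network call (imagine a PUT to /items/{id} here).
async function saveToServer(item: Item): Promise<void> {
  // the real request would go here; this stub simply resolves
}

// Optimistic update: reflect the user's action onscreen immediately,
// then reconcile with the server in the background.
async function toggleStar(item: Item, render: (item: Item) => void): Promise<void> {
  const updated = { ...item, starred: !item.starred };
  render(updated); // instant gratification: no waiting on the network

  try {
    await saveToServer(updated);
  } catch {
    render(item); // roll back and let the user know the action didn't stick
    console.warn("Couldn't save your change - please try again.");
  }
}
```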
Forgiveness
Humans make mistakes when using a computer. And more often than not, they will want to recover from these mistakes. Forgiveness refers to a system’s ability to prevent users from getting into trouble in the first place, and then its ability to recover from any errors if and when they do occur.
Forgiveness has many implications for the overall user experience because it empowers users to interact freely without fear of making mistakes, which in turn creates a sense of control and positive feelings towards the system. This increases users’ willingness to use the system and the likelihood that they will succeed in whatever they are trying to accomplish.
Designing for forgiveness can follow different paths. One involves trying to anticipate user actions that may lead to catastrophic errors and then making the occurrence of these errors impossible. For example, ABS prevents car wheels from ‘locking’ when braking suddenly at high speed. Another strategy is to put in place ‘safety net mechanisms’ to minimise the impact of catastrophic errors. Examples of these are airbags in cars, fighter jet ejector seats and emergency diesel generators in hospitals, all protecting against the consequences of human error. A further preventative mechanism is to build warnings into systems that then require explicit confirmation from users before executing potentially destructive actions – everyone who has ever seen the message: “You have unsaved spreadsheets, do you want to quit MS Excel?” will be familiar with one such mechanism.
Another forgiveness strategy involves reversible errors, allowing the error to occur but providing a way of undoing it. Here we can think of the recycle bin on a desktop computer and the ability to restore deleted items. Of course, designing the system with intuitive controls, good ‘affordances’ and unambiguous ‘signifiers’ minimises the probability of human error in the first place.
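As a rough illustration of reversible actions, here’s a minimal TypeScript undo stack (the command interface and the ‘recycle bin’ example are simplified assumptions, not a production design):

```typescript
// Every reversible action is expressed as a do/undo pair.
interface Command {
  execute(): void;
  undo(): void;
}

class History {
  private done: Command[] = [];

  run(cmd: Command): void {
    cmd.execute();
    this.done.push(cmd); // record the action so it can be reversed later
  }

  undo(): void {
    this.done.pop()?.undo(); // nothing to undo is simply a no-op, not an error
  }
}

// Example: 'deleting' a file moves it to a recycle bin, keeping the action reversible.
function makeDeleteCommand(items: string[], bin: string[], index: number): Command {
  let removed = "";
  return {
    execute() {
      [removed] = items.splice(index, 1);
      bin.push(removed);
    },
    undo() {
      bin.pop();
      items.splice(index, 0, removed);
    },
  };
}

const files = ["report.doc", "photo.png"];
const recycleBin: string[] = [];
const history = new History();

history.run(makeDeleteCommand(files, recycleBin, 0)); // files is now ["photo.png"]
history.undo();                                       // back to ["report.doc", "photo.png"]
```

Because every destructive action is expressed as a do/undo pair, nothing the user does is final – which is precisely what forgiveness asks for.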
What’s more, forgiveness is not always about managing errors. Most of the time, systems need to implement forgiveness mechanisms so that users can explore things, do stuff, reconsider the consequences and then go back a few steps to try something else, without having to redo everything from scratch.
Recognition over recall
Humankind survived and evolved through seeing, hearing, smelling, tasting and feeling, and so it is no surprise that it is much easier and faster for us to recognise something we have previously experienced through sensory perception than it is to try and logically recall it.
This is one of the reasons why visually-metaphorical user interfaces have come to replace the earlier command-line user interfaces. It’s also why web stores that present rich, visually-symbolic representations of product categories are easier and faster to use (and more profitable) than ones that only offer written descriptions.
Software design also requires consistency – treating the same controls and inputs in the same way, for example – to help create an ecosystem that is easily recognisable and that quickly becomes familiar.
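A small sketch of the idea in TypeScript (the option list and limit are arbitrary): instead of forcing users to recall an exact name, the interface filters a list of known options for them to recognise as they type.

```typescript
// Recognition over recall: show matching options as the user types,
// so they only have to recognise the right one, not recall it exactly.
function suggest(query: string, knownOptions: string[], limit = 5): string[] {
  const q = query.trim().toLowerCase();
  if (q === "") return knownOptions.slice(0, limit); // offer options even before typing
  return knownOptions
    .filter((option) => option.toLowerCase().includes(q))
    .slice(0, limit);
}

const categories = ["Laptops", "Lamps", "Laundry", "Speakers"];
console.log(suggest("la", categories)); // ["Laptops", "Lamps", "Laundry"]
```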
Affordances, perceived affordances and signifiers
Objects have physical properties that induce or allow certain features or behaviours and inhibit others. A football induces rolling. A chair allows people to sit on it. A hammer induces, well, hammering stuff with it! A football does not induce hammering with it. A chair does not induce rolling. And a hammer does not induce being sat on!
‘Affordances’ are the physical characteristics of objects that influence and allow us to predict how they work, and what they are intended for. ‘Perceived affordances’ are our understanding (or mental construction) of how an object works and what it is meant to do when we interact with it, based on our perception of its affordances.
When designing human-machine interfaces, everything onscreen plays a huge role in communicating how to use the software – from labels to content, from colours to shapes, from controls to spatial arrangements. None of this should therefore be arbitrary. A push button is only a push button if it looks and behaves as we have come to expect a push button to look and behave.
In software interface design, there is a very thin line between something that is ‘simple’ and something that is ‘enigmatic’ and it is very easy to cross this line, especially when making use of ‘flat design’ principles. When designing software, the words ‘unclear’, ‘enigmatic’, ‘exploratory’ and ‘trial & error’ are precisely the kind of words that we want to steer away from. We want our software to be clear, unambiguous and intelligible. And that has a lot to do with how we handle affordances at the very early stages of software design.
Performance load
Performance load reflects the amount of effort, both mental and physical, that is necessary to fulfil a goal. The more effort necessary to operate a system, the less likely users are to achieve their goals, and the greater the chances of error.
Strategies to reduce performance load to a minimum include minimising the amount of onscreen information, prioritising features for common use scenarios, prioritising information for different degrees of user expertise, maximising consistency across the entire software ecosystem and providing clear, informative and actionable system feedback at all times.
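As one small, illustrative example of trimming effort (the field names and the default value here are made up), a system can pre-fill a form with what it already knows, so the user only supplies what’s genuinely new:

```typescript
type ShippingForm = { name: string; country: string; address: string };

// Pre-fill the form from whatever profile data already exists,
// so the user only has to type what the system doesn't know yet.
function prefill(profile: Partial<ShippingForm>): ShippingForm {
  return {
    name: profile.name ?? "",
    country: profile.country ?? "United Kingdom", // a sensible store-wide default
    address: profile.address ?? "",
  };
}

console.log(prefill({ name: "Ada" }));
// { name: "Ada", country: "United Kingdom", address: "" }
```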
Aesthetics-usability effect
Humans perceive a system that is visually or otherwise sensorially pleasing to be easier to use. The so-called “wow factor” is therefore nothing but an expression of this human trait – people tend to favour aesthetically pleasing stuff.
Designing rewarding software experiences requires careful and thorough experimentation at the visual layer, a sense of order and harmony, enhanced legibility and readability, meaningful and exciting micro-interactions and an overall environment that appeals to our senses and our brains.
Sure, we could argue that “as long as the necessary features are implemented, the software will work and that’s what really matters.” But that’s only part of the story. The way a system represents its features onscreen is just as important as the features themselves. If people can’t use these features, don’t understand them, don’t relate to them or simply don’t trust them, then very quickly all that effort on the software engineering side counts for very little.
Progressive disclosure
Dealing with complexity is hard. A common strategy to manage complexity is to break it down into parts and deal with each part one at a time. Progressive disclosure is a strategy to manage complexity by displaying only the necessary content and controls onscreen, typically in steps or sequences. This helps users to complete very complex tasks one step at a time, decreasing the overall cognitive load.
Again, this is something we’re all actually familiar with. When shopping online, we often enter a checkout sequence where we’re asked to review our personal details, the items we’re buying and our payment details before actually confirming the purchase.
Another example that’s prevalent today is the setup wizard that takes you through the initial configuration of a new smartphone. All of these manage complexity by exposing users to only the information they need to interact with, one step at a time.
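Here’s a minimal TypeScript sketch of the mechanic behind such wizards (the step names are illustrative, echoing the checkout example above): a tiny state machine that exposes one step at a time and lets users step back.

```typescript
// A tiny wizard state machine: only the current step is ever shown,
// so users face the task one step at a time.
const steps = ["review-items", "personal-details", "payment", "confirm"] as const;
type Step = (typeof steps)[number];

class Wizard {
  private index = 0;

  get current(): Step {
    return steps[this.index];
  }

  next(): Step {
    if (this.index < steps.length - 1) this.index++;
    return this.current;
  }

  back(): Step {
    if (this.index > 0) this.index--; // forgiveness: stepping back is always allowed
    return this.current;
  }
}

const checkout = new Wizard();
console.log(checkout.current); // "review-items"
console.log(checkout.next());  // "personal-details"
console.log(checkout.back());  // "review-items"
```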
Metaphors
Software uses metaphors to symbolically associate actions or ideas to concepts that are understandable in our everyday lives.
The most common use of metaphors in interaction design takes the form of iconic graphics which stand for atomic, discrete features. Obvious examples include the trash can acting as a metaphor for the feature of data deletion, or the magnifying lens as a metaphor for the find or search feature. But metaphors can also be broader and set up an entire mental framework for a whole software ecosystem, such as the desktop paradigm used by Windows.
Metaphors are a major signification tool used by interaction designers to convey complex meanings with ease. But they also carry a risk: they are circumstantial, context dependent and certainly not universal in what they represent. One example of this is found in the typical email icon on a desktop. Often a visual representation of a paper letter or an envelope is used, perhaps nostalgically referencing a time when people used to handwrite letters on paper. Most people don’t do this anymore. The metaphor is becoming increasingly obsolete because its real-world reference point is increasingly redundant.
However, frequent use can also lead a metaphor to transcend its original signification, to the point where the association with real-life objects becomes nothing but a post-modern pastiche. Take the Home button, for instance. The “house” pictogram does not really stand for a house. It has grown beyond its initial metaphor and has become a universal symbol that serves its purpose directly, through common digital use.
Reciprocity
Reciprocation is a fundamental social dynamic, a basic human behaviour. Game theory demonstrates how reciprocation creates cohesion between individuals, enabling them to build social relationships and to do better together than they would alone.
The reciprocity principle in software refers to the interaction strategy of giving something before asking users to return a favour.
Software that follows this principle aims to build trust.
Returning to our example of online purchases, after purchasing an item, the software that processes your purchase might ask you if you’d like to create a registered account, so that it will be easier for you to purchase something next time you return. Just like in the ‘outside world’, this helpful reciprocation encourages relationship building and creates loyalty.
Mental models
A mental model is a human construct of how things work.
It represents our understanding of how a system functions when we encounter it, based on our previous experiences with similar systems. It’s essentially a prediction.
Imagine handing your iPad to your grandfather. Or sharing your Technics vinyl turntable with a young child. In all probability, they won’t have a clear frame of reference for how to use the respective ‘systems.’ Even if they can tell you what you’re showing them, it is unlikely they will be able to operate the equipment because they don’t have a clear mental model of how it works.
When designing interactions for software systems, it is critical to make them compatible with users’ mental models, for ease of use. This is a major reason why we need to understand real users and real scenarios of use at the early stages of design: so that we can come up with solutions that close the gap between what users already know and the knowledge our new system requires of them in order to operate it.
So those are some of the most important principles of good UXD. They represent a useful starting point when developing any software system. Great software design starts with obsessing over these details – the more we care about these principles, the more likely our software will be fit for the job it’s intended to do.