Saturday 1 November 2014

Moving on

A few months ago we built a new website to tell the story of our new firm, Nomat. Future posts will be published there. Check it out at:

https://www.nomat.com.au/nomat-blog/

I would love to hear your thoughts!

Chris

Monday 25 November 2013

Better User Research Through Surveys on UX Mastery

Last week I wrote an article on using surveys in user experience (UX) for the excellent UX Mastery. The piece covers what surveys are, how they can benefit the design process, what to consider before writing a survey, how to create an effective survey (writing good questions, etc.) and an introduction to some tools.

UX Mastery is a great site with some excellent content, including a resources section with a nice overview of UX techniques and an extensive list of tools. They have also created a fantastic video on what UX is, which is worth checking out.


I hope you like the piece on surveys.

Let me know your thoughts on using surveys in UX or if I missed anything.

Wednesday 20 November 2013

Think the home button is unnecessary? Think again.

There has been a trend, which is not particularly new, to remove the home button from website navigation. I’m not sure why this is; maybe it is to free up some space for other navigation options, or maybe it is because there is an assumption that users understand the convention of making the company logo a link to the homepage. Regardless of why it is done, countless usability test sessions I have observed and run strongly indicate that this simply doesn’t work.

I recognise that a large number of users understand the logo convention; however, time and time again participants turn up who are not familiar with it. Usually their only way of getting back to the homepage is to use the back button – which can be a real pain if they have been on the site for a while, or if the back button acts as an undo for in-page functionality (think search filters). While it may seem trivial, getting back to the homepage is a fundamental aspect of user behaviour when navigating websites. This remains true today, even when we know that less traffic arrives at the homepage (http://giraffeforum.com/wordpress/2010/04/18/the-decline-of-the-homepage/). The homepage is commonly used by people to orient themselves on a site. Something along the lines of: the information I was seeking wasn’t there, so I’ll go back to the homepage to look elsewhere for it. Ironically, it can be the users who are less experienced or confident online who get lost and need the homepage to re-orient themselves.

LinkedIn uses a very conventional home link which will almost certainly be understood by users:



If there is a genuine need to deviate from a conventional home link, below are two examples of sites which take a slightly different approach but are still likely to be effective.



Both sites place the link in the top left of the screen, a conventional location. Asos saves space by using an icon, while The Iconic avoids having to include a home link on its homepage by using prominent breadcrumbs.


The humble home link plays an important role in helping people navigate and use a website effectively. If there is a need to avoid a conventional home link, consider a creative approach, but keep in mind that usability testing with real users remains important when deviating from known and established conventions.

Thursday 24 October 2013

How to moderate a usability test


Last week I had the pleasure of running some training and mentoring for moderating usability tests. I have gained a lot of experience in running usability tests; working for one agency I was running up to 8 sessions a day! Within the industry there is a variety of skill levels and expertise in this area. I have witnessed moderators who act like chameleons, adapting to each participant and skilfully eliciting feedback like a puppet master. I have also been horrified by clumsily structured sessions where the participant was uneasy, plied with direction on how to act, and left in no doubt about the feedback the moderator would like.

Moderation is not a difficult skill to learn; however, the difference between good and bad moderation is huge and can have major implications for your project. I genuinely believe that anyone can learn the skills required with the right guidance. Like any skill, moderation takes time to master, and mentoring can help. This post covers some of the basics of moderation; getting these right should lead to an effective usability test.

1.   Start on the right note: The key to a successful session is engagement. From the minute you first interact with the participant it is crucial to put them at ease and make them feel comfortable. This rapport building is about gaining the participant's trust as well as letting them know you are in charge. Mutual trust helps the participant feel comfortable that the session will be run in a manner where they are not going to be judged or made to feel uneasy. This comes down to a friendly but professional demeanour. Rapport can be built quickly: be friendly and warm, be aware of your body language, ask some open questions about their day or job, actively listen to their small talk, and show that you are interested in what they have to say.

2.   Make the process crystal clear: Setting boundaries with participants, as well as providing a clear understanding of the process, is crucial to making sure that they know what to expect and what is expected of them. This can overcome some potential issues: the anxiety some participants feel about undertaking a test, the desire of participants to please the moderator (telling you what they think you want to hear), and their potential diversion into trying to solve design issues during the test.

A good script can be utilised to communicate key parameters to participants including:
a.     That you want to understand how they would use the interface in their typical environment.
b.     That you will evaluate (and ultimately improve) the website by observing their interactions with it, and that with this understanding you will work out how it needs to be designed; make it clear that the participant doesn’t need to worry about re-designing the site during the session.
c.      That you are not testing them. A successful usability test is one where participants behave as normally as possible (or as close as is realistic in an unnatural setting). Other tactics can be employed, such as avoiding the term “task” and using “activity” instead. Also avoid comments like “good” or “well done”. While it is unrealistic to totally eradicate the sense that they are being tested, it can be minimised.
d.     That you are independent of the design and that you won’t be offended by negative feedback. Independence is crucial to promote honest feedback (see Usability testing: Does independence matter?). The reverse can also be true: making out that you expect negative feedback can adversely affect a session.

3.  The masterful art of deflection: Some participants will seek assistance from you as the moderator (a good introduction script should reduce the likelihood of this happening). When this happens, the key is to deflect the question while maintaining the participant’s engagement. Deflecting the question poorly, for example with “My role is not to answer your questions”, can actually do harm by making the participant feel that their feedback is unimportant. A better approach is to use phrases such as “What do you think?” or “Let’s discuss that in a moment”. A respectful tone of voice is essential when deflecting.

4.   Not leading participants in their discussion or behaviour: Eliciting unprompted and honest feedback is fundamental to effective moderation. The way you phrase your questions can lead to completely different responses, so it is important that participants are not led to an answer. An example of a leading question would be “Did you find creating a password difficult?” A non-leading alternative such as “How did you find creating a password?” will elicit a truer response.

5.   Get comfortable with silence: Silence is one of the most effective moderation techniques. As outlined above, questions can impact a participant’s response, especially during a task. This presents a challenge, as any stakeholder viewing the session will want you to ask why: “Why did the participant select that option?”, “Why didn’t the participant sign up?” and so on. Instead, a better choice is to encourage natural behaviour by deferring discussion until after the activity or to an appropriate pause.
Silence is difficult, and sometimes you will want to encourage your participant to continue. Minimal encouragers, such as “I see”, “tell me more about that” and “and then…”, are a nice supplement to silence if you feel your participant needs encouragement, as they don’t impact their train of thought or their subsequent behaviour.
It can be argued that asking probing questions as things happen is likely to reveal more insightful feedback, and at times this can be of value. You need to ask yourself: will asking this question now have an adverse impact on behaviour? If so, does the insight gained outweigh changing their natural behaviour?

6.   Listen: As someone much smarter than myself once said, “we have two ears and one mouth so that we can listen twice as much as we speak”. When moderating we should follow this advice, and then take it to the extreme. Active listening will make your participant feel comfortable and heard, and allow trust to be built, hopefully supporting them to participate in the test more fully. The best way to start active listening is to genuinely listen and show an interest in what the participant is saying (don’t tune out – remember you don’t have to be friends, and it will be over in 60 minutes or so!). You can show an interest in their responses through your body language and posture, eye contact, use of minimal encouragers, asking open questions and attentive silence. Finally, active listening requires the moderator to get feedback from the participant that their message is being understood in the way it was meant. This can be done by reflecting back to the participant what they just said, e.g. Participant: “I didn’t like setting up the password, it was hard” – Moderator: “So you are saying that the password set-up was challenging?” Apart from showing the participant that you have actually listened, this allows you to check that you have understood what they have communicated, validating your conclusions.

Conclusions
Mastering the art of moderation can take years, running lots of sessions and making lots of mistakes. That said, learning to gain useful insights from usability testing can be quite straightforward. The most crucial elements are learning to listen, making it clear to the participant what is expected of them, and starting on the right note.

Tuesday 16 July 2013

5 web analytics reports for UX


Web analytics is acknowledged as a core competency for managing websites, but I'm amazed at how infrequently I see it being used by UX practitioners. It amazes me because I think it is one of the most valuable information sources available. I'm going to discuss some of the basic reports which can be used to inform the design process and help any UXer to better understand the end user.

The following reports have been selected because they give a sense of how a site is being used. This is really simple stuff, but it helps to provide a foundational understanding of how people are behaving on a site.

1. Visitation by day, week and year
Looking at visitation by time provides insight into the usage patterns of the site. The purpose of the site should dictate how we would expect it to be used. Would seasonal usage make sense? Are people likely to use the site more or less on the weekend? Do we expect usage to increase in the evening? Looking into where the data supports or contradicts these expectations is a great way to start using analytics. Understanding when the site is being used allows us to create hypotheses about how the site is being used more broadly, and what additional data is required to confirm or dismiss these theories. For example, if peak usage is on weekdays between 7:30-9:00am and 5:00-6:00pm, we might assume that the site is being used while people commute to work. Further evidence would be required to confirm this, from within the web analytics data as well as from other available information sources. When looking at time of day, remember to consider the various time zones throughout the world, and make sure the time zone in the account has been configured to the most relevant location.
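As a concrete illustration, the sketch below pulls this report programmatically using the Google Analytics Core Reporting API (v3) with the google-api-python-client library. The profile ID and the authorised HTTP object are placeholders of mine; you would substitute your own view (profile) ID and OAuth credentials as per Google's API documentation.

    # Sketch: visits by day of week and hour via the Google Analytics
    # Core Reporting API (v3). Assumes OAuth is already complete and
    # `authorized_http` carries valid credentials; 'ga:12345678' is a
    # placeholder profile ID.
    from apiclient.discovery import build

    def visits_by_day_and_hour(authorized_http, profile_id='ga:12345678'):
        service = build('analytics', 'v3', http=authorized_http)
        results = service.data().ga().get(
            ids=profile_id,
            start_date='2013-01-01',
            end_date='2013-06-30',
            metrics='ga:visits',
            dimensions='ga:dayOfWeek,ga:hour',
            sort='ga:dayOfWeek,ga:hour').execute()
        # Each row is [dayOfWeek, hour, visits]; dayOfWeek '0' is Sunday.
        for day, hour, visits in results.get('rows', []):
            print('%s %s %s' % (day, hour, visits))

The same pattern yields the other reports below by swapping the dimensions and metrics, e.g. ga:pagePath with ga:pageviews for top content, or ga:searchKeyword with ga:searchUniques for internal site search (dimension and metric names per the v3 API reference).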

2. Top content
Understanding the content being used is crucial during the design process. This helps us to identify content which is, or isn't, of value to the end customer. This exercise can be exceptionally valuable for identifying what content should be prioritised and what can be culled. It does need to be stated that the content being viewed is simply that: it may or may not be what users want to view. Other data sources such as surveys and interviews can provide insight into what people want to find on a site. Segmenting content data can provide even more insight; for example, looking into the content viewed by customers who made a purchase or became members can highlight some of the information necessary for carrying out these activities.

3. Internal site search 
This report shows the common terms users are searching for on the site. We do not know whether a term is being searched for because the content cannot be located, or because it is being sought by someone with a preference for search (there are other tools and techniques for answering this, which I am not going to discuss in this post). Nevertheless, it does give some understanding of what people could be interested in on the site, and this data may support an existing assumption about what people can't locate or want to locate.

I should note that I have seen lots of Google Analytics configurations where internal site search has not been set up. It is a simple process which can be seen in these instructions.



4. Keywords

Keywords are the terms which are typed into a search engine to locate a site. They form part of the traffic sources information, which covers how people get to a website: directly, via a referral, via a campaign, or via a search engine. Keywords provide great feedback on the context in which the site is used. Have people stumbled on the site as a result of a blog post, were they seeking a specific product, or have people typed in the business name? Each of these scenarios helps to build our understanding of how people use a site, which in turn informs how to best improve the site to meet user needs.

As a side note, a site may have '(not provided)' in the list of keywords. The proportion of searches identified as 'not provided' has increased since 2011, because Google changed their policy on passing on the search terms of account holders who are logged in. For more information see the Google Analytics blog.

5. Top landing pages
These are the most common pages people arrive on when coming to a site. On most sites this will be the homepage; however, over the years the proportion of visits which start on the homepage has declined due to deep-linking to content via search engines. This information can be used to prioritise effort for enhancing the site. Furthermore, the site can be evaluated based on the experience of arriving on a specific page. Identifying the most common landing pages can be a great starting place for more detailed analysis. The landing pages report also includes a number of metrics, including bounce rate, average visit duration and pages per visit. These allow an evaluation of how effective a landing page is; using this information in combination with keywords can provide considerable insight.
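For the curious, here is a sketch along the same lines as the earlier snippet: pulling the top landing pages with their entrances and bounces, and computing the bounce rate (bounces divided by entrances). Again, the profile ID and authorised HTTP object are placeholders of mine.

    # Sketch: top landing pages with a computed bounce rate, using the
    # same Core Reporting API (v3) pattern as the earlier example.
    from apiclient.discovery import build

    def top_landing_pages(authorized_http, profile_id='ga:12345678'):
        service = build('analytics', 'v3', http=authorized_http)
        results = service.data().ga().get(
            ids=profile_id,
            start_date='2013-01-01',
            end_date='2013-06-30',
            metrics='ga:entrances,ga:bounces',
            dimensions='ga:landingPagePath',
            sort='-ga:entrances',
            max_results=20).execute()
        for page, entrances, bounces in results.get('rows', []):
            bounce_rate = 100.0 * int(bounces) / int(entrances)
            print('%s: %s entrances, %.1f%% bounce rate'
                  % (page, entrances, bounce_rate))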

Conclusion

These reports are ideal for getting a taste of web analytics and a sense of how people are interacting with a site. Key findings identified from this analysis would be of great value during any project. They also lay the foundation for further analysis, and greater insight can be gained through exercises like segmenting the data.

Tuesday 11 June 2013

User research in an agile environment

I was invited to speak at the Global Reviews Digital Leader Summit last week which focused on driving customer engagement, sales and program velocity. The summit was a tremendous success with an impressive list of attendees and speakers.  It was daunting to speak after Chris Ho from NAB and Barry Newstead from Australia Post, who provided insightful and engaging presentations. That said, I enjoyed presenting and being able to share my views on the importance of user research in web design within an agile development process.

I thought I would take the opportunity to share some of my presentation here. 



The increased adoption of the agile development process within the digital industry presents a challenge for incorporating rigorous research into the design process, at least in the way that we have traditionally approached research, because of the time involved in doing research with rigour.

Not including research can result in building products which do not meet user needs. This is ironic, because ensuring the end product aligns with user requirements, instead of spending years building a product which is not what people want, is one of the reasons organisations embrace agile in the first place.

At its essence, research is about extracting information. However, that information only has value if it is accurate and can be relied upon. To make research worthwhile, we need to do it properly.

The problem: Conducting rigorous research within an agile development process

Research can be a slow process, potentially incompatible with the rapid iterations of agile. This is due to:
  • Recruiting participants for qualitative activities like interviews or moderated usability testing, which can take as much as 2-3 weeks.
  • Conducting qualitative activities such as interviews or contextual enquiry, which is time intensive: typically days or weeks of work as opposed to hours.
  • The analysis phase, which can also be time intensive; identifying the insights from research takes time, as opposed to simply regurgitating observations and direct feedback from users.

Incorporating design research into an agile process


Much like the culture shift which is required for going ‘agile’, research also needs to become a part of the culture. Here are some ways to include research in agile.

1.    Effective planning: Research activities must be planned for. Sprint zero can be used to define the research needs generally, as well as the information sought from research. This could include any outstanding questions regarding users, and identifying the design assumptions which require validation. Research activities can then be scheduled for upcoming sprints, which accommodates the time involved with recruitment. For example, we can schedule a round of usability testing in two weeks' time to test the primary design assumptions, and include any questions which arise over the coming weeks.

2.       Using time-efficient techniques and tools: Online quantitative techniques such as un-moderated usability testing, tree testing studies for evaluating IA, surveys and online card sorting can be conducted without the lengthy fieldwork periods associated with qualitative techniques.

Some of the great tools out there include Userzoom, a comprehensive suite of UX research methods including un-moderated online usability testing, card sorting, IA testing and survey capability. It is an enterprise-level tool which is used by Global Reviews. Optimal Workshop is another example of the tools available, offering card sorting and IA testing.

A further factor reducing the time associated with these tools is their great analysis functionality, which can dramatically reduce the amount of time taken to complete analysis in comparison to in-person methods. Time is also saved in the collection of data.

3.       Putting the right systems and processes in place: A key requirement is setting up access to customers so that you can get rapid feedback. A database of customers who are willing to participate in research is ideal, and can be effectively supported by an active social media presence. By having a system for getting access to customers quickly, it is possible to dramatically reduce the time involved in recruitment.

Customers can be asked to get involved in research during a sign-up process or via communication channels such as email.

Another approach is to schedule research at set intervals; for example, customer interviews once a month, every month, regardless of the current research needs and information required. I heard a great interview with Tomer Sharon on Gerry Gaffney’s fantastic UX Pod where he talked about using this type of approach at Google. He conducts ‘Fieldwork Fridays’, where he gets software engineers to conduct research with customers on a regular basis, and argues that this has had a huge positive impact on their products.

Both of these approaches overcome the long lead times associated with recruitment. The key is to have customers ready to participate in research activities at short notice.

Example: bringing it together


To provide an example, I recently conducted a card sort within an agile team. There were questions about how the IA should be labelled and grouped; an understanding of how customers thought about the content was core to creating a successful design. Evidence was also required to justify decisions to stakeholders. On a Wednesday morning I created the card sort in Optimal Sort with around 40 cards. At midday it was sent out to customers. By 10am the next day I was analysing the results, and in the afternoon I was able to provide feedback to the product manager and the rest of the team. This was a great example of having the right systems in place and making use of the right tool to provide rigorous and rapid feedback.
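Tools like Optimal Sort handle the analysis for you, but the core of open card sort analysis is worth understanding. Below is a minimal sketch (the input format and names are illustrative, not Optimal Sort's actual export) which builds a co-occurrence matrix recording how often each pair of cards was grouped together, then feeds it into hierarchical clustering with SciPy.

    # Sketch of basic open card sort analysis: build a card-by-card
    # co-occurrence matrix and cluster it. The input data below is
    # illustrative only.
    import numpy as np
    from scipy.cluster.hierarchy import linkage
    from scipy.spatial.distance import squareform

    # One entry per participant; each entry is that participant's groups.
    sorts = [
        [['delivery', 'returns'], ['shirts', 'jeans', 'dresses']],
        [['delivery', 'returns', 'contact us'], ['shirts', 'jeans']],
    ]

    cards = sorted({card for sort in sorts for group in sort for card in group})
    index = {card: i for i, card in enumerate(cards)}

    # co[i, j] = number of participants who put cards i and j in the same group.
    co = np.zeros((len(cards), len(cards)))
    for sort in sorts:
        for group in sort:
            for a in group:
                for b in group:
                    co[index[a], index[b]] += 1

    # Convert agreement into a distance: pairs grouped together by most
    # participants end up closest in the resulting clusters.
    distance = 1.0 - co / len(sorts)
    np.fill_diagonal(distance, 0.0)
    clusters = linkage(squareform(distance), method='average')
    # `clusters` can now be drawn as a dendrogram to suggest IA groupings.

A result like this doesn't replace the researcher's judgement about labels and edge cases, but it makes the grouping evidence easy to present to stakeholders.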

Thursday 18 April 2013

Usability testing. Does independence matter?


I caught up with a former client recently and we got onto the topic of independence when usability testing a design. They were lamenting the fact that there are very few companies who focus on usability testing without also providing their own design services. Having tested my own designs and those of my colleagues, as well as having worked as a third party brought in to assess an agency’s design, I have a strong view on the role of independence. I feel that it is impossible to be truly objective about your own design. This is not to say that you shouldn't test your own design. Confused? Let me explain.


Generally speaking, there are two questions which usability testing answers: 1) How usable is an interface? 2) How can an interface be improved? Often these questions are explored together; however, identifying which is the most important will dictate whether independence is required.

When the goal of testing a design is to gain an accurate measure of how usable it is, the testing requires an independent practitioner with no vested interest in a positive or negative outcome. The person or organisation which created the design has an interest in a positive outcome, which will impact their assessment of the interface's usability. This is not to suggest that a designer intentionally goes out of their way to present the results in a misleading manner, although I am sure this has happened. Put simply, the assumptions which were made during the design phase make it extremely difficult to objectively assess behaviour during a usability test. The outcome is more likely to be favourable because the practitioner is more likely to acknowledge the behaviour which supports their assumptions (the salience effect). Furthermore, an agency which has designed an interface is also compromised in its assessment of usability, even in cases where separate teams design and evaluate the interface. Again, this is because they have a vested interest in a positive outcome. This is never more evident than during difficult projects and when budgets tighten.

There are a number of scenarios where I think independence matters:

  • A project where significant investment has been made to design an interface
  • Interfaces that will have a significant impact on the bottom line, e.g. a new eCommerce platform
  • Projects which are highly strategic for an organisation and numerous stakeholders are involved
  • Projects where the team must report to a steering committee or senior management
  • Interfaces that depart from industry or design conventions

In all of the above scenarios, being able to state that an interface has been independently tested and is usable is of great value to the designers and the project owners. It also represents a risk-mitigation strategy for organisations when assessing readiness for deployment.

So, having argued all the pros of independence, when does independence become less important? When the goal of usability testing is purely focused on enhancing the design. Why? Because a good designer should be able to see the flaws in their execution, and ultimately has an interest in creating the best possible design. Hopefully they are motivated to identify opportunities for improvement.

Ironically, it seems the UX marketplace is moving away from independent usability testing despite the value that it offers to organisations, as well as the integrity it affords the design process itself. Focusing on the objectives of usability testing will help to clarify whether independence is necessary. I think that if understanding usability is the goal, an agency or designer cannot objectively test their own design.

What do you think?