Over the years we’ve seen several attempts to utilise the sensors built into hardware to improve the user experience, but these haven’t often been particularly useful – more of a gimmick. Smart TVs are one example: hand gestures were tried, and occasionally they made sense – you’re sitting several feet away from the screen and you’ve lost your remote down the side of the sofa.

For example, you could perform simple actions like swiping across the screen to change channels one at a time, but realistically how many people do that? There are hundreds (thousands even) of channels and the ones you want to watch are rarely next to each other, so unless you want a 20-minute workout to find your channel, you’re best off just finding the remote!

We then come on to smaller devices such as phones and tablets. Companies have delved into this space before and there are apps that offer gesture controls, but I still wonder how useful these are. Contactless gestures, for example, feel of limited use when the device is already in your hand. As a result, I thought I’d look more closely at how the built-in sensors and settings of modern hardware could be better utilised to provide a better, more practical user experience in a GUI.


Utilising proximity sensors

Proximity sensors are now commonly used in devices for actions such as sleeping and waking the screen – turning it off when you put the phone to your ear, for example, to avoid accidental input during a call. However, how could this technology be better used to enhance the user experience?

Well, let’s just start by clarifying I’m not going to talk about swipeable carousels or contactless scrolling. One of the challenges on these smaller devices is balancing information hierarchy, clear CTAs and ergonomics.

In the world of e-commerce, for example, one of the core things any business wants you to do is add to the basket and check out easily. However, throw in your upsells/cross-sells, special offers and so on, and it’s easy to dilute the visibility of your key CTA. One option, then, would be to make use of the proximity sensor: the closer your finger gets to the screen, the more you could resize elements (such as the ‘buy’ CTA) to draw attention to them and make it easier for a user to check out.
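
As a rough illustration of the idea, here’s a minimal TypeScript sketch that scales up a ‘buy’ button as an object approaches the screen. It assumes a browser exposing the experimental W3C Proximity Sensor API (support is very limited today), and the #buy-cta selector, 10 Hz frequency and 1.25x cap are illustrative choices rather than anything prescriptive.

```typescript
// Minimal type declaration for the experimental sensor (not in lib.dom.d.ts).
declare class ProximitySensor extends EventTarget {
  constructor(options?: { frequency?: number });
  readonly distance: number | null; // centimetres to the nearest object
  readonly max: number;             // maximum range the sensor can report
  start(): void;
}

const buyButton = document.querySelector<HTMLButtonElement>('#buy-cta');

if ('ProximitySensor' in window && buyButton) {
  try {
    const sensor = new ProximitySensor({ frequency: 10 });
    sensor.addEventListener('reading', () => {
      // Treat a missing reading as "nothing nearby".
      const distance = sensor.distance ?? sensor.max;
      // Map "closer" to "larger", capped at 1.25x to avoid layout jumps.
      const closeness = 1 - distance / sensor.max;
      buyButton.style.transform = `scale(${1 + Math.min(0.25, closeness * 0.25)})`;
    });
    sensor.start();
  } catch {
    // Sensor unavailable or permission denied – keep the static CTA.
  }
}
```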

The other practical use relates to ergonomics. There are typically some hard-to-reach areas of the screen which also happen to be where functions such as the menu and search are commonly positioned.

Companies have experimented in this space before by letting you pull the top of the browser down closer to the main touch area with a combination of taps. An alternative approach, however, would be to sense a finger’s proximity to the top of the screen and bring the functions in the top corners closer, so the user doesn’t have to adjust their grip to reach them.


The same thing applies to exit intent. Many websites trigger exit-intent capture forms based on the position of the cursor on a desktop computer, but watching for proximity to the ‘back’ button, for example, could serve a similar purpose on a phone.
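
For the desktop side of that comparison, the classic exit-intent pattern is straightforward to sketch. The snippet below (TypeScript, with an illustrative #exit-capture element) shows one common approach: reveal a capture form once, when the cursor leaves through the top of the viewport.

```typescript
// Show an exit-intent form the first time the cursor leaves via the top of
// the viewport (typically heading for the address bar or tab strip).
let exitIntentShown = false;

document.addEventListener('mouseout', (event: MouseEvent) => {
  const leftViewportTop = event.relatedTarget === null && event.clientY <= 0;
  if (leftViewportTop && !exitIntentShown) {
    exitIntentShown = true;
    document.querySelector<HTMLElement>('#exit-capture')?.removeAttribute('hidden');
  }
});
```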

While I’m talking about this, I should stress I’m not saying you would put all of these into one UI. After all, it would be easy to over-complicate the experience and ultimately make your site more confusing, but all could be relevant considerations based on your user stories.

With all this talk of using proximity sensors, however, you start to wonder how you could utilise some of these actions on a desktop too. One simple example would be enlarging GUI elements such as CTAs based on the proximity of the cursor. This would provide a more fluid awareness of UI elements than a binary rollover state.
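
A cursor-proximity version of this is easy to prototype with nothing more than mouse events. The sketch below assumes a .primary-cta element and picks an arbitrary 200px radius and 1.15x maximum scale – tune (or throttle) to taste.

```typescript
// Grow a CTA gradually as the cursor approaches it, rather than flipping a
// binary rollover state. Selector, radius and scale factor are illustrative.
const cta = document.querySelector<HTMLElement>('.primary-cta');
const RADIUS = 200; // px within which the element starts to respond

document.addEventListener('mousemove', (event: MouseEvent) => {
  if (!cta) return;
  const rect = cta.getBoundingClientRect();
  const centreX = rect.left + rect.width / 2;
  const centreY = rect.top + rect.height / 2;
  const distance = Math.hypot(event.clientX - centreX, event.clientY - centreY);
  // 0 at the edge of the radius, 1 when the cursor is on top of the element.
  const proximity = Math.max(0, 1 - distance / RADIUS);
  cta.style.transform = `scale(${1 + proximity * 0.15})`;
});
```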


Ambient light sensors

Another sensor built into devices is the ambient light sensor. Its purpose is to adjust the brightness and tone of the screen based on environmental lighting, reducing glare and eye strain for a smoother experience. But what if you could read that light sensor and adjust the UI itself? Some simple CSS work would allow you to optimise your GUI and highlight elements much more clearly based on the intensity of the ambient lighting.
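
By way of example, here’s a hedged TypeScript sketch using the AmbientLightSensor interface from the Generic Sensor API – still behind flags in most browsers, so it should only ever be a progressive enhancement. The 1,000 lux threshold and the high-glare class name are illustrative.

```typescript
// Minimal type declaration for the experimental sensor (not in lib.dom.d.ts).
declare class AmbientLightSensor extends EventTarget {
  constructor(options?: { frequency?: number });
  readonly illuminance: number | null; // ambient light level in lux
  start(): void;
}

if ('AmbientLightSensor' in window) {
  try {
    const sensor = new AmbientLightSensor({ frequency: 1 });
    sensor.addEventListener('reading', () => {
      // Switch to a higher-contrast theme in bright conditions.
      const lux = sensor.illuminance ?? 0;
      document.body.classList.toggle('high-glare', lux > 1000);
    });
    sensor.start();
  } catch {
    // Permission denied or no sensor – stick with the default theme.
  }
}
```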


Dark mode

Dark mode is popular with many people now for the same reasons highlighted above regarding eye strain and glare. The other reason, however, is that people just think it looks cool. So, if you can detect that a device is in dark mode, it may be worth creating a dark mode for your website too. This would provide a seamless transition between OS and website and a more consistent experience for those users, encouraging them to spend longer on your site.
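
Detecting this is already well supported via the standard prefers-color-scheme media query; a minimal sketch (with an illustrative ‘dark’ class on the root element) might look like this:

```typescript
// Mirror the OS dark-mode preference on the site, and keep following it if
// the user changes the setting while the page is open. The same check can
// also be done purely in CSS with an @media rule.
const darkQuery = window.matchMedia('(prefers-color-scheme: dark)');

const applyTheme = (isDark: boolean) =>
  document.documentElement.classList.toggle('dark', isDark);

applyTheme(darkQuery.matches);
darkQuery.addEventListener('change', (event) => applyTheme(event.matches));
```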


Camera

Before I get into this last one, I should call out that it’s just a bit of fun – there are all sorts of question marks around security and performance, but what if you could use the camera to understand the user’s emotion whilst on your site? You could adjust the UI and tone of voice based on a user’s expression. Maybe you pick up frustration or confusion and can proactively offer help on-screen. We use facial expression analysis in our biometric lab, but being able to present information, help or UI changes based on a user’s expression would give another level of hyper-personalisation.


As technology gets better and expectations rise, we’re always looking for new ways to improve the user experience. While constant innovation is important for all of us, and sometimes it’s just about a bit of fun, we should also look at how these innovations can be put to more practical use.