
2019 - Moving Forwards - Voice Technology


So far we've seen two major ways that we interact with computers. 

First we had the TUI (Text-based User Interface), where a user needed to understand the computer's textual requests and responses.  These were clunky in all but the most text-oriented applications (for example, categorising items or adding products) and were of little use with graphical assets (e.g. showing product packaging on an ecommerce site).

Later came (and we are still in) the GUI (Graphical User Interface) period.  This started with keyboard control, moved on to the trackball and mouse, and currently makes use of touchscreen technologies.

Looking forward, we're rapidly moving into the period of the VUI (Voice User Interface) for the majority of our interactions.

Passive vs. Reactive / Active Computing

Traditionally we've lived in a world where computers do something when we use them.  This even carried through to apps: you remember to open the app, then use its functionality.

However, this model of "passive" computing is beginning to wane; instead, apps and programs are becoming "active" and interact with the user, prompting questions and requiring a response. (Read our article on passive vs. reactive computing here.)

VUI is a poor fit for passive interactions: it requires a dialogue between the computing device and the user.  At the moment most interactions start with "Alexa" or "Hey Google", but we predict that will change over the next five years, with your smart assistants actively commencing (and engaging in) conversations with users.

Does VUI require a different form of User Experience (UX)?

The short answer is yes!

Early VUI efforts are failing because they're reproducing existing websites/apps and attempting to map them onto a voice user experience.  This fails in the same way as taking a website and building it directly into an app: positives and strengths become weaknesses and vice versa.

VUX (Voice UX) is nearer to TUI design than to GUI design.  A TUI's UX relied on textual input and commands (rather than clicking/tapping the correct areas), but a VUI also needs to understand conversation flow and synonyms (alternative names for the same nouns and verbs).
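To make the synonym point concrete, here's a minimal sketch of mapping different spoken phrasings onto one canonical action.  The vocabulary is invented for illustration; real VUI platforms ship far larger synonym sets, usually trained rather than hand-written.

```python
import re
from typing import Optional

# Hypothetical synonym vocabulary: many ways to say the same verb.
SYNONYMS = {
    "order": {"order", "buy", "purchase", "get"},
    "cancel": {"cancel", "stop", "never mind"},
}

def canonical_intent(utterance: str) -> Optional[str]:
    """Map the first recognised synonym in an utterance to its canonical verb."""
    text = utterance.lower()
    for canonical, variants in SYNONYMS.items():
        for variant in variants:
            # Word-boundary match so "forget" doesn't trigger "get".
            if re.search(rf"\b{re.escape(variant)}\b", text):
                return canonical
    return None
```

So `canonical_intent("Could you buy me some beansprouts?")` and `canonical_intent("Please order beansprouts")` both resolve to `"order"` — the conversational flexibility a GUI never has to deal with.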

A key component of VUX is being able to infer what a user wants/needs and making that more readily available and simpler in a dialogue.  For example, your assistant asking "You're eating a stir fry tomorrow.  Should I order beansprouts for delivery tomorrow morning?" is far easier than you having to remember and say "Hey Google, can you order me some beansprouts" (leading inevitably to follow-up questions: from where, for delivery when, what size, etc.).
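That kind of inference can be sketched very simply.  Everything below (the pantry, the recipe, the wording) is invented to mirror the stir-fry example; a real assistant would draw on calendars, order history and the like.

```python
# Illustrative data only: a known pantry and a meal plan for tomorrow.
PANTRY = {"rice", "soy sauce"}
RECIPES = {"stir fry": {"beansprouts", "rice", "soy sauce"}}

def proactive_prompt(meal: str):
    """Offer to order whatever the recipe needs that the pantry lacks."""
    missing = RECIPES.get(meal, set()) - PANTRY
    if not missing:
        return None  # nothing to suggest; stay quiet
    items = ", ".join(sorted(missing))
    return (f"You're eating {meal} tomorrow. "
            f"Should I order {items} for delivery tomorrow morning?")
```

The design point: the assistant opens the dialogue with a yes/no question it already knows the context for, instead of making the user recall and dictate every detail.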

How to implement VUI

VUI shouldn't be custom coded: it's pointless.

VUI capabilities should be thought of like an operating system.  No sensible person would attempt to build their own operating system; instead, they build their apps on top of existing operating systems/platforms.

At present it looks like the major full-stack platforms will come from Google and Amazon (i.e. Google Assistant and Alexa), but these are bolstered by AI-as-a-Service platforms like Azure Cognitive Services and AWS, which can be used anywhere (on any device with an internet connection) to add VUI capability.
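Building on a platform mostly means handling the structured request it hands you after doing the hard speech-to-intent work.  The sketch below mirrors the general shape of an Alexa-style `IntentRequest` JSON, but "OrderIntent" and the "item" slot are invented names for illustration, not a real skill definition.

```python
import json

def handle_request(raw: str) -> str:
    """Dispatch a platform-delivered intent request to a spoken response."""
    req = json.loads(raw)["request"]
    if req["type"] != "IntentRequest":
        return "Sorry, I didn't catch that."
    intent = req["intent"]
    if intent["name"] == "OrderIntent":
        # Slots carry the variable parts of the utterance.
        item = intent["slots"]["item"]["value"]
        return f"Okay, ordering {item}."
    return "I can't help with that yet."

# What the platform might deliver after hearing "order some beansprouts".
sample = json.dumps({
    "request": {
        "type": "IntentRequest",
        "intent": {
            "name": "OrderIntent",
            "slots": {"item": {"name": "item", "value": "beansprouts"}},
        },
    }
})
```

Note that your code never touches audio: the platform owns wake words, speech recognition and synthesis, and you own only the application logic.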

Mixing GUI and VUI

Most important apps over the next few years will build on the utility of the GUI and augment it with VUI.

GUI will be used for precision (imagine buying a coat via voice without a display - could you be precise enough on colour, shape and style to find one that suits you?), with VUI adding additional capabilities.

A good example is construction and engineering, where the hands are often occupied.  VUI adds features/access in these circumstances ("show me the plans" could use the user's location to show an architect the right blueprints, before switching to a pointer/pen/mouse for more detailed changes to the plans).
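A mixed GUI/VUI handler might look like the following sketch.  The field names ("speech", "display") and the `show_plans` function are invented for illustration; real platforms each have their own multimodal response formats.

```python
# Hypothetical multimodal response: voice answers the request, and precision
# work is handed to the screen whenever one is available.
def show_plans(site: str, has_display: bool) -> dict:
    """Answer by voice, attaching a GUI payload when a display exists."""
    response = {"speech": f"Here are the plans for {site}."}
    if has_display:
        # The GUI takes over for detailed edits to the blueprints.
        response["display"] = {"type": "blueprint", "site": site}
    else:
        response["speech"] += " Connect a display to edit them in detail."
    return response
```

The split mirrors the article's point: voice for hands-free access, screen and pointer for the precise part of the task.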


The future is coming, but there's still room to improve.  Voice-activated user interfaces already have utility in a number of areas for improving worker productivity, but detailed and precise work is still a way off.


About the author

Stuart Muckley

I’ve been a programmer and IT enthusiast for 30 years (since the ZX Spectrum) and concentrated on AI (neural nets & genetic algorithms) at university. My principal skills are in Enterprise and Solution Architecture and managing effective developer teams.

I enjoy the mix between technical and business aspects: how technology enables and how that (hopefully) improves profit/EBITDA and reduces cost-per-transaction, the impact upon staff and how to remediate go-live and handover, and risk identification and mitigation. My guiding principle is “Occam’s Razor”: simplicity is almost always the best option, reducing complexity, time to build, organisational stress and longer-term costs.

