Are we entering a new age of digital experience? Reflections on WWDC 2021

Apple’s Worldwide Developers Conference (WWDC) 2021 showed us a whole host of new features and functionalities, but the trajectory of the interaction between hardware, software and humans could be a significant signpost for our digital future, or indeed just our future.

The hardware/software pendulum

Before we talk about “dub dub” specifically, let’s zoom out a little. The last decade or so has been a journey of identity for devices large and small. What are they meant to do? What are they not meant to do? When would I use a tablet and when would I intuitively reach for my phone instead? Why do I ‘do important things like big purchases on desktop’?

The roles and expectations of different device types are now relatively stable, in part thanks to the inevitable plateau of their underlying technology. It is perhaps no wonder, then, that ‘innovation’—let’s call it significant change instead—for the past few years has been predominantly software-led.

Is this the real life, is this just fantasy?

Whilst this back and forth between hardware and software as drivers of change is nothing new, we may be reaching a tipping point for the next baseline of experience. We are currently living through the nascence of machine learning and on-device processing, which is moving us beyond the early empowerment rhetoric of ‘putting you in control of your device’ to something that was once the stuff of sci-fi daydreams: your device understands you so well that you don’t even need to think about controlling it.

Neural networks and ever-growing sets of training data have allowed digital assistants to both understand and respond in remarkably skilful ways, and for a while it looked like the digital assistant was the software epicentre of the next big shift in human/machine interaction. But that looks to be only part of the story.

Software…is world

Let’s talk about Apple and WWDC ‘21. From Messages to FaceTime, Photos, Maps, Notes… the usual suspects got their updates and became ‘the most advanced, the best performing, the smartest…’ they have been since last year. But something ties all these updates together in a way that points to more than just incremental improvements.

These experience changes across the board are applying real world expectations to the user’s relationship with their device. What I mean by this is that the asymptote—the ‘extremely close, but not there’ gap between ‘digital’ and ‘real’—is closer to being resolved than ever before.

Consider FaceTime with spatial audio: your friends’ voices come from different ‘locations’ in your audio space, matching where each person appears on screen. The new Wide Spectrum microphone mode, meanwhile, lets you bring in all the sound around you, rather than assuming the person on the other end only wants to hear your voice.

SharePlay creates shared experiences across devices: synchronised, shared content and shared playback controls, whether it is within Apple Music, playing picture-in-picture whilst you’re on a FaceTime call, or even when you hand the session off to Apple TV. Notably, the SharePlay API is also open for developers to put into action right off the bat (see the sketch below).
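For a sense of what adoption looks like, here is a minimal sketch using the GroupActivities framework that underpins SharePlay. The activity and its fields (‘WatchTrailerTogether’, ‘trailerURL’) are illustrative assumptions, not Apple sample code, and error handling is simplified.

```swift
import Foundation
import GroupActivities

// A hypothetical SharePlay activity: watching a trailer together.
// GroupActivity conformance requires Codable, which is synthesised here.
struct WatchTrailerTogether: GroupActivity {
    // Shared state participants need in order to stay in sync.
    let trailerURL: URL

    // Describes the activity in the system UI on everyone's devices.
    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Watch the trailer together"
        metadata.type = .watchTogether
        return metadata
    }
}

// Offering the activity: if a FaceTime call is active, the system
// prompts to start the session for everyone on the call.
func startWatchingTogether(url: URL) async {
    let activity = WatchTrailerTogether(trailerURL: url)
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        _ = try? await activity.activate()
    case .activationDisabled, .cancelled:
        break
    @unknown default:
        break
    }
}
```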

The ‘Shared with You’ feature already crops up in a number of circumstances, for example completely integrated into your camera roll within Photos thanks to a subtle but recognisable signifier. Perhaps more notable, however, is that content ‘shared with you’ in Messages also becomes a swimlane in the News app: an understated but significant curveball take on what ‘News’ means to people, and a recognition of the real sphere of influence surrounding an individual.

Live Text within Photos allows you to effortlessly scan and use text or numbers identified in a photo, plus there is the classic ‘Visual Look Up’, made famous by Google Goggles in 2010 but getting more powerful by the year.
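Developers have had access to the same kind of on-device text recognition for a while via Apple’s Vision framework. The sketch below is a minimal illustration of that capability, not the Live Text feature itself; error handling and threading are simplified.

```swift
import UIKit
import Vision

// Recognise text in an image on-device, returning one string per
// detected text region (top candidate only).
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    request.recognitionLevel = .accurate   // favour accuracy over speed
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```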

Maps scans surrounding buildings with the camera to show you which direction to walk in. When using driving directions, the interface now shows overpasses and complex junctions in 3D for extra clarity in lane and exit choice.

Wallet allows you to lock and unlock your home, jump straight into your car and drive without even taking out your phone, and go straight to your hotel room and unlock the door. The potential for evolution in hospitality experiences is not to be ignored here.

Even Health is pushing to become a medically trusted source of information for doctors and patients—the details of which are perhaps suited to a separate article.

Charting the course for our digital future

But there are two changes that illustrate the underlying significance particularly well.

Focus, a big new addition to handling notifications, learns how, where and when you use your phone to proactively group, prioritise and curate your notifications. But it goes beyond this: Focus will also suggest specific home screen configurations in line with its understanding of how you use your device. Craig Federighi spoke of how Focus “matches your device to your current mindset”, which sounds banal on the surface, perhaps because we have heard these promises before from virtual assistants and chatbots, most of which have failed to even scratch the surface of our expectations.
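Focus is largely a system-level feature, but iOS 15 does expose a small slice of it to apps: a communication app holding the Focus Status entitlement (com.apple.developer.focus-status) can ask whether the user is currently silencing notifications. A minimal sketch, assuming that entitlement has been granted:

```swift
import Intents

// Check whether the user's current Focus is silencing notifications,
// so the app can adapt (e.g. defer non-urgent messages).
func checkFocusStatus() {
    INFocusStatusCenter.default.requestAuthorization { status in
        guard status == .authorized else { return }
        // isFocused is nil when the status is unknown or not shared.
        if INFocusStatusCenter.default.focusStatus.isFocused == true {
            print("User is in a Focus session; hold non-urgent alerts")
        }
    }
}
```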

But this could be different. This might be the beginning of something that has been on the horizon for a few years: predictive UI. Pushes for innovation in the fundamental aspects of the interface have largely failed to gain traction, not least because, within the bounds of the input/output frameworks available to us (here we are again, back at the form factor debate…), we have done a pretty good job of refining them: the mouse-and-keyboard plus graphical user interface combination is still going strong. But what if the basic rules didn’t change, and the whole thing just got massively smarter? When Craig says ‘mindset’, we could just as well understand ‘context’, and a device understanding our context is the gateway to J.A.R.V.I.S., Joi, KITT and beyond.

The hidden axis of software innovation might be giving the user what they want, before they even know they need it. If that were to succeed, why would you even need to think about the ground rules of the interface as an end user? The device would become an extension of the self. To the user, what would be the difference between ‘matching’ their mindset and the device simply ‘being’ their mindset?

And this brings us to the second wonderfully illustrative addition to the ‘Apple magic’ collection: Universal Control. An iMac, a MacBook and an iPad sit next to each other… no, it’s not the start of a bar joke, but instead the fulfilment of a lifelong ‘why can’t I just…?’ moment for many of the computer generation. With Universal Control, you can move your cursor to the edge of your screen and miraculously see it straining to enter the screen next to it, as if it had just ‘jumped’ over to the other device. A little more force and it ‘pops’ onto said adjacent device, and so it goes for the device after that. Not only this, but you can grab a file on your iPad, drag it all the way ‘through’ your MacBook and ‘drop’ it on your iMac, as if the cursor itself were moving in physical, real-world time and space. It is astoundingly obvious from a spatial perspective, yet hitherto unseen in the digital world.

The power of perception

But what does it all mean? If Apple and other ecosystems can harness the opportunity of these advancements, it implies a new baseline for ‘digital’ and ‘real’. In fact, it suggests that the distinction may, at least in the eyes of the end user, all but cease to exist. The role of the device itself becomes secondary as experiences become device-agnostic: with no barrier between starting a task on one device and finishing it on another, no interruption when switching contexts whilst talking to friends in a virtual space that mimics real-world acoustics, and no need to think about the interface since it, as an extension of my mindset, is ready for me exactly how and when I need it, the role of the device and the interface, the role of the actively perceived ‘digital’, shifts into the subconscious, leaving nothing but an extension of human capability.

____

Jack Mitchell is Director UX Strategy at MetaDesign Berlin

Photo by Ashwin Vaswani on Unsplash