Today’s immersive technologies are still at the earliest stage of development compared to TV and cinema, but non-linear storytelling and artificial intelligence are revolutionising how we interact with the real and the virtual world.
When we look at today’s immersive technologies, what we find is a range of clearly defined experiences: full 3D world replacement in VR, and augmented and mixed reality, where graphics let us interact with the real world in new ways. But these technologies are still in their infancy and we have a lot to learn about how people use them and what works creatively.
Virtual reality is one of the most hyped and best understood immersive technologies of recent times. As a synthetic reality that replaces normal experience, it isn’t something we can dip into at any moment: we have to take time out in a dedicated space, with a headset, to enjoy it comfortably.
Augmented reality (AR) requires less effort to enjoy. It takes a video feed or a live view of your surroundings and overlays data on it, so we can bring it into our day-to-day activities quickly.
Meanwhile mixed reality (and its offshoot, cross reality or XR) is essentially an integration of AR into the real world so that we can interact with it. Mixed reality allows users to manually activate a holographic display or walk around a digital object.
Interface Advances
The important thing to remember when talking about virtual and augmented realities is that these immersive and interactive technologies are not actually true technological advances. So says PlayStation VR pioneer and founding partner of XR consultancy Realised Realities, Jed Ashforth. He refers to them as ‘interface advances’ – new ways we can interact with media and the world – and new genres for creating and experiencing content.
“It’s not just about what’s on screen but the kinds of screen that VR and AR use and the way you interact with that content,” he says. “At Playstation that is how we used to talk about it, the different elements of the interface. So what we get is new interaction possibilities to explore and new ‘design paradigms’ – new ways of doing things – and this opens up creative opportunities.”
Commercial video games are nearly 50 years old; Ashforth played his first video game in 1972, at the age of four. Since then, what we have mainly seen are incremental improvements: better graphics, higher-resolution screens and now high dynamic range (HDR) video.
VR Standards
Virtual reality is currently split into high-end PC systems, tethered and untethered (HTC Vive Pro/Cosmos, Oculus Rift/Quest, Valve Index), console VR (PlayStation VR) and mobile VR (e.g. Samsung Gear VR). High-end PC and console VR gives users six degrees of freedom (6DoF): they can move up and down, left and right, and forwards and backwards through a scene, as well as tilting, turning and pivoting the head for a complete 360-degree view. Both head and hand movement is tracked through that space.
Touch controllers are fairly intuitive to use, but the play area is limited in size and users have little awareness of what is going on outside VR. Headsets don’t allow you to walk far from your PC or console, and most are expensive, though prices are coming down.
Mobile VR, on the other hand, is affordable and available wherever there is a good network connection. It is a good fit for 360 video, as you only really need three degrees of freedom (3DoF) – tilting the head up and down, rotating left and right and tilting side-to-side. One of the main problems is that 3DoF allows no lateral movement, which can feel unnatural: the viewpoint is fixed, so when you move, the entire world moves with you.
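To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical class names) of what each kind of tracking reports: a 3DoF headset only reports orientation, while a 6DoF system adds position as well.

```python
from dataclasses import dataclass, field

@dataclass
class Rotation3DoF:
    # Orientation only: pitch (tilt up/down), yaw (turn left/right), roll (tilt side-to-side)
    pitch: float = 0.0
    yaw: float = 0.0
    roll: float = 0.0

@dataclass
class Pose6DoF:
    # Orientation plus translation: the headset is tracked as it moves through the play space
    rotation: Rotation3DoF = field(default_factory=Rotation3DoF)
    x: float = 0.0  # left/right
    y: float = 0.0  # up/down
    z: float = 0.0  # forwards/backwards

# A mobile 3DoF headset only ever updates Rotation3DoF; a PC or console 6DoF system
# also updates x, y and z, so physically stepping sideways changes the viewpoint.
```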

Magic Leap’s headset has been developed for AR and mixed reality content. Priced at around £2,000, it is still something of a high-end gadget.
Mixed Up and Magic Leap
AR and mixed reality, by contrast, are projections onto the real world. They’re not immersion, they are addition. For AR we only really need a smartphone. For mixed reality we need smart glasses or a headset such as Microsoft’s HoloLens or the Magic Leap One.
The problem with AR is how to adequately interact with it while holding your smartphone. It’s fine for playing video content but if you are going to track an object, to see if your hand has pressed a virtual button, that is relatively difficult for a mobile device to do because of the motion and the device’s limited processing power.
“It works well in some locations but it can be absolutely dreadful if the lighting is wrong or the colour of a wall is wrong,” says Ashforth. “It can also be glitchy if there’s a reflective glass cabinet, mirrored surfaces or chromed coathooks. When it works it’s great, but it doesn’t always work brilliantly and it’s low immersion.”
This ‘magic window’ experience – whether VR or AR – works as a novelty, but that’s about it. Users often feel like outsiders peering in rather than active participants. Mixed reality, on the other hand, tends to be experienced wearing glasses, with hands free to interact.
“What you are doing with mixed reality is bringing things into the environment that can interact directly,” says Ashforth. “You could have two characters fighting on a table top and they know where the edges are, they can trip over the edge of a carpet in your room. It’s clever stuff.”
Storytelling Advantage
One benefit of VR storytelling is the sense of presence, which lets users experience the narrative from an immersive perspective that was not possible before. Outside of experimental theatre or performance ‘in the round’, it has not been possible to tell a story where the audience is at the centre of what’s happening. VR is a new discipline in that sense and we are still figuring out the rules.
360 filmmaking has the usual elements of standard two-dimensional filmmaking – scripts, sets, costumes and actors – but with added complications. For a lot of 360 video the rules of production are now quite mature and best practices have been worked out. These include:
- Everything is in the shot – so remove yourself and any equipment so the viewer sees only what the camera sees.
- Buffer zone – allow at least four or five feet between the camera and any people in the scene, so they are framed properly.
- Eye level – position the camera just below the eye level of the principal subject featured in the scene.
- Stitch lines – be aware of stitch zones where the lenses cross over and limit movement in these zones.
- Front facing – keep the front of the camera focusing on the key subject. This is what people will see when they enter the scene.
- Resolution – aim for 4K resolution or higher to get the best results.
- Sound cues – because keeping the camera still is often important, sound plays a crucial role in directing the viewer’s gaze – spatial audio design helps here to let people know when and where to look.
Another massive advantage of VR is emotional engagement. Nielsen statistics show that VR content is 27% more emotionally engaging than TV content and holds attention 34% longer. For storytellers, the new medium offers new vistas for emotional involvement.

Interactive VR: “No matter what you think the story is, your player is going to have a different idea.”
Interacting in Immersion
Interactive VR is totally different to 360 VR. In 360 the viewer is wrapped in a bubble: wherever they look they can see their immersive world, but if they start to physically move, that bubble just travels with them. The viewer is trapped inside a sphere with the video projected on its inner surface.
Interactive VR, on the other hand, is not filmed: it is created and run in a game engine (such as Unity or Unreal Engine), and the user can go wherever they want. The content is modelled 3D graphics, and users can interact with the virtual world.
“Lots of clients and agencies now want to make the jump from 360 to interactive,” says Ashforth. “360 video was something they were excited about a couple of years ago and they’ve learned how to do it really well. Everybody is talking now about making interactive VR.”
The Challenge of True Interactivity
Traditional linear storytelling is where the author creates what they want and serves it up to the viewer or the player. The audience surrenders to the content: there is no choice and they can’t change the story. They are along for the ride, and there is real power in the fact that the creator can tell the story they want to tell.
In interactive VR that is not true at all. Players can do whatever they want. “No matter what you think the story is, your player is going to have a different idea,” says Ashforth. “There are many different player types and as a game designer you have to learn about this. There are 20 or 30 different categories of gamers and at least half of them are really destructive. They want to mess up the game, break the narrative and push the boundaries to see what happens, or simply to laugh at the outcome.”
The future of VR storytelling is centred on interactivity. Linear narratives are not going to disappear, and neither will 360 video. Even in interactive VR the linear narrative will be used to direct the action, but now there are options to do something different – and, increasingly, users expect them.
The bad news is that interactive stories are extremely challenging to create because you don’t have full control, says Ashforth. As soon as you hand over control to the audience, you have to deal with whatever they want the story to be.
“Whatever influence over the story you give to them you have to make sure your interactive story can cope with what’s about to happen,” he says. “You have to be able to bend the narrative or just ignore what’s going on and stick to the narrative enough to tell the story.”
Doing it Detroit
‘Detroit: Become Human’ on PS4 is an outstanding example of an interactive storytelling game. Its high production values mean it is sold as the chance to play a movie with lots of different endings. There are branching choices all the way through and it is an extremely effective piece of interactive narrative. The problem, says Ashforth, is the sheer expense and difficulty of making an interactive story.
“You have to plan for every single option you offer the player,” he says. “If the option isn’t there for what you would do in real life then it breaks the illusion that it is your story.
“Detroit offers choices and some look arbitrary, some will make changes to the end of the game. What it means is that when you look at the map of possibilities – endings and routes you can take to the end – every single one has to be created, scripted, acted and recorded, and all the models have to be built. These games are extremely expensive to make, which is why there are not many of them.”
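To see why that map of possibilities gets so expensive, here is a minimal sketch (plain Python, with invented scene names – not how Detroit is actually built) of a branching story graph; every ending a path can reach is content that has to be written, acted and modelled.

```python
# A toy branching story: each scene maps the choices it offers to the scene
# that choice leads to. Scenes with no choices are endings.
story = {
    "opening":        {"spare the android": "rooftop", "report it": "precinct"},
    "rooftop":        {"jump": "ending_escape", "surrender": "ending_capture"},
    "precinct":       {"lie": "ending_coverup", "confess": "ending_trial"},
    "ending_escape":  {},
    "ending_capture": {},
    "ending_coverup": {},
    "ending_trial":   {},
}

def endings_reachable(scene, story):
    """Collect every ending reachable from the given scene."""
    if not story[scene]:                      # no choices left: this is an ending
        return {scene}
    reached = set()
    for next_scene in story[scene].values():
        reached |= endings_reachable(next_scene, story)
    return reached

print(len(endings_reachable("opening", story)))
# -> 4: even this tiny graph has four endings, each needing a full production pass
```

Every extra meaningful choice multiplies the content that has to exist, which is why so few studios attempt games on this model.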
Sony has the investment needed to make it happen. Microsoft does too, and created something similar with ‘Quantum Break’, a “cinematic action” game described as part game, part live-action show. Both companies went to great lengths to make these titles look and sound like a film.
Levelling Up Agency
The illusion of control in high-end interactive experiences like Detroit or Quantum Break can break down. Players exist in the digital world either as an observer or as an active character. In 360 video the viewer is a simple witness to events, and in pieces like Henry, an Emmy-winning story about a hedgehog’s birthday party, the player is an inactive participant – acknowledged by the characters but without any agency to change the outcome of the story.
The next level up is where you have some choice between different narratives – but all that does is play either video section A or video section B, like a branching-path adventure. Then at the top level you are a character with impact, an active agent who participates in the storyline. You get to change things. This is where interactive VR sits and it is still very hard to do. Yet fully interactive VR is what everyone is aiming for now.
Even with agency, you may at certain points have only local agency: you can change what’s happening in the scene, as in a point-and-click adventure game, but you may not be able to change the outcome. ‘Global’ agency is what players have in ‘Detroit’, where they can change the ending.
Good Fit
“Detroit has 30 possible endings and all of them had to be created,” says Ashforth. “If your particular path through does not fit one of those endings it is not going to come across as a good fit. If you get a good ending it feels like it was a really good experience, but if you get a wishy-washy ending it’s unsatisfying and it’s really not the right way to crack storytelling.
“This approach is nothing new, it is just throwing a lot of time, money and resources at it. When you think about what the adventure book ‘Warlock of Firetop Mountain’ modelled itself on, it was Dungeons and Dragons – where you have a live storyteller. Automating the storyteller role is the central problem we face in VR storytelling.”
“The branching narrative in Detroit is just a more complicated version of the adventure books we saw in the 1980s. What VR creators really want to achieve is live storytelling, smart enough to be able to bend the narrative and accommodate what users want to do.”
Illusion of Choice
Another approach is to give the illusion of choice – pseudo-agency. Users like choice but in this case you give them arbitrary choices. Do they want pancakes or eggs and bacon? It doesn’t matter which they choose because the scene is going to play out the same. It does not affect the ending of the game.
The illusion of choice is a psychological trick that works with players and engages them. However it is not going to work well in VR. That sense of immersion, presence and place inside the virtual world means that expectations are very different.
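A minimal sketch (plain Python, hypothetical scene names) of the pseudo-agency pattern: the player is asked to choose, but both options converge on the same next scene, so the choice never touches the story graph.

```python
# Pseudo-agency: the branches of this choice converge immediately, so whichever
# option the player picks, the story continues identically.
breakfast_choice = {
    "pancakes":       "kitchen_argument",   # both options lead
    "eggs and bacon": "kitchen_argument",   # to the same scene
}

def next_scene(choice: str) -> str:
    # The choice is acknowledged, which keeps the player engaged,
    # but it has no effect on where the narrative goes next.
    return breakfast_choice[choice]

assert next_scene("pancakes") == next_scene("eggs and bacon")
```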
Sandbox World
Another popular approach in gaming is to have a big open-world area (a sandbox) with individual chapters to drive the narrative. The overarching story is essentially linear, with lots of side chapters that spring up and can be tackled in any order without affecting the main spine of the narrative.
One game that handles player freedom extremely effectively is the 2005 title ‘Facade’. “You can type anything in and the game reacts to it,” says Ashforth. “It operates through a typed interface plugged into natural language processing [NLP] – the AI can analyse what’s being said and respond as a human might.”

What’s the truth behind the Facade? This superb online game uses natural language processing technology.
The premise behind Facade is a night out at a friend’s cocktail party with married couple Trip and Grace, but all is not right. There is some serious friction going on between the couple despite the initial air of married bliss and it’s your job to talk them into reconciliation or a date with the divorce courts. This free web game offers a really novel experience. Its twist on typed adventure gaming is brilliant because the NLP engine is so accomplished.
Having a character that can respond to anything you say or do is a vitally important part of the puzzle. But not every game can compete in this sphere. Expectation is a big thing in VR. If something does not react as it would in the real world then it snaps the user out of the experience.
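Facade’s actual drama manager is far more sophisticated, but a minimal sketch of the general loop – map free-typed input to an intent, then pick a response that fits the scene – might look like this (the intents and lines here are invented for illustration):

```python
# A toy keyword-to-intent mapper: real systems like Facade's use proper natural
# language processing, but the shape of the loop is similar -- interpret what
# the player typed, then pick a response that keeps the scene moving.
INTENTS = {
    "compliment": ["lovely", "nice", "beautiful", "great"],
    "accuse":     ["lying", "fault", "blame", "cheat"],
    "soothe":     ["calm", "relax", "listen"],
}

RESPONSES = {
    "compliment": "Grace smiles thinly. 'Trip picked it. Of course.'",
    "accuse":     "Trip snaps: 'Why would you even say that?'",
    "soothe":     "Grace sighs. 'Maybe you're right. Maybe.'",
    None:         "Trip changes the subject to the drinks.",  # fallback when nothing matches
}

def respond(player_line: str) -> str:
    text = player_line.lower()
    for intent, keywords in INTENTS.items():
        if any(word in text for word in keywords):
            return RESPONSES[intent]
    return RESPONSES[None]

print(respond("This apartment is lovely"))   # compliment branch
print(respond("Stop lying to each other"))   # accuse branch
```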
“We’ve had cinema for more than 100 years and video games have been trying to copy that for 50 – and still can’t do it properly,” says Ashforth. “They are fundamentally different mediums.”
Target Holodeck
VR is fundamentally different. Where developers want to go is the Star Trek Holodeck, and in fact that is where immersive media is converging: a complete sense of immersion, where you can walk around in both virtual and actual space and where the digital world and its characters are believable.
We are less than 10 years into VR at a larger commercial level, despite its technological origins in the Eighties. Advances are pushing it into new areas, but we are still in the silent movie era: unlike 360 filmmaking, 99 per cent of its challenges have not been cracked. As Ashforth admits, games creators never really cracked storytelling in video games, and now VR adds the huge complication of placing the player at the centre of it all.
“We are at the start of this journey but everyone’s expectations of the quality of the media is based on the fact that we are 50 years into gaming and 70 years into TV,” says Ashforth. “People have created masterful works in those mediums but there is a huge learning curve to go through in VR before we can get there.
“When you see something in VR that is really accomplished, it is way more impressive than we give it credit for. There are very few learnings from other mediums that we can take forward. Everything is learned from scratch.”
Empathy Machine vs Agency
‘VR is the ultimate empathy machine,’ said VR pioneer Chris Milk, and it definitely holds true for linear 360 VR that you can’t interact with. ‘Clouds Over Sidra’ is a very powerful piece of 360 filmmaking: it compels you to connect with and empathise with people whose lives the narrative follows. If there were a way to have agency in that video and help those people in some way of your choosing, it would cease to be so empathetic.
As soon as you add interactivity and agency, the empathy stops. Agency is the enemy of empathy, says Ashforth, but interactivity is where most of the storytelling will be done. Another element that will shape this new era of storytelling is embodiment, the feeling of being inside a virtual body. The idea of inhabiting another body – looking in a mirror and seeing your movements mirrored – is very powerful.
“We know from years of academic research that embodiment can make us sympathise with characters – people who are of a different race to you or a different gender,” says Ashforth. “There are lots of ways that we can explore the differences between us as humans. Putting yourself in the body of someone else and seeing their reflection is a very powerful tool for storytelling.”
The Indiana Jones Problem
Still, embodiment comes with its own issues, one of which is the ‘Indiana Jones problem’. While it might be great to make an Indiana Jones game full of cracking whips and acts of derring-do, you’ll still be afraid of snakes. You personally might not be, but the character you are embodying is. You may also have to walk across a narrow bridge over a canyon – but what if you personally are afraid of heights? In VR, players are put in situations their character might be able to cope with but they may not. Are you physically able, as a player, to play the character Indiana Jones? This is where the cracks begin to show.
In video games there are many layers of abstraction. You don’t actually walk across a bridge; you move the joystick in a certain direction. The character on your screen, on the other side of the safety window, is the one walking across the bridge. You don’t feel threatened and you won’t fall off. You can empathise with the character in the story, but you are not inhabiting that character and you don’t ultimately feel like it is happening to you.
Ashforth worked on Batman: Arkham VR for PlayStation, and one of the main problems was how to get people to make Batman decisions. “There is a point where you first become Batman, you look in the mirror and the first thing everyone does is a little dance. But all of the other choices in the game only offer options to do Batman-like things. You are limited in where you move and what you can interact with. The main interactions are with your gadgets – grappling hook, batarangs, Batmobile. You can’t graffiti anything, take a wee in the corner or do what Batman would never do.”
Chewbacca Solution
There is a solution to this clash of big personalities, says Ashforth: it’s what he calls the ‘Chewbacca solution’. “Don’t make your player Han Solo, Indiana Jones, or Batman – make them the sidekick,” he says. “You can carry them along for the narrative, to consult with and make important choices, but Chewbacca is not the one who falls in love with Princess Leia so you don’t have to worry about making an epic kiss experience in VR.
“All those difficult decisions that you don’t want to give to the player because they could really affect their agency and screw up the game, you give those to Han Solo. Then you give the player just enough agency in areas where it is easy to achieve, but the tricky things we can’t do yet we give to Han Solo. That is the type of model that people are starting to adopt and it’s quite clever.”
AI Characters in VR
The idea of having automated AI characters still seems like something from the far future – like a holodeck character – but Mica from Magic Leap is a full-size hologram that reacts to the user. Mica appears when you put on the Magic Leap headset. She stands about 5 feet 4 inches tall and responds in a way that is really believable.
If you talk to Mica, she uses the same kind of speech technology as Amazon Echo or Siri: she takes what you say and interprets it using NLP to understand it before she responds. All of her animations are sourced from a real actor, and the interactions themselves are shaped by how we react to Mica. It is a robust, standalone system.
You can spend hours talking to Mica in the headset. It is a very powerful and progressive realisation of a pure AI character.
Character Trends
An intriguing trend with AI in gaming is to create characters that come into the player’s life during their normal day. Ashforth describes a prototype character that can meet with you at lunchtime or in the evening at home, but only when you are wearing AR glasses. “The character is a spy who will come to tell you a secret or to get your advice,” he says. “This to me seems like a great innovation.
“My wife has always said that if she could do anything in VR she would want it to be Coronation Street. She doesn’t want to be in it, but she wants to sit in the pub, or sit at the sewing machine while they’re having an argument. She doesn’t want to interact, she just wants to listen in.
“The characters would come to you wherever you are when you activate the game. It’s a really different, really new way of storytelling.”
Narrative Designers
In the HBO series Westworld one of the characters is a narrative designer who creates stories for the autonomous robots that live in the theme park. This is what developers are now looking for in VR. The characters will be virtual rather than physical robots, but the fact remains that mixed reality companies need people who can create realistic people.
“If you want to know where the jobs in VR will be in five years, that’s what we need, people who understand people’s personalities,” says Ashforth. “People who understand how people think and react, and can create personality and character, and know exactly what they will do in any given circumstance. For character authors this is a big opportunity.”
Eye tracking is another exciting area that VR and mixed reality developers are looking to learn more about. Knowing where people are looking, and reacting realistically to it, can create more satisfying interactions.
“A really simple thing in VR is where you have three options in a shop and we know which one you’re looking at because your head is pointing at it,” says Ashforth. “Then the shopkeeper says ‘Ah you’re interested in that’. It’s a simple trick but it works.”
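Ashforth’s shop example uses head direction as a stand-in for gaze, but the underlying check is the same either way: compare the direction the user is looking with the direction to each object and pick the best match. A minimal sketch (plain Python, invented item names):

```python
import math

def normalise(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def gazed_item(gaze_dir, head_pos, items):
    """Return the item whose direction from the head best matches the gaze direction."""
    gaze = normalise(gaze_dir)
    best, best_dot = None, -1.0
    for name, pos in items.items():
        to_item = normalise(tuple(p - h for p, h in zip(pos, head_pos)))
        dot = sum(g * t for g, t in zip(gaze, to_item))  # cosine of the angle between them
        if dot > best_dot:
            best, best_dot = name, dot
    return best

# Three items on a shop counter; the user's head is at the origin,
# looking slightly to the right.
items = {"sword": (-1.0, 0.0, 2.0), "shield": (0.0, 0.0, 2.0), "potion": (1.0, 0.0, 2.0)}
print(gazed_item((0.4, 0.0, 1.0), (0.0, 0.0, 0.0), items))  # -> 'potion'
```

The shopkeeper line then simply keys off whichever item this check returns.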
Holographic Capture and Voice Bots
Holographic capture is the next level for creating truly realistic real-life characters. The Natural History Museum’s Hold the World app contains a 3D-scanned, holographic David Attenborough; users can walk around him, stand next to him and look over his shoulder.
By combining that with AI, through voice assistants like Alexa or Siri, we establish the first step towards creating totally synthetic characters. Superstars of the future may well be AI bots that are well developed, popular characters. If you are a voice actor you could be about to be replaced by a robot.
What this offers for gamers and those looking for mixed reality experiences is more realism, more involving challenges and game play, and more intuitive reactions from characters in VR, AR and mixed reality. For both creators and users, amazing creative opportunities are just around the corner.
Jed Ashforth writes about VR, AR and XR on his blog at Realised Realities. He previously spoke to WeLove Media about his experiences launching PlayStation VR and setting up his own specialist XR consultancy.