The metaverse has been touted as the place where we’ll spend our lives in the future—but the visuals still leave a lot to be desired.
Mark Zuckerberg’s metaverse avatar. Image: Meta
It’s approaching 20 years since the release of Second Life, an early stab at an immersive multiplayer universe from Linden Lab, in which people began living and working—as well as making plenty of cash along the way. Two decades on, the promise first hinted at in Second Life is nearing reality, as the persistent digital world of the metaverse starts to make inroads into the mainstream.
The breathless coverage and endless hype of the metaverse would have the average person convinced that they’ll need to start planning for a life permanently attached to a VR headset.
A billion of us are due to enter the metaverse by the end of the decade, if Mark Zuckerberg has his way, while investment bank Citi says that the metaverse industry will prop up an economy that could be worth anything from $8 trillion to $13 trillion by the same date. It’s eye-popping figures like these that have attracted over $177 billion in investment to the metaverse since the start of 2021, according to McKinsey.
There’s just one problem: the graphics of the platforms heralded as being at the forefront of that future look about the same as—if not worse than—the 20-year-old Second Life.
When Meta announced the launch of its metaverse platform Horizon Worlds in France and Spain this week, it was greeted with widespread mockery. The brunt of the criticism was borne by CEO Mark Zuckerberg’s “dead-eyed,” legless cartoon avatar, forcing a hasty redesign.
It’s not just big tech’s legacy players that are afflicted. Web3 metaverse platforms like Decentraland have come in for criticism for their graphical stylings, too.
Decrypt’s own review of Decentraland took aim at its “relentlessly flat” terrain and graphical pop-in. “Even on the highest settings,” said our reviewer, “it’s too limited graphically to be a particularly engrossing virtual reality experience.” CryptoVoxels and The Sandbox fare little better; both are rendered in blocky, cartoonish visuals reminiscent of a 2000s-vintage game.
It all raises the question: why are the graphics so terrible in the metaverse?
There are plenty of reasons why that could be, and different platforms offer different excuses depending on the graphical fidelity they deliver.
One major issue metaverse platforms currently face is that rendering graphics in real time takes a lot of processing power, as well as superfast internet speeds that aren’t always available to users. Graphics cards and broadband connection speeds limit how much detail these worlds can present, so they often rely on broader-brush visuals instead.
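The trade-off described above—dialing detail down to fit the user’s hardware—can be sketched as a toy level-of-detail (LOD) selector. All the thresholds and tier names below are illustrative assumptions, not the logic of any real engine or platform:

```python
# Toy sketch of level-of-detail (LOD) selection: real-time engines drop
# model detail as the viewer moves away or the frame budget shrinks.
# Thresholds and tier names here are illustrative, not from a real engine.

def pick_lod(distance_m: float, frame_budget_ms: float) -> str:
    """Choose a detail tier from viewer distance and per-frame time budget."""
    if frame_budget_ms < 8.0:   # struggling hardware: stay blocky everywhere
        return "low"
    if distance_m < 10:
        return "high"
    if distance_m < 50:
        return "medium"
    return "low"

# A nearby avatar on capable hardware gets the detailed mesh...
print(pick_lod(5, 16.6))   # "high"
# ...but on a weak GPU, even close-up objects stay broad-brush.
print(pick_lod(5, 5.0))    # "low"
```

The point of the sketch is that the fallback to cartoonish visuals isn’t an aesthetic choice so much as a budget decision made every frame.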
Metaverses often have worse graphics than MMO games because they are, by design, far more open-world. Rather than confining users to a pre-scripted set of actions, as most games do, the metaverse theoretically allows an infinite number of options that can’t be pre-rendered and called upon when needed.
There’s also the suggestion that having a totally cartoonish metaverse is better than the alternative: a mostly-lifelike environment with a few fatal flaws.
The concept of the uncanny valley, where graphics are almost perfect but have one thing wrong with them that unnerves users, already exists in video games. And in an environment where you’re rendering things in real time, and allowing users the option of almost limitless decisions, there are simply too many variables that could go wrong and push people into the uncanny valley.
A problem with legs
The issue is particularly vexed when it comes to legs.
For metaverses built around virtual reality interfaces, legs are “super hard and basically not workable just from a physics standpoint with existing headsets,” Andrew Bosworth, Meta’s then-vice president of Reality Labs, and now its chief technology officer, told CNN Business in February.
“It’s a hardware problem,” says Gijs Den Butter of SenseGlove, a Dutch company that develops haptic feedback gloves and devices that will be a major part of the metaverse—should we eventually fully inhabit it. “Manufacturers on this occasion have a headset, which has controllers or hand tracking, and that’s what our computer is for the metaverse,” he says. “In the current state, it doesn’t have legs, because the hardware can see your hands and maybe your arms, and track that, but when you look forward, you can’t see your legs.”
That’s difficult because the body-tracking algorithms that work out where you’re pointing within the metaverse require input from body parts the hardware can see—and as anyone standing straight and looking directly ahead knows, you don’t see your own legs. So the computers rendering the digital equivalent of your body in the metaverse simply leave the legs out.
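As Marciniak notes below, platforms that do draw legs get around the missing sensor data by assuming where the legs must be. A minimal sketch of that idea—placing the feet on the floor directly beneath the tracked head—is shown here; the stance width and coordinate layout are invented for illustration and are not any platform’s actual body-tracking algorithm:

```python
# Hedged sketch: with only a headset position to go on, a platform can
# "assume" leg placement by planting both feet on the floor beneath the
# head. Stance width and axes (x, y-up, z) are illustrative assumptions.

def estimate_feet(head_pos: tuple[float, float, float]):
    """Guess foot positions from the tracked head position alone."""
    x, y, z = head_pos
    stance = 0.15  # half the assumed distance between the feet, in metres
    left_foot = (x - stance, 0.0, z)   # y = 0.0 is the floor plane
    right_foot = (x + stance, 0.0, z)
    return left_foot, right_foot

# Head tracked at 1.7 m above the floor: feet are simply assumed to be
# on the ground, straddling the point below the head.
print(estimate_feet((0.0, 1.7, 0.0)))
```

Real systems layer inverse kinematics and motion heuristics on top of guesses like this, which is why assumed legs can look convincing when standing still and wrong the moment the user crouches or kicks.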
That’s less of a problem for crypto-based metaverses like Decentraland and The Sandbox, which mostly rely on browser- or desktop-based interfaces rather than fully-immersive VR—for the time being.
“It’s really Facebook/Meta and Microsoft—these immersive platforms,” that don’t have avatars with legs, says Weronika Marciniak, a Hong Kong-based metaverse architect at Future Is Meta. “Most worlds, like VRChat, Decentraland, Sandbox and others present avatars with legs, although you don’t necessarily have sensors with legs.” Those platforms get around the problem by “pretending”—before Marciniak corrects herself to “assuming the position of users’ legs.”