Here’s what’s new in the world of VR…
It’s only been live for two days, but the Kickstarter for VRGluv, a new controller glove created specifically for use in VR, has already raised nearly 90% of its $100,000 goal. VRGluv, which is compatible with both the HTC Vive and Oculus Rift, offers full hand tracking (down to individual finger joints) and pressure sensitivity, as well as force feedback – essentially, the gloves can physically restrict the fingers when the user’s virtual hands come into contact with a virtual object. With this technology, VRGluv looks to expand a user’s sense of immersion far beyond just sight and sound. Even better, the gloves are wireless and come with adapters for Oculus Touch, Vive controllers, and Vive trackers. Units are expected to start shipping in December 2017.
For anyone who’s ever wondered what it’s like to be a Chestburster from ALIEN, you’re in luck: ALIEN: COVENANT IN UTERO is out on Oculus and Gear VR, and coming soon to Vive, PSVR, and Google Daydream. Ridley Scott’s new ALIEN: COVENANT tie-in experience puts the viewer in the middle of an unlucky human’s chest. What comes next is a short, but iconically graphic, scene as a freshly evolved Neomorph (ALIEN: COVENANT’s evolutionary precursor to the classic Xenomorph) erupts from the hapless victim’s body and turns its sights on the other human locked in the room with it. IN UTERO is a teaser for ALIEN: COVENANT, out in theaters May 19.
Adobe announced this week that they’re working on technology to artificially create 6 degrees of freedom (6-DoF) – essentially, the ability to lean and tilt in addition to looking in any direction – in 360-degree video without the use of a depth-recording camera. Prior to Adobe’s announcement, producing 6-DoF in recorded video required depth-recording cameras, such as the Lytro and Facebook volumetric cameras previously mentioned in this column. Now, Adobe has revealed a method that uses algorithms and the motion of a monoscopic (flat-frame) 360-degree camera to infer depth information by mapping out different points of a setting. The method, which Adobe calls its “structure-from-motion” algorithm, relies on the camera’s movement to capture different perspectives on the geometry of the setting, allowing that information to be stitched together into a three-dimensional environment. As a side effect, the algorithm also enables 360-degree image stabilization. Adobe’s head of research Gavin Miller is set to present more about this new technology at NAB in Las Vegas next week.
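The core idea behind structure-from-motion is that two views of the same scene point, taken from camera positions a known distance apart, define two rays whose intersection reveals the point’s depth. Adobe’s actual pipeline isn’t public, but the underlying triangulation step can be sketched in a few lines of plain Python. This is a minimal illustration, not Adobe’s algorithm: the camera positions, ray directions, and the midpoint method used here are all illustrative assumptions.

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def triangulate_midpoint(p1, d1, p2, d2):
    """Recover a 3-D point seen from two camera positions.

    Each camera observes the point along a ray p + t*d (p = camera
    position, d = unit viewing direction). The two rays rarely meet
    exactly, so we return the midpoint of their closest approach.
    """
    r = sub(p2, p1)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, r), dot(d2, r)
    denom = a * c - b * b  # ~0 when rays are parallel (no parallax, no depth)
    t1 = (e * c - b * f) / denom
    t2 = (b * e - a * f) / denom
    q1 = tuple(p + t1 * d for p, d in zip(p1, d1))
    q2 = tuple(p + t2 * d for p, d in zip(p2, d2))
    return tuple((x + y) / 2 for x, y in zip(q1, q2))

# Hypothetical example: the camera moves 1 unit along x between frames,
# and both frames observe the same scene point.
scene_point = (0.5, 0.2, 5.0)
cam1, cam2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
ray1 = norm(sub(scene_point, cam1))
ray2 = norm(sub(scene_point, cam2))
estimate = triangulate_midpoint(cam1, ray1, cam2, ray2)
```

A full structure-from-motion system repeats this for thousands of matched feature points across many frames (while also solving for the unknown camera poses themselves), which is what yields the dense scene geometry needed for 6-DoF playback.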
Check back every week to see what’s new in VR content and technology!