by firstname.lastname@example.org | Jun 25, 2022 | Development, Ideas, News, Research, Technology
As Virtual Reality (VR) continues to gain momentum toward wide-scale adoption, both for individuals and in the enterprise, one blemish remains constantly alongside it: the motion sickness or vertigo some people feel while viewing VR content. And to be clear, we’re not talking about the Keanu Reeves “Whoa!” right after a VR headset is put on – that fades away almost immediately. We’re referring to the longer-lasting symptoms people describe as motion sickness or vertigo.
Motion sickness in VR is one of those things where a little knowledge beforehand can be quite powerful. With the right content, executed in the right way, it can be minimized if not totally avoided by most, if not all, who put on a headset.
We’ve all seen the TikTok of Grandma wearing a VR headset and taking her first VR roller coaster ride. She’s screaming her head off while other family members gather around her in a state of paralysis brought on by acute belly-laughter. Sure, it’s all in good fun. And rest assured, no Grandmas were harmed in the filming of said video. But this principle is key to understanding the first of three critical factors for those who may be susceptible to motion sickness while in VR.
FACTOR #1 : OUR BRAINS
The first reason people may experience it has to do with the VR content itself and how our bodies – most importantly our brains – interpret that content. As is widely known, our inner ear is responsible for our sense of balance or, if you prefer the 15-cent word, equilibrium. Aside from taking care of all the automatic functions that keep us alive – you know, like breathing – our brain does another pretty amazing trick: it takes the input from what our eyes see and combines it with data from the inner ear.
These two things (and probably a bunch of others) tell us whether or not we’re moving, and hence when to feel motion. So one of the classic triggers of motion sickness in VR is the roller coaster ride. Sure, this was once the low-hanging fruit for content producers looking to showcase VR as a medium. But the simple fact that VR is so convincing visually plays into people’s negative reactions to situations like the roller coaster. The eyes report movement to the brain. The brain then checks with the inner ear, which reports there is no movement, and (insert John Madden BOOM) that discrepancy is what makes us feel woozy. So, if you’re a VR content creator producing content that creates this vestibular mismatch, just STOP! Don’t make me tell Grandma on you!
FACTOR #2 : FRAMES PER SECOND (FPS)
The second area where motion sickness may be noticed also falls directly at the feet of content creators: frames per second, or FPS. Remember those little flipbooks we used to sketch of the little stickman running? It’s sort of like that. Think of each of those little pages as a frame. Our TVs, computer monitors, and yes, even VR hardware display content much like a flipbook – in frames. VR hardware has performance limits, and sure, it’s the content makers’ responsibility to find them and push against them. The trouble is that some content producers tend to play a little fast and loose with frame rate – oftentimes at the cost of the user. At times, certain content may require more processing power than a device can muster. At that point the device or app has no choice but to drop frames (i.e., show fewer little pages) or simply crash, which no one wants. Dropped frames can make a person feel wobbly or a little dizzy. Again, back to that pesky brain and our eyes’ need to constantly make sure we’re seeing everything correctly. When a frame or two is missing, whether we’re conscious of it or not, our brains notice, and it’s the brain’s need to fill in that gap that causes the negative feeling. On a device like the Meta Quest 2, for instance, the frame rate should never drop below the native 72 fps at an absolute minimum. Desirably, content should hold a steady 90 fps to be super safe.
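To put the flipbook analogy in numbers, here’s a quick back-of-the-envelope sketch (ours, purely illustrative – not from any headset SDK) of the time budget each frame gets at common refresh rates, and how long the gap is when a single frame is dropped:

```python
def frame_budget_ms(fps: float) -> float:
    """At a given refresh rate, each frame must be rendered
    within 1000 / fps milliseconds or it gets dropped."""
    return 1000.0 / fps

for fps in (60, 72, 90, 120):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):5.2f} ms per frame")

# When one frame misses its budget, the previous image stays on screen
# for two refresh intervals -- a single drop at 90 fps leaves a gap of:
print(f"gap from one dropped frame at 90 fps: {2 * frame_budget_ms(90):.1f} ms")
```

The takeaway: at 90 fps the renderer has barely 11 milliseconds per frame, which is why overly ambitious content blows the budget and starts dropping pages out of the flipbook.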
FACTOR #3 : VR HARDWARE
The third reason motion sickness could be experienced by some is very simple and mechanical, as it relates directly to the VR hardware itself: tracking. Tracking is the term for how the headset (and controllers) maintain their orientation in the virtual environment based on their location in real life. This is done in various ways. The HTC Vive Pro, for instance, uses external infrared-emitting devices called “base stations” to sweep the room with IR light; the sensors behind the hardware’s dimpled design detect that light and report where the headset and controllers are. This is called outside-in tracking.
The Meta Quest 2 uses four cameras mounted in the headset to track the hardware. This is called inside-out tracking. Tracking has advanced a lot over the years but remains incredibly important to creating a good experience. If tracking is lost or interrupted, the image in the headset will stutter and/or freeze in a way that can create dizziness or disorientation. The good news is that this is a very simple issue to address. First, don’t purchase or use a device notorious for tracking issues (not pointing fingers or mentioning names here). Second, know the ins and outs of what creates a positive tracking environment. If you’re using a Vive, make sure there are no objects in the way of the base stations and that they’re facing each other adequately. If you’re on a Quest 2, make sure there’s plenty of light so the cameras can see. These best practices should eliminate about 90% of tracking-related motion sickness in VR.
BUT WAIT, THERE'S MORE…
The last issue to discuss is less a cause of potential motion sickness in VR and more of a reality about the people out there using it. For whatever reason, it would seem the older a person is, the more likely they are to experience these feelings. Data has shown that during the aging process, people may fall prey to conditions like vertigo. Surprisingly, it doesn’t have to do with the fluid in the ear solidifying over time, as many have thought. Instead, it is brought on by circulatory changes in the very small blood vessels in the ear. This explains why your 10-year-old nephew can spend about 6.5 hours playing Gorilla Tag without a break. If you’re a person affected by actual vertigo, you’re going to know that, and VR may not be a good fit. Second, there is a certain portion of the population that just can’t do VR comfortably. Period. It’s just that simple. These are people who, for varying reasons, simply cannot participate without having these negative feelings. Now, is this a large segment of the population? Absolutely not – certainly not large enough to deter this exciting technology from widespread adoption. It’s closer to SCUBA diving: there is a certain percentage of people who simply are not able to equalize their ears against the water pressure (I know! Ears again, right?!). But hey, maybe they could SCUBA in VR! Whoa, that’s a Meta concept! (See what I did there?)
by visionthree | Dec 13, 2017 | Design, Development, Research
VisionThree is using the latest technology to optimize Virtual Reality (VR) hardware to its greatest potential.
We’re creating VR solutions that effectively allow the user to interact with virtual objects through more natural, controller-free gestures. Users are now able to see virtual representations of their own hands in the environment!
VisionThree recently had the opportunity to present our innovative VR maintenance training capabilities to an audience of U.S. Military officials. The distinguished audience, who have been early adopters of virtual training, were amazed by how seamlessly our VR solutions incorporated the user’s own hands instead of requiring the use of bulky controllers.
As the military has learned, the advantages of VR in training are vast. Current research shows that use of a VR-based training solution can change the paradigm from traditional passive learning to active learning models. Learners don’t always remember what they are told, but they almost always remember what they’ve done.
Imagine training your employees in a risk-free virtual environment – one where they are free to make and learn from mistakes without worrying about wasting company resources. They could instantly reset whatever skill they are trying to hone, again and again, until they have mastered it. This sort of repetition would not require more materials for practicing physical tasks, nor would it require an instructor on hand to demonstrate a specific skill over and over until no one has any further questions.
Virtual Training is rapidly becoming the new standard for boosting training retention rates and lowering training times and costs. Recent feedback indicates that our VR environment can:
- Create a more realistic/effective learning environment
- Increase training depth (more reps/sets available in a quicker time period)
- Allow for more natural interactions by integrating the user’s hands, eliminating any handicap due to unfamiliarity with controller functionality
- Lower training costs (on-site training available)
- Eliminate the need for a full-scale simulator (vastly reducing equipment costs)
- Provide distance learning opportunities (portability and/or delivery over networked connections)
Scientists have long predicted the value of VR in training. Studies about information retention conducted by the National Training Laboratories show the following statistics:
Traditional, passive learning methods such as reading, listening to lectures, and watching instructional videos result in information retention of between 5 and 20 percent.
Seeing a demonstration of a technique increases retention to 30 percent.
Active learning—repetitively performing a technique or applying information—can increase retention to 75 percent.
Having the opportunity to practice and make mistakes can increase retention to 90 percent.
We believe that Virtual Reality has endless possibilities in training. VisionThree has a number of VR projects in the pipeline already today. And we’re excited to see where these new developments take this powerful tool in the future!
by visionthree | Oct 31, 2017 | Development, Projects, Research
Dow Agrosciences is an industry leader in pest management, and when they came to us to help them debut an innovative new product, we here at VisionThree were excited to help bring their ideas to (virtual) life in the VR space.
At this year’s PestWorld convention, held last week at the Baltimore Convention Center, Dow Agrosciences presented a Virtual Reality demonstration of their newest technology: ActiveSense traps/sensors. This demo was meant to introduce audiences to a full-scale training program that we are currently working closely with Dow to bring to completion. In this demo, users are invited to don the HTC Vive Virtual Reality headset and take on the role of the pest control technician; they are then tasked with setting Dow’s new ActiveSense traps/sensors in a warehouse setting.
(Scene from Dow’s ActiveSense VR Training App)
In this demo, training focuses on the plan of attack pest control technicians will need in order to adequately protect a space and prevent infestations. They will be asked to set traps in the full-scale virtual warehouse based on specific details reinforced in the training curriculum (i.e., proximity to food sources, safe hiding spots, tendency to travel along established paths). The crux of this training exercise is *where* to place sensors and *why.* Trainees will be scored on their ability to think critically and find predetermined “hot spots” in each of three educational scenes.
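To illustrate the idea of scoring placements against predetermined hot spots, here’s a hypothetical sketch in Python (the names and scoring logic are ours for illustration – the actual Dow training app is built differently):

```python
import math

def score_placements(placements, hot_spots, radius=1.5):
    """Return the fraction of hot spots covered by at least one sensor.

    placements: list of (x, y) sensor positions placed by the trainee
    hot_spots:  list of (x, y) predetermined hot-spot positions
    radius:     how close (in meters) a sensor must be to count
    """
    found = 0
    for spot in hot_spots:
        if any(math.dist(spot, p) <= radius for p in placements):
            found += 1
    return found / len(hot_spots)

# One sensor near the first hot spot, nothing near the second:
print(score_placements([(0, 0)], [(0.5, 0.5), (5, 5)]))  # -> 0.5
```

A real training app would layer richer feedback on top (why a spot mattered, which curriculum rule it reinforces), but the core loop – compare trainee choices against expert-defined targets – is this simple.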
Dow and VisionThree based their decision to create this VR training simulation on recent research that discovered that Virtual training produces a 30% increase in student/trainee performance speed, and a 90% increase in accuracy on training tasks. In addition, VR-based training is proving to increase retention (75% retention) when compared with traditional, lecture-style learning (5% retention). What we are learning is that people don’t always remember what you tell them, but they are far more likely to remember something that they’ve actively participated in.
Gone are the days when passive, lecture-style training was the norm. Dow is setting the bar in this style of active-learning-based training for pest control technicians. In this program, technicians will have the opportunity to gain hands-on practice in a consequence-free environment.
By removing consequences, yet still reinforcing the training materials, trainees are allowed to fail more quickly, and thus, learn from their mistakes more quickly. Since each virtual scenario is reset at the touch of a button, trainees can take what they’ve learned from their mistakes and immediately correct their actions in an experiential way that will remain with them long after the training ends.
Revolutionary products demand revolutionary training, and VisionThree is excited to continue our ongoing partnership with an industry leader in such innovative, groundbreaking work.
by visionthree | Nov 30, 2016 | Research
After breaking the ice with this demo, we decided to take it a step further and explore ways that we could provide functionality in larger spaces.
The goal of this prototype was to create a system that navigates to a waypoint while also empowering the user to create points of interest. We also wanted to explore voice commands, and this seemed like a great opportunity to do so.
One of the first tasks – and an initial challenge when we sat down to plan the prototype – was to research an existing pathfinding algorithm: Dijkstra’s algorithm. By fully understanding the principles behind it, we were armed with enough information to apply it to our demo. The alternative – pathfinding through sheer number crunching, evaluating every possible path – may not be feasible on Hololens when dealing with a large amount of data. We knew that we not only wanted the best path to be provided, but we also wanted it to update based on the user’s current position. Both are implemented in this prototype.
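Dijkstra’s algorithm itself is compact. Here’s a minimal Python sketch of the priority-queue version (our prototype runs on Hololens in Unity, so this is purely illustrative – the graph and node names are made up):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path via Dijkstra's algorithm.

    graph: {node: [(neighbor, cost), ...]}
    Returns (total_cost, [path]) or (inf, []) if the goal is unreachable.
    """
    dist = {start: 0.0}   # best known cost to each node
    prev = {}             # back-pointers for path reconstruction
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter route was already found
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    return float("inf"), []

# Toy room graph: nodes are waypoints, edges are walkable links with costs.
rooms = {
    "lobby":  [("hall", 1.0), ("office", 4.0)],
    "hall":   [("office", 1.5), ("lab", 3.0)],
    "office": [("lab", 1.0)],
    "lab":    [],
}
print(dijkstra(rooms, "lobby", "lab"))  # -> (3.5, ['lobby', 'hall', 'office', 'lab'])
```

Updating the route as the user walks is then just a matter of re-running the search from whichever node is currently closest to the user.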
There are many utility functions available that make setting up a room easy and fun to do. All functions can be triggered by simply saying a command.
Here is a list of possible commands:
- Create – creates a new node
- Create Orphan – creates a new node without linking it to the previous one
- Go to [node] – begin path finding to the specified node
- Idle all – clears your current selection
- Select all – selects all nodes
- Tag [name] – applies a label to a selected node
- Untag – removes the label from a selected node
- Link – adds a viable path between two or more selected nodes
- Unlink – removes the path between two or more selected nodes
- Delete – deletes all selected nodes
- Pinch – selects a node
- Grab – allows the selected nodes to follow your view
- Drop – releases the selected nodes from following your view
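The data structure behind commands like these can be quite small. As a hypothetical sketch (class and method names are ours, not the prototype’s actual Unity code), the node graph underlying Create, Link, and Tag might look like this:

```python
class NodeGraph:
    """Minimal node graph for waypoint editing via voice-style commands."""

    def __init__(self):
        self.labels = {}   # node id -> tag name ("Tag [name]")
        self.links = {}    # node id -> set of linked node ids
        self._next_id = 0
        self._last = None  # most recently created node

    def create(self, orphan=False):
        """'Create' / 'Create Orphan': add a node, linking it to the
        previously created node unless orphan is True."""
        nid = self._next_id
        self._next_id += 1
        self.links[nid] = set()
        if not orphan and self._last is not None:
            self.link(self._last, nid)
        self._last = nid
        return nid

    def link(self, a, b):
        """'Link': add a viable (bidirectional) path between two nodes."""
        self.links[a].add(b)
        self.links[b].add(a)

    def unlink(self, a, b):
        """'Unlink': remove the path between two nodes."""
        self.links[a].discard(b)
        self.links[b].discard(a)

    def tag(self, nid, name):
        """'Tag [name]': label a node so 'Go to [node]' can target it."""
        self.labels[nid] = name

g = NodeGraph()
entrance = g.create()          # "Create"
hallway = g.create()           # "Create" -- auto-linked to entrance
g.tag(hallway, "hallway")      # "Tag hallway"
print(sorted(g.links[entrance]))  # -> [1]
```

From there, “Go to [node]” is just a lookup of the tagged node followed by a shortest-path search over `links`.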
There are many practical applications for a solution such as this.
Imagine a hospital, where the nursing staff is provided updated information on patient status by simply looking down the hallway. Or perhaps an art gallery experience, where approaching a painting on the wall triggers a hologram of the artist describing the work to you in person. Virtual tours of museums or visitor centers could come alive, with a personal guide discussing the room around you, speaking directly to you through the Hololens’ built-in spatial sound system.
For further reading on this prototype, please refer to Brendon’s project documentation here.
Greg Foxworthy is the Interactive Director at VisionThree. He is responsible for planning and leading the development team in the creation of all of our experiences, along with guiding our R&D efforts.
by visionthree | Aug 9, 2016 | Development, Research, Technology
A Brief Note on Things To Come
Here at VisionThree, we’ve always been a company that strives to stay on the cutting edge of technology, with both hardware and software solutions. In my 10 years here, I’ve been fortunate to have countless opportunities to discover new techniques and solutions for pretty much every project I’ve been involved with, large and small. For us, discovery usually happens between projects; however, it can also take place concurrently with client work, especially if the client is on board with integrating something new into their product. So when these opportunities arise, we jump into them without hesitation.
Being on the cutting edge has different meanings for different people. We’ve found that having some fundamental knowledge of what the solution is – how it can benefit our clients first and foremost – is key to increasing the breadth of our capabilities and service offerings. Simply scratching the surface on something new, and demonstrating a core understanding of it, is often enough to open the door to new possibilities.
Creating experiences is what we are passionate about. To that end, we have started a more focused initiative on experimentation and prototyping with various high-tech gadgets and SDKs, which leads to unique software solutions and hardware advancements. This post is just the beginning of exciting things to come!
The Microsoft Hololens is a virtual reality headset unlike any other currently available.
The user is able to see through the visor into the real world, with virtual content overlaying the room they are standing in. This is also known as augmented reality, or the description I prefer – mixed reality. The Hololens’ hardware uses a technique known as spatial mapping, which allows virtual objects to be set on a desk, or hung on a wall.
We’ve been aware of the possibilities of Hololens for quite some time, and have recently been digging in to discover how it could help our clients communicate their messages in new, engaging ways. In the past, companies have relied on us to create applications to view hotspots floating around a 3D model of their product. The user would rotate the model with a touch screen to view different angles, and tap the hotspots to learn more about key features. While these experiences are informative, they aren’t exactly revolutionary.
The following prototype was created to explore new possibilities for conveying the same information in a brand new way. We are just using a box in this demo, but you can imagine something else – a car at a trade show, a dinosaur fossil in a museum, a jet engine for a training solution – and so much more.