Using Hands in Virtual Reality – How VisionThree is Eliminating Controllers

VisionThree is using the latest technology to push Virtual Reality (VR) hardware to its full potential.

We’re creating VR solutions that effectively allow the user to interact with virtual objects with more natural, controller-free, gestures. Users are now able to see virtual representations of their own hands in the environment!

VisionThree recently had the opportunity to present our innovative VR maintenance training capabilities to an audience of U.S. Military officials. This distinguished audience, early adopters of virtual training, was amazed by how seamlessly our VR solutions incorporated the user’s own hands instead of requiring bulky controllers.

As the military has learned, the advantages of VR in training are vast. Current research shows that use of a VR-based training solution can change the paradigm from traditional passive learning to active learning models. Learners don’t always remember what they are told, but they almost always remember what they’ve done.

Imagine training your employees in a risk-free virtual environment, one where they are free to make and learn from mistakes without wasting company resources. They could instantly reset whatever skill they are trying to hone and repeat it until they have mastered it. This kind of repetition requires no additional materials for practicing physical tasks, and no instructor needs to be present to demonstrate a skill again and again until every question is answered.

Virtual training is rapidly becoming the new standard for boosting retention rates while lowering training times and costs. Recent feedback indicates that our VR environment can:

  • Create a more realistic/effective learning environment
  • Increase training depth (more reps/sets available in a quicker time period)
  • Allow for more natural interactions by integrating the user’s hands, eliminating any handicap caused by unfamiliarity with controller functionality
  • Lower training costs (on-site training available)
  • Eliminate the need for a full-scale simulator (vastly reducing equipment costs)
  • Provide distance learning opportunities (portable and/or delivered over a network)

Scientists have long predicted the value of VR in training. Studies about information retention conducted by the National Training Laboratories show the following statistics:

  • Traditional, passive learning methods such as reading, listening to lectures, and watching instructional videos result in information retention of between 5 and 20 percent.

  • Seeing a demonstration of a technique increases retention to 30 percent.

  • Active learning—repetitively performing a technique or applying information—can increase retention to 75 percent.

  • Having the opportunity to practice and make mistakes can increase retention to 90 percent.

We believe that Virtual Reality has endless possibilities in training. VisionThree already has a number of VR projects in the pipeline, and we’re excited to see where these new developments take this powerful tool in the future!

VisionThree Brings Hands-On Training to the Virtual Space

Dow Agrosciences is an industry leader in pest management, and when they came to us to help them debut an innovative new product, we here at VisionThree were excited to help bring their ideas to (virtual) life in the VR space.

At this year’s PestWorld convention, held last week at the Baltimore Convention Center, Dow Agrosciences presented a Virtual Reality demonstration of their newest technology: ActiveSense traps/sensors. This demo was meant to introduce audiences to a full-scale training program that we are currently working closely with Dow to bring to completion. In the demo, users don the HTC Vive Virtual Reality headset and take on the role of a pest control technician, tasked with setting Dow’s new ActiveSense traps/sensors in a warehouse setting.

(Scene from Dow’s ActiveSense VR Training App)

In this demo, training focuses on the plan of attack that pest control technicians need in order to adequately protect a space and prevent infestations. Trainees are asked to set traps in the full-scale virtual warehouse based on specific details reinforced in the training curriculum (e.g. proximity to food sources, safe hiding spots, and pests’ tendency to travel along established paths). The crux of this training exercise is *where* to place sensors and *why.* Trainees are scored on their ability to think critically and find predetermined “hot spots” in each of three educational scenes.
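The scoring described above can be thought of as a simple coverage check: how many predetermined hot spots have a trap placed close enough to them? Here is a minimal Python sketch of that idea. The function name, the coordinates, and the detection radius are all invented for illustration; the actual app’s scoring logic is not published.

```python
def score_placements(traps, hot_spots, radius=1.0):
    """Score trainee trap placements against predetermined hot spots.

    traps and hot_spots are lists of (x, y) floor positions; a hot spot
    counts as covered if any trap lies within `radius` of it.
    Returns the fraction of hot spots covered.
    """
    covered = 0
    for hx, hy in hot_spots:
        # Compare squared distances to avoid an unnecessary sqrt.
        if any((tx - hx) ** 2 + (ty - hy) ** 2 <= radius ** 2
               for tx, ty in traps):
            covered += 1
    return covered / len(hot_spots)

# Hypothetical scene: three hot spots, trainee placed two traps well.
hot_spots = [(0.0, 0.0), (5.0, 5.0), (9.0, 2.0)]
traps = [(0.5, 0.2), (5.1, 4.8)]
score = score_placements(traps, hot_spots, radius=1.0)
```

Two of the three hot spots have a nearby trap, so this trainee would score two thirds; a real scene would of course score in three dimensions and may weight hot spots differently.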

Dow and VisionThree based their decision to create this VR training simulation on recent research finding that virtual training produces a 30% increase in student/trainee performance speed and a 90% increase in accuracy on training tasks. In addition, VR-based training is proving to increase retention (75%) compared with traditional, lecture-style learning (5%). What we are learning is that people don’t always remember what you tell them, but they are far more likely to remember something they’ve actively participated in.

Gone are the days when passive, lecture-style training was the norm. Dow is setting the bar in this style of active-learning-based training for pest control technicians. In this program, technicians will have the opportunity to gain hands-on practice in a consequence-free environment.

By removing consequences, yet still reinforcing the training materials, trainees are allowed to fail more quickly, and thus, learn from their mistakes more quickly. Since each virtual scenario is reset at the touch of a button, trainees can take what they’ve learned from their mistakes and immediately correct their actions in an experiential way that will remain with them long after the training ends.

Revolutionary products demand revolutionary training, and VisionThree is excited to continue our ongoing partnership with an industry leader in such innovative, groundbreaking work.

Hololens Tech Demo: Pathfinding using Voice Commands

After breaking the ice with this demo, we decided to take it a step further and explore ways that we could provide functionality in larger spaces.

The goal of this prototype was to create a system that navigates to a waypoint and also empowers the user to create points of interest of their own. We also wanted to explore voice commands, and this prototype was a great opportunity to do so.

One of the first tasks, and an initial challenge when we sat down to plan the prototype, was researching an existing pathfinding algorithm: Dijkstra’s algorithm. Once we fully understood its principles, we were armed with enough information to apply it to our demo. The alternative – pathfinding through sheer number crunching, evaluating every possible path – may not be feasible on the Hololens when dealing with a large amount of data. We wanted not only the best path to be provided, but also for it to update based on the user’s current position. Both are implemented in this prototype.
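The post doesn’t include code, but the core idea fits in a few lines. Below is a minimal Python sketch of Dijkstra’s algorithm over a weighted graph; the room names and edge costs are invented for illustration (the real demo runs on Hololens hardware). Keeping the path up to date as the user moves amounts to re-running the search from whichever node is currently nearest them.

```python
import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) for the lowest-cost route from start to goal.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
    """
    # Priority queue of (cost so far, node, path taken to reach it).
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []  # no route exists

# Hypothetical node graph, like one a user might tag and link by voice.
rooms = {
    "lobby":  [("hall", 1)],
    "hall":   [("lobby", 1), ("office", 2), ("lab", 4)],
    "office": [("hall", 2), ("lab", 1)],
    "lab":    [("hall", 4), ("office", 1)],
}
cost, path = dijkstra(rooms, "lobby", "lab")
# Detour through the office (cost 4) beats the direct hall-to-lab edge (cost 5).
```

Because visited nodes are settled in order of increasing cost, the first time the goal is popped its path is guaranteed optimal, which is exactly the “best path” property the prototype needed without evaluating every possible route.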

There are many utility functions available that make setting up a room easy and fun to do. All functions can be triggered by simply saying a command.

Here is a list of possible commands:

  • Create – creates a new node
  • Create Orphan – creates a new node without linking it to the previous one
  • Go to [node] – begin path finding to the specified node
  • Idle all – clears your current selection
  • Select all – selects all nodes
  • Tag [name] – applies a label to a selected node
  • Untag – removes the label from a selected node
  • Link – adds a viable path between two or more selected nodes
  • Unlink – removes the path between two or more selected nodes
  • Delete – deletes all selected nodes
  • Pinch – selects a node
  • Grab – allows the selected nodes to follow your view
  • Drop – releases the selected nodes from following your view
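The vocabulary above maps naturally onto a keyword dispatcher. Here is a hedged Python sketch of that routing idea: recognized phrases are matched against command keywords, trying two-word keywords (“create orphan”, “go to”) before one-word ones, with any remaining words passed along as the argument. The handler functions and log messages are invented for illustration; the actual demo’s speech handling runs on the Hololens platform.

```python
def make_dispatcher(handlers):
    """Route a recognized phrase to the matching command handler.

    handlers: dict mapping a lowercase command keyword (one or two
    words) to a function that receives the rest of the phrase.
    """
    def dispatch(phrase):
        words = phrase.lower().split()
        # Try the longest keyword first so "create orphan" wins over "create".
        for length in (2, 1):
            keyword = " ".join(words[:length])
            if keyword in handlers:
                return handlers[keyword](" ".join(words[length:]))
        return None  # unrecognized command
    return dispatch

# Hypothetical handlers that just record what they would do.
log = []
dispatch = make_dispatcher({
    "create": lambda arg: log.append("created node"),
    "create orphan": lambda arg: log.append("created orphan node"),
    "go to": lambda arg: log.append(f"pathfinding to {arg}"),
    "tag": lambda arg: log.append(f"tagged node as {arg}"),
})

dispatch("Create Orphan")
dispatch("Go to kitchen")
```

Parameterized commands like “Go to [node]” and “Tag [name]” fall out of the same pattern: everything after the keyword becomes the argument.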

There are many practical applications for a solution such as this.

Imagine a hospital, where the nursing staff is provided updated information on patient status by simply looking down the hallway. Or perhaps an art gallery experience, where approaching a painting on the wall triggers a hologram of the artist describing the work to you in person. Virtual tours of museums or visitor centers could come alive, with a personal guide discussing the room around you, speaking directly to you through the Hololens’ built-in spatial sound system.

For further reading on this prototype, please refer to Brendon’s project documentation here.

Greg Foxworthy is the Interactive Director at VisionThree. He is responsible for planning and leading the development team in the creation of all of our experiences, along with guiding our R&D efforts.

Hololens Tech Demo: Placing Virtual Objects around Physical Objects

A Brief Note on Things To Come

Here at VisionThree, we’ve always strived to stay on the cutting edge of technology, with both hardware and software solutions. In my 10 years here, I’ve been fortunate to have countless opportunities to discover new techniques and solutions on pretty much every project I’ve been involved with, large and small. For us, discovery usually happens between projects; however, it can also take place concurrently with client work, especially when the client is on board with integrating something new into their product. When these opportunities arise, we jump in without hesitation.

Being on the cutting edge has different meanings for different people. We’ve found that having some fundamental knowledge of what the solution is – how it can benefit our clients first and foremost – is key to increasing the breadth of our capabilities and service offerings. Simply scratching the surface on something new, and demonstrating a core understanding of it, is often enough to open the door to new possibilities.

Creating experiences is what we are passionate about. To that end, we have started a more focused initiative on experimentation and prototyping with various high-tech gadgets and SDKs, which leads to unique software solutions and hardware advancements. This post is just the beginning of exciting things to come!

The Microsoft Hololens is a headset unlike any other currently available.

The user is able to see through the visor into the real world, with virtual content overlaying the room they are standing in. This is also known as augmented reality, or the description I prefer – mixed reality. The Hololens’ hardware uses a technique known as spatial mapping, which allows virtual objects to be set on a desk, or hung on a wall.
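“Setting a virtual object on a desk” ultimately comes down to geometry: snapping the object’s position onto a detected surface. Below is a minimal Python sketch of that projection, assuming the surface is described by a point on it and its unit normal. This is a simplification of what spatial mapping actually provides (the Hololens reports full surface meshes, and real apps work in Unity/C#); all names and numbers here are illustrative.

```python
def snap_to_surface(point, plane_point, plane_normal):
    """Project a 3D position onto a flat surface (e.g. a desk top).

    point, plane_point: (x, y, z) tuples; plane_normal must be a
    unit vector, as a detected horizontal plane might report.
    """
    # Signed distance from the point to the plane along the normal.
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))
    # Move the point back along the normal by that distance.
    return tuple(p - d * n for p, n in zip(point, plane_normal))

# Hypothetical desk top at a height of 0.75 m, facing straight up.
snapped = snap_to_surface((1.0, 1.2, 0.5), (0.0, 0.75, 0.0), (0.0, 1.0, 0.0))
```

With an upward-facing plane, only the height changes: a hologram hovering above the desk drops straight down onto it, while its horizontal position is preserved.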

We’ve been aware of the possibilities of Hololens for quite some time, and have recently been digging in to discover how it could help our clients communicate their messages in new, engaging ways. In the past, companies have relied on us to create applications to view hotspots floating around a 3D model of their product. The user would rotate the model with a touch screen to view different angles, and tap the hotspots to learn more about key features. While these experiences are informative, they aren’t exactly revolutionary.

The following prototype was created to explore new possibilities for conveying the same information in a brand-new way. We are using just a box in this demo, but you can imagine something else – a car at a trade show, a dinosaur fossil in a museum, a jet engine for a training solution – and so much more.