Virtual Reality in Action

Driving Simulator

 

At the Transport Systems Catapult we have built our simulation and visualisation capability to allow us to recreate multiple modes of transportation in virtual reality. To this end, we recently acquired a driving simulator to aid the development of driverless cars and other forms of transportation.

The Driving Simulator is available to internal colleagues, projects, and external clients and forms part of our unique capabilities hosted at the TSC.

The simulator is usually used for race car driver training; however, we have modified the simulator to work with Virtual Reality technologies to create an immersive world.

The simulator is equipped with electromechanical actuators, allowing the simulated vehicle to move freely as it navigates different road conditions in the virtual world. It is also equipped with surround sound, so users hear directional audio that strengthens the immersive world.

We have built a collection of assets within our Visualisation Lab which enables us to conduct Virtual Reality user trials. This gives us the ability to run multiple scenarios and variables with complete consistency and repeatability, at relatively low cost and on short production timescales.
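For illustration, one way to get that repeatability is to freeze every variable of a trial in a fixed, seeded configuration, so each participant experiences an identical world. The sketch below is a hypothetical Python example; the class, field names, and events are ours for illustration and not part of any actual TSC tooling.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class TrialScenario:
    """A repeatable user-trial configuration (illustrative only)."""
    name: str
    random_seed: int       # fixes all "random" background behaviour
    weather: str           # e.g. "clear", "rain", "fog"
    trigger_events: tuple  # (time_in_seconds, event_name) pairs

def run_trial(scenario: TrialScenario, participant_id: str) -> None:
    # Seeding the RNG means background traffic appears at exactly the
    # same moments for every participant, on every run.
    rng = random.Random(scenario.random_seed)
    spawn_times = sorted(round(rng.uniform(0, 60), 1) for _ in range(3))
    print(f"Running '{scenario.name}' ({scenario.weather}) for {participant_id}")
    print(f"  background traffic spawns at {spawn_times}s - identical each run")
    for t, event in scenario.trigger_events:
        print(f"  t={t:5.1f}s  trigger: {event}")

pedestrian_test = TrialScenario(
    name="pedestrian-steps-out",
    random_seed=42,
    weather="clear",
    trigger_events=((12.0, "pedestrian crosses the pod's path"),),
)
run_trial(pedestrian_test, participant_id="P001")
```

Because the whole scenario lives in one immutable object, rerunning it for a second participant, or after a design change, costs nothing beyond pressing go.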


Reducing the real-world costs of trials and testing

Traditional real-world trials can require lots of paperwork, risk assessments, insurance cover, administrators, and an appropriate venue; none of this is needed for simulated user trials.

This technology and approach allow designers and engineers to narrow the variables under consideration well before physical trials begin. Designers and engineers therefore have greater confidence at the point of transitioning from the virtual to the real world, saving both time and money.

Using this new technology, we can currently undertake the following tasks/user trials:

  • Behavioural/physical responses to trigger events in a virtual environment (how do pod users react to an unexpected event during a journey, e.g. someone changing direction into the pod’s trajectory?)
  • Behavioural responses to user interface concepts (what should an autonomous vehicle display externally to pedestrians, or internally to passengers?)
  • Measuring stress levels (what was the participant looking at, or doing, when their heart rate spiked? Looking at a departure board, walking the two miles to the platform/departure gate, etc.)
  • Feedback on autonomous vehicle dynamics (as a pedestrian or a passenger, what did the vehicle’s driving characteristics feel and look like?)
  • What override mechanisms does an autonomous vehicle need? (an emergency stop button, a voice-activated control system, a destination-change interface, and which payment methods and service offerings?)
  • How should the final autonomous vehicle look? (Design it in VR and allow designers and potential end users to walk around it, and sit in it, virtually to give feedback.)
  • What will control rooms look like in the future with VR headsets? (Do we still need large, high-cost, high-complexity rooms when VR can display multiple screens all around you?)
  • Other applications are being considered daily.  Please let us know your thoughts!

In addition to road-based simulation, the same system can be used to simulate driving a train or piloting an aircraft.

Using VR, we can recreate the interior of the vehicle we’re interested in and move the virtual vehicle through a virtual environment.

Next Steps

We have now acquired SMI’s DK2 eye tracking system. This eye tracking technology sits inside an Oculus Rift headset and records exactly what user-trial participants are looking at in VR.

We are combining eye tracking with physiological monitors that will allow us to collect biometric data during experiments and user trials. Eye tracking offers more than simply knowing where a participant is looking in the virtual world: paired with biometric data, it can tie physiological responses to the specific things the participant saw.
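As a concrete example, once the gaze log and the heart-rate log share a clock, answering “what was the participant looking at when their heart rate spiked?” is a simple lookup across timestamps. The sketch below uses made-up data and a made-up spike threshold; it does not reflect the actual SMI or monitor APIs.

```python
from bisect import bisect_left

# (timestamp_seconds, gaze_target) samples from the eye tracker - illustrative
gaze_log = [
    (10.0, "road ahead"),
    (14.5, "departure board"),
    (17.8, "pedestrian on the left"),
]

# (timestamp_seconds, beats_per_minute) from the heart-rate monitor - illustrative
heart_rate_log = [(10.0, 72), (14.0, 75), (18.0, 118), (22.0, 90)]

SPIKE_BPM = 100  # assumed threshold for a "spike"

gaze_times = [t for t, _ in gaze_log]

def gaze_at(t: float) -> str:
    """Return the most recent gaze target at or before time t."""
    i = bisect_left(gaze_times, t)
    if i == len(gaze_times) or gaze_times[i] > t:
        i -= 1  # step back to the sample just before t
    return gaze_log[max(i, 0)][1]

for t, bpm in heart_rate_log:
    if bpm >= SPIKE_BPM:
        print(f"t={t}s: {bpm} bpm while looking at '{gaze_at(t)}'")
# -> t=18.0s: 118 bpm while looking at 'pedestrian on the left'
```

In a real trial the same join would run over thousands of samples per minute, but the principle is identical: a shared timeline turns two separate sensors into one story.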


Using a technique called “Foveated Rendering”, we can increase graphical performance significantly, which could save time and money during the construction of 3D environments. Foveated rendering works by rendering the user’s focal point at the highest resolution; areas outside the focal point are rendered at a lower resolution and therefore use less processing power.
This technique means we can build more detailed scenes while worrying less about hardware performance limitations.
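To make the idea concrete, here is a toy version of the technique in Python. A real VR renderer saves work by drawing the periphery at low resolution in the first place; this sketch fakes that by downsampling an already-rendered frame, so it only illustrates the compositing step, and all names in it are ours.

```python
import numpy as np

def foveated_composite(scene: np.ndarray, gaze_xy: tuple,
                       fovea_radius: int, periphery_scale: int = 4) -> np.ndarray:
    """Toy foveated rendering: full detail near the gaze point,
    coarse detail everywhere else."""
    h, w = scene.shape

    # Stand-in for rendering the frame at 1/periphery_scale resolution:
    # downsample, then blow back up with blocky pixels.
    low = scene[::periphery_scale, ::periphery_scale]
    periphery = low.repeat(periphery_scale, 0).repeat(periphery_scale, 1)[:h, :w]

    # Keep full resolution only inside the foveal circle around the gaze point.
    ys, xs = np.ogrid[:h, :w]
    gx, gy = gaze_xy
    fovea = (xs - gx) ** 2 + (ys - gy) ** 2 <= fovea_radius ** 2

    out = periphery.copy()
    out[fovea] = scene[fovea]
    return out

# A random 480x640 greyscale "frame", gaze resting centre-left of the screen.
frame = np.random.rand(480, 640)
composited = foveated_composite(frame, gaze_xy=(200, 240), fovea_radius=80)
```

Because the eye tracker tells the renderer exactly where the fovea is, almost the entire frame can be drawn at a fraction of the cost without the user noticing.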
