How it works

Shortcomings in traditional mouse/keyboard/screen operation

Most 3D medical data is analyzed using 2D tools: 2D screens, a mouse and a keyboard. The richness of the data, the full potential of the 3D images, is not explored, and slice-by-slice analysis can be very time-consuming and cumbersome. Advanced users might view the stack of 2D slices as a 3D visualization or even use a 3D stereoscopic screen. However, all of these systems fall short when easy navigation in 3D is required, and easy navigation is essential for optimal analysis. In PS-Medtech’s view, navigation with these tools fails and is far from optimal, as the following example illustrates:

‘When you pick up an apple, examine it for spots, peel it and slice it, you are using both your hands. Doing it with one hand tied behind your back is extremely difficult. So why is 3D analysis being done with one hand tied behind the back?’

Our view is that when analyzing 3D medical images the user should have a true lifelike experience: the images are convincing and the user can interact with the data by “touching”, grabbing and holding it with both hands. To realize this, live rendering of 3D volumetric data combined with intuitive 3D navigation is essential. Furthermore, existing workstations as well as current applications have evolved from a traditional 2D workstation environment, and they have serious flaws when it comes to working with 3D medical imaging data:

  • Traditional workstations (screen, mouse and keyboard) do not support the intuitive interaction needed to pick up a data set and handle it as if it were a physical object.
  • Most 3D analysis software packages evolved from 2D analysis software and are built on a 2D foundation. They cannot handle two-handed interaction, so a true lifelike experience is not available, and the 3D functionality is often too difficult and time-consuming to use.

Navigation and Visualization

Navigation and visualization are at the core of PS-Medtech’s expertise.

‘Can I see that?’ That question often means: can I hold the object in my hands, can I rotate it, look inside it and examine it from all angles?

The analysis of 3D medical images should be no different. Seeing with your hands is the most natural way people examine objects, and a workstation for the analysis of digital 3D information should recreate this experience.

Navigation – two-handed interaction

Navigation means the ability to move freely through a 3D data set in order to grasp, touch and interact with the data. To achieve this, the movements of the user must be tracked: a relation between the 3D image and the physical world of the user has to be established, and tracking technology is essential for this. Lifelike navigation requires both positioning of the data and the appropriate tools to maneuver inside it.

Furthermore, applications must support two-handed interaction, which means the movement of objects held in each hand has to be tracked.
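
To make the idea of two-handed, six-degrees-of-freedom interaction concrete, the sketch below shows one way the tracked poses of two hand-held devices could drive a scene: one hand holds the data set, the other holds a tool. This is an illustrative outline only, not PS-Medtech’s software; the coordinate conventions and the rigidly attached tool are assumptions.

```python
import numpy as np

def pose_matrix(position, rotation):
    """Build a 4x4 homogeneous transform from a 3-vector position
    and a 3x3 rotation matrix (one 6-DOF pose)."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

def update_scene(left_pose, right_pose, dataset_to_left):
    """Each frame, both hands are tracked with six degrees of freedom.
    The left hand 'holds' the data set; the right hand holds a tool
    (e.g. a slicing plane) expressed in the same world coordinates."""
    # The data set follows the left hand rigidly.
    dataset_to_world = left_pose @ dataset_to_left
    # The tool pose is taken directly from the right-hand device.
    tool_to_world = right_pose
    return dataset_to_world, tool_to_world

# Example: identity rotations, hands 40 cm apart in front of the user.
left = pose_matrix(np.array([-0.2, 0.0, 0.5]), np.eye(3))
right = pose_matrix(np.array([0.2, 0.0, 0.5]), np.eye(3))
dataset_to_world, tool_to_world = update_scene(left, right, np.eye(4))
```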

Visualization – images that keep looking real during interaction

Visualized 3D data often looks great while it is static. However, as soon as interaction is required, the resolution and frame rate often drop to a level that gives users a headache. A true lifelike experience presents images that the user believes, that do not cause strain and that do not break up during interaction. It should be smooth and believable, and the user should intuitively know what to do.

The basic requirements are:

  • A good 3D display
  • A correct position to look at the display
  • 3D images that are presented around the plane of the screen
  • No blocking of the image during interaction (when “touching” the image)
  • Live rendering of 3D volumetric data

Without a good display it is impossible to create a true lifelike experience. However, the other requirements are just as important and are discussed briefly below.

Screens have an optimal viewing distance and position, and manufacturers ask users to position themselves accordingly. Outside this optimal viewing zone the perceived quality drops, yet most users do not have the screen in an optimal location.

The optimal 3D image on a display is created around the plane of the screen. Users tend to pull the image out of the 3D display to get a better view, especially when they want to interact with the image and “touch” the data. However, the further the image is pulled away from the plane of the display, the more its quality is reduced. And touching the image where it is rendered optimally, at the screen plane, is impossible because the physical screen is in the way.

As a result, users pull the 3D image out of the display and then partly block it with their hands or with the devices they use. Even the parts of the image that the viewer perceives between his hands and his eyes are obscured, and the true lifelike experience is lost.

When interaction with 3D volumetric images is required (e.g. medical 3D images), the computer system has to keep calculating (rendering) the correct image based on the actions of the user. The larger the data set, the more processing power is required to render it. When that power is not available, the image quality drops and the movement of the image becomes choppy (the frame rate falls). True lifelike interaction with 3D images requires live rendering at a minimum frame rate with no perceived loss in image quality.
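
The trade-off described above can be illustrated with a simple adaptive rendering loop: measure how long each frame takes and coarsen the sampling step when the frame budget is exceeded, then refine again when there is headroom. This is a generic sketch, not PS-Medtech’s rendering engine; the 30 fps budget, the step bounds and the placeholder render_volume function are assumptions.

```python
import time

FRAME_BUDGET_S = 1.0 / 30.0    # assumed target: at least 30 frames per second
MIN_STEP, MAX_STEP = 1.0, 4.0  # sampling step along each ray, in voxels

def render_volume(volume, camera, step):
    """Placeholder for a ray-marched volume rendering pass; a larger
    step means fewer samples per ray, i.e. faster but coarser images."""
    ...

def interactive_loop(volume, camera_stream):
    step = MIN_STEP
    for camera in camera_stream:          # one camera pose per user action
        start = time.perf_counter()
        render_volume(volume, camera, step)
        elapsed = time.perf_counter() - start

        # Adapt quality to hold the frame budget: coarsen when too slow,
        # refine again when there is headroom.
        if elapsed > FRAME_BUDGET_S:
            step = min(step * 1.25, MAX_STEP)
        elif elapsed < 0.5 * FRAME_BUDGET_S:
            step = max(step / 1.25, MIN_STEP)
```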

Workstations Explained

The workstations are based on the following principle: users look at a 3D display via a mirror, so the user effectively sees the screen behind the mirror. The display is seen at the so-called Virtual Focus Plane (VFP).

The mirror allows users to put their hands in the same location where the display is seen, and the tracking technology allows interaction using arbitrary objects. Interaction and 3D rendering can therefore take place at the optimal location, around the plane of the screen. Compared to a traditional setup the effective optimal space is more than doubled, because there is no physical barrier: interaction can take place not just in front of, but also on and behind, the perceived location of the screen.

This lets users bring their hands into the same environment as the virtual 3D objects without interrupting the visual image. Users are invited to grab, hold and interact with the data using highly accurate wireless optical tracking technology, and interaction devices position an object in 3D with six degrees of freedom. In the C-Station and PSS, users visually hold the data in their hands and analyze it (inside and out) intuitively, faster and better, creating a far more lifelike experience.
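
Geometrically, the virtual image of the display is its reflection across the mirror plane, which is why the Virtual Focus Plane can coincide with the space where the hands are. The sketch below works through that reflection for a hypothetical layout; the distances, angles and coordinate frame are illustrative assumptions, not the actual C-Station calibration.

```python
import numpy as np

def reflect_across_plane(point, plane_point, plane_normal):
    """Mirror a 3D point across a plane given by a point on the plane
    and its normal (the basic geometry behind the mirror setup)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return point - 2.0 * np.dot(point - plane_point, n) * n

# Hypothetical layout in metres, user at the origin looking along +y:
mirror_point = np.array([0.0, 0.4, 0.0])     # a point on the mirror
mirror_normal = np.array([0.0, -1.0, 1.0])   # mirror tilted 45 degrees
display_center = np.array([0.0, 0.4, 0.4])   # physical screen above the mirror

# The user sees the display at its reflection: the Virtual Focus Plane.
vfp_center = reflect_across_plane(display_center, mirror_point, mirror_normal)
print(vfp_center)  # lands beyond the mirror, at hand height, where the hands can also reach
```

With a geometry like this the hands and the perceived image can occupy the same region, which is the property the comparison below refers to as interaction around the plane of the screen.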

Requirements compared – traditional desktop and application versus the C-Station or PSS

Interaction where the visualization is optimal – the data is at the location where the hands are, which allows for the most intuitive lifelike experience.

  • Traditional desktop and application: not possible. Optimal visualization and interaction take place around the plane of the screen, and the physical screen prevents optimal interaction. The only space available for interaction is in front of the screen, and extracting the image out of the screen reduces its quality.
  • C-Station or PSS: realized. The user is free to move and touch the data without physical barriers; visually, the image and the physical interaction are at the same location, around the plane of the screen.

Use the optimal viewing zone of a 3D display – 3D displays work best when seen from a specific viewing zone.

  • Traditional desktop and application: problematic. The user’s freedom to move is much greater than the optimal viewing zone permits.
  • C-Station or PSS: realized. The user is invited to work in the optimal viewing zone.

Unobstructed view while touching the data

  • Traditional desktop and application: not possible.
  • C-Station or PSS: realized. The view remains unobstructed while the data is held in the hands.

Ability to interact with both hands – the difference between peeling an apple with two hands and with one hand behind your back.

  • Traditional desktop and application: not possible. This logical addition of using both hands in a 3D interface is unfortunately not supported by the overwhelming majority of 3D applications; where 3D navigation is possible at all, it is with only one device at a time.
  • C-Station or PSS: realized. 2D and 3D interfaces are seamlessly integrated and two-handed interaction is fully supported.

Live 3D rendering – live interactive 3D rendering of medical images without perceived loss in quality.

  • Traditional desktop and application: problematic. During interaction the frame rate and resolution often drop.
  • C-Station or PSS: realized. A rendering engine developed specifically for this purpose renders volumetric data in 3D in real time and maintains a high frame rate during live 3D interaction without sacrificing quality.