How it works
- Shortcomings of traditional interfaces
- 3D Navigation and 3D Visualization
- Our Workstations explained
Shortcomings in traditional mouse/keyboard/screen operation
Most 3D medical data is analyzed using 2D tools: 2D screens, a mouse and a keyboard. As a result, the full potential of the 3D images is never explored, and slice-by-slice analysis can be time-consuming and cumbersome.

Advanced users might use a 3D visualization of the stack of 2D slices, or even a 3D stereoscopic screen. However, all of these systems fall short when easy navigation in 3D is required, and easy navigation is essential for optimal analysis. In PS-Medtech’s opinion, navigation fails and is far from optimal, as the following example illustrates:

‘When you pick up an apple, examine it for spots, peel it and slice it, you are using both your hands. Doing it with one hand tied behind your back is extremely difficult. So why is 3D analysis being done with one hand tied behind the back?’
Our view is that when analyzing 3D medical images the user should have a true lifelike experience: the images are convincing, and the user can interact with the data by “touching”, grabbing and holding it using both hands. To realize this, live rendering of 3D volumetric data must be combined with intuitive 3D navigation.

Furthermore, existing workstations and applications have evolved from a traditional 2D workstation environment, and they have serious flaws when it comes to working with 3D medical imaging data. The main problems are the limited optimal viewing zone of the display, the physical screen blocking interaction at the plane where the image is sharpest, hands and devices obscuring the 3D image, and image quality that collapses during interaction. Each of these is discussed below.
Navigation and Visualization
Navigation and visualization are at the core of PS-Medtech’s expertise.

‘Can I see that? That question often means: can I hold the object in my hands, can I rotate it, look inside it and examine it from all angles?’
The analysis of 3D medical images should be no different. Seeing with your hands is the most natural way people examine objects, so a workstation for the analysis of digital 3D information should create this experience as well.
Navigation – two-handed interaction
Navigation means the ability to move freely through a 3D data set in order to grasp, touch and interact with the data. To achieve this, the movements of the user have to be tracked: tracking technology establishes the relation between the 3D image and the physical world of the user. Lifelike navigation requires both the positioning of the data and the appropriate tools to maneuver inside it.
It also requires that applications support two-handed interaction, which means the movement of an object held in each hand has to be tracked.
Visualization – images that keep looking real during interaction
When 3D data is visualized, it often looks great while it is static. However, as soon as interaction is required, the resolution and frame rate often drop to a level that gives users a headache. A true lifelike experience presents the user with images that remain believable, do not cause strain and do not break up while he or she interacts with the data. It should be smooth and convincing, and the user should intuitively know what to do.
The basic requirements are:
- a good (3D) display;
- accurate tracking of both hands;
- live rendering of the data at a stable frame rate.
Without a good display it is impossible to create a true lifelike experience. However, the other listed requirements are just as important and are discussed briefly below.
Screens have an optimal viewing distance and position, and manufacturers ask users to position themselves accordingly. Outside the optimal viewing zone, however, the perceived quality drops, and most users do not have the screen at an optimal location.
The optimal 3D image on a display is created around the plane of the screen. Users have a tendency to pull images out of the 3D display for a better view, especially when they want to interact with the image and “touch” the data. The quality of the image drops dramatically the further it is pulled away from the plane of the display. And if you want to touch the image where it is optimally rendered, at the screen plane, you cannot: the physical screen is in the way.
As a result, users pull the 3D image out of the display and then partly block it with their hands or the devices they use, including those parts of the image that were originally perceived between their hands and their eyes. The true lifelike experience is lost.
When interaction with 3D volumetric images is required (e.g. medical 3D images), the computer system has to keep calculating (rendering) the correct image based on the actions of the user. The bigger the data set, the more processing power this requires; when the system cannot keep up, image quality drops and movement becomes jerky (a drop in frame rate). True lifelike interaction with 3D images requires live rendering at a minimum frame rate and with no perceived loss in image quality.
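One common way for a renderer to hold a minimum frame rate on large volumes is to trade resolution for speed while the user interacts, then recover quality once there is headroom again. The controller below is a generic illustration of that idea, not a description of how PS-Medtech’s renderer works; all names and thresholds are assumptions.

```python
def adjust_render_scale(scale, frame_ms, budget_ms=16.7,
                        lo=0.25, hi=1.0):
    """Return the resolution scale to use for the next frame.

    Drops the scale quickly when a frame overruns the time budget
    (16.7 ms corresponds to 60 fps) and raises it slowly when there
    is comfortable headroom, so full quality returns once the user
    stops interacting.
    """
    if frame_ms > budget_ms:
        scale *= 0.8       # frame too slow: render fewer pixels
    elif frame_ms < 0.7 * budget_ms:
        scale *= 1.05      # plenty of headroom: restore quality
    return min(hi, max(lo, scale))
```

Called once per frame with the measured frame time, such a controller keeps motion smooth during interaction at the cost of a temporary, bounded loss of resolution.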
The workstations are based on the following principles. Users look at a 3D display via a mirror; as a result, the user sees the screen, in effect, behind the mirror. The display is perceived at the so-called Virtual Focus Plane (VFP).
The mirror construction allows users to put their hands in the same location where the display is perceived, and the tracking technology allows interaction using arbitrary objects. Interaction and 3D rendering can thus take place at the optimal location, around the plane of the screen.

Compared to a traditional setup, the effective optimal space is more than doubled because there is no physical barrier: interaction can take place not just in front of, but also on and behind the perceived location of the screen. Users can bring their hands into the same environment as the virtual 3D objects without interrupting the visual image.

Users are invited to grab, hold and interact with the data using highly accurate wireless optical tracking technology. Interaction devices position an object in 3D with six degrees of freedom. In the C-Station and PSS, users visually hold the data in their hands and analyze it, inside and out, intuitively, faster and better. A better lifelike experience is created.
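Geometrically, the perceived position of the screen (the Virtual Focus Plane) is the mirror image of the physical screen: each screen point is reflected across the mirror plane. A minimal sketch of that reflection, with illustrative names (the actual calibration of the workstations is not described here):

```python
def reflect(point, mirror_point, mirror_normal):
    """Reflect a 3D point across the mirror plane.

    The plane passes through mirror_point and has unit normal
    mirror_normal; the reflection is p' = p - 2((p - o) . n) n.
    """
    # Signed distance from the point to the mirror plane.
    d = sum((p - o) * n for p, o, n in
            zip(point, mirror_point, mirror_normal))
    # Move the point twice that distance to the other side.
    return tuple(p - 2 * d * n for p, n in zip(point, mirror_normal))
```

Reflecting the physical screen across the mirror plane gives the location of the VFP, which is where the system renders the 3D image and where the tracked hands are allowed to interact with it.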