Matrox 4Sight used in 3D ultrasound diagnostic imaging
For anyone other than a physician or radiologist, 2D prenatal ultrasound images are usually very difficult to decipher – “is that our child’s head, hands or feet?” A recent breakthrough in 3D ultrasound diagnostic imaging is about to change all that.
BioMediCom – a company based in Jerusalem, Israel – recognized the need for improved ultrasound imaging and responded with BabyFace™ – an “add-on” technology that maintains conventional 2D ultrasound scanning practices while offering 3D volumetric representation at the touch of a button.
“Doctors and radiologists in the field of diagnostic ultrasound imaging conveyed to us the strong urge of parents-to-be to ‘see’ their babies in a realistic form, rather than trying to decipher the inherently noisy 2D images,” explains Dr. Hillel Rom, who led the software development for the BabyFace™ project.
This technology not only offers aesthetic benefits but also the potential for reduced diagnostic times and greater accuracy in prognoses. “3D imaging provides the ultrasound professional with a tool to view various organs and tissues in a completely new way. The technology has potential for more accurate measurements and more reliable abnormality detection – which of course has great clinical importance,” says Rom.
BabyFace™ comprises a unique data acquisition transducer, advanced 3D visualization software and a miniature computer. The system integrates fully with most existing 2D ultrasound systems used in hospitals and clinics around the world, and it does not interrupt the normal scanning procedure in any way during the diagnostic routine.
How does it work? The 2D images are captured directly from the conventional ultrasound system using the transducer and transferred to the BabyFace™ miniature computer system. A unique motion tracker within the transducer supplies positional data for each captured 2D image. This data enables accurate 3D-volume reconstruction and high quality visualization using the robust BabyFace™ 3D software. Immediately after acquisition, the user can view the 2D and 3D images simultaneously. To observe a particular region of the 2D image in three dimensions, the user simply presses a button to activate the BabyFace™ user interface.
An advanced motion-tracking sensor is integrated into a standard ultrasound transducer – developed in collaboration with Sonora Medical Systems Inc. – that can be connected to conventional ultrasound systems. With this integrated sensor, there are no mechanical restrictions imposed on the diagnostic procedure and no susceptibility to electromagnetic interference.
“The 2D images are captured through the video output of the ultrasound scanner, and then acquired by a framegrabber in the miniature computer,” explains Rom. “Concurrent with the grabbing, the positional data on the relative orientation of the images are transferred to the computer from the sensor. This data is synchronized in the computer using our software, resulting in an accurate 3D-volume reconstruction.”
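The reconstruction Rom describes – placing each grabbed 2D frame into a common 3D volume according to the sensor's positional data – can be sketched roughly as follows. This is a minimal NumPy illustration, not BioMediCom's implementation; the function names, pixel spacing, pose representation and the count-based averaging of overlapping frames are all assumptions.

```python
import numpy as np

def insert_frame(volume, counts, frame, pose, voxel_size):
    """Splat one position-tracked 2D frame into a 3D voxel volume.

    volume, counts : accumulation arrays of the same shape; averaging the
                     accumulated intensities handles overlapping frames.
    frame          : 2D grayscale image (H x W), pixel spacing assumed 1 mm.
    pose           : 4x4 homogeneous transform from the motion tracker,
                     mapping frame coordinates (x, y, 0, 1) to volume mm.
    voxel_size     : edge length of one voxel in mm.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous pixel coordinates in the frame plane (z = 0).
    pts = np.stack([xs.ravel(), ys.ravel(),
                    np.zeros(h * w), np.ones(h * w)])
    world = pose @ pts                        # 4 x (H*W) points in mm
    idx = np.round(world[:3] / voxel_size).astype(int)
    # Keep only points that fall inside the volume bounds.
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    i, j, k = idx[:, ok]
    np.add.at(volume, (i, j, k), frame.ravel()[ok])
    np.add.at(counts, (i, j, k), 1)

# Usage: accumulate tracked frames, then average where counts > 0.
vol = np.zeros((64, 64, 64))
cnt = np.zeros_like(vol)
pose = np.eye(4)
pose[2, 3] = 10.0                             # frame plane at z = 10 mm
insert_frame(vol, cnt, np.full((32, 32), 200.0), pose, voxel_size=1.0)
recon = np.divide(vol, cnt, out=np.zeros_like(vol), where=cnt > 0)
```

In practice each frame arrives with its own pose from the tracker, so sweeping the transducer fills in the volume slice by slice, at whatever orientations the sonographer happens to scan.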
BabyFace™ uses the Matrox 4Sight integrated imaging platform for its host computer requirements. Loaded with BioMediCom’s software, the 4Sight runs a Windows NT operating system and uses the Matrox Meteor-II framegrabber to capture 2D images through the video output of the ultrasound system. “The 4Sight provided an integrated solution using all the components we required – the CPU, onboard graphics, video output and the Meteor-II frame grabber, all in an extremely compact (8.2″ x 7.25″) and rugged box. This allowed us to create a system that could fit on virtually any ultrasound cart,” says Rom.
According to Dr. Ziv Soferman, BioMediCom’s Chief Scientist who invented the image processing algorithms encompassed in BabyFace™, the major challenge in developing this technology was to produce high quality 3D renderings of noisy ultrasound images.
“Due to the physical phenomena inherent in the process, ultrasound images have noise in the form of speckles. Traditional image processing techniques tend to be futile on ultrasound imagery. Our unique segmentation tools are able to overcome the speckled nature of the images and isolate the tissue or organ of choice from its surroundings,” says Soferman.
BioMediCom’s image processing algorithms assist the user in isolating the region of interest (ROI) using a semi-automatic segmentation method guided by an input contour provided by the user in one image. The system takes this input contour (which surrounds the ROI) and fits it to the actual image edges. The resulting corrected contour serves as the template for the next image, and the system proceeds to segment all other captured images automatically. To avoid contour drift, the user can interrupt the segmentation at any point and manually correct the boundaries.
“Our unique algorithmic approach to accurate segmentation relies on edge detection as one of its building blocks. By enhancing edges within the ROI, a subset of multiple disconnected segments can be selected and connected to form a continuous optimal contour. In forming this contour, the algorithm is also guided by input from the physician,” explains Soferman.
The segmentation is fast and efficient: images of 128×128 pixels are segmented at a rate of 10 images per second, and images of 256×256 pixels at three images per second. The algorithms are also robust – where parts of the ROI are missing, the output contour bridges the gaps.
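The "fit an input contour to the actual image edges" step can be illustrated with a toy sketch: each contour point searches along its outward radial direction for the strongest intensity gradient and snaps to it. This is only an illustration of the general idea; BioMediCom's actual segmentation (which copes with speckle noise) is more sophisticated, and the function name, search strategy and parameters below are assumptions.

```python
import numpy as np

def fit_contour(image, contour, search=5):
    """Snap each contour point to the strongest nearby edge.

    image   : 2D grayscale slice.
    contour : (N, 2) array of (row, col) points roughly surrounding the
              ROI, e.g. the corrected contour from the previous slice.
    search  : pixels to examine along the outward radial direction from
              the contour centroid.
    """
    center = contour.mean(axis=0)
    offsets = np.arange(-search, search + 1)
    fitted = []
    for p in contour:
        d = p - center
        d = d / (np.linalg.norm(d) + 1e-9)        # outward radial direction
        # Sample image intensities along the radial line through p.
        pts = np.clip(np.round(p + offsets[:, None] * d).astype(int),
                      0, np.array(image.shape) - 1)
        profile = image[pts[:, 0], pts[:, 1]].astype(float)
        # The strongest gradient along the profile marks the boundary.
        fitted.append(pts[np.argmax(np.abs(np.diff(profile)))])
    return np.array(fitted)

# Usage: a bright synthetic "organ" (disk of radius 15) and a rough
# user-supplied circle of radius 12; the fitted contour snaps outward
# onto the disk's true boundary.
yy, xx = np.mgrid[0:64, 0:64]
img = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2) * 255.0
ang = np.linspace(0, 2 * np.pi, 16, endpoint=False)
rough = np.stack([32 + 12 * np.sin(ang), 32 + 12 * np.cos(ang)], axis=1)
snapped = fit_contour(img, rough, search=6)
```

Propagating `snapped` as the template for the next slice, as the article describes, turns this single-image fit into a volume-wide semi-automatic segmentation.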
While this segmentation process occurs, the data is transferred to the software’s visualization environment – a set of visualization tools that provide high quality renderings and allow interactive manipulation of the volume. To achieve both high quality and visualization speed, a classic “ray casting” approach is used: rays are cast from the viewpoint through the image plane into the 3D volume, accumulating the voxel values located along each ray, explains Soferman.
“To speed up the process, the rays are initially intersected by a boundary box surrounding the fetus. From the intersection point, ‘marching’ along the ray takes place until a voxel with values above a user-controllable threshold is encountered. Due to the inherently noisy nature of ultrasound images, the marching continues onward from this first voxel, accumulating the values of all voxels encountered along the way. Accumulation is done using a weighting scheme that factors the transparency value of each voxel. The marching is stopped once a certain level of opacity is reached. Finally, the accumulated value is used to retrieve a color for the pixel from a predefined look-up table, resulting in the final 3D-image color,” he says.
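The marching and weighting rules Soferman describes can be sketched for a single ray as follows. This is a minimal illustration under stated assumptions, not BioMediCom's renderer: the constant per-voxel transparency `alpha`, the step size, the opacity cutoff and the trivial grayscale look-up table are all invented for the example, and the bounding-box pre-intersection is replaced by simply starting the ray at the volume edge.

```python
import numpy as np

def march_ray(volume, origin, direction, threshold, step=1.0,
              alpha=0.1, opacity_stop=0.95):
    """March one ray through the volume and accumulate voxel values.

    Accumulation starts at the first voxel above `threshold` and, because
    ultrasound data is noisy, continues past it, weighting each sample by
    the remaining transparency until opacity reaches `opacity_stop`.
    Returns the accumulated value, to be mapped to a color via a LUT.
    """
    pos = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    shape = np.array(volume.shape)
    accum, opacity, started = 0.0, 0.0, False
    while np.all(pos >= 0) and np.all(pos < shape):
        v = volume[tuple(pos.astype(int))]
        if not started and v > threshold:
            started = True                     # first voxel above threshold
        if started:
            w = (1.0 - opacity) * alpha        # transparency-weighted sample
            accum += w * v
            opacity += w
            if opacity >= opacity_stop:
                break                          # ray is effectively opaque
        pos += step * d
    return accum

# Usage: a slab of "tissue" in an otherwise empty volume.
vol = np.zeros((32, 32, 32))
vol[10:20] = 100.0
val = march_ray(vol, origin=(0, 16, 16), direction=(1, 0, 0), threshold=50)
lut = np.linspace(0.0, 255.0, 256)             # trivial grayscale LUT
pixel = lut[min(int(val), 255)]                # final pixel color
```

Casting one such ray per output pixel, and clipping each ray to a bounding box around the fetus before marching, yields the rendering scheme the article outlines.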