International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B5. Istanbul 2004 
  
MIPAS provides an interactive thresholding option: when the user changes the threshold with a track bar, its effect is seen on the screen synchronously, so the user can keep adjusting the threshold until he/she decides that the optimal segmentation has been obtained. After thresholding, the images usually contain many holes and many small spurious areas. To delete unwanted small areas, we perform a connectivity analysis (Gonzalez, 1987; Teuber, 1993), which deletes the areas that are smaller than an area threshold. After the connectivity analysis there might still be some unwanted pixels on the image, so we have written functions to delete these areas manually. Once the thresholded-segmented regions have been obtained, we fill or delete the remaining holes using morphological operators such as erosion and dilation. The final segmentation is recorded as a file.
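The thresholding and connectivity-analysis steps above can be sketched as follows. This is a minimal illustration (pure Python, 4-connectivity flood fill), not the MIPAS implementation itself:

```python
from collections import deque

def threshold_and_clean(image, t_low, t_high, min_area):
    """Binary-threshold a 2D grey-level image, then remove connected
    components smaller than min_area pixels (4-connectivity)."""
    rows, cols = len(image), len(image[0])
    mask = [[1 if t_low <= image[r][c] <= t_high else 0 for c in range(cols)]
            for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # breadth-first flood fill collects one connected component
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) < min_area:  # area threshold: delete small blobs
                    for y, x in comp:
                        mask[y][x] = 0
    return mask
```

Regions surviving this cleaning step would then be filled or trimmed with morphological operators before being written out.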
3.2. Contour Segmentation 
In this method, a possible boundary value of a tissue is selected with histogram analysis. This value is assumed to be the contour value, and the image of interest is contoured by tracking this value. After contouring, small areas can be detected automatically by connectivity analysis or removed manually. After refinement of the contours, we assign labels to the pixels that are bounded by the contour lines. If the user does not like the contouring result, he/she can ignore it and easily make a new segmentation.
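One simple way to pick such a boundary (contour) value from the histogram is sketched below. This assumes a roughly bimodal intensity distribution and takes the deepest valley between the two dominant peaks; the paper does not specify the exact histogram analysis used, so this is only an illustrative choice:

```python
def contour_value_from_histogram(pixels, bins=256):
    """Pick a candidate tissue-boundary grey value as the deepest valley
    between the two highest peaks of the intensity histogram."""
    hist = [0] * bins
    for p in pixels:
        hist[p] += 1
    # lightly smooth the histogram to suppress single-bin noise
    smooth = [(hist[max(i - 1, 0)] + hist[i] + hist[min(i + 1, bins - 1)]) / 3.0
              for i in range(bins)]
    # local maxima of the smoothed histogram, strongest first
    peaks = [i for i in range(1, bins - 1)
             if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]
    peaks.sort(key=lambda i: smooth[i], reverse=True)
    p1, p2 = sorted(peaks[:2])
    # deepest valley between the two dominant peaks = contour value
    return min(range(p1, p2 + 1), key=lambda i: smooth[i])
```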
3.3. Manual Segmentation 
With automatic segmentation procedures, some incorrect label assignments are inevitable. For this reason, manual segmentation is still said to be the best method in the literature, and for precise medical applications it gives the best results. In this case, the user draws the boundaries of the region of interest with the mouse pointer and can edit during the process. However, manual segmentation is very time consuming: segmenting complex MR images manually can take hours or sometimes days.
4. REGISTRATION OF 2D SLICE IMAGES AND 3D 
SURFACE MODELS 
Registration is the determination of a one-to-one mapping or transformation between the coordinates in one space and those in another, such that points in the two spaces that correspond to the same anatomical point are mapped to each other. Registration of multimodal images makes it possible to combine different types of structural information (for example CT and MR) (West et al., 1997).
In this paper, basically two types of registration are mentioned: 1) registration of 2D point sets (slice images) and 2) registration of 3D point sets (surface models). Let us assume that a patient has been scanned with both CT and MR scanners. As is well known, CT images are geometrically more accurate than MR images, but the radiometric information of MR images is richer than that of CT images. Keeping this in mind, we can say that bone structures are well defined on CT images compared to soft tissues, whereas MR images represent soft tissues with richer radiometric information. The advantages of these two imaging modalities can be brought together; for this purpose, 2D image registration and fusion techniques are used. That is, if we take the CT images of the patient as a base and can find the corresponding anatomical points or details on the MR images, then we can map the MR image pixels onto the CT image with a mapping function (generally a transformation function). After this mapping, the CT and MR image information is brought together. The new density values of the combined (registered) image pixels can be displayed using the red, green and blue bands; for example, CT densities in the red band and MR densities in the blue band. This visualization is known as image fusion. With this technique, the new 2D slices may give more detailed 2D information to medical doctors.
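The band assignment described above is straightforward once the slices are registered. A minimal sketch, assuming two registered 8-bit slices of equal size, with CT in the red band and MR in the blue band as in the example:

```python
def fuse_slices(ct, mr):
    """Fuse two registered 8-bit slices into one RGB image:
    CT density -> red band, MR density -> blue band (green left empty)."""
    return [[(ct[r][c], 0, mr[r][c]) for c in range(len(ct[0]))]
            for r in range(len(ct))]
```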
2D registration can also be performed after segmentation of the CT and MR images. In this case there is no need for fusion: the segmented 2D CT and MR images are combined, and from these registered (combined) segmented slices, 3D models of the tissues can be constructed.
A medical imaging system should also provide 3D registration functionality, which might be used for various purposes, for example the temporal comparison of two surfaces generated from images of the same or different scanner types such as CT and MR. Alternatively, one can reconstruct models individually from CT and MR slices without registering them in 2D, but finally want to visualize these individual surface models at the same time. For example, the same patient's inner brain tissue surface models could be obtained from MR slices, while the skull and outer skin models could be obtained from CT slices, as in the example given in this paper. This case is equivalent to the problem of registering two different surface models.
In MIPAS, we provide both 2D and 3D registration functionalities. For registration, we used the iterative closest point (ICP) algorithm (Rusinkiewicz and Levoy, 2001; Betke et al., 2001; Kaneko et al., 2003; Fitzgibbon, 2001). For two-dimensional registration, we provide both rigid-body (2D similarity transformation) and non-rigid-body (2D affine transformation) mapping functions. If the 2D images to be registered were obtained by the same scanner with the same pixel aspect ratios, they can be matched with one rotation, two translations and one scale factor. If the two image sets to be registered are scanned with different pixel aspect ratios, then a non-rigid transformation is required; in this case we use a 2D affine transformation as the mapping function. We estimate the transformation parameters by least squares adjustment, using external markers or anatomical landmarks as common points.
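For the rigid-body (2D similarity) case, the least-squares estimate from common points has a closed form. A sketch, using the parametrisation x' = a·x − b·y + tx, y' = b·x + a·y + ty, where a and b jointly encode rotation and scale (the paper does not give its parametrisation, so this is one conventional choice):

```python
def fit_similarity_2d(src, dst):
    """Least-squares 2D similarity transform mapping src onto dst:
        x' = a*x - b*y + tx,   y' = b*x + a*y + ty
    src, dst: equal-length lists of (x, y) common points (>= 2)."""
    n = len(src)
    # centroids of both point sets
    cxs = sum(p[0] for p in src) / n; cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n; cyd = sum(p[1] for p in dst) / n
    sxx = sxy = d = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - cxs, y - cys, u - cxd, v - cyd
        sxx += x * u + y * v   # sum of (x x' + y y')
        sxy += x * v - y * u   # sum of (x y' - y x')
        d   += x * x + y * y
    a, b = sxx / d, sxy / d
    # translation maps the source centroid onto the destination centroid
    tx = cxd - a * cxs + b * cys
    ty = cyd - b * cxs - a * cys
    return a, b, tx, ty
```

The affine (non-rigid) case is estimated analogously, with six parameters instead of four.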
For 3D registration, the above considerations apply analogously: we use a 3D similarity mapping function for the rigid-body case and a 3D affine transformation for the non-rigid-body case in the ICP algorithm. In MIPAS, the common points are currently selected manually; we are still working on automatic point selection. Our ICP implementation works with global mapping functions; we are also still working on local non-rigid registration with spline curves (Xie and Farin, 2000; Rueckert et al., 1999).
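The ICP idea can be illustrated compactly. The sketch below restricts the mapping function to a pure translation for brevity (MIPAS itself uses similarity and affine mapping functions) and assumes the closest-point matches eventually converge to the true correspondences:

```python
def icp_translation(src, dst, iters=20):
    """Minimal ICP sketch: at each iteration match every source point to
    its closest destination point, then apply the least-squares update,
    here restricted to a translation (tx, ty)."""
    tx = ty = 0.0
    pts = [(x, y) for x, y in src]
    for _ in range(iters):
        # step 1: closest-point correspondences
        pairs = []
        for p in pts:
            q = min(dst, key=lambda d: (d[0] - p[0])**2 + (d[1] - p[1])**2)
            pairs.append((p, q))
        # step 2: least-squares translation = mean residual of the matches
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        pts = [(x + dx, y + dy) for x, y in pts]
        tx, ty = tx + dx, ty + dy
        if abs(dx) < 1e-9 and abs(dy) < 1e-9:  # converged
            break
    return tx, ty
```

Replacing the translation update with a similarity or affine fit at each iteration yields the rigid and non-rigid variants mentioned above.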
5. PHOTOREALISTIC TEXTURE MAPPING 
After the outer face (skin) surface model has been created from CT or MR slices, digital photographs of the patient's face can be texture mapped onto the 3D surface model for photorealistic visualization. Some examples are shown in the next chapter. For texture mapping, we need to know the corresponding picture (texture) coordinates of the vertex points