The 572 coronal sections from this set have been utilized in creating the digital atlas. The odd numbered sections are stained with the Nissl stain and the even numbered sections are stained for myelin. Since each section is 20 microns thick, the consecutive Nissl sections are 40 microns apart, as are the consecutive myelin sections. In order to obtain a 10 micron resolution in the z axis (to match the resolution in x and y, and yield 10 micron isomorphic voxels), we would need to interpolate three equidistant images, as we will see below.
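The z-axis interpolation described above can be sketched as a simple linear blend between adjacent sections. This is a minimal illustration, assuming the sections are already aligned and held as numpy arrays; the actual processing pipeline used IDL and VoxelMath, as noted below.

```python
import numpy as np

def interpolate_slices(a, b, n=3):
    """Linearly interpolate n equidistant slices between sections a and b.

    With n=3, two sections 40 microns apart yield intermediate images at
    10-micron spacing, matching the in-plane resolution.
    """
    # np.linspace includes both endpoints; drop them to keep only the
    # n interior weights (0.25, 0.5, 0.75 for n=3).
    weights = np.linspace(0.0, 1.0, n + 2)[1:-1]
    return [(1 - w) * a + w * b for w in weights]
```

Higher-order schemes (e.g., shape-based interpolation) would better preserve sharp boundaries; the linear form shown here is only the simplest case.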
The images were corrected for inhomogeneity of illumination with IDL software. Image alignment and correction for nonuniform staining were accomplished with VoxelMath and Bitplane-VoxelShopPro software after initial processing in IDL.
Inhomogeneity of illumination, intrinsic to the microscope optics at very low magnification, is apparent across the plane of the image. The images also reveal the limited contrast and independence of the channels.
The channels of the myelin image are shown in Figure 3.
At this resolution we cannot distinguish differences in luminance (brightness) among the three channels, and hence it is appropriate to combine them into a single gray-scale image.
A total of 286 of these paired Nissl and myelin subsets constitute the building material for the present atlas. Each pixel in the image corresponds to 10 microns of linear space in the specimen.
The images were corrected with IDL software. In the correction procedure, a uniform illumination mask is created, and the variation of image luminance is mapped over it. An image of the glass slide and coverslip, without a tissue specimen, was captured to help establish the mask; this forms the background image.
We divide the pixel luminances in the background image by a plane of uniform luminance to obtain the distortion mask of the light source. The inverse of this distortion mask is then applied to the captured image, thus reconstructing a corrected image. Secondary scattering of light from the tissue is ignored. The reconstructed images are reproduced below.
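The flat-field correction just described can be sketched as follows. Taking the background image's mean luminance as the uniform plane is an assumption for this sketch; the original procedure was carried out in IDL.

```python
import numpy as np

def flatfield_correct(image, background, eps=1e-6):
    """Correct for inhomogeneous illumination using a background image.

    Dividing the background by its uniform (here, mean) luminance gives
    the distortion mask of the light source; dividing the captured image
    by that mask applies the inverse and reconstructs a corrected image.
    """
    mask = background / background.mean()
    # Guard against division by near-zero mask values.
    return image / np.maximum(mask, eps)
```

If the captured image were the background itself, the correction would return a uniform field at the mean luminance, which is a quick sanity check for the mask.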
Structure names and leader lines have been applied to every 10th Nissl- and myelin-stained image pair (every 200 microns). Some additional sections have been labeled, to include small structures that fall between the 200 micron increments, so that all major gray and white matter structures have been delineated pictorially.
General Comment: Segmentation of brain nuclei in Nissl material is a very difficult task, even for world experts, who differ markedly in their assignment of nuclear boundaries. While some edges are clear, many require the expert to draw on high-level knowledge, accumulated over a lifetime of careful study of disparate materials. Consequently, manual input is mandatory for even approximate parcellation of the brain in its fine details. Accordingly, as described and illustrated below, we have created new methods for entering segmentation data that are both simple and powerful, and that let the expert efficiently segment virtually any neuroanatomical object in a 3D data set, and then display segmented structures in 3D in a wide variety of configurations.
Objective: As described earlier, our mouse brain pilot data set exists as two (Nissl and Myelin) 768x496x281 volumes of 8-bit values. The goal is to extract those voxels in either or both volumes which correspond to any of an existing list of over 500 named anatomical structures. It is likely that this enterprise will refine many boundaries specified in legacy atlases, and it also may identify new parcellations. While summary information about each structure will be accumulated (volume, center of mass, volume/surface ratio, etc.), it is expected that the textural information in the individual sections will remain of paramount interest, and the primary information required by the viewer is the assignment of voxels to names: "what structure contains this voxel?"
Methods: As a nucleus or tract is being traced section by section, the previous tracing, or several previous tracings, can now be overlaid on the current section (see, for example, Fig. 10, below). When tracing the course of a tract through a dense white matter terrain, a previously entered target gray matter structure can be highlighted to serve as an "orienting beacon," as the white matter tract heading toward it is traced. For segmentation of many structures, only the general region, rather than the precise boundary, need be selected from each section, and seeding algorithms with appropriate thresholding can then accurately select the desired structure. Once the chosen structure’s contour on each section has been entered, whether semiautomatically or manually, the volume is seeded, and the data saved in a tabular file, as illustrated (Fig. 11).
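The seeded selection with thresholding described above can be sketched in 2D as a simple flood fill. This is an illustrative breadth-first version with 4-connectivity, not the VoxelMath implementation; the function name and connectivity choice are assumptions.

```python
from collections import deque

import numpy as np

def flood_fill_2d(section, seed, threshold):
    """Select the connected region of above-threshold pixels that
    contains the seed point (4-connectivity), as in seeded segmentation
    of a roughly indicated region."""
    h, w = section.shape
    selected = np.zeros((h, w), dtype=bool)
    if section[seed] < threshold:
        return selected  # seed falls outside the structure
    queue = deque([seed])
    selected[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and not selected[ny, nx]
                    and section[ny, nx] >= threshold):
                selected[ny, nx] = True
                queue.append((ny, nx))
    return selected
```

Extending the neighbor list to include the same pixel on adjacent sections gives the 3D seeding step; this is why, as noted in Fig. 10, adjacent profiles must overlap.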
Fig. 10. Segmentation Procedure. Top left: Portion of a Nissl-stained atlas section through the dorsal thalamus (lower two-thirds of the field) and a very densely stained segment of the dentate gyrus of the hippocampal formation bilaterally (upper one-third of the field). Above and to the left of center lies the third ventricle, with choroid plexus hanging down from its roof and the densely stained medial habenular nuclei forming its lateral walls. Top middle: Segmentation can be accomplished easily with one hand via the Octane’s three-button mouse and several simple keyboard hotkey combinations. 1) A black boundary has been traced (with the left mouse button held down) around the right lateral habenular nucleus, located just above the center of the field. 2) The profile is entered into a temporary buffer by clicking with the right mouse button. Top right: 3) With the left and center mouse buttons held down, the lateral habenular profile is flood-filled in 2D (displayed as a solid black profile). Middle left: 4) The selected profile is isolated by subtracting this result from a copy of the original and is then saved as one 2D segment of the eventual 3D nucleus. Middle center: The saved profile can be displayed (in green, as here, or any other chosen color) on the next section as a guide to positioning the new contour. This guide is particularly important when small, obliquely oriented structures are traced, because it is crucial that adjacent profiles overlap if subsequent seeding in 3D is to be successful. Middle right: This field shows the next profile of the lateral habenular nucleus. Bottom left and middle: These fields repeat the selection and flood-filling steps 1–4. Bottom right: After all profiles have been entered and saved on both sides, the left and right lateral habenular nuclei are tagged by a 3D seeding process and can then be displayed selectively in 3D and/or saved to disk.
Fig. 11. Segmentation Procedure: Assembling Seed Data (table full view). A text file is used to list the extracted component (.SEED) files which the researcher wishes to assemble in 3D. Each line specifies one object. The x, y, z, and intensity properties can be modified when the "seed" is read into the volume data set. This is to accommodate data derived from disparate sources, and to compensate for occasional inadvertent reflections and offsets. The I-shift value (when I-scale is 0) can ensure that a seed's voxel intensities uniformly encode the index number of the object's name in this master list. This simple method then permits the researcher to automatically view the name associated with any voxel merely by passing the cursor over the image. Note that the I-shift values are similar for structures that are related. Not shown are the remaining numbers spanning olfactory bulb to spinal cord.
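The per-object intensity transform applied when a seed is read in can be sketched as follows. The function name and list-based signature are illustrative, not the actual VoxelMath interface; the scale-then-shift form is inferred from the I-scale/I-shift description above.

```python
def recode_intensity(voxels, i_scale, i_shift):
    """Apply the seed-table intensity transform to an object's voxels.

    With i_scale = 0, every voxel of the object takes the value i_shift,
    which then indexes the object's name in the master list; nonzero
    i_scale preserves (rescaled) texture instead.
    """
    return [i_scale * v + i_shift for v in voxels]
```

For example, recoding an object's raw intensities with i_scale 0 and i_shift 17 makes every one of its voxels carry the code 17.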
We match segmented structures by means of a second pair of 3D volumes, in register with the first, whose voxels have been modified in intensity so that all voxels of an object have a fixed value. This value is used to retrieve the name from a master 8-bit list. Hence the user can merely pass a cursor over any portion of the volume to read the identity of the structure, provided it is one of our group of segmented objects. Simultaneously, the original and segmented volume can be rendered in 3D, and the elements in the segmented volume can be presented in any combination by simply toggling any of the 256 entries in the interactive table (Fig. 11).
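The cursor-driven name lookup reduces to indexing a master list by the coded voxel intensity. The dictionary below is a hypothetical abbreviation of the 500-entry master list, and the axis ordering (z, y, x) is an assumption of this sketch.

```python
import numpy as np

# Hypothetical excerpt of the master 8-bit name list; the index is the
# fixed intensity value assigned to all voxels of the object.
STRUCTURE_NAMES = {
    0: "background",
    17: "lateral habenular nucleus",
    18: "medial habenular nucleus",
}

def name_at(coded_volume, x, y, z):
    """Return the structure name encoded at a voxel of the second,
    intensity-coded volume, or flag it as not yet segmented."""
    code = int(coded_volume[z, y, x])
    return STRUCTURE_NAMES.get(code, "unsegmented")
```

In the interactive table of Fig. 11, toggling one of the 256 entries corresponds to showing or hiding all voxels carrying that single code.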
It helps to assign contrasting colors to adjacent objects, while at the same time representing members of a family of anatomical objects in similar hues. To assign these colors manually is very time consuming. We devised a general method, which partitions a 256-entry look-up table into (e.g., 16) hues, each containing (e.g., 16) intensities and saturations. By assigning neighboring encoded intensity values to structures with shared associations (such as "hippocampal formation," "nuclei of the limbic thalamus," "cerebellum"), it is easy to group local regions by shades of similar color, and to differentiate large regions of the brain by general hue (Fig. 12). Thus far, 16x16, 8x32, and 32x8 parcellations have proved useful. Such look-up tables can be generated quickly and applied in rapid succession to the currently selected objects, as one assesses whether adjacent structures are sufficiently discriminated in both 2D and 3D. Candidate tables can be further edited by hand to reduce ambiguities and can be saved as standards. An important feature of the representational process is that one can render the isolated segmented objects in 3D using their original gray-scale textures. By thresholding them, either during segmentation or subsequently, one can view nuclei as "clouds" of cells, rather than simply as voxels or tessellated solids (Fig. 13). This helps dramatically to depict the texture characteristic of each nucleus, and conveys greatly improved realism and educational clarity to the resultant images.
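The hue-by-intensity partition can be generated programmatically. The sketch below builds a 16x16 table with fixed saturation, so it simplifies the hue/intensity/saturation scheme described above; the value ramp (0.4 to 1.0) is an arbitrary choice for this illustration.

```python
import colorsys

def make_lut(hues=16, shades=16):
    """Partition a 256-entry RGB look-up table into `hues` hue families,
    each with `shades` graded intensities, so that structures with
    neighboring codes share a hue family."""
    lut = []
    for h in range(hues):
        for s in range(shades):
            # Ramp brightness within each hue family; saturation is
            # held fixed here for simplicity.
            v = 0.4 + 0.6 * s / max(shades - 1, 1)
            r, g, b = colorsys.hsv_to_rgb(h / hues, 1.0, v)
            lut.append((int(255 * r), int(255 * g), int(255 * b)))
    return lut
```

Regenerating the table with different hue counts (8x32, 32x8) or a rotated hue origin produces the alternative assignments shown in Fig. 12.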
This strategy does not directly permit identification of regions where two or more named structures co-exist, but this is of little practical importance at the moment. A more significant limitation is that only 255 objects can be encoded at one time. Conversion to 16-bit encoding (which our software permits) would extend this to 65,535 objects. The look-up tables may remain 8-bit, but be paged in dynamically to view groups of 256 objects. Extension of this strategy to 24-bit full-color images is numerically possible (though demanding of RAM and disk space) by operating on three channels in parallel. This will be necessary for data we will be collecting in the future.
Fig. 12. Color-based Informatics. This figure shows two examples of the type of color look-up table used. Modulation of the spectrum allows small numbers of similar-numbered (related) structures to take on similar but distinguishable hues. A rotation of the left look-up table produced the color assignments on the right. Numerous such tables can be produced systematically and applied rapidly in 2D and 3D to any combination of seeded objects. The upper central panels show the segmented structures in a coronal section. The lower panels show renderings using the same look-up tables. The bottom right panel illustrates that enabling or disabling table entries allows one to visualize selectively arbitrary subsets of up to 255 seeded structures. The solid green anterior dorsal and medial habenular nuclei are additionally distinguished from the remaining structures below via intensity-specific differences in rendered opacity. Note that look-up entries 0–15 are in these cases reserved for a small gray-scale ramp. This then provides a very useful visualization option: one can read coded seed objects into an original gray-scale volume (rescaled from 0–255 to 0–15). The contrast between solid and gray-scale heightens the sense of contrast for a given object. One can restore any coded region merely by rereading the seed in uncoded form.
Fig. 13. A visualization alternative. One of our methods of segmenting in Fig. 10 retains all of the intensity values in the original data. But when read back into a volume as an identified object, the data are assigned a uniform color encoding the object's name. Non-surface texture information is lost (left panel). However, if one thresholds the seed when re-reading it into a data set (see text file at top) and represents sub-threshold locations as black, then much of the texture (i.e., the distribution patterns of stained cells) can be recovered, and shows up as variably dense cloud patterns (more realistic right panel).
Further Implementation: These tools are capable of representing the data in a stunningly wide didactic range: from simple volumetric views of extracted solid objects, as in other 3D atlases, to more realistic and unique "clouds" of cell bodies (Fig. 14) or of modeled cell clusters interconnected by arrays of simulated (and actively growing!) nerve fibers (Fig. 15), which can be identified and selected by name. Magnification and orientation of these images can be continuously adjusted (Figs. 16–17), and "walk-throughs" or rotations readily captured as animation sequences. These voxel manipulations are performed primarily within the program "VoxelMath" from Vital Images. One of the collaborators (SLS) is the author of that program and maintains its code, written in C. It has the advantage of having been used for more than a decade in manipulating and image-processing microscope and medical data, and has a shared-memory interface with the high-performance volume-rendering program "VoxelView," which permits a rapid cycle of data manipulation and inspection. SLS also has written a software program, "ArborVitae" (Stephen L. Senft, 1997, A statistical framework for presenting developmental neuroanatomy. In: Neural Network Models of Cognition: Biobehavioral Foundations, J. W. Donahoe and V. P. Dorsel, eds., Elsevier, N.Y., pp. 37–57), which is able to represent arbitrary numbers of simulated neurons and their processes in 4D. It has proved possible to use that program to represent the segmented gray matter nuclei generated in this work as cell clusters (Fig. 14). It was used also to link the extracted objects (cell groups) into networks by simulating axonal projection pathways (Figs. 15–17). We run these visualization programs on a dual-processor SGI Octane workstation equipped with 1 GB of main memory and two 18 GB disk drives, and do subsequent pre-press formatting with Photoshop software on 400 MHz G3 Macintosh computers.
Fig. 14. Modeled (ArborVitae, Senft 1997) display of thalamic nuclei. At the top is a volume rendering of the anterior thalamic nuclei. Below, in ascending magnification, the same data (most prominently the medial and lateral habenular nuclei) are represented in ArborVitae as clusters of cells (modeled as 3D capped tubules). This model-based approach provides a flexible alternative to volume rendering, and may facilitate the generation of hypotheses by creating specific structures (which can be compared with observations) from biologically plausible rules.
Fig. 15. left. ArborVitae wide-field view of the superior cerebellar peduncle (scp) axonal pathway. The brain is seen from below, with the olfactory bulbs at the top. Three sets of bilaterally represented gray matter nuclei are displayed as clouds of dots: the cerebellar lateral, interpositus, and medial nuclei faintly visible in blue-green, the red nuclei of the midbrain in red, and the ventral lateral nuclei of the thalamus in yellow.
right. Simulated axons of the superior cerebellar peduncle, scp, arise from neuron cell bodies in the cerebellar lateral and interpositus nuclei, decussate in the caudal half of the midbrain, pass under and lateral to the red nuclei in the rostral part of the midbrain and innervate red nucleus neurons, and then pass rostrally to the ventrolateral nucleus of the thalamus. This, to our knowledge, is the first rendition of axonal trajectories in 3D brain maps. The method is in its first phase, and at present, axons are instructed to "grow" from structure A to structure B. They have not yet "learned" not to arise from the medial nuclei of the cerebellum, only from the other deep nuclei, nor precisely how to skirt the red nucleus rather than penetrate through it. In the future it should be possible to have axons grow along precisely delimited trajectories set by the already-plotted white matter tracts. In the animated version of the present program, not only does one observe the extension of the axons from A to B as a function of time, but once grown, they have been simulated to show bright flashes of "nerve impulses" in the physiological direction — in this case, from cerebellum to thalamus.
Fig. 16. ArborVitae close-up: high magnification views of axon terminals.
top left. View from above. Pale blue axons of the superior cerebellar peduncle are coursing past, partially enveloping (and weakly "synapsing" with) cells of the red nucleus on their way to their major target in the ventral lateral (VL) nucleus of the thalamus.
top right. The same axons are seen approaching their targets in VL. They begin to branch just as they enter the nucleus. A wide variety of patterns, most of them different from those thought to occur in the red nucleus, can be produced by varying the rules assigned to govern axonal branching behavior.
bottom left and right. Higher magnifications of the field shown at top right.
Fig. 17. ArborVitae: terminal arborizations. An even higher magnification of the caudal part of VL, to show, in the center and along the left edge of the field, a series of dense terminal presynaptic arborizations of superior cerebellar peduncle axons, and to show (speculative) sizes and shapes of postsynaptic neurons in VL. Simulated synapses (not shown here) are formed when axons come within a specified distance of their targets. The colors of cells and axons are arbitrary, but can be made to share color tables with the 2D and 3D voxel representations.