Computer Vision Algorithms for Retinal Vessel Width Change Detection and Quantification


DOCUMENT INFORMATION

Basic information

Title: Computer Vision Algorithms For Retinal Vessel Width Change Detection And Quantification
Author: Kenneth H. Fritzsche
Advisors: Charles V. Stewart, Badrinath Roysam
Institution: Rensselaer Polytechnic Institute
Document type: Thesis
Year: 2002
City: Troy
Pages: 50
File size: 2.53 MB

Structure

  • 1.1 Retina Vessel Change
  • 1.2 System Requirements
  • 2.1 Mission
  • 2.2 Discussion
  • 2.3 Vessel Models
  • 2.4 Previous Vessel Extraction Methods
    • 2.4.1 Can’s Vessel Extraction Algorithm
  • 2.5 Work Done So Far
    • 2.5.1 Smoothing Vessel Boundaries
    • 2.5.2 Other Modifications to Can’s Algorithm
    • 2.5.3 Limitations of the Modified Can Algorithms
  • 2.6 Proposed Methodology
    • 2.6.1 Snakes
    • 2.6.2 Ribbon Snakes
    • 2.6.3 Summary
  • 3 Validation of Vessel Detection Algorithms
    • 3.1 Mission
    • 3.2 Discussion
    • 3.3 Previous Methods
      • 3.3.1 Creating Ground-truth from Conflicting Observers
      • 3.3.2 Limitations of Previous Methods
    • 3.4 A Proposed Methodology
      • 3.4.1 Validation Using a Probabilistic Gold Standard
      • 3.4.2 Validation for Results Generated by Vessel Extraction Algorithms
  • 4.1 Mission
  • 4.2 Discussion
  • 4.3 A Proposed Methodology
  • 5.1 Mission
  • 5.2 Discussion
  • 5.3 Previous Methods
  • 5.4 Proposed Methodology
    • 5.4.1 Vessel Detection
    • 5.4.2 Determining Corresponding Vessels
    • 5.4.3 Measuring Vessel Width
  • 5.5 Validating Vessel Widths

Content

Retina Vessel Change

Various eye diseases affect the retinal vasculature and produce significant alterations in the eye's blood vessels. These conditions can cause changes in vessel width, color, and trajectory, as well as neovascularization, the growth of new vessels. Table 1 shows how some of these vessel changes are linked to different eye diseases.

[Table 1 columns: Disease; NVE; NVD; Artery Color; Vein Color; Artery Narrowing; Artery Dilation; Vein Narrowing; Vein Dilation; Choroidal. The table's data rows did not survive extraction.]

Table 1: Manifestations of disease that affect the blood vessels of the retina (NVD = neovascularization at the optic disk, NVE = neovascularization elsewhere).

Eye diseases can affect retinal blood vessels, but systemic diseases also affect blood vessels throughout the body. The retina's vasculature allows physicians to observe blood vessels directly, providing crucial insight for diagnosing systemic conditions. For example, while hypertension can cause a 15% dilation in large arteries, retinal artery dilation can reach 35%. Furthermore, both age and hypertension may alter the bifurcation geometry of retinal vessels. Retinal arteriolar narrowing is believed to precede the onset of diabetes and is associated with coronary heart disease risk in women.

System Requirements

Previous research has focused on global characteristics of the retina, computing summary metrics such as the average width of the retinal vasculature in individual images. These metrics are then compared to past measurements for the same individual or analyzed against population distributions. However, this approach provides only coarse measures, so substantial changes are required before statistical significance can be reached.

We introduce a novel approach to detecting longitudinal changes in individual blood vessels that builds on the Dual-Bootstrap ICP algorithm, which precisely aligns retinal images taken months apart despite significant disease-related alterations. This image registration technique opens new opportunities for studying disease-induced changes. This thesis aims to create tools that leverage the registration process to identify changes in blood vessels and thereby guide physicians to potential areas of concern.

Detecting changes in registered images of individual vessels requires several capabilities beyond registration itself: algorithms must extract vascular structures, assess their characteristics, and establish correspondences between vessels in the aligned images. They must also measure the changes and evaluate their significance, which underscores the complexity of analyzing vascular alterations.

Figure 1 shows the progression of vascular changes in a patient over 18 months, with significant vessel thinning in images (b) and (d) compared to (a) and (c). Notably, image (d) shows a substantial shift in the vessels where they bend. While existing methods, as shown in Figure 2, can detect these changes, they require enhancements and validation before they can be applied effectively to change detection. Our current and proposed work focuses on improving these methods, organized into the categories below.

Vessel modeling and feature extraction. Numerous vessel models have been proposed in the literature, with the parallel-edge model standing out as particularly effective. I have significantly enhanced the original parallel-edge retina vessel tracing algorithm created by Can. My plan is to refine this algorithm further by implementing a two-sided, ribbon-snake technique to achieve more precise boundary delineation. Additionally, I will conduct an experimental analysis of the parallel-edge model by comparing its performance against other existing models.

Validation. Change detection requires reliable vessel extraction and reliable measurement of vessel properties. I propose to develop algorithms to measure how successful these techniques are.

The results of vessel extraction are shown in image (a), while image (b) shows the alignment of two individual images. Image (c) highlights the capability to calculate vessel morphometry, with aggregate vessel statistics displayed in the top corners and information for the selected vessel in the bottom right. Finally, image (d) illustrates the registration algorithm aligning multiple images into a cohesive "mosaic" image, demonstrating both approximate accuracy against ground truth and the repeatability of measurements.

Multi-image feature extraction. Multiple images of the same retina are captured from different angles during a single session. By aligning these images, redundant measurements can be collected, which mitigates the problems posed by varying illumination and glare. This redundancy enables a robust multi-image vessel extraction method, resulting in more complete and consistent extraction across all images.

Change detection. Several processes lead up to the identification of changes in images. The change-detection algorithm must establish correspondence between different images of the same vessel, differentiate between vessels that are genuinely absent and those overlooked by the algorithms, and accurately calculate the changes while assessing their significance.

This proposal outlines the integration of these algorithms into a C++ software system and user interface. Sections 2 through 5 address the specific research areas, each beginning with a concise mission statement. Each section then elaborates on relevant background information and work by other researchers, followed by a proposed methodology for achieving the research objectives. Section 7 details the anticipated contributions of this work, while Section 8 presents a timeline for project completion. Finally, Section 9 assesses the current progress of each component of the research.

2 Single Image Vessel Extraction and Description

Mission

To extract and describe the blood vessels that appear in a single retinal image.

Discussion

Vessel extraction, or vessel segmentation, involves separating the vessels in a retinal image from the background, a process complicated by poor contrast, noise, varying illumination, and physical inconsistencies in the vessels. Despite the development of algorithms to tackle these challenges, multiple images from the same patient often produce inconsistent segmentation results. This section explores models for blood vessel identification, discusses different segmentation techniques, presents my completed work, and proposes a methodology aimed at improving extraction results.

Vessel Models

Various models are used to identify vessels in images, relying primarily on detectable features such as edges, cross-sectional profiles, or regions of uniform intensity. Edge-detection models locate vessel boundaries using operators such as differential, gradient, Sobel, Kirsch, or first-order Gaussian filters. Cross-sectional models identify regions that resemble specific shapes, including half ellipses, Gaussian profiles, or sixth-degree polynomials. Algorithms based on uniform intensity typically rely on thresholding, relaxation, or morphological operators.

Accurate vessel segmentation is crucial for effective vessel change detection, and boundary-detection methods are the most suitable for precisely locating vessel boundaries. In contrast, cross-sectional and intensity-region models often generate results that require additional processing to extract boundary information, making them sensitive to threshold selection. This sensitivity means the extracted boundaries can shift slightly with the parameters used, such as the σ value of a Gaussian profile model. Furthermore, defining boundaries appropriately is difficult for these methods, unlike boundary-detection models. A repeatability study, outlined in Section 5.5, will evaluate the effectiveness of boundary models for detecting changes in vessel width.

Previous Vessel Extraction Methods

Can’s Vessel Extraction Algorithm

Can's vessel extraction algorithm uses an iterative process to trace blood vessels with a localized model based on two key physical properties: the local straightness of vessels and their parallel sides. The method identifies vessel boundaries by detecting two parallel, locally straight edges. The algorithm proceeds in two stages.

Stage 1 (seed point initialization): The algorithm analyzes the image along a coarse grid to gather gray-scale statistics (contrast and brightness levels) and to detect initial "seed" locations on blood vessels at gray-scale minima. False seed points are eliminated by checking for strong parallel edges around each minimum: an initial seed point is discarded if the two strongest edges do not exceed a contrast threshold derived from the gray-scale statistics, or if their directions differ by more than 22.5 degrees. This filtering typically removes approximately 40% of the initial seed points.

Stage 2 (recursive tracing): The second stage is a sequence of recursive tracing steps initiated at each of the filtered seed points. For each filtered seed, the tracing process proceeds along vessel centerlines by searching for vessel edges. These edges are searched for from a known vessel center point, in a direction orthogonal to the local vessel direction, out to a distance of one half the maximum expected vessel width. (Note that all directions are drawn from a discrete set of angles, usually 16 angles at an interval of 22.5 degrees.) Three angles are searched (the previous center point direction ±1) and the angle of the strongest edge is selected. The next trace point is determined by taking a step from the current trace point in the direction of the strongest edge. The resulting point is then refined by applying a correction based on how far the new point is from the midpoint of the points at which the left and right boundaries were found. The new point is kept only if it does not intersect any previously detected vessel, does not fall outside the image frame, and if the sum of the edge strengths is greater than the global contrast threshold calculated during seed point detection.
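As a concrete illustration of stage 2, the C++ sketch below performs one simplified tracing step. The `edgeStrength` function is a stand-in for the parallel-edge template response, the constants for the maximum expected width and step length are illustrative rather than the thesis's actual values, and the centering correction and the intersection/frame checks described above are omitted.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

constexpr int kNumDirections = 16;                       // discrete angles, 22.5 degrees apart
const double kPi = 3.14159265358979323846;

struct Point { double x, y; };
struct TraceState { Point center; int direction; };      // direction indexes the discrete angle set

// Nearest-pixel intensity lookup, clamped to the image border.
double pixel(const std::vector<float>& img, int w, int h, double x, double y)
{
    int xi = std::max(0, std::min(w - 1, static_cast<int>(x)));
    int yi = std::max(0, std::min(h - 1, static_cast<int>(y)));
    return img[yi * w + xi];
}

// Stand-in for the directional parallel-edge template response at (x, y): the
// absolute intensity difference across the candidate boundary along the unit
// normal of the given direction.  The real algorithm correlates edge templates
// with the image here.
double edgeStrength(const std::vector<float>& img, int w, int h,
                    double x, double y, int dir)
{
    double angle = dir * 2.0 * kPi / kNumDirections;
    double nx = -std::sin(angle), ny = std::cos(angle);
    return std::abs(pixel(img, w, h, x + nx, y + ny) - pixel(img, w, h, x - nx, y - ny));
}

// One simplified recursive-tracing step: try the previous direction and its two
// neighbours, search orthogonally out to half the maximum expected width for
// the left and right boundary responses, keep the strongest pair, and step
// along that direction.  The centring correction and the checks against
// previously traced vessels and the image frame are omitted.
bool traceStep(const std::vector<float>& img, int w, int h,
               TraceState& state, double globalContrastThreshold)
{
    const double kMaxExpectedWidth = 12.0;               // M, illustrative value in pixels
    const double kStepLength = 3.0;                      // centreline step, illustrative

    int bestDir = state.direction;
    double bestSum = -1.0;
    for (int d = -1; d <= 1; ++d) {
        int dir = (state.direction + d + kNumDirections) % kNumDirections;
        double angle = dir * 2.0 * kPi / kNumDirections;
        double nx = -std::sin(angle), ny = std::cos(angle);
        double left = 0.0, right = 0.0;
        for (double t = 1.0; t <= kMaxExpectedWidth / 2.0; t += 1.0) {
            left  = std::max(left,  edgeStrength(img, w, h, state.center.x - t * nx,
                                                 state.center.y - t * ny, dir));
            right = std::max(right, edgeStrength(img, w, h, state.center.x + t * nx,
                                                 state.center.y + t * ny, dir));
        }
        if (left + right > bestSum) { bestSum = left + right; bestDir = dir; }
    }

    if (bestSum <= globalContrastThreshold) return false;   // edges too weak: stop tracing

    double angle = bestDir * 2.0 * kPi / kNumDirections;
    state.center.x += kStepLength * std::cos(angle);
    state.center.y += kStepLength * std::sin(angle);
    state.direction = bestDir;
    return true;
}
```

In the full algorithm, the offsets at which the left and right boundary responses are found also drive the centering correction applied to the new trace point.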

The primary objective of Can's algorithm was to extract vessels in order to identify key landmarks, such as vessel bifurcation and crossover points, which serve as the basis for registration. Although the algorithm successfully generated these landmarks, it frequently produced unsmooth vessel boundaries and overlooked some vessel segments.

Work Done So Far

Smoothing Vessel Boundaries

To determine vessel boundaries, the original approach applies templates iteratively, starting from a known vessel center and extending outward to a distance of M/2, where M is the maximum expected width of a blood vessel. This allows large variations in the boundaries between adjacent vessel segments. To improve boundary smoothness, we refined the search strategy to look for vessel edges within ±d/2 of the predicted edge location, where d is the maximum permissible width change at each tracing step. Consequently, at each iteration the maximum width difference between adjacent segments is constrained to d, while the predicted edge location is still kept within the overall limit of M/2.

Figure 3 shows Can's initial search strategy for locating vessel boundaries on the left and the enhanced method on the right, which achieves smoother boundaries more efficiently. However, despite its computational advantages and improved smoothness, the modified approach still permits excessive variation along the length of the vessel.

Other Modifications to Can’s Algorithm

The next enhancement to Can's base algorithm is the use of local thresholds instead of a single global threshold for determining vessel boundaries. This addresses a limitation of the original algorithm, which often missed vessels in images with significant variation due to uneven lighting or pathology. By dividing the image into smaller regions and calculating an individual threshold for each, the algorithm achieves more accurate results and improved vessel detection.

The left and middle images illustrate the outcomes of the original algorithm, which makes it difficult to compare the three recognizable vessel segments, because each result shows only two segments that differ significantly. In contrast, the image on the right demonstrates the current algorithm's ability to consistently represent the results as three distinct segments in all images, making comparison easier.

The third modification segments vessels so that corresponding segments can be compared effectively across images. This is achieved by constructing a simple graph structure in which landmarks, such as bifurcation and crossover points, are nodes and vessel traces are edges. Vessel segments are defined by their endpoints, which can be either a landmark or an open end, so detected vessels must be divided into multiple segments at each landmark. Although this may seem minor, the original algorithm did not construct a vessel graph or identify segments, and a single bifurcation was often represented by only two traces. The modified algorithm ensures that such a vessel is consistently represented by three traces, as illustrated in Figure 4.

Additional enhancements were made to improve the accuracy and efficiency of the tracing algorithm; however, these modifications are not essential to the main focus of this work and are only briefly listed:

1. better initial seed point determination;

2. use of multiple lengths of templates;

3. use of a binning structure to cache results;

4. interpolation of results to sub-pixel accuracy;

5. better accuracy and repeatability in determining landmark location;

6. inclusion of a momentum strategy to allow the tracer to continue through noisy parts of an image.

Limitations of the Modified Can Algorithms

Despite the improvements to Can's original algorithm, limitations persist, especially in detecting extremely small neovascular vessels. The difficulty stems from the low contrast of these thin vessels, which are typically only 1-2 pixels wide, and from their highly tortuous structure. Consequently, the locally straight parallel-edge model is inadequate, and a new method is needed to identify neovascular vessels, although this issue is not addressed in my proposed work.

The precise location of vessel boundaries is crucial for detecting changes in vessel width, yet it is limited by poor contrast and noise in the images as well as by the methodology used to estimate the boundaries. Modeling boundaries with a discrete set of edge detectors introduces additional error. While these inaccuracies may be acceptable for low-resolution segmentation or feature extraction, they are inadequate for the high-resolution requirements of change detection. Better methods are therefore needed to accurately determine vessel boundaries.

Figure 5 highlights the inaccuracies in vessel boundary delineation: the left image shows a relatively precise segmentation of the vessel centerline, while the right image zooms in on a region of the left image, revealing boundary-point inaccuracies caused by discretization errors and noise. It is also evident that the boundaries for each trace point are frequently misaligned and not perpendicular to the vessel's orientation, compromising their accuracy for measuring vessel width.

Proposed Methodology

Snakes

Snakes, as originally introduced by Kass et al. [24], are represented parametrically, with each position denoted v(s) = (x(s), y(s)) and the snake controlled by an energy function given by

E_snake = ∫ [E_int(v(s)) + E_image(v(s)) + E_con(v(s))] ds,

where E_int is the internal energy, E_image accounts for external image forces such as lines, edges, and corners, and E_con represents external constraint forces. The internal energy is typically expressed as

E_int(v(s)) = (α(s) |v'(s)|² + β(s) |v''(s)|²) / 2.

In vessel boundary detection, the first term of the internal energy controls the snake's elasticity, while the second governs its rigidity. The image energy, E_image, is designed to attract the snake to edges and is given by E_image = −|∇I(x, y)|², and E_con is set to zero.

To determine the final set of boundary points for a single vessel, the initially detected boundary points can be used as the starting positions of a snake. The snake energy above is minimized by reformulating it as a set of Euler-Lagrange differential equations.

Converting to discrete space, these equations are solved iteratively until gradient-descent convergence is reached, approximated as the point at which the total change in v falls below some threshold.
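To make the iteration concrete, the following C++ sketch relaxes a discrete open snake by explicit gradient descent rather than by solving the Euler-Lagrange system directly. The `EdgeMap` type, the nearest-pixel lookups, and all parameter values (alpha, beta, tau, tol) are illustrative assumptions, not values taken from the thesis.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Gradient-magnitude image |grad I|; the external force is the spatial gradient
// of |grad I|^2, which pulls snake points toward strong edges (E_image = -|grad I|^2).
struct EdgeMap {
    int w, h;
    std::vector<double> mag;                         // |grad I| per pixel, row-major
    double at(int x, int y) const {
        x = std::max(0, std::min(w - 1, x));
        y = std::max(0, std::min(h - 1, y));
        return mag[y * w + x];
    }
    Vec2 force(double px, double py) const {         // central differences of |grad I|^2
        int x = static_cast<int>(px), y = static_cast<int>(py);
        auto sq = [this](int a, int b) { double m = at(a, b); return m * m; };
        return { 0.5 * (sq(x + 1, y) - sq(x - 1, y)),
                 0.5 * (sq(x, y + 1) - sq(x, y - 1)) };
    }
};

// Explicit gradient descent for an open snake initialised from the traced
// boundary points.  alpha and beta weight the elasticity and rigidity terms,
// tau is the step size, and iteration stops once the total movement per pass
// drops below tol.
void relaxSnake(std::vector<Vec2>& v, const EdgeMap& edges,
                double alpha = 0.2, double beta = 0.1,
                double tau = 0.1, double tol = 1e-3, int maxIter = 500)
{
    const int n = static_cast<int>(v.size());
    auto clampIdx = [n](int i) { return std::max(0, std::min(n - 1, i)); };

    for (int iter = 0; iter < maxIter; ++iter) {
        std::vector<Vec2> next = v;
        double moved = 0.0;
        for (int i = 1; i + 1 < n; ++i) {            // endpoints stay fixed
            const Vec2& pm2 = v[clampIdx(i - 2)];
            const Vec2& pm1 = v[i - 1];
            const Vec2& p   = v[i];
            const Vec2& pp1 = v[i + 1];
            const Vec2& pp2 = v[clampIdx(i + 2)];

            // Internal force: elasticity (second difference) and rigidity (fourth difference).
            double fx = alpha * (pm1.x - 2.0 * p.x + pp1.x)
                      - beta  * (pm2.x - 4.0 * pm1.x + 6.0 * p.x - 4.0 * pp1.x + pp2.x);
            double fy = alpha * (pm1.y - 2.0 * p.y + pp1.y)
                      - beta  * (pm2.y - 4.0 * pm1.y + 6.0 * p.y - 4.0 * pp1.y + pp2.y);

            // External force from the image term.
            Vec2 ext = edges.force(p.x, p.y);
            fx += ext.x;
            fy += ext.y;

            next[i].x = p.x + tau * fx;
            next[i].y = p.y + tau * fy;
            moved += std::abs(tau * fx) + std::abs(tau * fy);
        }
        v = next;
        if (moved < tol) break;                      // approximate convergence
    }
}
```

A production implementation would typically use the semi-implicit banded solve of Kass et al. instead of explicit descent, but the stopping rule is the same: iterate until the total change in v falls below a threshold.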

This method produces a smooth curve that accurately follows the vessel boundary defined by the image gradient, offering a better representation than the vessel tracing code alone. It converges on the best local boundary and can be used to identify each side of an individual vessel separately. The next section describes a more suitable application of this snake methodology for vessel extraction and precise boundary localization.

Figure 6 highlights the difference in boundary detection between the vessel tracing algorithm and the snake method. The top row shows the vessel boundaries identified by the tracing algorithm (red dots) and by the snake (yellow line) on both the original intensity image and the gradient magnitude image. The bottom row provides a closer view, revealing that the snake yields significantly smoother results, with boundary discrepancies of up to 2 pixels relative to the vessel tracing algorithm.

Ribbon Snakes

Ribbon snakes are effective tools for detecting linear features by identifying their left and right boundaries, and have been used, for example, to extract roads from aerial images. Because they model linear features bounded by contrasting edges, they capture vessel properties well and are therefore well suited to vessel boundary detection.

Figure 7: Illustration showing the parameters of a ribbon snake. Note the projection of the gradient on the unit normal n(s), used to further improve the boundary extraction.

Ribbon snakes extend the traditional snake model with a third parameter, width. Each position is defined as v(s) = (x(s), y(s), w(s)), where x(s) and y(s) give the center of the blood vessel and w(s) specifies the vessel's boundaries. This allows two distinct boundaries to be identified in the image.

The energy term E_image can be refined to reflect the fact that the vessel is darker than the surrounding area. On the left side of the ribbon snake the intensity transitions from light background to dark vessel, while on the right side it shifts from dark to light. This relationship can be expressed by requiring that the projection of the gradient vector onto the ribbon's unit normal be positive on the left side and negative on the right side, as illustrated in Figure 7. The formulation of E_image then becomes

E_image(v(s)) = [∇I(v_r(s)) − ∇I(v_l(s))] · n(s),    (6)

where n(s) is the unit normal at (x(s), y(s)) and v_r(s) and v_l(s) are the vessel's right and left boundaries, which can be expressed as

v_r(s) = (x(s), y(s)) + w(s) n(s),   v_l(s) = (x(s), y(s)) − w(s) n(s).    (7)

Using the centerline positions and widths produced by the modified Can algorithm, ribbon snakes can be initialized and solved in the same way as traditional snakes. The key distinction is that a ribbon snake yields two contours representing the vessel's boundaries. Ribbon snakes also allow additional constraints to keep the boundaries "almost parallel," preventing the ribbon's width from varying too rapidly with s.
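The sketch below evaluates the image energy of a single ribbon-snake sample following equations (6) and (7); the precomputed `GradientField` structure and the nearest-pixel lookup are simplifying assumptions.

```cpp
#include <algorithm>
#include <vector>

struct Vec2 { double x, y; };

// Precomputed image gradient, stored row-major, with a nearest-pixel lookup.
struct GradientField {
    int w, h;
    std::vector<double> gx, gy;
    Vec2 at(double x, double y) const {
        int xi = std::max(0, std::min(w - 1, static_cast<int>(x)));
        int yi = std::max(0, std::min(h - 1, static_cast<int>(y)));
        return { gx[yi * w + xi], gy[yi * w + xi] };
    }
};

// Image energy of one ribbon-snake sample: the state is a centre point, a unit
// normal n(s), and a width w(s); the right/left boundaries are the centre
// offset by +/- w(s) n(s) (equation 7), and the energy is the difference of the
// boundary gradients projected onto n(s) (equation 6).
double ribbonImageEnergy(const GradientField& grad,
                         Vec2 center, Vec2 n, double width)
{
    Vec2 right = { center.x + width * n.x, center.y + width * n.y };
    Vec2 left  = { center.x - width * n.x, center.y - width * n.y };
    Vec2 gr = grad.at(right.x, right.y);
    Vec2 gl = grad.at(left.x,  left.y);
    // For a dark vessel the gradient projection should be positive on the left
    // boundary and negative on the right, so minimising this energy favours
    // strong, correctly polarised boundary edges.
    return (gr.x - gl.x) * n.x + (gr.y - gl.y) * n.y;
}
```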

Summary

Our improved version of Can's tracing algorithm still yields boundaries that lack the accuracy needed for effective change detection. To address this, I will employ snakes for boundary refinement. The snakes will be initialized from the modified tracing results and will use the original image intensity structure to finalize the boundary locations. This approach should produce more consistent and reliable boundaries for change detection.

3 Validation of Vessel Detection Algorithms

Mission

To validate results and measure the performance of vessel detection algorithms.

Discussion

The most effective way to validate algorithms is to compare them to ground truth, a known correct answer describing the expected outcome. However, ground truth can be difficult to obtain, particularly for retinal images, so methods that approximate ground truth must be developed for validation.

Our analysis emphasizes the validation of segmentation algorithms, a need highlighted in the existing literature. This validation approach also lets us benchmark our feature extraction algorithm against other retinal image segmentation techniques.

We then extend this methodology to develop validation for vascular centerline extraction. Finally, we develop a method for determining the repeatability of blood vessel width estimation.

Previous Methods

Arriving at ground truth is a known hard problem in image analysis and pattern recognition systems [19]. Previous research in fundus image vessel segmentation [22, 34, 23, 17, 32, 39] has often approximated ground truth by creating a human-generated segmentation, also called a "gold standard," against which computer-generated segmentations are measured.

The gold standard is a binary segmentation in which each pixel is assigned a value of 1 for vessel and 0 for background. When a computer-generated result is compared to this gold standard, there are four possible outcomes for each pixel: true positive, true negative, false positive, and false negative. The gold standard is denoted G, with individual pixels G(x, y), while the computer-generated segmentation is denoted C, with pixels C(x, y).

The frequency of these cases provides data that can be used as an indication of an algorithm's performance.
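The comparison against a binary gold standard reduces to pixel-wise counting of the four outcomes, as in the following sketch (the 3×3 arrays in `main` are toy data):

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Binary segmentations stored row-major: 1 = vessel, 0 = background.
struct Counts { long tp = 0, tn = 0, fp = 0, fn = 0; };

// Compare a computer-generated segmentation C against a binary gold standard G.
Counts compare(const std::vector<int>& G, const std::vector<int>& C)
{
    Counts n;
    for (std::size_t i = 0; i < G.size(); ++i) {
        if (C[i] == 1 && G[i] == 1)      ++n.tp;   // vessel found where the gold standard has vessel
        else if (C[i] == 0 && G[i] == 0) ++n.tn;   // background agrees
        else if (C[i] == 1 && G[i] == 0) ++n.fp;   // spurious vessel pixel
        else                             ++n.fn;   // missed vessel pixel
    }
    return n;
}

int main()
{
    // Toy 3x3 example: a vertical vessel in G, an imperfect segmentation in C.
    std::vector<int> G = {0,1,0, 0,1,0, 0,1,0};
    std::vector<int> C = {0,1,0, 1,1,0, 0,0,0};
    Counts n = compare(G, C);
    std::printf("TP=%ld TN=%ld FP=%ld FN=%ld\n", n.tp, n.tn, n.fp, n.fn);
    return 0;
}
```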

Creating gold standard images is expensive and time-intensive, and relying on a single human expert's annotations is insufficient because of subjectivity and variability among observers. To develop a more reliable gold standard, multiple manual segmentations produced by different observers must be integrated. Given a collection of segmentations H, where each individual segmentation is denoted H_i, the goal is to derive a unified binary segmentation G that serves as the gold standard. A strategy must therefore be established to reconcile the differences among the H_i.

3.3.1 Creating Ground-truth from Conflicting Observers

To combine different observers' segmentation results, criteria for correct and incorrect vessel segmentation must be established. A conservative approach marks a pixel as a vessel in the gold standard only when all observers agree, and classifies all other pixels as background; this amounts to a Boolean "AND" across all segmentations. Alternatively, a majority-rule approach lets each segmentation cast a vote for a pixel: if 50% or more of the observers classify a pixel as a vessel, it is included in the gold standard, with the threshold adjustable to specific needs. The least conservative method classifies a pixel as a vessel if at least one observer identifies it as such, corresponding to a Boolean "OR" of the segmentations.
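A sketch of the three combination rules, applied pixel by pixel to K observer segmentations (the enum and function names are illustrative):

```cpp
#include <cstddef>
#include <vector>

enum class CombineRule { All, Majority, Any };   // Boolean AND, >= 50% vote, Boolean OR

// Combine K binary observer segmentations H[0..K-1] (1 = vessel) into a single
// binary gold standard G according to the chosen rule.
std::vector<int> combine(const std::vector<std::vector<int>>& H, CombineRule rule)
{
    const std::size_t K = H.size();
    const std::size_t N = H.front().size();
    std::vector<int> G(N, 0);
    for (std::size_t i = 0; i < N; ++i) {
        std::size_t votes = 0;
        for (std::size_t k = 0; k < K; ++k) votes += H[k][i];
        switch (rule) {
            case CombineRule::All:      G[i] = (votes == K); break;       // every observer agrees
            case CombineRule::Majority: G[i] = (2 * votes >= K); break;   // at least half agree
            case CombineRule::Any:      G[i] = (votes >= 1); break;       // any single observer
        }
    }
    return G;
}
```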

Because there is some uncertainty between observers, the above methods fail to capture that uncertainty in the gold standard. Consider, for instance, the case in which only 1 of 10 observers identifies a particular pixel as part of a blood vessel. It is unclear whether that one observer is correct or whether the majority are mistaken, and a binary gold standard cannot express the fact that the consensus is only 10%. Rather than deciding how many observers are required for a pixel to be considered "correct," this information should be captured in a modified gold standard in which each pixel records the degree of observer agreement.

Figure 8 demonstrates the creation of a multi-observer standard image from five individual hand tracings, where each pixel is assigned a value based on how many observers classified it as a vessel. Blue pixels indicate unanimous agreement among the five observers, while cyan, green, red, yellow, and grey represent decreasing levels of consensus. This approach uses probabilities to establish a non-binary gold standard for vessel segmentation, capturing the varying degrees of agreement among observers.

A Proposed Methodology

This methodology is driven by three key principles. First, it aims to establish a standard that accurately reflects the uncertainty among observers. Second, it introduces a relative cost scale for mislabeled pixels: mislabeling a pixel on which the observers agree unanimously is more costly than mislabeling one on which they disagree. Third, a computer segmentation should not be penalized when its result agrees with any observer's finding. The ultimate objective is for computer algorithms to produce results comparable to human performance, swiftly and consistently, without being penalized for achieving that parity.

3.4.1 Validation Using a Probabilistic Gold Standard

Consider a scenario with K observers (here K = 5). Each pixel in a computer-generated segmentation C(x, y) is represented by a binary random variable V, where V = 0 indicates background and V = 1 indicates vessel. An integer-valued random variable M, ranging from 0 to K, represents the number of observers who label the pixel as a vessel. The joint probability distribution P_VM(v, m) can be estimated from the computer-generated segmentation together with the manually scored segmentations, and serves as the basis for the vessel performance measure described below.

The measure S_vessel is calibrated to yield values between 0 and 1, with 1 representing a perfect score and 0 total failure. A perfect score means that every pixel identified as a vessel pixel by R or more observers is also labeled a vessel pixel in C. A complete failure means that none of those vessel pixels are correctly classified.

In evaluating the vessel pixels of C, pixels not labeled as vessels by R or more observers are assigned a value between 0 and 1 based on their classification in the probabilistic gold standard. This weighting means that a vessel pixel missed in C that all observers unanimously identify as a vessel carries more weight in the final score than a pixel agreed upon by only R observers. The parameter R can be adjusted to any suitable value; the value used here exemplifies a "majority rules" approach, as illustrated in Figure 8.

Figure 9 presents an enlarged section of the boxed area from Figure 8, using the same color scheme described in the caption of Figure 8. We define S_vessel using the criterion that a majority of the observers, 50% or more, agree (R = ⌈K/2⌉).

In the definition of S_vessel, the numerator is the expected value of M over pixels identified as vessels both by the computer algorithm and by R or more observers, reflecting the agreement between the algorithm and the majority. The denominator is the expected value of M over all pixels labeled as vessels by R or more observers, which serves as the benchmark for a "perfect score."

S_vessel does not account for the number of false positives, that is, the pixels incorrectly identified as vessels, yet this is essential to evaluating segmentation performance. The following measure addresses this aspect of segmentation accuracy.

In the false-positive measure F_vessel, the numerator is the joint probability that the computer algorithm labels a pixel as a vessel while no observer agrees with this classification. The denominator is the probability that the algorithm labels the pixel as a vessel and that R or more observers support this labeling. A score of zero indicates the absence of false positives, whereas a score of one means the number of false positives equals the number of true positives needed to attain the S_vessel score.

Using the same reasoning, similar metrics, denoted S_background and F_background, can be defined for the classification of non-vessel pixels.

The metrics S_vessel and S_background can be combined by averaging to produce an overall score between 0 and 1; a score closer to 1 indicates results closer to the multi-observer standard. Figure 10 illustrates these performance measures.
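The exact equations for these measures are only paraphrased above, so the following C++ sketch implements one reading of that description: the joint distribution of V and M is estimated from pixel counts, S_vessel and F_vessel follow the numerator/denominator descriptions given for vessel pixels, and the background measures are taken as the symmetric analogues weighted by K − M. Treat the formulas as assumptions to be checked against the thesis, not as its definitive definitions.

```cpp
#include <cstddef>
#include <vector>

// Probabilistic-gold-standard measures, following the prose description:
//   M = number of observers labelling a pixel as vessel (0..K)
//   V = computer label (1 = vessel, 0 = background)

struct Measures { double sVessel, fVessel, sBackground, fBackground; };

Measures evaluate(const std::vector<int>& C,                    // computer labels per pixel
                  const std::vector<std::vector<int>>& H,       // K observer labelings
                  int R)                                        // agreement threshold
{
    const int K = static_cast<int>(H.size());
    const std::size_t N = C.size();

    // count[v][m] = number of pixels with V = v and M = m.
    std::vector<std::vector<double>> count(2, std::vector<double>(K + 1, 0.0));
    for (std::size_t i = 0; i < N; ++i) {
        int m = 0;
        for (int k = 0; k < K; ++k) m += H[k][i];
        count[C[i]][m] += 1.0;
    }

    // S_vessel: expected M over pixels that both C and >= R observers call a
    // vessel, normalised by the expected M over all pixels with M >= R.
    double num = 0.0, den = 0.0;
    for (int m = R; m <= K; ++m) {
        num += m * count[1][m];
        den += m * (count[0][m] + count[1][m]);
    }
    double sVessel = (den > 0.0) ? num / den : 0.0;

    // F_vessel: pixels C calls vessel with no observer support, relative to
    // pixels C calls vessel that have the support of >= R observers.
    double fp = count[1][0];
    double tp = 0.0;
    for (int m = R; m <= K; ++m) tp += count[1][m];
    double fVessel = (tp > 0.0) ? fp / tp : 0.0;

    // Background analogues (assumed symmetric), weighting pixels by the number
    // of observers that call them background, K - M.
    double bnum = 0.0, bden = 0.0;
    for (int m = 0; m <= K - R; ++m) {
        bnum += (K - m) * count[0][m];
        bden += (K - m) * (count[0][m] + count[1][m]);
    }
    double sBackground = (bden > 0.0) ? bnum / bden : 0.0;

    double bfp = count[0][K];               // labelled background though every observer saw a vessel
    double btp = 0.0;
    for (int m = 0; m <= K - R; ++m) btp += count[0][m];
    double fBackground = (btp > 0.0) ? bfp / btp : 0.0;

    // The combined score described in the text is the average of sVessel and sBackground.
    return { sVessel, fVessel, sBackground, fBackground };
}
```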

3.4.2 Validation for Results Generated by Vessel Extraction Algorithms

While the techniques above are effective for evaluating segmentations of entire vessels against the background, they are not directly applicable to algorithms that determine vessel centerlines. One approach is to convert trace results into segmentation results: most tracing algorithms report, for each traced point, the vessel width and boundary locations, so all pixels between neighboring trace points can be filled in according to those widths or boundaries. The resulting segmentation can then be evaluated as described above.

An alternative is to evaluate each point of the centerline trace directly against the multi-observer standard. Exploratory tracing algorithms produce vessel center points at fixed or varying intervals. A straightforward procedure would count a centerline point as a true positive if it falls on any non-zero (vessel) pixel of the multi-observer standard, contributing to the S_vessel measure; false positives can be recorded in the same way, allowing performance metrics to be calculated. However, this method overlooks true and false negatives, which are harder to define, so a more comprehensive approach that incorporates them is needed.

To establish a reference trace, the multi-observer standard can itself be traced with a tracing algorithm and manually adjusted if necessary. The resulting trace, referred to as G_Trace, serves as the benchmark against which other tracing results are evaluated, including the assessment of false positives and true positives.

Measuring segmentation performance against multiple manual observers reveals significant inter-observer variation, as illustrated by the original image and the multi-observer gold standard created from five hand-traced images. The segmentation produced by Hoover's algorithm scores S_vessel = 0.69 and S_background = 0.98, but with a relatively high vessel false-positive measure (F_vessel = 0.14); the modified Can algorithm achieves a higher S_vessel of 0.94 at the cost of more false positives (F_vessel = 0.35). For traces, true positives, false positives, and false negatives are determined using a distance tolerance d: a trace point is a true positive if it lies within d of the gold-standard trace, while false positives and false negatives are identified by the absence of a correspondence within this distance.
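A sketch of this distance-tolerance scoring of a computed centerline against a reference trace such as G_Trace; the brute-force search and the struct names are illustrative.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };
struct TraceScore { int truePos = 0, falsePos = 0, falseNeg = 0; };

// Score a computed centerline trace against a reference trace (e.g. G_Trace)
// using a distance tolerance d: a computed point within d of some reference
// point is a true positive, otherwise a false positive; reference points with
// no computed point within d are false negatives.
TraceScore scoreTrace(const std::vector<Point>& computed,
                      const std::vector<Point>& reference, double d)
{
    TraceScore s;
    std::vector<bool> refMatched(reference.size(), false);
    const double d2 = d * d;

    for (const Point& c : computed) {
        bool matched = false;
        for (std::size_t j = 0; j < reference.size(); ++j) {
            double dx = c.x - reference[j].x, dy = c.y - reference[j].y;
            if (dx * dx + dy * dy <= d2) { matched = true; refMatched[j] = true; }
        }
        if (matched) ++s.truePos; else ++s.falsePos;
    }
    for (std::size_t j = 0; j < refMatched.size(); ++j)
        if (!refMatched[j]) ++s.falseNeg;
    return s;
}
```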

4 Using Multiple Images to Improve Vessel Extraction

Mission

This study aims to precisely identify blood vessel regions in individual images by leveraging data from corresponding areas across multiple images captured during the same patient visit, ensuring consistent results from these identical regions in all images.

Discussion

To accurately detect longitudinal changes in vessels between two sets of images, the vessels extracted from each image must be complete and precise. However, discrepancies often arise between multiple images taken during a single session, especially when they cover different areas of the retina. Although the tracing results provide sufficient data for registration, their accuracy must be improved before change detection is attempted. It is therefore necessary to identify these differences and reconcile the results.

The differences manifest in two primary ways: tracing errors and boundary discrepancies. Tracing errors include false negatives, where vessel segments are missing in some images, and false positives, or phantom traces, where segments appear incorrectly; both are clearly visible in comparative images. In addition, even when a vessel is present in multiple images, its boundaries may not align, complicating the detection of longitudinal change. To accurately assess change over time, each image set must be made as accurate as possible by identifying and reconciling discrepancies in both vessel existence and boundary location.

The images in Figure 11 illustrate the variability in vessel detection within a single visit: the top two images show the overall results, and the bottom images provide close-ups of specific regions. Regions labeled A, B, C, and D in the left image contain traces absent from the right image, while regions B and E in the right image show traces missing from the left. Combining the findings from both images could improve the overall tracing; however, this must be done carefully to avoid increasing false positives.

A Proposed Methodology

To achieve more complete and accurate trace results, three steps are followed. First, tracing is performed on each individual image using the modified Can tracing algorithm. Next, the images are registered with the Dual Bootstrap ICP registration algorithm, which produces transformations identifying corresponding points across all images. Finally, tracing is run a second time, simultaneously across all images, merging the results for corresponding points at each step. If discrepancies arise between individual images, a resolution scheme determines how to proceed, using strategies such as:

• a combining or averaging of the image intensities prior to attempting vessel detection;

• a combining or averaging of the template responses during vessel boundary detection.

Each of these schemes incorporates a confidence score that weights the contribution of each individual image. The score can be derived from various factors, including global or local contrast measures, vessel boundary strength, or the success of prior iterations; for example, areas of higher local contrast may be weighted more heavily. The process continues until every region in every image has been traced, ensuring that all images reflect consistent tracing results for each region.
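As a minimal illustration of the second strategy, the sketch below merges per-image template responses at one corresponding location using a normalized confidence weighting; the specific weighting is an assumption, since the text only lists possible sources for the confidence score.

```cpp
#include <cstddef>
#include <vector>

// Merge boundary-template responses from several registered images at one
// corresponding location: each image contributes its response weighted by a
// per-image confidence (for example, a local contrast measure).
double combineResponses(const std::vector<double>& responses,
                        const std::vector<double>& confidences)
{
    double weighted = 0.0, total = 0.0;
    for (std::size_t i = 0; i < responses.size(); ++i) {
        weighted += confidences[i] * responses[i];
        total    += confidences[i];
    }
    return (total > 0.0) ? weighted / total : 0.0;   // fall back to 0 if nothing contributes
}
```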

5 Detecting and Quantifying Vessel Change

Mission

To accurately detect and quantify the degree of blood vessel change in longitudinal sets of images.

Discussion

Detecting changes in vessels over time is challenging because of variations in image capture, specifically the differing distances and angles from the camera to the retina. These variations produce scale differences and projective discrepancies between corresponding points in the images. They can be addressed through registration, which aligns the images into a common scale-space so that distances and directions between corresponding points are consistent; this is essential for comparing distances and detecting changes in width. The DBICP registration algorithm provides a reliable and precise way to register fundus images with detectable features.

Previous Methods

Giansanti et al. [15] evaluate retinal changes by analyzing vessel diameter, paths, tortuosity, and the angles of vessel crossings. However, their methodology assesses these metrics only in a single image, with no systematic approach to registering results and detecting change across multiple images. Their tracing algorithm is further limited in that it only tracks vessels originating from the optic nerve, even though not all vessels in an image connect to the optic nerve and many images do not include the optic nerve at all.

Berger et al. [2] present a technique for detecting vessel changes by providing users with pairs of registered images. They propose two ways of using these pairs: the first creates slides that can be superimposed or viewed with a stereo viewer, while the second, "alternation flicker," displays the registered images on a computer monitor in rapid alternation at rates on the order of 0.3 to 10 Hz. Both methods depend on the user to detect changes and are therefore prone to error. Their registration algorithm is a custom-developed, nonrigid polynomial warping technique that requires manual selection of six corresponding points between images; as a result, their registration is not as accurate or robust as the DBICP algorithm, which will be used here instead.

Proposed Methodology

An automated system for detecting changes in vessel width must fulfill three requirements: a method for locating vessels within an image, a technique for identifying corresponding vessel segments, and a means of measuring and assessing changes in width. These components are illustrated in Figure 12 and elaborated in the following sections.

The first requirement for detecting vessel change is the ability to identify vessels in images accurately and consistently. This includes accurately locating continuous vessel boundaries and improving the trace results obtained from the multiple images captured in a single session, which will be accomplished with the methodologies described in Sections 2 and 4.

The second requirement is accurate registration of the images using the DBICP registration algorithm. Registration places the images in a common scale-space, which is essential for determining corresponding vessel cross sections and comparing their widths. Identifying corresponding vessels can still be challenging, particularly when a vessel is absent from one image but present in another, so the search for corresponding vessels must account for such discrepancies.

One such search strategy matches vessels identified in one image with those in another. If two vessels lie within a specified distance of each other, additional criteria are applied to confirm the match, such as checking that they run in the same direction within a local area. If these conditions are not satisfied, the vessel may have evaded detection or may no longer be present. In that case, the image lacking the vessel is re-examined to determine whether the vessel is truly missing or was simply missed: vessel extraction is attempted again in the area where the vessel was lost, using different parameter settings such as lower local thresholds. Only after this second detection attempt fails should the vessel be deemed absent.
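A sketch of such a correspondence test between two registered vessel traces; the distance and angle thresholds, and the sampled-centerline representation, are illustrative assumptions.

```cpp
#include <cmath>
#include <vector>

// A traced vessel after registration into the common coordinate frame: a
// sampled centerline with a local unit direction (tangent) at each sample.
struct VesselSample { double x, y, dx, dy; };
using Vessel = std::vector<VesselSample>;

// Decide whether two registered vessels correspond: some sample of `a` must lie
// within maxDist of a sample of `b` whose local direction agrees to within
// maxAngle radians.
bool correspond(const Vessel& a, const Vessel& b, double maxDist, double maxAngle)
{
    const double d2 = maxDist * maxDist;
    const double minCos = std::cos(maxAngle);
    for (const VesselSample& p : a) {
        for (const VesselSample& q : b) {
            double dx = p.x - q.x, dy = p.y - q.y;
            if (dx * dx + dy * dy > d2) continue;
            // Compare directions up to sign, since the two traces may have been
            // followed in opposite orders.
            double c = std::fabs(p.dx * q.dx + p.dy * q.dy);
            if (c >= minCos) return true;
        }
    }
    return false;
}
```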

Various techniques have been used to estimate vessel diameters, all based on the principle of measuring the vessel perpendicular to its local longitudinal axis. Each technique establishes a cross section and locates the vessel boundaries within that section to measure the width.

A common approach places the vessel boundary points where the cross-sectional intensity profile is at half-height between its maximum and minimum levels; this half-height intensity change method is used in several studies. Some researchers instead fit a parabola to the intensity profile to determine the width, while others fit Gaussian curves and define the boundaries at a specified distance from the mean of the Gaussian.
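A sketch of the half-height measurement applied to a single cross-sectional profile of a dark vessel; the sub-pixel boundary locations are obtained by linear interpolation, which is an implementation choice rather than something specified in the text.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Half-height width of a dark vessel from a 1-D cross-sectional intensity
// profile sampled perpendicular to the vessel.  The boundary on each side is
// the sub-pixel location where the profile crosses the level halfway between
// its minimum (vessel core) and maximum (background).  Returns the width in
// samples, or -1 if no crossing is found.
double halfHeightWidth(const std::vector<double>& profile)
{
    auto minIt = std::min_element(profile.begin(), profile.end());
    auto maxIt = std::max_element(profile.begin(), profile.end());
    if (*maxIt - *minIt <= 0.0) return -1.0;                 // flat profile: no vessel
    std::size_t c = static_cast<std::size_t>(minIt - profile.begin());
    double half = 0.5 * (*minIt + *maxIt);

    // Walk left from the minimum until the profile rises above the half level,
    // then interpolate the crossing linearly.
    double left = -1.0;
    for (std::size_t i = c; i > 0; --i) {
        if (profile[i - 1] >= half) {
            double t = (half - profile[i]) / (profile[i - 1] - profile[i]);
            left = static_cast<double>(i) - t;
            break;
        }
    }
    // Walk right from the minimum likewise.
    double right = -1.0;
    for (std::size_t i = c; i + 1 < profile.size(); ++i) {
        if (profile[i + 1] >= half) {
            double t = (half - profile[i]) / (profile[i + 1] - profile[i]);
            right = static_cast<double>(i) + t;
            break;
        }
    }
    return (left >= 0.0 && right >= 0.0) ? right - left : -1.0;
}
```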

Gang et al. [12] recently introduced amplitude-modified second-order Gaussian filters for measuring vessel width, with an adaptive method for estimating the value of σ.

Figure 13 presents two methods for measuring width from an intensity profile. The method on the left uses the half-height intensity change, placing boundaries at points midway between the maximum and minimum intensity values. The method on the right fits a Gaussian curve and places the boundaries at a specified distance from the mean, here 1σ. Note that the two approaches yield different widths for the same intensity profile.

The filter

f(x) = (x² − σ²) e^(−x²/(2σ²)) / (√(2π) σ³)

achieves its peak response for vessels of a specific diameter when the appropriate σ is applied. A linear relationship between vessel diameter and σ is given by d = 2.03σ + 0.99, where σ is the value that maximizes the filter's response. These methods are limited, however, by their reliance on thresholds such as σ and on estimates, including curve fits and half-height levels, which are susceptible to errors from angular discretization.
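A sketch of the adaptive estimate implied by this relation: the profile is correlated with the filter over a range of σ values, the σ giving the strongest response is kept, and the diameter follows from d = 2.03σ + 0.99. The σ search range and step below are illustrative choices.

```cpp
#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// Amplitude-modified second-order Gaussian filter from the text:
//   f(x) = (x^2 - sigma^2) exp(-x^2 / (2 sigma^2)) / (sqrt(2 pi) sigma^3)
double amsogFilter(double x, double sigma)
{
    return (x * x - sigma * sigma) * std::exp(-x * x / (2.0 * sigma * sigma))
           / (std::sqrt(2.0 * kPi) * sigma * sigma * sigma);
}

// Adaptive width estimate: correlate a cross-sectional profile (dark vessel on
// a bright background, centred in the array) with the filter for a range of
// sigma values, keep the sigma with the strongest response, and map it to a
// diameter with d = 2.03 sigma + 0.99.
double estimateDiameter(const std::vector<double>& profile)
{
    const int center = static_cast<int>(profile.size()) / 2;
    double bestSigma = 1.0;
    double bestResponse = -1e300;

    for (double sigma = 0.5; sigma <= 8.0; sigma += 0.1) {
        double response = 0.0;
        for (int i = 0; i < static_cast<int>(profile.size()); ++i)
            response += amsogFilter(static_cast<double>(i - center), sigma) * profile[i];
        if (response > bestResponse) { bestResponse = response; bestSigma = sigma; }
    }
    return 2.03 * bestSigma + 0.99;   // linear relation reported in the text
}
```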

To minimize measurement error, widths will be measured directly as the distance between two points on the smooth curves representing the detected boundaries, taken perpendicular to the vessel's orientation. Width measurements at corresponding points on corresponding vessels are then compared, and if the difference exceeds the maximum expected physiological variation of 4.8%, which accounts for changes in vessel diameter over the cardiac cycle, that section of vessel is flagged for further observation.

The detected differences are presented so as to highlight the areas with the most significant changes for the physician's review. This can be done with a vessel difference map, which color-codes vessel segments according to the magnitude of the detected change; physicians can then focus on the most intensely colored regions for further examination.

Validating Vessel Widths

The accuracy of detecting changes in vessel width depends heavily on the precision of the width measurements, which in turn depends on the accuracy of boundary detection. Validating the width measurements therefore proceeds in two steps: first establishing the repeatability of boundary detection, and then confirming the repeatability of the width calculations.

This validation detects vessels and boundaries in multiple images of a single patient taken during one session. By registering the images and comparing corresponding boundaries, widths are measured at the same sites in each image. The expected variation in width attributable solely to the cardiac cycle can account for changes of up to 4.8%; any further differences in width are attributed to the measurement methodology.

In addition to providing algorithms for a diagnostic tool for physicians, several core technical innovations are expected from the project.

1. A new model for detecting retina vessel boundaries.

2. A method for detecting width change in blood vessels between two images.

3. A method for improving the completeness and accuracy of vessel detection using information from multiple images.

4. Methods for establishing probabilistic gold standards for fundus image segmentation.

5. Measures/metrics for comparing an algorithm's vessel segmentation with the gold standard or with another algorithm.

Milestone and expected date of completion:

4. Gold Standard/Validation Method: May 2003

6. Report to West Point: June 27, 2003

Below is a current assessment of my progress in each area of my research.

Publication: From January to May 2002 I authored a book chapter [10] describing several retina vessel detection models and algorithms and providing examples of their applications. The abstract follows:

Quantitative morphometry of the retinal vasculature is of broad interest in ophthalmology and in the study of diseases affecting vascular structure and function. Key features such as bifurcations and crossovers are particularly significant for developmental biologists and for clinicians studying conditions such as hypertension and diabetes. Accurate segmentation and tracing of the retinal vasculature provide essential spatial landmarks for image registration, with direct applications in change detection, mosaic synthesis, real-time tracking, and spatial referencing. Change detection in turn plays a vital role in supporting clinical trials, high-volume reading centers, and large-scale screening initiatives.

This chapter explores the leading model-based algorithms for segmenting and tracing the retinal vasculature, highlighting models tailored to specific applications. It discusses essential implementation choices through the lens of a real-time algorithm and outlines techniques for extracting critical points such as bifurcations and crossovers. The chapter also examines the application of vessel morphometric data, presents methods for generating ground-truth images for comparison with computer-generated results, and provides an experimental analysis of the RPI-Trace and ERPR landmark determination algorithms developed by our group.

References

[1] K. Akita and H. Kuga. A computer method of understanding ocular fundus images. Pattern Recognition, 15(6):431–443, 1982.

[2] J. W. Berger, T. R. Patel, D. S. Shin, J. R. Piltz, and R. A. Stone. Computerized stereo chronoscopy and alternation flicker to detect optic nerve head contour change. Ophthalmology, 107(7):1316–1320, 2000.

[3] O. Brinchmann-Hansen and H. Heier. Theoretical relations between light streak characteristics and optical properties of retinal vessels. Acta Ophthalmologica, pages 33–37, 1986.

[4] A. Can, H. Shen, J. N. Turner, H. L. Tanenbaum, and B. Roysam. Rapid automated tracing and feature extraction from live high-resolution retinal fundus images using direct exploratory algorithms. IEEE Transactions on Information Technology in Biomedicine, 3(2):125–138, 1999.

[5] N. Chapman, N. Witt, X. Gao, A. Bharath, A. Stanton, S. Thom, and A. Hughes. Computer algorithms for the automated measurement of retinal arteriolar diameters. British Journal of Ophthalmology, 85:74–79, 2001.

[6] S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum. Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Transactions on Medical Imaging, 8(3):263–269, 1989.

[7] H. Chen, V. Patel, J. Wiek, S. Rassam, and E. Kohner. Vessel diameter changes during the cardiac cycle. Eye, 8:97–103, 1994.

[8] O. Chutatape, L. Zheng, and S. Krishnan. Retinal blood vessel detection and tracking by matched Gaussian and Kalman filters. In Proceedings of the IEEE International Conference on Engineering in Medicine and Biology Society, volume 20, pages 3144–3149, 1998.

[9] Delori et al. Evaluation of micrometric and microdensitometric methods for measuring the width of retinal vessels in fundus photographs. Graefe's Archive for Clinical and Experimental Ophthalmology, 1988.

[10] K. Fritzsche, A. Can, H. Shen, C. Tsai, J. Turner, H. Tanenbaum, C. Stewart, and B. Roysam. Automated model based segmentation, tracing and analysis of retinal vasculature from digital fundus images. In J. S. Suri and S. Laxminarayan, editors, State-of-The-Art Angiography, Applications and Plaque Imaging Using MR, CT, Ultrasound and X-rays. Academic Press, 2002.

[11] P. Fua and Y. G. Leclerc. Model driven edge detection. Machine Vision and Applications, 3:45–56, 1990.

[12] L. Gang, O. Chutatape, and S. M. Krishnan. Detection and measurement of retinal vessels in fundus images using amplitude modified second-order Gaussian filter. IEEE Transactions on Biomedical Engineering, 2002.

[13] X. Gao, A. Bharath, A. Stanton, A. Hughes, N. Chapman, and S. Thom. Measurement of vessel diameters on retinal images for cardiovascular studies. In Proceedings of Medical Image Understanding and Analysis, 2001.

[14] X. Gao, A. Bharath, A. Stanton, A. Hughes, N. Chapman, and S. Thom. A method of vessel tracking for vessel diameter measurement on retinal images. In Proceedings of the IEEE International Conference on Image Processing, pages 881–884, 2001.

[15] R. Giansanti, P. Fumelli, G. Passerini, and P. Zingaretti. Imaging system for retinal change evaluation. In Sixth International Conference on Image Processing and Its Applications, volume 2, pages 530–534, 1997.

[16] Hammer et al. Monte Carlo simulation of retinal vessel profiles for the interpretation of in vivo oxymetric measurements by imaging fundus reflectometry. In Proceedings of SPIE: Medical Applications of Lasers in Dermatology, Ophthalmology, Dentistry, and Endodontics, 1997.

[17] A. Hoover, V. Kouznetsova, and M. Goldbaum. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Transactions on Medical Imaging, 19(3):203–210, 2000.

[18] A. Houben, M. Canoy, H. Paling, P. Derhaag, and P. de Leeuw. Quantitative analysis of retinal vascular changes in essential and renovascular hypertension. Journal of Hypertension, 13:1729–1733, 1995.

[19] J. Hu, R. Kashi, D. Lopresti, G. Nagy, and G. Wilfong. Why table ground-truthing is hard. In Proceedings of the Sixth International Conference on Document Analysis and Recognition, pages 129–133, 2001.

[20] J. Jagoe, C. Blauth, P. Smith, J. Arnold, K. Taylor, and W. R. Quantification of retinal damage done during cardiopulmonary bypass: comparison of computer and human assessment. IEE Proceedings on Communications, Speech and Vision, 137(3):170–175, 1990.

[21] P. Jasiobedzki, D. McLeod, and C. Taylor. Detection of non-perfused zones in retinal images. In Computer-Based Medical Systems: Fourth Annual IEEE Symposium, pages 162–169, 1991.

[22] P. Jasiobedzki, C. Taylor, and J. Brunt. Automated analysis of retinal images. Image and Vision Computing, 3(11):139–144, 1993.

[23] P. Jasiobedzki, C. Williams, and F. Lu. Detecting and reconstructing vascular trees in retinal images. In M. Loew, editor, SPIE Conference on Image Processing, volume

[24] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. International Journal of Computer Vision, 1(4):321–331, 1988.

[25] Kochner et al. Course tracking and contour extraction of retinal vessels from color fundus photographs: most efficient use of steerable filters for model based image analysis. In SPIE Conference on Image Processing, 1998.

[26] H. Li and O. Chutatape. Fundus image feature extraction. In Proceedings of the IEEE International Conference on Engineering in Medicine and Biology, Chicago, IL, July 2000.

[27] H. Mayer, I. Laptev, A. Baumgartner, and C. Steger. Automatic road extraction based on multiscale modeling. International Archives of Photogrammetry and Remote Sensing, 32:47–56, 1997.

[28] F. Miles and A. Nutall. Matched filter estimation of serial blood vessel diameters from video images. IEEE Transactions on Medical Imaging, 12(2):147–152, 1993.

[29] W. Neuenschwander, P. Fua, L. Iverson, G. Szekely, and O. Kubler. Ziplock snakes. International Journal of Computer Vision, 25(3):191–201, 1997.

[30] Niessen et al. Error metrics for quantitative evaluation of medical image segmentation. In Klette, Stiehl, Viergever, and Vincken, editors, Performance Characterization in Computer Vision. Kluwer Academic Publishers, The Netherlands, 2000.

[31] L. Pedersen, M. Grunkin, B. Ersbøll, K. Madsen, M. Larsen, N. Christoffersen, and U. Skands. Quantitative measurement of changes in retinal vessel diameter in ocular fundus images. Pattern Recognition, 21:1215–1223, 2000.

[32] A. Pinz, S. Bernogger, P. Datlinger, and A. Kruger. Mapping the human retina. IEEE Transactions on Medical Imaging, 17(4):606–620, August 1998.

[33] D. Roberts. Analysis of vessel absorption profiles in retinal oximetry. Medical Physics, 14(1):124–139, 1987. (Discusses vessel absorption profiles, uses a four-parameter curve fitting procedure, and describes the central reflex.)

[34] C. Sinthanayothin, J. Boyce, H. Cook, and T. Williamson. Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images. The British Journal of Ophthalmology, 83(3):902–910, August 1999.

[35] N. Solouma, A. Youssef, Y. Badr, and Y. Kadah. Real-time retinal tracking for laser treatment planning and administration. In SPIE Conference on Medical Imaging: Image Processing, volume 4322, pages 1311–1321, 2001.

[36] A. Stanton, B. Wasan, A. Cerutti, S. Ford, R. Marsh, P. Sever, S. Thom, and A. Hughes. Vascular network changes in the retina with age and hypertension. Journal of Hypertension, 13(12):1724–1728, 1995.

[37] C. Stewart, C.-L. Tsai, and B. Roysam. The bootstrap iterative closest point algorithm with application to retinal image registration. International Journal of Computer Vision, submitted 2002.

[38] I. Suzuma et al. Cyclic stretch and hypertension induce retinal expression of vascular endothelial growth factor and vascular endothelial growth factor receptor-2: Potential mechanisms for exacerbation of diabetic retinopathy by hypertension. Diabetes, 2001.

[39] W. Tan, Y. Wang, and S. Lee. Retinal blood vessel detection using frequency analysis and local-mean-interpolation filters. In SPIE Conference Medical Imaging: Image Processing, volume 4322, pages 1373–1384, 2001.

[40] G. Tascini, G. Passerini, P. Puliti, and P. Zingaretti. Retina vascular network recognition. In SPIE Conference on Image Processing, volume 1898, pages 322–329, 1993.

[41] Y. Wang and S. Lee. A fast method for automated detection of blood vessels in retinal images. In M. Fargues and R. Hippenstiel, editors, Signals, Systems & Computers,

[42] T. Wong, R. Klein, A. Sharrett, B. Duncan, D. Couper, J. Tielsch, B. Klein, and L. Hubbard. Retinal arteriolar narrowing and risk of coronary heart disease in men and women: The Atherosclerosis Risk in Communities Study. Journal of the American Medical Association, 287(9):1153–1159, March 6, 2002.

[43] T. Y. Wong, R. Klein, A. R. Sharrett, M. I. Schmidt, J. S. Pankow, D. J. Couper, B. E. K. Klein, L. D. Hubbard, and B. B. Duncan. Retinal arteriolar narrowing and risk of diabetes mellitus in middle-aged persons. Journal of the American Medical Association, 287(19):2528–2533, May 15, 2002.

[44] F. Zana and J.-C. Klein. Robust segmentation of vessels from retinal angiography. In Proceedings International Conference on Digital Signal Processing, pages 1087–1090, 1997.
