    Figure: (b) with the estimated ALI in blue, and the left (yellow) and right (red) region domains. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
    A comprehensive study was also carried out in Zhong et al. (2017) comparing tissue architectures on two large tumour series, i.e. Glioblastoma Multiforme (GBM) and Kidney Clear Cell Carcinoma (KCCC), which concluded that: (a) CMF and features based on feature learning (i.e., deep learning approaches) outperform other texture-/colour-based features; (b) incorporating cellular saliency (e.g., nuclei in tissue histology sections) weakens the performance of models built upon pixel-/patch-level features; and (c) a sparse feature encoder significantly improves the classification performance of models built upon CMF. Bianconi et al. (2015) used a few sets of appearance features based on visual perception to discriminate tumour epithelium from stroma in colorectal cancer. These features are related to human perception and can be interpreted by a pathologist in a meaningful way.
    Fig. 4. The probabilistic maps generated by our approach for WSI (a) and TMA (b). Arranged as columns are the original images, and the associated probabilistic maps of epithelium, stroma and background tissues, respectively.
    3. Proposed method
    The proposed image analysis pipeline consists of 3 stages (see Fig. 2). In the first stage, we segment the epithelium using a novel approach based on fuzzy c-means and a Signed Pressure Force level set: the Fuzzy Signed Pressure Force approach (FSPF), with the fuzzy c-means used to control the evolution of the contour over time. In the second stage, we calculate for each independent component of the produced epithelium map a set of novel morphometric features based on the Axis of Least Inertia (ALI), and a set of novel appearance descriptors based on Weighted Average Intensity profiles (WAI), which are invariant to staining. Finally, we distinguish between tumour and normal epithelium using two Self-Organizing Maps (SOMs), one for each of the two feature sets, previously trained on manually labelled images.
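    As an illustration of how these three stages could be chained, the following Python sketch is provided. It is not the authors' implementation: the helpers fspf_segment, ali_features and wai_features, the SOM classify interface, and the way the two SOM decisions are combined are all hypothetical placeholders.

from skimage import io, measure

def classify_epithelial_components(image_path, som_morphometric, som_appearance):
    """Illustrative three-stage pipeline: segment, describe, classify."""
    image = io.imread(image_path)

    # Stage 1: epithelium segmentation with the fuzzy c-means guided
    # Signed Pressure Force level set (hypothetical helper).
    epithelium_mask = fspf_segment(image)

    # Stage 2: one pair of feature vectors per connected component.
    labels = measure.label(epithelium_mask)
    decisions = {}
    for region in measure.regionprops(labels):
        component_mask = labels == region.label
        morpho = ali_features(component_mask)          # ALI-based shape descriptors (hypothetical)
        appear = wai_features(image, component_mask)   # stain-invariant WAI profiles (hypothetical)

        # Stage 3: two pre-trained SOMs, one per feature set; how their outputs
        # are fused is not specified here, so a simple joint vote is assumed.
        votes = [som_morphometric.classify(morpho), som_appearance.classify(appear)]
        decisions[region.label] = votes.count("tumour") >= 1
    return decisions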
    In the following, we discuss in detail each component of the proposed system, providing deep insights into the major contributions of the system.
    3.1. Epithelium segmentation
    We propose a novel machine-learning approach to epithelium segmentation capable of dealing with a wide variety of stains, and with both positively- and negatively-stained epithelium. It is based on a variant of the Signed Pressure Force method, and incorporates information about the epithelial area into the segmentation framework.
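    As a rough sketch of the fuzzy c-means step referred to in the pipeline overview, the listing below computes soft cluster memberships from pixel features. The number of clusters, the fuzzifier m, and the use of raw grey-scale intensities as features are assumptions made only for illustration, and the way the resulting membership map constrains the level-set evolution is not reproduced here.

import numpy as np

def fuzzy_cmeans(features, n_clusters=3, m=2.0, n_iter=100, eps=1e-9):
    """Plain fuzzy c-means. features: (n_samples, n_dims) array.
    Returns (n_samples, n_clusters) memberships and the cluster centres."""
    rng = np.random.default_rng(seed=0)
    u = rng.random((features.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                  # memberships sum to 1 per sample
    for _ in range(n_iter):
        w = u ** m
        centres = (w.T @ features) / (w.sum(axis=0)[:, None] + eps)
        dist = np.linalg.norm(features[:, None, :] - centres[None, :, :], axis=2) + eps
        u = dist ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return u, centres

# Example: per-cluster membership maps for a grey-scale image `img` of shape (H, W).
# memberships, centres = fuzzy_cmeans(img.reshape(-1, 1).astype(float))
# membership_maps = memberships.reshape(*img.shape, -1)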
    The SPF method (Zhang, Zhang, Song, & Zhou, 2010) combines the advantages of the Chan-Vese “active contour without edges” model (C−V), which is region-based (Chan & Vese, 2001), with those of the Geodesic Active Contour (GAC) model, which is edge-based (Caselles, Kimmel, & Sapiro, 1997). It statistically models the
    information inside and outside the contour to construct a region-based Signed Pressure Force (SPF) function. The SPF function is so called because, based on the signs of the forces inside and outside the contour, it tends to make the contour C shrink when it is outside the region of interest and expand otherwise. The evolution of the contour is controlled by the following PDE:

\[ \frac{\partial \phi}{\partial t} = \operatorname{spf}(I(x))\,\alpha\,|\nabla \phi|, \qquad x \in \Omega, \]
    where I(x) is an input image, α is a balloon force parameter that modulates the propagation speed of the contour, φ is a level set function (Zhang et al., 2010), and the SPF function spf is defined as
\[ \operatorname{spf}(I(x)) = \frac{I(x) - \dfrac{c^{+}(C) + c^{-}(C)}{2}}{\max\!\left( \left| I(x) - \dfrac{c^{+}(C) + c^{-}(C)}{2} \right| \right)}, \qquad x \in \Omega, \]
    where c+(C) and c−(C) are the global average intensities inside and outside the contour C, respectively.
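    For illustration, one iteration of this region-based evolution might look as follows in NumPy, using the simplified scheme of Zhang et al. (2010) in which the level set is regularised by Gaussian smoothing instead of an explicit curvature term. The time step, the balloon force α, and the smoothing scale are assumed values; this is a sketch rather than the implementation used in this work.

import numpy as np
from scipy.ndimage import gaussian_filter

def spf_iteration(image, phi, alpha=20.0, dt=1.0, sigma=1.0):
    """One update of the level set phi driven by the SPF function."""
    inside = phi >= 0
    c_plus = image[inside].mean() if inside.any() else 0.0        # c+(C)
    c_minus = image[~inside].mean() if (~inside).any() else 0.0   # c-(C)

    # Signed pressure force, normalised to [-1, 1] by its maximum magnitude.
    spf = image.astype(float) - (c_plus + c_minus) / 2.0
    spf /= np.abs(spf).max() + 1e-12

    # Evolve: d(phi)/dt = alpha * spf(I) * |grad(phi)|.
    gy, gx = np.gradient(phi)
    phi = phi + dt * alpha * spf * np.hypot(gx, gy)

    # Gaussian regularisation keeps the level set smooth (Zhang et al., 2010).
    return gaussian_filter(phi, sigma)

    Repeated calls to spf_iteration, starting from an initial level set that is positive inside a seed region and negative outside, drive the contour towards the boundary of the region of interest.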