\documentclass[twocolumn,twoside]{article}
\makeatletter\if@twocolumn\PassOptionsToPackage{switch}{lineno}\else\fi\makeatother
\usepackage{amsfonts,amssymb,amsbsy,latexsym,amsmath,tabulary,graphicx,times,xcolor}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The following additional macros provide functionality
% that is not available in the class used.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{url,multirow,morefloats,floatflt,cancel,tfrupee}
\makeatletter
\AtBeginDocument{\@ifpackageloaded{textcomp}{}{\usepackage{textcomp}}}
\makeatother
\usepackage{colortbl}
\usepackage{xcolor}
\usepackage{pifont}
\usepackage[nointegrals]{wasysym}
\urlstyle{rm}
\makeatletter
%%%For Table column width calculation.
\def\mcWidth#1{\csname TY@F#1\endcsname+\tabcolsep}
%%Hacking center and right align for table
\def\cAlignHack{\rightskip\@flushglue\leftskip\@flushglue\parindent\z@\parfillskip\z@skip}
\def\rAlignHack{\rightskip\z@skip\leftskip\@flushglue \parindent\z@\parfillskip\z@skip}
%Etal definition in references
\@ifundefined{etal}{\def\etal{\textit{et~al}}}{}
%\if@twocolumn\usepackage{dblfloatfix}\fi
\usepackage{ifxetex}
\ifxetex\else\if@twocolumn\@ifpackageloaded{stfloats}{}{\usepackage{dblfloatfix}}\fi\fi
\AtBeginDocument{
\expandafter\ifx\csname eqalign\endcsname\relax
\def\eqalign#1{\null\vcenter{\def\\{\cr}\openup\jot\m@th
\ialign{\strut$\displaystyle{##}$\hfil&$\displaystyle{{}##}$\hfil
\crcr#1\crcr}}\,}
\fi
}
%For fixing hardfail when unicode letters appear inside table with endfloat
\AtBeginDocument{%
\@ifpackageloaded{endfloat}%
{\renewcommand\efloat@iwrite[1]{\immediate\expandafter\protected@write\csname efloat@post#1\endcsname{}}}{\newif\ifefloat@tables}%
}%
\def\BreakURLText#1{\@tfor\brk@tempa:=#1\do{\brk@tempa\hskip0pt}}
\let\lt=<
\let\gt=>
\def\processVert{\ifmmode|\else\textbar\fi}
\let\processvert\processVert
\@ifundefined{subparagraph}{
\def\subparagraph{\@startsection{paragraph}{5}{2\parindent}{0ex plus 0.1ex minus 0.1ex}%
{0ex}{\normalfont\small\itshape}}%
}{}
% These are now gobbled, so won't appear in the PDF.
\newcommand\role[1]{\unskip}
\newcommand\aucollab[1]{\unskip}
\@ifundefined{tsGraphicsScaleX}{\gdef\tsGraphicsScaleX{1}}{}
\@ifundefined{tsGraphicsScaleY}{\gdef\tsGraphicsScaleY{.9}}{}
% To automatically resize figures to fit inside the text area
\def\checkGraphicsWidth{\ifdim\Gin@nat@width>\linewidth
\tsGraphicsScaleX\linewidth\else\Gin@nat@width\fi}
\def\checkGraphicsHeight{\ifdim\Gin@nat@height>.9\textheight
\tsGraphicsScaleY\textheight\else\Gin@nat@height\fi}
\def\fixFloatSize#1{}%\@ifundefined{processdelayedfloats}{\setbox0=\hbox{\includegraphics{#1}}\ifnum\wd0<\columnwidth\relax\renewenvironment{figure*}{\begin{figure}}{\end{figure}}\fi}{}}
\let\ts@includegraphics\includegraphics
\def\inlinegraphic[#1]#2{{\edef\@tempa{#1}\edef\baseline@shift{\ifx\@tempa\@empty0\else#1\fi}\edef\tempZ{\the\numexpr(\numexpr(\baseline@shift*\f@size/100))}\protect\raisebox{\tempZ pt}{\ts@includegraphics{#2}}}}
%\renewcommand{\includegraphics}[1]{\ts@includegraphics[width=\checkGraphicsWidth]{#1}}
\AtBeginDocument{\def\includegraphics{\@ifnextchar[{\ts@includegraphics}{\ts@includegraphics[width=\checkGraphicsWidth,height=\checkGraphicsHeight,keepaspectratio]}}}
\DeclareMathAlphabet{\mathpzc}{OT1}{pzc}{m}{it}
\def\URL#1#2{\@ifundefined{href}{#2}{\href{#1}{#2}}}
%%For url break
\def\UrlOrds{\do\*\do\-\do\~\do\'\do\"\do\-}%
\g@addto@macro{\UrlBreaks}{\UrlOrds}
\edef\fntEncoding{\f@encoding}
\def\EUoneEnc{EU1}
\makeatother
\def\floatpagefraction{0.8}
\def\dblfloatpagefraction{0.8}
\def\style#1#2{#2}
\def\xxxguillemotleft{\fontencoding{T1}\selectfont\guillemotleft}
\def\xxxguillemotright{\fontencoding{T1}\selectfont\guillemotright}
\newif\ifmultipleabstract\multipleabstractfalse%
\newenvironment{typesetAbstractGroup}{}{}%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage[authoryear]{natbib}
\makeatletter\input{size10-pointfive.clo}\makeatother%
\definecolor{kwdboxcolor}{RGB}{242,242,242}
\usepackage[hidelinks,colorlinks=true,allcolors=blue]{hyperref}
\linespread{1}
\def\floatpagefraction{0.8}
\usepackage[paperheight=11.69in,paperwidth=8.26in,top=1in,bottom=1in,left=1in,right=.75in,headsep=24pt]{geometry}
\usepackage{multirow-custom}
\makeatletter
\def\hlinewd#1{%
\noalign{\ifnum0=`}\fi\hrule \@height #1%
\futurelet\reserved@a\@xhline}
\def\tbltoprule{\hlinewd{1pt}\\[-14pt]}
\def\tblbottomrule{\noalign{\vspace*{6pt}}\hline\noalign{\vspace*{2pt}}}
\def\tblmidrule{\hline\noalign{\vspace*{2pt}}}
\let\@articleType\@empty
\let\@journalDoi\@empty
\let\@journalVolume\@empty
\let\@journalIssue\@empty
\let\@crossMarkLink\@empty
\let\@receivedDate\@empty
\let\@acceptedDate\@empty
\let\@revisedDate\@empty
\let\@copyrightYear\@empty
\let\@firstPage\@empty
\def\articleType#1{\gdef\@articleType{#1}}
\def\journalDoi#1{\gdef\@journalDoi{#1}}
\def\crossMarkLink#1{\gdef\@crossMarkLink{#1}}
\def\receivedDate#1{\gdef\@receivedDate{#1}}
\def\acceptedDate#1{\gdef\@acceptedDate{#1}}
\def\revisedDate#1{\gdef\@revisedDate{#1}}
\def\copyrightYear#1{\gdef\@copyrightYear{#1}}
\def\journalVolume#1{\gdef\@journalVolume{#1}}
\def\journalIssue#1{\gdef\@journalIssue{#1}}
\def\firstPage#1{\gdef\@firstPage{#1}}
\def\author#1{%
\gdef\@author{%
\hskip-\dimexpr(\tabcolsep)\hskip5pt%
\parbox{\dimexpr\textwidth-1pt}%
{\fontsize{11}{13}\selectfont\raggedright #1}%
}%
}
\usepackage{pharmascope-abs}
\usepackage{caption}
\usepackage{lastpage}
\usepackage{fancyhdr}
\usepackage[noindentafter,explicit]{titlesec}
\usepackage{fontspec}
\setmainfont[%
BoldFont=cambriab.otf,%
ItalicFont=CAMBRIAI.otf,%
BoldItalicFont=CAMBRIAZ.otf]{Cambria.otf}
\def\title#1{%
\gdef\@title{%
\vspace*{-40pt}%
\ifx\@articleType\@empty\else{\fontsize{10}{12}\scshape\selectfont\hspace{8pt}\@articleType\hfill\mbox{}\par\vspace{2pt}}\fi%
\minipage{\linewidth}
\hrulefill\\[-0.7pt]%
\mbox{~}\hspace{5pt}\parbox{.1\linewidth}{\includegraphics[width=75pt,height=50pt]{ijrps_logo.png}}\hfill
\fcolorbox{kwdboxcolor}{kwdboxcolor}{\parbox{.792\linewidth}{%
\begin{center}\fontsize{17}{17}\selectfont\scshape\vskip-7pt International Journal of Research in Pharmaceutical Sciences\hfill\end{center}%
\vspace*{-10pt}\hspace*{4pt}{\fontsize{8}{9}\selectfont Published by JK Welfare \& Pharmascope Foundation\hfill Journal Home Page: \href{http://www.pharmascope.org/ijrps}{\color{blue}\underline{\smash{www.pharmascope.org/ijrps}}}}\hspace*{4pt}\mbox{}}}%
\par\vspace*{-1pt}\rule{\linewidth}{1.3pt}%
\endminipage%
\par\vspace*{9.2pt}\parbox{.98\linewidth}{\linespread{.9}\raggedright\fontsize{14}{17}\selectfont #1}%
\vspace*{-8pt}%
}
}
\setlength{\parindent}{0pt}
\setlength{\parskip}{0.4pc plus 1pt minus 1pt}
\def\abbrvJournalTitle{Int. J. Res. Pharm. Sci.}
\fancypagestyle{headings}{%
\renewcommand{\headrulewidth}{0pt}%
\renewcommand{\footrulewidth}{0.3pt}
\fancyhf{}%
\fancyhead[R]{%
\fontsize{9.12}{11}\selectfont\RunningAuthor,\ \abbrvJournalTitle,\ \ifx\@journalVolume\@empty X\else\@journalVolume\fi%
\ifx\@journalIssue\@empty\else(\@journalIssue)\fi%
,\ \ifx\@firstPage\@empty 1\else\@firstPage\fi-\pageref*{LastPage}%
}%
\fancyfoot[LO,RE]{\fontsize{9.12}{11}\selectfont\textcopyright\ International Journal of Research in Pharmaceutical Sciences}%
\fancyfoot[RO,LE]{\fontsize{9.12}{11}\selectfont\thepage}
}\pagestyle{headings}
\fancypagestyle{plain}{%
\renewcommand{\headrulewidth}{0pt}%
\renewcommand{\footrulewidth}{0.3pt}%
\fancyhf{}%
\fancyhead[R]{%
\fontsize{9.12}{11}\selectfont\RunningAuthor,\ \abbrvJournalTitle,\ \ifx\@journalVolume\@empty X\else\@journalVolume\fi%
\ifx\@journalIssue\@empty\else(\@journalIssue)\fi%
,\ \ifx\@firstPage\@empty 1\else\@firstPage\fi-\pageref*{LastPage}%
}%
\fancyfoot[LO,RE]{\fontsize{9.12}{11}\selectfont\textcopyright\ International Journal of Research in Pharmaceutical Sciences}%
\fancyfoot[RO,LE]{\fontsize{9.12}{11}\selectfont\thepage}
\ifx\@firstPage\@empty\else\setcounter{page}{\@firstPage}\fi
}
\def\NormalBaseline{\def\baselinestretch{1.1}}
\usepackage{textcase}
\setcounter{secnumdepth}{0}
\titleformat{\section}[block]{\bfseries\boldmath\NormalBaseline\filright\fontsize{10.5}{13}\selectfont}
{\thesection}
{6pt}
{\MakeTextUppercase{#1}}
[]
\titleformat{\subsection}[block]{\bfseries\boldmath\NormalBaseline\filright\fontsize{10.5}{12}\selectfont}
{\thesubsection}
{6pt}
{#1}
[]
\titleformat{\subsubsection}[block]{\NormalBaseline\filright\fontsize{10.5}{12}\selectfont}
{\thesubsubsection}
{6pt}
{#1}
[]
\titleformat{\paragraph}[block]{\NormalBaseline\filright\fontsize{10.5}{10}\selectfont}
{\theparagraph}
{6pt}
{#1}
[]
\titleformat{\subparagraph}[block]{\NormalBaseline\filright\fontsize{10.5}{12}\selectfont}
{\thesubparagraph}
{6pt}
{#1}
[]
\titlespacing{\section}{0pt}{.5\baselineskip}{.5\baselineskip}
\titlespacing{\subsection}{0pt}{.5\baselineskip}{.5\baselineskip}
\titlespacing{\subsubsection}{0pt}{.5\baselineskip}{.5\baselineskip}
\titlespacing{\paragraph}{0pt}{.5\baselineskip}{.5\baselineskip}
\titlespacing{\subparagraph}{0pt}{.5\baselineskip}{.5\baselineskip}
\captionsetup[figure]{skip=1.4pt,font=bf,labelsep=colon,justification=raggedright,singlelinecheck=false}
\captionsetup[table]{skip=1.4pt,font=bf,labelsep=colon,justification=raggedright,singlelinecheck=false}
\def\bibyear#1{#1}
\def\bibjtitle#1{#1}
\def\bibauand{}
\setlength\bibsep{3pt}
\setlength\bibhang{8pt}
\makeatother
\date{}
\usepackage{float}
\begin{document}
\def\RunningAuthor{Shobha Rani N and Chinmayi S Rao}
\firstPage{2071}
\articleType{Original Article}
\receivedDate{09.03.2019}
\acceptedDate{18.06.2019}
\revisedDate{14.06.2019}
\journalVolume{10}
\journalIssue{3}
\journalDoi{ijrps.v10i3.1423}
\copyrightYear{2019}
\def\authorCount{2}
\def\affCount{1}
\def\journalTitle{International Journal of Research in Pharmaceutical Sciences}
\title{\textbf{Exploration and evaluation of efficient pre-processing and segmentation technique for breast cancer diagnosis based on mammograms}}
\author{Shobha Rani N\textsuperscript{*},
Chinmayi S Rao~\\[5pt]{Department of computer science\unskip, Amrita School of Arts and Sciences, Mysuru, Amrita Vishwa Vidyapeetham, India}}
\begin{abstract}
Breast cancer is the second leading cause of death for women worldwide. Since the cause of the disease remains unknown, early detection and diagnosis are the key challenges in breast cancer control. In this work, mammogram images are first pre-processed using a Laplacian filter to enhance tumour regions; Gaussian mixture model (GMM), Gaussian kernel fuzzy c-means (GKFCM), Otsu global thresholding and fuzzy c-means (FCM) techniques are then employed for segmentation. The efficiency of the segmentation techniques is analyzed by classifying the samples into benign, malignant and healthy using Gray Level Co-occurrence Matrix (GLCM) features with a linear discriminant analysis classifier, and ensemble methods are also evaluated; the ensemble-based method yields the better accuracy. The experimentation is conducted on the mini-MIAS database of mammograms, where the efficiency of the linear discriminant analyzer with GLCM features is found to be 89.19\% with GKFCM, 83.78\% with Otsu and 78.38\% with FCM segmentation.
\end{abstract}\def\keywordstitle{Keywords}
\begin{keywords}CLAHE (Contrast Limited Adaptive Histogram Equalization),\newline LoG (Laplacian of Gaussian),\newline GMM (Gaussian Mixture Model),\newline GLCM (Gray Level Co-occurrence Matrix),\newline Ensemble Based Technique
\end{keywords}
\twocolumn[ \maketitle {\printKwdAbsBox}]
\makeatletter\textsuperscript{*}Corresponding Author\par Name:\ Shobha Rani N~\\ Phone:\ +91-9741316315~\\ Email:\ nshobharani@asas.mysore.amrita.edu
\par\vspace*{-11pt}\hrulefill\par{\fontsize{12}{14}\selectfont ISSN: 0975-7538}\par%
\textsc{DOI:}\ \href{https://doi.org/10.26452/\@journalDoi}{\textcolor{blue}{\underline{\smash{https://doi.org/10.26452/\@journalDoi}}}}\par%
\vspace*{-11pt}\hrulefill\\{\fontsize{9.12}{10.12}\selectfont Production and Hosted by}\par{\fontsize{12}{14}\selectfont Pharmascope.org}\par%
\vspace*{-7pt}{\fontsize{9.12}{10.12}\selectfont\textcopyright\ \@copyrightYear\ $|$ All rights reserved.}\par%
\vspace*{-11pt}\rule{\linewidth}{1.2pt}
\makeatother
\section{Introduction}
Breast cancer is a prime cause of illness among women and leads to death all around the world \unskip~\citep{562071:12937287}. According to statistical data from the WCR, breast tumours account for up to 30\% of cancers diagnosed, resulting in 15\% of cancer deaths worldwide. Fortunately, early diagnosis of breast cancer is a great help in treating the disease and in alleviating both the physical pain and the psychological distress experienced by patients \unskip~\citep{562071:12937299,562071:12937289}.
Breast cancers that are found because they cause symptoms tend to be larger and more advanced. Abnormalities such as the existence of a breast mass, a change in the shape or dimension of the breast, differences in the colour of the breast skin and breast aches are some symptoms of breast cancer. By contrast, breast cancers found through screening exams are more likely to be smaller and still confined to the breast. The size of a carcinoma and how far it has spread are two of the most important factors in predicting the prognosis of women with the disease \unskip~\citep{562071:12937319}.
Computerized imaging techniques are used for screening abnormalities of masses formed in the breast. Mammography is one of the widely used imaging standards for the diagnosis of breast cancer. In particular cases, diagnostic techniques are combined with clinical breast exams and breast self-exams, so the possibility of locating cancer is even greater. A diagnosis at an early stage may allow treatment to be prescribed while the tumour is still benign, may reduce the impact of a stressful life, and may additionally save the patient's life.
Several other imaging techniques exist to diagnose breast cancer, such as magnetic resonance imaging (MRI), ultrasound imaging and biopsy. MRI uses magnetic energy to screen the breasts, so it can reveal abnormalities within them and whether they are symmetric or asymmetric. Diagnoses of the breast are classified into normal, benign and malignant, and it is common for computer-aided diagnosis (CAD) systems to contribute to this classification. Tumours can be identified as locally low-density areas on mammograms; however, their absolute intensity values are not constant and vary with size, background changes, photographic conditions, and so on.
\bgroup
\fixFloatSize{images/eb9a685b-a72c-4ec2-b4c1-55759ba31005-upicture1.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/eb9a685b-a72c-4ec2-b4c1-55759ba31005-upicture1.png}{\includegraphics{images/eb9a685b-a72c-4ec2-b4c1-55759ba31005-upicture1.png}}{}
\makeatother
\caption{\boldmath {Samples of Mammogram Images.}}
\label{f-27f79b06c96c}
\end{figure}
\egroup
\bgroup
\fixFloatSize{images/20c69488-32aa-4acc-99ec-d767820f3d70-upicture2.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/20c69488-32aa-4acc-99ec-d767820f3d70-upicture2.png}{\includegraphics{images/20c69488-32aa-4acc-99ec-d767820f3d70-upicture2.png}}{}
\makeatother
\caption{\boldmath {Automated Diagnosis System for Detection of Breast Cancer Using Mammogram Analysis}}
\label{f-d3c1b68c8b24}
\end{figure}
\egroup
\section{Literature Survey}
A review of some existing techniques for detecting masses in mammograms follows. One technique analyses DBT images using multiple-instance learning and a convolutional network applied to 2D slices \unskip~\citep{562071:12937281}. In a different work, breast cancer diagnosis is carried out using a shallow CNN and a deep CNN (virtual image rendering and feature generation) \unskip~\citep{562071:12937282}. Detection of lesions using convolutional neural networks is performed by \unskip~\citet{562071:12937283}. Breast cancer is predicted using a single-hidden-layer feed-forward network with a correlation-based strategy for high-dimensional data \unskip~\citep{562071:12937284}. A similar study classifies histological images of breast tissue using nuclei-based characteristics and texture features \unskip~\citep{562071:12937286}. Gray Level Co-occurrence Matrix (GLCM) texture features are employed for classification by \unskip~\citep{562071:12937287,562071:12937310,562071:12937309}. Cancer is also predicted on the basis of temperature differences between the breasts, with thermograms classified into normal and abnormal using SVM classification \unskip~\citep{562071:12937454}. Deep neural networks and recursive feature elimination have likewise been used for classification \unskip~\citep{562071:12937288}.
Furthermore, a Content Based Image Retrieval (CBIR) approach using semi-automatic segmentation improved the texture-based representation of masses to achieve the best trade-off between CBIR and cancer diagnosis \unskip~\citep{562071:12937289}. Work on breast ultrasound imaging has mainly concentrated on various segmentation approaches to detect breast cancer \unskip~\citep{562071:12937290}. Large-scale deep learning for the detection of mammographic lesions uses convolutional neural networks to enhance the performance of computer-aided diagnosis systems and to improve region-based lesion/mass detection \unskip~\citep{562071:12937291,562071:12937292,562071:12937300,562071:12937305}. Breast cancer can be classified using fuzzy rule-based logic to improve prediction and analysis \unskip~\citep{562071:12937293,562071:12937298}. A deep multi-instance network classifies mammograms into benign or malignant \unskip~\citep{562071:12937294}. A Hybrid computer-aided-diagnosis system for Prediction of Breast Cancer Recurrence (HPBCR) uses optimized ensemble learning to detect the cancer accurately and precisely \unskip~\citep{562071:12937295}. Cancer has also been detected using texture features and classified with a support vector machine \unskip~\citep{562071:12937296,562071:12937312}.
Wavelet energy entropy (WEE) and a linear regression classifier (LRC) are two mature techniques used in an automatic program for detecting abnormal breasts; their results are better than five state-of-the-art methods, with an accuracy of about 91.85\% \unskip~\citep{562071:12937297}. The large number of nuclei in high-resolution digitized pathology images has been exploited for the detection of breast carcinoma using a Stacked Sparse Auto-Encoder (SSAE); the SSAE method obtained an improved F-measure value of 8.49\% and a precision-recall curve value of 78.83\%, and performed well compared to nine other state-of-the-art nuclei detection methods \unskip~\citep{562071:12937301,562071:12937302}. A genetic algorithm and rotation forest have been used to diagnose breast cancer, obtaining one of the highest classification accuracies reported; a classification accuracy of about 99.8\% was achieved with a Support Vector Machine (SVM) \unskip~\citep{562071:12937304,562071:12937315}. A genetically optimized neural network model with proposed genetic programming crossover and mutation operators has been applied to diagnose cancer; the genetic programming algorithm helps reach solutions faster and yields more generalized solutions \unskip~\citep{562071:12937306}. Detection of breast abnormality from thermograms has been performed using curvelet-transform-based feature extraction. Breast thermography is an imaging modality that represents the temperature variations of the breast as intensity variations on an image, and in the last decade several studies have evaluated the potential of breast thermograms in detecting abnormal breast conditions; the curvelet frequency domain is used to improve the efficiency \unskip~\citep{562071:12937318}.
A hybrid approach for MRI breast cancer diagnosis combines the advantages of fuzzy sets, adaptive ant-based clustering and a multilayer perceptron neural network (MLPNN) classifier, in conjunction with a statistical feature extraction technique \unskip~\citep{562071:12937314}.
\section{Materials and Methods}
The proposed method to discover the masses consists of three stages. The goal of the first stage is to enhance the image using different pre-processing filters that remove noise and highlight the tumour region. In the next stage, the tumour is extracted from the enhanced images using the proposed segmentation techniques. Finally, a feature matrix of 13 GLCM features is created and used to classify the extracted tumour area into benign, malignant and normal with a Linear Discriminant Analyzer (LDA). A detailed explanation of the methodology and a discussion of the results follow. The datasets for segmentation and classification are taken from the mini-MIAS samples; a few samples are shown in Figure~\ref{f-27f79b06c96c}. This work proposes a mammogram classification system in which a mammogram is classified as normal, benign or malignant. The proposed methodology of breast cancer classification is shown in Figure~\ref{f-d3c1b68c8b24}.
\subsection{Pre-Processing}
\subsubsection{Contrast Limited Adaptive Histogram Equalization}
We first discuss pre-processing and enhancement of mammogram images by contrast limited adaptive histogram equalization (CLAHE), which enhances the contrast of the mammogram while limiting the amplification of noise.
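To make the clip-limit idea concrete, the following is a minimal numpy sketch (an illustration only, not the paper's implementation): each tile's histogram is clipped before the equalization mapping is built, which bounds noise amplification. Full CLAHE additionally blends neighbouring tile mappings bilinearly; the tile count, bin count and clip limit below are assumed values.

```python
import numpy as np

def clipped_hist_equalize(tile, clip_limit=0.03, n_bins=256):
    """Equalize one tile with a clipped histogram (the CLAHE core idea).

    Counts above clip_limit * tile size are clipped and redistributed
    uniformly, which limits the noise amplification of plain
    histogram equalization. Intensities are assumed to lie in [0, 1].
    """
    hist, _ = np.histogram(tile, bins=n_bins, range=(0.0, 1.0))
    limit = max(1, int(clip_limit * tile.size))
    excess = np.sum(np.maximum(hist - limit, 0))
    hist = np.minimum(hist, limit) + excess // n_bins
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                    # normalize CDF to [0, 1]
    idx = np.clip((tile * (n_bins - 1)).astype(int), 0, n_bins - 1)
    return cdf[idx]

def clahe_lite(img, tiles=4, clip_limit=0.03):
    """Apply clipped equalization tile by tile (no bilinear blending)."""
    out = np.empty_like(img, dtype=np.float64)
    h, w = img.shape
    hs, ws = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            sl = (slice(i * hs, (i + 1) * hs if i < tiles - 1 else h),
                  slice(j * ws, (j + 1) * ws if j < tiles - 1 else w))
            out[sl] = clipped_hist_equalize(img[sl], clip_limit)
    return out
```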
\subsubsection{Homomorphic filter}
This filter is used to eliminate multiplicative noise and certain characteristics from the images, and to correct non-uniform illumination.
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{1}
\let\theHequation\theequation
\label{dfg-08b42054a8ca}
\begin{array}{@{}l}n(e,f)=a(e,f)\cdot b(e,f)\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Where $n$ is the image, $a$ is the illumination and $b$ is the reflectance. Illumination usually varies slowly across the image, whereas the reflectance component may change quite suddenly at object edges. This distinction is the key to separating the illumination part from the reflectance part. In homomorphic filtering, we first transform the multiplicative components into additive components by moving to the log domain.
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{2}
\let\theHequation\theequation
\label{dfg-fe81f7b801cf}
\begin{array}{@{}l}\ln(\;n(e,f))=\ln(\;a(e,f))+\ln(\;b(e,f))\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Applying the Fourier transform yields the equation below:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{3}
\let\theHequation\theequation
\label{dfg-a5ba2d72f319}
\begin{array}{@{}l}n(k,l)=a(k,l)+b(k,l)\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Finally, the exponential function is applied to cancel the log transform and obtain the new image.
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{4}
\let\theHequation\theequation
\label{dfg-141db439d9e4}
\begin{array}{@{}l}new(k,l)=exp(n(k,l))\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
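The four steps above (log transform, Fourier transform, frequency-domain filtering, exponential) can be sketched in numpy as follows; the Gaussian high-emphasis transfer function and its gains are illustrative assumptions, not values from the paper.

```python
import numpy as np

def homomorphic_filter(img, sigma=10.0, low_gain=0.5, high_gain=1.5):
    """Homomorphic filtering: log -> FFT -> high-emphasis -> IFFT -> exp.

    Illumination varies slowly (low frequencies) while reflectance
    carries detail (high frequencies); attenuating the low band and
    boosting the high band evens out non-uniform lighting.
    sigma, low_gain and high_gain are illustrative choices.
    """
    h, w = img.shape
    log_img = np.log1p(img.astype(np.float64))     # eq. (2): move to log domain
    spec = np.fft.fftshift(np.fft.fft2(log_img))   # eq. (3): frequency domain
    y, x = np.ogrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    d2 = x * x + y * y
    # Gaussian high-emphasis transfer function: low_gain at DC, high_gain far out
    filt = low_gain + (high_gain - low_gain) * (1.0 - np.exp(-d2 / (2.0 * sigma ** 2)))
    filtered = np.fft.ifft2(np.fft.ifftshift(spec * filt)).real
    return np.expm1(filtered)                      # eq. (4): back from log domain
```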
\subsubsection{High boost filter}
It is often desirable to emphasize the high-frequency components representing image details without eliminating the low-frequency components representing the basic form of the signal; the high boost filter does exactly this.
\subsubsection{Laplacian of Gaussian filter}
Laplacian filters are derivative filters used to find areas of rapid change in images. Since derivative filters are very sensitive to noise, it is common to smooth the image before applying the Laplacian. This two-step process is known as the LoG operation.
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{5}
\let\theHequation\theequation
\label{dfg-95ee450d6fb8}
\begin{array}{@{}l}L(x,y)=\nabla^{2}f(x,y)=\frac{\partial ^{2}f(x,y)}{\partial x^{2}}+\frac{\partial ^{2}f(x,y)}{\partial y^{2}}\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Where $L(x,y)$ is the Laplacian and $f(x,y)$ is the intensity of the image.
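A minimal numpy sketch of the two-step LoG operation, Gaussian smoothing followed by a discrete Laplacian (the kernel sizes and sigma are illustrative):

```python
import numpy as np

def convolve2d_same(img, kernel):
    """Minimal 'same'-size 2-D correlation with zero padding.

    Equivalent to convolution for the symmetric kernels used here.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def log_filter(img, sigma=1.0, radius=3):
    """Gaussian smoothing followed by a 3x3 Laplacian (the LoG operation)."""
    ax = np.arange(-radius, radius + 1)
    g = np.exp(-ax ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    # separable Gaussian smoothing: horizontal then vertical pass
    smoothed = convolve2d_same(convolve2d_same(img, g[None, :]), g[:, None])
    laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
    return convolve2d_same(smoothed, laplacian)
```

A flat region gives zero response, while a bright spot gives a strong negative response at its centre.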
\subsubsection{Histogram equalization}
This method increases the global contrast of many images, especially when the usable information of the image is represented by close contrast values. Through this adjustment, the intensities are better distributed on the histogram, which allows areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this by effectively spreading out the most frequent intensity values.
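The equalization mapping is simply the normalized cumulative histogram; a minimal numpy sketch, assuming intensities in [0, 1]:

```python
import numpy as np

def hist_equalize(img, n_bins=256):
    """Global histogram equalization via the normalized CDF.

    Each intensity is mapped to the fraction of pixels at or below it,
    which spreads the most frequent intensity values across the full
    range and raises global contrast.
    """
    hist, _ = np.histogram(img, bins=n_bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                    # normalized CDF in [0, 1]
    idx = np.clip((img * (n_bins - 1)).astype(int), 0, n_bins - 1)
    return cdf[idx]                                   # look up the new intensity
```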
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{6}
\let\theHequation\theequation
\label{dfg-ff5e63fb8ab2}
\begin{array}{@{}l}I_{hb}=I_0+cI_{hp}=(W_{ap}+cW_{hp})\ast I_0=W_{hb}\ast I_0\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Where $I_{hb}$ is the high boost filtered image, $I_0$ is the original image, $c$ is a constant, $I_{hp}$ is the high-pass (detail) image, and $W_{hb}$ is the high boost convolution kernel.
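Using a simple box blur as the low-pass stage, the high boost relation $I_{hb}=I_0+cI_{hp}$ can be sketched as follows (the blur choice and the value of $c$ are illustrative):

```python
import numpy as np

def box_blur(img, radius=1):
    """Simple mean (box) blur used as the low-pass stage."""
    size = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(size):
        for j in range(size):
            out += padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out / (size * size)

def high_boost(img, c=1.5, radius=1):
    """I_hb = I_0 + c * I_hp, with I_hp = I_0 - lowpass(I_0)."""
    highpass = img - box_blur(img, radius)
    return img + c * highpass
```

Flat regions are unchanged, while intensity steps overshoot, emphasizing detail.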
\subsubsection{Anisotropic Diffusion}
This filter removes noise without removing significant parts of the image, such as the edges, lines and other details that are needed to interpret the image. It builds on the Laplacian below; since derivative filters are very sensitive to noise, the diffusion is weighted so that smoothing is suppressed across strong edges.
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{7}
\let\theHequation\theequation
\label{dfg-b1798f7e6406}
\begin{array}{@{}l}L(x,y)=\frac{\partial ^{2}I}{\partial x^{2}}+\frac{\partial ^{2}I}{\partial y^{2}}\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
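The paper does not give the diffusion scheme; a common choice is the Perona-Malik formulation, sketched below with illustrative parameters, in which an edge-stopping function suppresses diffusion across strong gradients.

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=0.1, gamma=0.2):
    """Perona-Malik anisotropic diffusion (illustrative sketch).

    Each iteration diffuses intensity along the four axial directions,
    weighted by an edge-stopping function that shrinks where the local
    gradient is large, so edges are preserved while flat regions are
    smoothed. kappa and gamma are illustrative settings.
    """
    def g(d):
        # edge-stopping function: near 1 in flat areas, near 0 at edges
        return np.exp(-(d / kappa) ** 2)

    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # one-sided differences toward each neighbour (zero flux at borders)
        dn = np.zeros_like(u); dn[1:, :] = u[:-1, :] - u[1:, :]
        ds = np.zeros_like(u); ds[:-1, :] = u[1:, :] - u[:-1, :]
        de = np.zeros_like(u); de[:, :-1] = u[:, 1:] - u[:, :-1]
        dw = np.zeros_like(u); dw[:, 1:] = u[:, :-1] - u[:, 1:]
        u += gamma * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

The scheme conserves the mean intensity while shrinking the noise variance.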
\subsubsection{Adaptive mean filter}
This filter performs spatial processing to determine which pixels in an image are affected by impulse noise. It classifies pixels as noise by comparing each pixel in the image to its surrounding neighbour pixels; both the size of the neighbourhood and the threshold for the comparison are adjustable. A pixel that differs sufficiently from the majority of its neighbours is treated as noise and replaced.
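A sketch of the thresholded neighbourhood test described above (the window size and threshold are assumptions):

```python
import numpy as np

def adaptive_mean_filter(img, radius=1, threshold=0.2):
    """Replace pixels that deviate strongly from their neighbourhood.

    A pixel is treated as impulse noise when it differs from the mean
    of its surrounding window by more than `threshold`; only those
    pixels are replaced, so uncorrupted detail is left untouched.
    """
    padded = np.pad(img, radius, mode="edge")
    size = 2 * radius + 1
    out = img.astype(np.float64).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + size, j:j + size]
            # neighbourhood mean excluding the centre pixel
            mean = (window.sum() - img[i, j]) / (size * size - 1)
            if abs(img[i, j] - mean) > threshold:
                out[i, j] = mean
    return out
```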
\subsubsection{Unsharp mask filter}
This filter creates an unsharp (negative) mask by blurring the original image and subtracting it from the original, which yields an edge image that can be added back to sharpen the result.
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{8}
\let\theHequation\theequation
\label{dfg-321d34b7ad08}
\begin{array}{@{}l}g(x,y)=f(x,y)-f_s\;(x,y)\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Where $f$ is the input image, $g$ is the edge image formed by the unsharp filter, and $f_s$ is the smoothed version of the input image.
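Following eq. (8), the sketch below forms $g=f-f_s$ with a separable Gaussian blur as $f_s$ and adds $g$ back to sharpen; the blur parameters and the amount are illustrative.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0, radius=2):
    """Separable Gaussian blur (the smoothed version f_s)."""
    ax = np.arange(-radius, radius + 1)
    g = np.exp(-ax ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    # horizontal pass
    p = np.pad(img, ((0, 0), (radius, radius)), mode="edge")
    h = sum(g[k] * p[:, k:k + img.shape[1]] for k in range(2 * radius + 1))
    # vertical pass
    p = np.pad(h, ((radius, radius), (0, 0)), mode="edge")
    return sum(g[k] * p[k:k + img.shape[0], :] for k in range(2 * radius + 1))

def unsharp_mask(img, amount=1.0, sigma=1.0):
    """g = f - f_s (eq. 8); the sharpened result adds g back to f."""
    edge = img - gaussian_blur(img, sigma)
    return img + amount * edge
```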
\subsection{Segmentation}
Image segmentation is an important and essential step, one of the most difficult tasks in image processing and pattern recognition, and it determines the quality of the final analysis. A computer-aided diagnosis system can assist the medical specialist in reading and interpreting mammograms; the goal of segmentation here is to find the suspicious regions and thereby assist radiologists in diagnosis.
\subsubsection{GMM segmentation}
The GMM is a statistical model that describes how the data properties are distributed in the parameter space. A GMM is defined as a linear combination of Gaussian density functions, i.e.
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{9}
\let\theHequation\theequation
\label{dfg-0362a5b9d560}
\begin{array}{@{}l}P(x)={\textstyle\sum_{i=1}^{m}}\pi_in_i(x\mid\mu_i,c_i)\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Where $n_i(x\mid\mu_i,c_i)$ denotes the normal distribution of the $i$-th component with mean vector $\mu_i$ and covariance matrix $c_i$, and $\pi_i$ is the prior probability that the $i$-th Gaussian produced the examined sample. These prior probabilities must satisfy
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{10}
\let\theHequation\theequation
\label{dfg-8564008422b5}
\begin{array}{@{}l}{\textstyle\sum_{i=1}^{m}}\pi_i=1,0\leq\pi_i\leq1\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
For a given mixture model, this form can be used to classify or group the data.
The unknown parameters contained in the various Gaussian components, namely $\pi_i$, $\mu_i$ and $c_i$, must also be determined.
There are several methods for estimating these parameters; the most common technique relies on the EM algorithm for maximum likelihood estimation.
The density function $P(x\mid\theta_i)$ used to segment the region with the Gaussian function is as follows:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{11}
\let\theHequation\theequation
\label{dfg-ee2bc6e58419}
\begin{array}{@{}l}P(x\mid\theta_i)=n(x\mid\mu_i,c_i)=\frac1{(2\pi)^{d/2}\left|c_i\right|^{1/2}}\exp\left\{-\frac12(x-\mu_i)^{T}c_i^{-1}(x-\mu_i)\right\}\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
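As a simplified 1-D illustration (not the paper's implementation), the mixture of eqs. (9)-(11) can be fitted by EM maximum likelihood; for segmentation, the pixel intensities would play the role of the samples and each pixel would be assigned to its most responsible component.

```python
import numpy as np

def gmm_em_1d(x, n_components=2, n_iter=50):
    """Fit a 1-D Gaussian mixture by EM (maximum likelihood).

    E-step: responsibilities from the densities of eq. (11) weighted
    by the priors pi_i of eq. (9); M-step: re-estimate pi_i, mu_i and
    the variances c_i from the responsibilities. Initialization by
    quantiles is an assumption made for this sketch.
    """
    mu = np.quantile(x, (np.arange(n_components) + 0.5) / n_components)
    var = np.full(n_components, x.var() + 1e-6)
    pi = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood parameter updates
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return pi, mu, var
```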
The result of segmentation is shown in Figure~\ref{f-39b9cfc4be94}.
\bgroup
\fixFloatSize{images/a080d0e5-2af2-47a7-8230-a0a32a3b559f-upicture3.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/a080d0e5-2af2-47a7-8230-a0a32a3b559f-upicture3.png}{\includegraphics{images/a080d0e5-2af2-47a7-8230-a0a32a3b559f-upicture3.png}}{}
\makeatother
\caption{\boldmath {Segmented Images}}
\label{f-39b9cfc4be94}
\end{figure}
\egroup
\subsubsection{Gaussian Kernel Fuzzy C-means Clustering}
This method improves the accuracy of intuitionistic fuzzy c-means by using a kernel function when calculating the distance of a data point from a cluster centre: data points are mapped from the input space to a high-dimensional space, where the distance is measured using a Gaussian kernel function.
The kernel function is thus a generalization of the distance metric; it measures the distance between two data points after they are mapped to a high-dimensional space in which they can be more clearly separated.
\subsubsection{Fuzzy C-Means Clustering}
Fuzzy clustering is a form of clustering in which each data point can belong to more than one cluster. Clustering assigns data points to clusters such that items in the same cluster are as similar as possible, while items in different clusters are as dissimilar as possible. Clusters are identified using similarity measures such as distance, connectivity, and intensity, and the appropriate measure can be chosen based on the data or the application. In fuzzy clustering, each pixel of the image is associated with every cluster through a membership value between 0 and 1.
This membership value measures the extent to which the pixel belongs to that particular cluster. Fuzzy c-means is an iterative partitioning method that generates an optimal c-partition together with the cluster centres as centroids.
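The alternating membership/centre updates of standard fuzzy c-means can be sketched as follows; this is a minimal illustrative implementation (the fuzzifier m, tolerance and random initialisation are assumptions, not taken from the paper):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, eps=1e-5, seed=0):
    """Minimal fuzzy c-means sketch: alternate centre and membership
    updates until the membership matrix stops changing.
    X is (n_samples, n_features); returns (centres, memberships)."""
    X = np.asarray(X, float)
    rng = np.random.default_rng(seed)
    # random initial membership matrix; each row sums to 1 over the c clusters
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        # centres are membership-weighted means of the data
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared distance of every point to every centre
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2) + 1e-12
        # membership update: u_ik proportional to d_ik^(-1/(m-1))
        inv = d2 ** (-1.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        done = np.abs(U_new - U).max() < eps
        U = U_new
        if done:
            break
    return centres, U
```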
\textbf{Feature extraction}
Feature extraction transforms the coordinate system of the original variables, linearly or non-linearly.
The GLCM functions characterise the texture of an image by counting how often pairs of pixels with specific values occur in a given spatial relationship, forming a grey-level co-occurrence matrix (GLCM), and then extracting statistical measures from this matrix. These measures are second-order statistics, computed from pairs of pixel values rather than from single pixels.
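A co-occurrence matrix for one pixel offset can be built as in this sketch (the offset and the number of grey levels are illustrative assumptions):

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Build a normalised grey-level co-occurrence matrix for the pixel
    offset (dx, dy): entry (i, j) is the probability that a pixel of
    level i has a neighbour of level j at that offset."""
    img = np.asarray(image)
    M = np.zeros((levels, levels), float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()
```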
Mean: The mean $\mu$ of the pixel values P(i, j) in the selected image patch estimates the grey level around which the central mass of the intensity distribution occurs. The mean is calculated using the formula below:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{12}
\let\theHequation\theequation
\label{dfg-ff44775a5d85}
\begin{array}{@{}l}\mu=\frac1{mn}{\textstyle\sum_{i=1}^{m}}{\textstyle\sum_{j=1}^{n}}P(i,j)\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Standard Deviation: The standard deviation estimates the average squared deviation of the grey-level pixel values P(i, j) from their mean, and describes the dispersion within the local region. It is determined using the formula below.
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{13}
\let\theHequation\theequation
\label{dfg-88360fc67be4}
\begin{array}{@{}l}\sigma =\sqrt{\frac1{mn}{\textstyle\sum_{i=1}^{m}}{\textstyle\sum_{j=1}^{n}}(P(i,j)-\mu)^{2}}\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Smoothness: Relative smoothness, R, is a measure of grey-level contrast used as a descriptor of relative smoothness. Smoothness is determined using the formula below.
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{14}
\let\theHequation\theequation
\label{dfg-fc636ee32fca}
\begin{array}{@{}l}r=1-\frac1{1+\sigma ^{2}}\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Entropy: Entropy is a measure of randomness used to characterise the texture of the input image; it also describes the variation of the grey-level distribution in a region. The entropy H of the whole image is calculated as follows:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{15}
\let\theHequation\theequation
\label{dfg-5edd43d68a1d}
\begin{array}{@{}l}H=-{\textstyle\sum_{k=0}^{L-1}}Pr_k(\log_2Pr_k)\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Skewness: Skewness, S, characterises the degree of asymmetry of the pixel distribution in the selected window around its mean. Skewness is a pure number that characterises only the shape of the distribution. It is found with the following equation:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{16}
\let\theHequation\theequation
\label{dfg-9e8708808857}
\begin{array}{@{}l}s=\frac1{mn}{\textstyle\sum_{i=1}^{m}}{\textstyle\sum_{j=1}^{n}}\left(\frac{P(i,j)-\mu}\sigma\right)^{3}\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Kurtosis: Kurtosis, K, measures the peakedness or flatness of the pixel distribution relative to a normal distribution. The equation for kurtosis is:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{17}
\let\theHequation\theequation
\label{dfg-e29d98ffac18}
\begin{array}{@{}l}k=\left\{\frac1{mn}{\textstyle\sum_{i=1}^{m}}{\textstyle\sum_{j=1}^{n}}\left[\frac{P(i,j)-\mu}\sigma\right]^{4}\right\}-3\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Root Mean Square: RMS computes a value for each row or column of the input, along the vectors of a selected dimension of the input, or over the entire input. The RMS value of the jth column of an $m\times n$ input matrix is given by the equation below:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{18}
\let\theHequation\theequation
\label{dfg-00c24165db84}
\begin{array}{@{}l}x_{j}=\sqrt{\frac{\sum_{i=1}^{m}\left|x_{ij}\right|^{2}}m}\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
IDM: The inverse difference moment (IDM) is a measure of image texture. IDM ranges from 0.0 for a highly textured image to 1.0 for an untextured image. The formula for IDM is given in the equation below:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{19}
\let\theHequation\theequation
\label{dfg-6bd520f0f7ab}
\begin{array}{@{}l}h={\textstyle\sum_{i,j}}\frac{P(i,j)}{1+(i-j)^{2}}\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Energy: Energy returns the sum of squared elements in the grey-level co-occurrence matrix (GLCM); it is also known as uniformity. The energy range is [0, 1], and energy is 1 for a constant image. The formula for energy is given below:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{20}
\let\theHequation\theequation
\label{dfg-355822392370}
\begin{array}{@{}l}e={\textstyle\sum_{i,j}}P(i,j)^{2}\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Contrast: Contrast measures the intensity difference between a pixel and its neighbour over the whole image. The range of contrast is $[0,(\mathrm{size}(\mathrm{GLCM},1)-1)^{2}]$, and contrast is 0 for a constant image. Contrast is calculated using the equation below:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{21}
\let\theHequation\theequation
\label{dfg-3e1d75a6fe57}
\begin{array}{@{}l}c={\textstyle\sum_{i,j}}\left|i-j\right|^{2}P(i,j)\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Correlation: Correlation measures how strongly a pixel is correlated with its neighbour over the whole image. The correlation range is $[-1,1]$; correlation is 1 or $-1$ for a perfectly positively or negatively correlated image, and it is NaN (not a number) for a constant image. The equation below gives the correlation calculation:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{22}
\let\theHequation\theequation
\label{dfg-5739c8df7ed2}
\begin{array}{@{}l}correlation={\textstyle\sum_{i,j}}\frac{(i-\mu_i)(j-\mu_j)P(i,j)}{\sigma _i\sigma _j}\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Homogeneity: Homogeneity measures the closeness of the distribution of GLCM elements to the GLCM diagonal. The homogeneity range is [0, 1], and homogeneity is 1 for a diagonal GLCM. Homogeneity is evaluated using the equation:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{23}
\let\theHequation\theequation
\label{dfg-17042c109d68}
\begin{array}{@{}l}homogeneity={\textstyle\sum_{i,j}}\frac{P(i,j)}{1+\left|i-j\right|}\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
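The GLCM descriptors above (energy, contrast, correlation, homogeneity) can be computed from a normalised co-occurrence matrix as in this sketch:

```python
import numpy as np

def glcm_features(P):
    """Texture features from a normalised co-occurrence matrix P
    (rows and columns indexed by grey level)."""
    P = np.asarray(P, float)
    i, j = np.indices(P.shape)
    energy = (P ** 2).sum()                               # sum of squared entries
    contrast = (np.abs(i - j) ** 2 * P).sum()             # weighted by |i-j|^2
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()       # near-diagonal mass
    # marginal means and standard deviations for the correlation term
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    corr = ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j + 1e-12)
    return {"energy": energy, "contrast": contrast,
            "homogeneity": homogeneity, "correlation": corr}
```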
Variance: The variance is the square of the standard deviation. The formula for variance is:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{24}
\let\theHequation\theequation
\label{dfg-3f2655830569}
\begin{array}{@{}l}v=\sigma^{2}\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
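The first-order statistics above (mean, standard deviation, smoothness, skewness, kurtosis) can likewise be computed from a grey-level patch as in this sketch:

```python
import numpy as np

def first_order_stats(P):
    """First-order statistics of an m-by-n grey-level patch P, following
    the mean / standard deviation / smoothness / skewness / kurtosis
    equations in the text."""
    P = np.asarray(P, float)
    mn = P.size
    mu = P.sum() / mn
    sigma = np.sqrt(((P - mu) ** 2).sum() / mn)
    smoothness = 1.0 - 1.0 / (1.0 + sigma ** 2)
    z = (P - mu) / (sigma + 1e-12)           # standardised deviations
    skew = (z ** 3).sum() / mn
    kurt = (z ** 4).sum() / mn - 3.0         # excess kurtosis (the -3 term)
    return {"mean": mu, "std": sigma, "smoothness": smoothness,
            "skewness": skew, "kurtosis": kurt}
```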
\textbf{Classification}
Image classification refers to labelling images into one of a number of predefined categories. Classification builds on the pre-processing, segmentation, and feature-extraction stages described above. Classification is done through two approaches: the first is na{\"{\i}}ve Bayes and the second is linear discriminant analysis.
\textbf{Classification using Linear Discriminant Analysis}
Linear Discriminant Analysis (LDA) is a classical classification method. Its main idea is to form decision boundaries directly by optimising an error criterion that separates the categories. If there are n categories, linear discriminant analysis classifies observations using n linear discriminant functions:
\let\saveeqnno\theequation
\let\savefrac\frac
\def\dispfrac{\displaystyle\savefrac}
\begin{eqnarray}
\let\frac\dispfrac
\gdef\theequation{25}
\let\theHequation\theequation
\label{dfg-4a2e1939189e}
\begin{array}{@{}l}\delta_k(x)=x\frac{\mu_k}{\sigma^{2}}-\frac{\mu_k^{2}}{2\sigma^{2}}+\log(\pi_k)\end{array}
\end{eqnarray}
\global\let\theequation\saveeqnno
\addtocounter{equation}{-1}\ignorespaces
Here $\delta_k$ is the discriminant score for class k, x is an observation, $\mu_k$ and $\sigma^2$ are the class mean and the shared variance, and $\pi_k$ is the prior probability of class k. Taking the logarithm of the class density gives this linear discriminant.
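As a sketch, the discriminant can be evaluated for a one-dimensional feature and the class chosen by the largest score (the class means, priors and shared variance are assumed inputs, estimated beforehand from training data):

```python
import numpy as np

def lda_predict(x, means, priors, sigma2):
    """Score an observation x with the linear discriminant
    delta_k(x) = x * mu_k / sigma^2 - mu_k^2 / (2 sigma^2) + log(pi_k)
    and return the index of the highest-scoring class (1-D sketch)."""
    means = np.asarray(means, float)
    priors = np.asarray(priors, float)
    scores = x * means / sigma2 - means ** 2 / (2 * sigma2) + np.log(priors)
    return int(np.argmax(scores))
```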
\section{Results and Discussion}
The experiment is performed in MATLAB{\textsuperscript{\textregistered}} on a machine with an Intel{\textsuperscript{\textregistered}} Core{\texttrademark} i3 processor @ 2.00 GHz and 4 GB of RAM. As mentioned earlier, the mammograms are chosen from the Mini-Mammographic Image Analysis Society (MIAS) database. Images from this database are used for the experiment; each converted mammogram patch is of size 256 \ensuremath{\times} 256.
\textbf{Dataset Description}
Breast cancer is the most common cancer among women apart from non-cancerous skin cancer, and it can affect women at any point in their lifetime. It appears in both men and women, although cancer in men is rare. Breast cancer is a malignancy that develops from breast cells. Although scientists know some risk factors, such as ageing, genetic risk factors, family history, menstrual history, childlessness and obesity, that increase a woman's chance of breast cancer, they do not yet know what causes most types of breast cancer. Research is ongoing to find out more, and scientists are making great progress in understanding how certain changes in DNA can cause normal breast cells to become cancerous.
The dataset used for the study is obtained from the mini-MIAS database of mammograms [51], comprising a collection of samples digitised at 50 microns and reduced to a 200-micron pixel edge, so that every image has dimensions of 1024 \ensuremath{\times} 1024 pixels.
\textbf{Result of Pre-Processing.}
Pre-processing filters are used to remove noise and to smooth and enhance the contrast of the input images. The various pre-processing filters used are shown in Figure~\ref{f-01327e57a563}.
\bgroup
\fixFloatSize{images/adae5770-282f-49ca-b222-5d9425c3e985-upicture4.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/adae5770-282f-49ca-b222-5d9425c3e985-upicture4.png}{\includegraphics{images/adae5770-282f-49ca-b222-5d9425c3e985-upicture4.png}}{}
\makeatother
\caption{\boldmath {Result of Various Preprocessing Filter}}
\label{f-01327e57a563}
\end{figure}
\egroup
\textbf{Performance of classifier {\textendash}FCM.}
The fuzzy c-means cluster is used to evaluate the performance of the classification; the FCM result is shown in Figure~\ref{f-aeb510a11db2}. The performance of the classifier using FCM is reported as sensitivity, specificity, precision, true positives, false positives, true negatives, and false negatives. For the class 1 image the specificity is about 95.83; for the class 2 image the sensitivity is about 94.11; and for the class 3 image the specificity is about 93.33, as shown in Table~\ref{tw-139268392320}.
\begin{table*}[!htbp]
\caption{\boldmath {Performance of classifier -FCM} }
\label{tw-139268392320}
\def\arraystretch{1.1}
\ignorespaces
\centering
\begin{tabulary}{\linewidth}{LLLLLLLL}
\tbltoprule \rowcolor{kwdboxcolor}Sensitivity & Specificity & Precision & True Positive & False Positive & True Negative & False Negative & Class\\
\tblmidrule
61.53 &
95.83 &
88.88 &
8 &
1 &
23 &
5 &
1\\
94.11 &
75.00 &
76.19 &
16 &
5 &
15 &
1 &
2\\
71.42 &
93.33 &
71.42 &
5 &
2 &
28 &
2 &
3\\
\tblbottomrule
\end{tabulary}\par
\end{table*}
\bgroup
\fixFloatSize{images/3f80d952-c9c3-4620-b7e1-76f5a9c78c6c-upicture5.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/3f80d952-c9c3-4620-b7e1-76f5a9c78c6c-upicture5.png}{\includegraphics{images/3f80d952-c9c3-4620-b7e1-76f5a9c78c6c-upicture5.png}}{}
\makeatother
\caption{\boldmath {Result of FCM Classifier}}
\label{f-aeb510a11db2}
\end{figure}
\egroup
The FCM classifier shows an accuracy of 78.38\% with an error rate of 21.62\%. Sensitivity, the number of true positives over the number of actual positives, reaches 75.69\%. Specificity, the number of true negatives over the number of actual negatives, is 88.06\%. The precision is 78.84\%, with a false-positive rate of 11.94\%. The system achieves a test accuracy of 76.12\%, and the Matthews correlation coefficient and kappa are 66.04 and 51.35, respectively. The performance of the classifier is shown in Figure~\ref{f-b5a8db7375c9}.
\bgroup
\fixFloatSize{images/e082e7e7-99cb-4828-bf7e-1ed2db938af7-upicture6.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/e082e7e7-99cb-4828-bf7e-1ed2db938af7-upicture6.png}{\includegraphics{images/e082e7e7-99cb-4828-bf7e-1ed2db938af7-upicture6.png}}{}
\makeatother
\caption{\boldmath {Performance for The Proposed System.}}
\label{f-b5a8db7375c9}
\end{figure}
\egroup
\begin{table*}[!htbp]
\caption{\boldmath {Performance of classifier {\textendash}GKFCM-1} }
\label{tw-f77bd596983b}
\def\arraystretch{1.1}
\ignorespaces
\centering
\begin{tabulary}{\linewidth}{LLLLLLLL}
\tbltoprule \rowcolor{kwdboxcolor}Sensitivity & Specificity & Precision & True Positive & False Positive & True Negative & False Negative & Class\\
\tblmidrule
84.61 &
95.83 &
91.66 &
11 &
1 &
23 &
2 &
1\\
94.11 &
85.00 &
84.21 &
16 &
3 &
17 &
1 &
2\\
85.71 &
100 &
100 &
6 &
0 &
30 &
1 &
3\\
\tblbottomrule
\end{tabulary}\par
\end{table*}
\bgroup
\fixFloatSize{images/4d9cd373-e570-49c9-ba91-b71abd5d149d-upicture7.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/4d9cd373-e570-49c9-ba91-b71abd5d149d-upicture7.png}{\includegraphics{images/4d9cd373-e570-49c9-ba91-b71abd5d149d-upicture7.png}}{}
\makeatother
\caption{\boldmath {Result of GKFCM-1 Classifier}}
\label{f-2acd94068f04}
\end{figure}
\egroup
\bgroup
\fixFloatSize{images/1b8d26bb-f2aa-41b0-8212-8c7552592816-upicture8.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/1b8d26bb-f2aa-41b0-8212-8c7552592816-upicture8.png}{\includegraphics{images/1b8d26bb-f2aa-41b0-8212-8c7552592816-upicture8.png}}{}
\makeatother
\caption{\boldmath {Performance for The Proposed System.}}
\label{f-f8acad7f5f86}
\end{figure}
\egroup
\begin{table*}[!htbp]
\caption{\boldmath {Performance of classifier {\textendash}GKFCM-2} }
\label{tw-3475a7cf12c1}
\def\arraystretch{1.1}
\ignorespaces
\centering
\begin{tabulary}{\linewidth}{LLLLLLLL}
\tbltoprule \rowcolor{kwdboxcolor}Sensitivity & Specificity & Precision & True Positive & False Positive & True Negative & False Negative & Class\\
\tblmidrule
64.70 &
85.00 &
78.57 &
11 &
3 &
17 &
6 &
1\\
78.75 &
65.21 &
57.89 &
11 &
8 &
15 &
3 &
2\\
66.66 &
100 &
100 &
4 &
0 &
31 &
2 &
3\\
\tblbottomrule
\end{tabulary}\par
\end{table*}
\bgroup
\fixFloatSize{images/427c23d3-dd3a-494e-b919-8f841121d8b4-upicture9.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/427c23d3-dd3a-494e-b919-8f841121d8b4-upicture9.png}{\includegraphics{images/427c23d3-dd3a-494e-b919-8f841121d8b4-upicture9.png}}{}
\makeatother
\caption{\boldmath {Result of GKFCM-2 Classifier}}
\label{f-a64b8d1a1934}
\end{figure}
\egroup
\bgroup
\fixFloatSize{images/9b669763-fa1c-469a-9876-0bf46b58c791-upicture10.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/9b669763-fa1c-469a-9876-0bf46b58c791-upicture10.png}{\includegraphics{images/9b669763-fa1c-469a-9876-0bf46b58c791-upicture10.png}}{}
\makeatother
\caption{\boldmath {Performance for The Proposed System}}
\label{f-ab6669093341}
\end{figure}
\egroup
\begin{table*}[!htbp]
\caption{\boldmath {Performance of classifier{\textendash}OTSU} }
\label{tw-d8e882817b31}
\def\arraystretch{1.1}
\ignorespaces
\centering
\begin{tabulary}{\linewidth}{p{\dimexpr.12\linewidth-2\tabcolsep}p{\dimexpr.12\linewidth-2\tabcolsep}p{\dimexpr.12\linewidth-2\tabcolsep}p{\dimexpr.1053\linewidth-2\tabcolsep}p{\dimexpr.1347\linewidth-2\tabcolsep}p{\dimexpr.12\linewidth-2\tabcolsep}p{\dimexpr.1399\linewidth-2\tabcolsep}p{\dimexpr.1401\linewidth-2\tabcolsep}}
\tbltoprule \rowcolor{kwdboxcolor}Sensitivity & Specificity & Precision & True Positive & False Positive & True Negative & False Negative & Class\\
\tblmidrule
76.92 &
100. &
100 &
10 &
0 &
24 &
3 &
1\\
100 &
70.00 &
73.91 &
17 &
6 &
14 &
0 &
2\\
57.14 &
100. &
100 &
4 &
0 &
30 &
3 &
3\\
\tblbottomrule
\end{tabulary}\par
\end{table*}
\bgroup
\fixFloatSize{images/3eee2516-ce15-4eda-954b-3e869fa62e6d-upicture11.png}
\begin{figure*}[!htbp]
\centering \makeatletter\IfFileExists{images/3eee2516-ce15-4eda-954b-3e869fa62e6d-upicture11.png}{\includegraphics{images/3eee2516-ce15-4eda-954b-3e869fa62e6d-upicture11.png}}{}
\makeatother
\caption{\boldmath {Result of OTSU Classifier}}
\label{f-379cbbdc66ea}
\end{figure*}
\egroup
\bgroup
\fixFloatSize{images/9c6db0a5-9e69-429a-9a59-65200dd6def1-upicture12.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/9c6db0a5-9e69-429a-9a59-65200dd6def1-upicture12.png}{\includegraphics{images/9c6db0a5-9e69-429a-9a59-65200dd6def1-upicture12.png}}{}
\makeatother
\caption{\boldmath {Performance for the proposed systems}}
\label{f-81dd439a9922}
\end{figure}
\egroup
\textbf{Performance of classifier {\textendash}GKFCM}
\textbf{GKFCM-1}
The Gaussian kernel fuzzy c-means cluster is used to evaluate the performance of the classification; the GKFCM-1 result is shown in Figure~\ref{f-2acd94068f04}. The performance of the classifier using GKFCM is reported as sensitivity, specificity, precision, true positives, false positives, true negatives, and false negatives. For the class 1 image the specificity is about 95.83; for the class 2 image the sensitivity is about 94.11; and for the class 3 image the specificity and precision are about 100, as shown in Table~\ref{tw-f77bd596983b}.
The GKFCM-1 classifier shows an accuracy of 89.19\% with an error rate of 10.81\%. Sensitivity, the number of true positives over the number of actual positives, reaches 88.15\%. Specificity, the number of true negatives over the number of actual negatives, is 88.06\%. The precision is 91.96\%, with a false-positive rate of 6.39\%. The system achieves a test accuracy of 89.73\%, and the Matthews correlation coefficient and kappa are 84.0 and 75.68, respectively. The performance of the classifier is shown in Figure~\ref{f-f8acad7f5f86}.
\textbf{GKFCM-2}
The Gaussian kernel fuzzy c-means cluster is used to evaluate the performance of the classification; the GKFCM-2 result is shown in Figure~\ref{f-a64b8d1a1934}. The performance of the classifier using GKFCM is reported as sensitivity, specificity, precision, true positives, false positives, true negatives, and false negatives. For the class 1 image the specificity is about 85.00; for the class 2 image the sensitivity is about 78.75; and for the class 3 image the specificity and precision are about 100, as shown in Table~\ref{tw-3475a7cf12c1}.
The GKFCM-2 classifier shows an accuracy of 70.27\% with an error rate of 29.73\%. Sensitivity, the number of true positives over the number of actual positives, reaches 69.98\%. Specificity, the number of true negatives over the number of actual negatives, is 83.41\%. The precision is 78.82\%, with a false-positive rate of 16.59\%. The system achieves a test accuracy of 72.54\%, and the Matthews correlation coefficient and kappa are 57.57\% and 33.11\%, respectively. The performance of the classifier is shown in Figure~\ref{f-ab6669093341}.
\textbf{Performance of classifier {\textendash} OTSU}
OTSU thresholding is used to evaluate the performance of the classification; the OTSU result is shown in Figure~\ref{f-379cbbdc66ea}. The performance of the classifier using OTSU is reported as sensitivity, specificity, precision, true positives, false positives, true negatives, and false negatives. For the class 1 image the specificity is about 100; for the class 2 image the sensitivity is about 100; and for the class 3 image the specificity and precision are about 100, as shown in Table~\ref{tw-d8e882817b31}.
The OTSU classifier shows an accuracy of 83.78\% with an error rate of 16.22\%. Sensitivity, the number of true positives over the number of actual positives, reaches 78.02\%. Specificity, the number of true negatives over the number of actual negatives, is 90.0\%. The precision is 91.30\%, with a false-positive rate of 10.0\%. The system achieves a test accuracy of 81.56\%, and the Matthews correlation coefficient and kappa are 75.56\% and 63.51\%, respectively. The performance of the classifier is shown in Figure~\ref{f-81dd439a9922}.
\section{Conclusions}
In our proposed system, a non-linear enhancement method is used to emphasise the tumour region of a mammogram. The main advantage of this method is that it boosts the intensity in barely abnormal grey-level regions of a mammogram. Segmentation of the tumour in the mammogram is performed using a Gaussian mixture model and is also tested with Gaussian kernel FCM, FCM and the OTSU technique. The extracted ROIs are classified using a linear discriminant analyser, which yielded a maximum accuracy of 89.19\% for the GKFCM-1 extracted ROIs with GLCM features.
\section*{Acknowledgement} The authors are grateful to Amrita Vishwa Vidyapeetham and Amrita School of Arts and Sciences for providing an opportunity for this research work.
\bibliographystyle{pharmascope_apa-custom}
\bibliography{\jobname}
\end{document}