\documentclass[twocolumn,twoside]{article}
\makeatletter\if@twocolumn\PassOptionsToPackage{switch}{lineno}\else\fi\makeatother
\usepackage{amsfonts,amssymb,amsbsy,latexsym,amsmath,tabulary,graphicx,times,xcolor}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The following additional macros provide functionality
% that is not available in the class used.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{url,multirow,morefloats,floatflt,cancel,tfrupee}
\makeatletter
\AtBeginDocument{\@ifpackageloaded{textcomp}{}{\usepackage{textcomp}}}
\makeatother
\usepackage{colortbl}
\usepackage{xcolor}
\usepackage{pifont}
\usepackage[nointegrals]{wasysym}
\urlstyle{rm}
\makeatletter
%%%For Table column width calculation.
\def\mcWidth#1{\csname TY@F#1\endcsname+\tabcolsep}
%%Hacking center and right align for table
\def\cAlignHack{\rightskip\@flushglue\leftskip\@flushglue\parindent\z@\parfillskip\z@skip}
\def\rAlignHack{\rightskip\z@skip\leftskip\@flushglue \parindent\z@\parfillskip\z@skip}
%Etal definition in references
\@ifundefined{etal}{\def\etal{\textit{et~al}}}{}
%\if@twocolumn\usepackage{dblfloatfix}\fi
\usepackage{ifxetex}
\ifxetex\else\if@twocolumn\@ifpackageloaded{stfloats}{}{\usepackage{dblfloatfix}}\fi\fi
\AtBeginDocument{
\expandafter\ifx\csname eqalign\endcsname\relax
\def\eqalign#1{\null\vcenter{\def\\{\cr}\openup\jot\m@th
\ialign{\strut$\displaystyle{##}$\hfil&$\displaystyle{{}##}$\hfil
\crcr#1\crcr}}\,}
\fi
}
%For fixing hardfail when unicode letters appear inside table with endfloat
\AtBeginDocument{%
\@ifpackageloaded{endfloat}%
{\renewcommand\efloat@iwrite[1]{\immediate\expandafter\protected@write\csname efloat@post#1\endcsname{}}}{\newif\ifefloat@tables}%
}%
\def\BreakURLText#1{\@tfor\brk@tempa:=#1\do{\brk@tempa\hskip0pt}}
\let\lt=<
\let\gt=>
\def\processVert{\ifmmode|\else\textbar\fi}
\let\processvert\processVert
\@ifundefined{subparagraph}{
\def\subparagraph{\@startsection{paragraph}{5}{2\parindent}{0ex plus 0.1ex minus 0.1ex}%
{0ex}{\normalfont\small\itshape}}%
}{}
% These are now gobbled, so won't appear in the PDF.
\newcommand\role[1]{\unskip}
\newcommand\aucollab[1]{\unskip}
\@ifundefined{tsGraphicsScaleX}{\gdef\tsGraphicsScaleX{1}}{}
\@ifundefined{tsGraphicsScaleY}{\gdef\tsGraphicsScaleY{.9}}{}
% To automatically resize figures to fit inside the text area
\def\checkGraphicsWidth{\ifdim\Gin@nat@width>\linewidth
\tsGraphicsScaleX\linewidth\else\Gin@nat@width\fi}
\def\checkGraphicsHeight{\ifdim\Gin@nat@height>.9\textheight
\tsGraphicsScaleY\textheight\else\Gin@nat@height\fi}
\def\fixFloatSize#1{}%\@ifundefined{processdelayedfloats}{\setbox0=\hbox{\includegraphics{#1}}\ifnum\wd0<\columnwidth\relax\renewenvironment{figure*}{\begin{figure}}{\end{figure}}\fi}{}}
\let\ts@includegraphics\includegraphics
\def\inlinegraphic[#1]#2{{\edef\@tempa{#1}\edef\baseline@shift{\ifx\@tempa\@empty0\else#1\fi}\edef\tempZ{\the\numexpr(\numexpr(\baseline@shift*\f@size/100))}\protect\raisebox{\tempZ pt}{\ts@includegraphics{#2}}}}
%\renewcommand{\includegraphics}[1]{\ts@includegraphics[width=\checkGraphicsWidth]{#1}}
\AtBeginDocument{\def\includegraphics{\@ifnextchar[{\ts@includegraphics}{\ts@includegraphics[width=\checkGraphicsWidth,height=\checkGraphicsHeight,keepaspectratio]}}}
\DeclareMathAlphabet{\mathpzc}{OT1}{pzc}{m}{it}
\def\URL#1#2{\@ifundefined{href}{#2}{\href{#1}{#2}}}
%%For url break
\def\UrlOrds{\do\*\do\-\do\~\do\'\do\"\do\-}%
\g@addto@macro{\UrlBreaks}{\UrlOrds}
\edef\fntEncoding{\f@encoding}
\def\EUoneEnc{EU1}
\makeatother
\def\floatpagefraction{0.8}
\def\dblfloatpagefraction{0.8}
\def\style#1#2{#2}
\def\xxxguillemotleft{\fontencoding{T1}\selectfont\guillemotleft}
\def\xxxguillemotright{\fontencoding{T1}\selectfont\guillemotright}
\newif\ifmultipleabstract\multipleabstractfalse%
\newenvironment{typesetAbstractGroup}{}{}%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage[authoryear]{natbib}
\makeatletter\input{size10-pointfive.clo}\makeatother%
\definecolor{kwdboxcolor}{RGB}{242,242,242}
\usepackage[hidelinks,colorlinks=true,allcolors=blue]{hyperref}
\linespread{1}
\def\floatpagefraction{0.8}
\usepackage[paperheight=11.69in,paperwidth=8.26in,top=1in,bottom=1in,left=1in,right=.75in,headsep=24pt]{geometry}
\usepackage{multirow-custom}
\makeatletter
\def\hlinewd#1{%
\noalign{\ifnum0=`}\fi\hrule \@height #1%
\futurelet\reserved@a\@xhline}
\def\tbltoprule{\hlinewd{1pt}\\[-14pt]}
\def\tblbottomrule{\noalign{\vspace*{6pt}}\hline\noalign{\vspace*{2pt}}}
\def\tblmidrule{\hline\noalign{\vspace*{2pt}}}
\let\@articleType\@empty
\let\@journalDoi\@empty
\let\@journalVolume\@empty
\let\@journalIssue\@empty
\let\@crossMarkLink\@empty
\let\@receivedDate\@empty
\let\@acceptedDate\@empty
\let\@revisedDate\@empty
\let\@copyrightYear\@empty
\let\@firstPage\@empty
\def\articleType#1{\gdef\@articleType{#1}}
\def\journalDoi#1{\gdef\@journalDoi{#1}}
\def\crossMarkLink#1{\gdef\@crossMarkLink{#1}}
\def\receivedDate#1{\gdef\@receivedDate{#1}}
\def\acceptedDate#1{\gdef\@acceptedDate{#1}}
\def\revisedDate#1{\gdef\@revisedDate{#1}}
\def\copyrightYear#1{\gdef\@copyrightYear{#1}}
\def\journalVolume#1{\gdef\@journalVolume{#1}}
\def\journalIssue#1{\gdef\@journalIssue{#1}}
\def\firstPage#1{\gdef\@firstPage{#1}}
\def\author#1{%
\gdef\@author{%
\hskip-\dimexpr(\tabcolsep)\hskip5pt%
\parbox{\dimexpr\textwidth-1pt}%
{\fontsize{11}{13}\selectfont\raggedright #1}%
}%
}
\usepackage{pharmascope-abs}
\usepackage{caption}
\usepackage{lastpage}
\usepackage{fancyhdr}
\usepackage[noindentafter,explicit]{titlesec}
\usepackage{fontspec}
\setmainfont[%
BoldFont=cambriab.otf,%
ItalicFont=CAMBRIAI.otf,%
BoldItalicFont=CAMBRIAZ.otf]{Cambria.otf}
\def\title#1{%
\gdef\@title{%
\vspace*{-40pt}%
\ifx\@articleType\@empty\else{\fontsize{10}{12}\scshape\selectfont\hspace{8pt}\@articleType\hfill\mbox{}\par\vspace{2pt}}\fi%
\minipage{\linewidth}
\hrulefill\\[-0.7pt]%
\mbox{~}\hspace{5pt}\parbox{.1\linewidth}{\includegraphics[width=75pt,height=50pt]{ijrps_logo.png}}\hfill
\fcolorbox{kwdboxcolor}{kwdboxcolor}{\parbox{.792\linewidth}{%
\begin{center}\fontsize{17}{17}\selectfont\scshape\vskip-7pt International Journal of Research in Pharmaceutical Sciences\hfill\end{center}%
\vspace*{-10pt}\hspace*{4pt}{\fontsize{8}{9}\selectfont Published by JK Welfare \& Pharmascope Foundation\hfill Journal Home Page: \href{http://www.pharmascope.org/ijrps}{\color{blue}\underline{\smash{www.pharmascope.org/ijrps}}}}\hspace*{4pt}\mbox{}}}%
\par\vspace*{-1pt}\rule{\linewidth}{1.3pt}%
\endminipage%
\par\vspace*{9.2pt}\parbox{.98\linewidth}{\linespread{.9}\raggedright\fontsize{14}{17}\selectfont #1}%
\vspace*{-8pt}%
}
}
\setlength{\parindent}{0pt}
\setlength{\parskip}{0.4pc plus 1pt minus 1pt}
\def\abbrvJournalTitle{Int. J. Res. Pharm. Sci.}
\fancypagestyle{headings}{%
\renewcommand{\headrulewidth}{0pt}%
\renewcommand{\footrulewidth}{0.3pt}
\fancyhf{}%
\fancyhead[R]{%
\fontsize{9.12}{11}\selectfont\RunningAuthor,\ \abbrvJournalTitle,\ \ifx\@journalVolume\@empty X\else\@journalVolume\fi%
\ifx\@journalIssue\@empty\else(\@journalIssue)\fi%
,\ \ifx\@firstPage\@empty 1\else\@firstPage\fi-\pageref*{LastPage}%
}%
\fancyfoot[LO,RE]{\fontsize{9.12}{11}\selectfont\textcopyright\ International Journal of Research in Pharmaceutical Sciences}%
\fancyfoot[RO,LE]{\fontsize{9.12}{11}\selectfont\thepage}
}\pagestyle{headings}
\fancypagestyle{plain}{%
\renewcommand{\headrulewidth}{0pt}%
\renewcommand{\footrulewidth}{0.3pt}%
\fancyhf{}%
\fancyhead[R]{%
\fontsize{9.12}{11}\selectfont\RunningAuthor,\ \abbrvJournalTitle,\ \ifx\@journalVolume\@empty X\else\@journalVolume\fi%
\ifx\@journalIssue\@empty\else(\@journalIssue)\fi%
,\ \ifx\@firstPage\@empty 1\else\@firstPage\fi-\pageref*{LastPage}%
}%
\fancyfoot[LO,RE]{\fontsize{9.12}{11}\selectfont\textcopyright\ International Journal of Research in Pharmaceutical Sciences}%
\fancyfoot[RO,LE]{\fontsize{9.12}{11}\selectfont\thepage}
\ifx\@firstPage\@empty\else\setcounter{page}{\@firstPage}\fi
}
\def\NormalBaseline{\def\baselinestretch{1.1}}
\usepackage{textcase}
\setcounter{secnumdepth}{0}
\titleformat{\section}[block]{\bfseries\boldmath\NormalBaseline\filright\fontsize{10.5}{13}\selectfont}
{\thesection}
{6pt}
{\MakeTextUppercase{#1}}
[]
\titleformat{\subsection}[block]{\bfseries\boldmath\NormalBaseline\filright\fontsize{10.5}{12}\selectfont}
{\thesubsection}
{6pt}
{#1}
[]
\titleformat{\subsubsection}[block]{\NormalBaseline\filright\fontsize{10.5}{12}\selectfont}
{\thesubsubsection}
{6pt}
{#1}
[]
\titleformat{\paragraph}[block]{\NormalBaseline\filright\fontsize{10.5}{10}\selectfont}
{\theparagraph}
{6pt}
{#1}
[]
\titleformat{\subparagraph}[block]{\NormalBaseline\filright\fontsize{10.5}{12}\selectfont}
{\thesubparagraph}
{6pt}
{#1}
[]
\titlespacing{\section}{0pt}{.5\baselineskip}{.5\baselineskip}
\titlespacing{\subsection}{0pt}{.5\baselineskip}{.5\baselineskip}
\titlespacing{\subsubsection}{0pt}{.5\baselineskip}{.5\baselineskip}
\titlespacing{\paragraph}{0pt}{.5\baselineskip}{.5\baselineskip}
\titlespacing{\subparagraph}{0pt}{.5\baselineskip}{.5\baselineskip}
\captionsetup[figure]{skip=1.4pt,font=bf,labelsep=colon,justification=raggedright,singlelinecheck=false}
\captionsetup[table]{skip=1.4pt,font=bf,labelsep=colon,justification=raggedright,singlelinecheck=false}
\def\bibyear#1{#1}
\def\bibjtitle#1{#1}
\def\bibauand{}
\setlength\bibsep{3pt}
\setlength\bibhang{8pt}
\makeatother
\date{}
\usepackage{float}
\begin{document}
\def\RunningAuthor{Shobha Rani N et al.}
\firstPage{2163}
\articleType{Original Article}
\receivedDate{04.03.2019}
\acceptedDate{25.06.2019}
\revisedDate{20.06.2019}
\journalVolume{10}
\journalIssue{3}
\journalDoi{ijrps.v10i3.1443}
\copyrightYear{2019}
\def\authorCount{3}
\def\affCount{1}
\def\journalTitle{International Journal of Research in Pharmaceutical Sciences}
\title{\textbf{Patch analysis based lung cancer classification}}
\author{Shobha Rani N\textsuperscript{*},
Rakshitha B S,
Rohith V~\\[5pt]{Department of Computer Science\unskip, Amrita School of Arts \& Sciences, Amrita Vishwa Vidyapeetham, Mysuru-570026, Karnataka, India}}
\begin{abstract}
Lung cancer is a type of cancer that begins in the lungs, most often in people who smoke regularly; there is also a smaller chance that non-smokers are affected, owing to air pollution and harmful gases. Detection of the tumour is vital, as it helps to locate the affected neoplastic areas in the lungs, and computed tomography (CT) helps to localize the cancer in patients. Here, cancer tumours are detected by analysing CT images. The lung cancer identification system combines morphological opening, Gray Level Co-occurrence Matrix (GLCM) feature extraction, and normalized cross-correlation with patch analysis. Lung cancer classification using Linear Discriminant Analysis (LDA) gives a good accuracy of 81.81\%. Patch analysis is a new method for finding lung cancer.
\end{abstract}\def\keywordstitle{Keywords}
\begin{keywords}Connected Component Analysis,\newline Gray level co-occurrence matrix,\newline Gaussian Smoothing,\newline Morphological opening,\newline Normalized Cross-Correlation,\newline Naive Bayes,\newline Region of Interest
\end{keywords}
\twocolumn[ \maketitle {\printKwdAbsBox}]
\makeatletter\textsuperscript{*}Corresponding Author\par Name:\ Shobha Rani N~\\ Phone:\ +91-9741316315~\\ Email:\ n\_shobharani@asas.mysore.amrita.edu
\par\vspace*{-11pt}\hrulefill\par{\fontsize{12}{14}\selectfont ISSN: 0975-7538}\par%
\textsc{DOI:}\ \href{https://doi.org/10.26452/\@journalDoi}{\textcolor{blue}{\underline{\smash{https://doi.org/10.26452/\@journalDoi}}}}\par%
\vspace*{-11pt}\hrulefill\\{\fontsize{9.12}{10.12}\selectfont Production and Hosted by}\par{\fontsize{12}{14}\selectfont Pharmascope.org}\par%
\vspace*{-7pt}{\fontsize{9.12}{10.12}\selectfont\textcopyright\ \@copyrightYear\ $|$ All rights reserved.}\par%
\vspace*{-11pt}\rule{\linewidth}{1.2pt}
\makeatother
\section{Introduction}
Lung cancer is largely detected on chest radiographs and computerized axial tomography (CT) scans. Researchers have reviewed existing techniques in which the feature dataset created at the feature extraction stage is fed into a number of classifiers, such as XGBoost and Random Forest~\citep{571151:13187409}. Most existing methods are tested on CT scan images and consist of four main stages~\citep{571151:13187410}. CT images of the lung have been analysed with the assistance of an Optimal Deep Neural Network (ODNN) and Linear Discriminant Analysis (LDA)~\citep{571151:13187411}. Densely evaluating and pooling the predictions for different versions of the same object improves recognition performance~\citep{571151:13187412}. Reviews covering new developments in screening eligibility criteria, and the possible benefits and harms of screening with CT, have led to investigations of the effect of different types of computer-aided detection (CAD) on lung nodule detection in CT, and of the effect of CAD on radiologists' decision outcomes~\citep{571151:13187413}. The feasibility of applying a new deep learning-based CAD scheme to automatically recognize the abdominal section of the human body from CT scans, and to segment the Subcutaneous Fat Area (SFA) and Visceral Fat Area (VFA) from volumetric CT data, has been demonstrated with high accuracy and agreement with manual segmentation results~\citep{571151:13187414}. Enhancement of the document image prior to Region of Interest (ROI) processing is the inclination of efficient optical recognition systems~\citep{571151:13187415}. The probabilistic outputs of such systems and surrogate ground truth have been analysed using receiver operating characteristic analysis and the area under the curve~\citep{571151:13187416}.
Computed Tomography (CT) is the most sought-after modality because of its imaging sensitivity, high resolution, and isotropic acquisition in locating lung lesions~\citep{571151:13187417}. Artificial neural networks offer a different approach to problem-solving for this generation of computing~\citep{571151:13187418}. Carcinoma is the leading cause of cancer deaths in most countries, among both men and women. Binarization is a technique used for optical character recognition; it can be employed based on an arrangement using a quad-tree structure, and the binarized average threshold is measured during deep neural network training~\citep{571151:13187419}. Morphological operations implement mathematical morphology, a procedure for assessing segmented structures/images in terms of random functions and variables, set theory, and so forth~\citep{571151:13187420}. A tumour segmentation method for CT images, which separates non-enhancing lung tumours from healthy tissue, is carried out by clustering; the method uses a pre-processing step that removes unwanted artifacts using median and Wiener filters~\citep{571151:13187421}. Research has investigated whether CT ventilation functional image-based intensity-modulated radiation therapy (IMRT) plans, designed to avoid irradiating highly functional lung regions, are comparable to single-photon emission CT (SPECT) ventilation functional image-based plans~\citep{571151:13187422}. The segmentation stage plays a very significant role in the image classification process; the foreground object appears to be encased in a catchment basin~\citep{571151:13187423}. Candidate detection algorithms play an important role in the performance of any CAD system, as they determine the maximum detection sensitivity of the subsequent stages~\citep{571151:13187424}.
SAW gas chromatography can be used to realize wide-spectrum, fast, and highly sensitive analysis; using airbag sampling in direct-injection mode, several volatile organic compounds reported in the literature have been obtained by GC/SAW analysis~\citep{571151:13187425}. In a software workflow for image-guided intervention, an algorithm framework incorporating an iterative serial image segmentation and registration strategy improves the longitudinal stability of 3-D image series; the subsequent images are globally aligned onto the space of the baseline by applying rigid registration in the Insight Toolkit (ITK)~\citep{571151:13187426}. Using biomarkers to accelerate the assessment of treatment response could benefit patients by providing earlier diagnoses of progressive disease, particularly when multiple treatment options exist~\citep{571151:13187427}. Textural and geometric features extracted from lung nodules using the gray-level matrix method have been fed as input to backpropagation neural networks to classify tumours~\citep{571151:13187428}. The dosimetric impact of using CT images for treatment planning target definition, and for daily target coverage, has been studied in body radiotherapy of lung cancer~\citep{571151:13187429}. Some studies focus on monitoring the development of lung nodules detected in successive chest low-dose CT scans of a patient~\citep{571151:13187430}. The volumetric shape index map, based on the local Gaussian and mean curvatures derived from the eigenvalues of the Hessian matrix, is calculated for each voxel within the lungs to enhance objects of a specific shape with high spherical elements~\citep{571151:13187431}.
One method implements a lobe segmentation algorithm using a two-stage approach: adaptive fissure sweeping to find the fissure regions around lung nodules, and wavelet transforms to identify the fissure locations and curvatures within these regions~\citep{571151:13187432}. Clustering is sensitive to the initialization of the cluster points, and an optimal initialization can be obtained using a Genetic Algorithm approach~\citep{571151:13187433}. Lung cancer claims more lives every year than colon, prostate, ovarian, and breast cancers combined. The proposed classification system contains a database of patches. The input CT test image undergoes an adaptive binarization technique to obtain a grayscale or thresholded image, which is then used in a normalized cross-correlation test to classify the lung cancer. Normalized cross-correlation is a template matching algorithm that uses the patch database: the patches are compared with the target (input) computed tomography image. The proposed system gives good accuracy. The architecture of the proposed system is shown in Figure~\ref{f-188438ee2909}.
\section{Materials and Methods}
This work proposes a lung cancer classification system that begins by reading the CT images, all of which are in DICOM format. Binarization converts a CT image into a thresholded image containing black and white partitions, which helps to identify lung cancer. Lung tissue is classified into five classes: Emphysema, Fibrosis, Ground Glass Opacity (GGO), Healthy, and Micro-nodules. Early identification of lung cancer can save patients' lives. With the help of patch analysis, normalized cross-correlation is applied as a search procedure to locate the lung cancer tumour. Patches are extracted from the input image and tested against the database patches of the five class collections; the result of the normalized correlation identifies the lung cancer type. Linear Discriminant Analysis (LDA) classification gives good accuracy.
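As a concrete illustration of the normalized cross-correlation search described above, the following minimal NumPy sketch (not the authors' implementation; the image, patch location, and sizes are invented for the example) slides a patch over an image and returns the best-scoring position:

```python
import numpy as np

def ncc(patch, window):
    """Normalized cross-correlation score in [-1, 1] for two equal-size arrays."""
    p = patch - patch.mean()
    w = window - window.mean()
    denom = np.sqrt((p ** 2).sum() * (w ** 2).sum())
    return 0.0 if denom == 0 else float((p * w).sum() / denom)

def best_match(image, patch):
    """Slide the patch over the image; return (row, col, score) of the best NCC."""
    ph, pw = patch.shape
    best = (-1, -1, -2.0)
    for r in range(image.shape[0] - ph + 1):
        for c in range(image.shape[1] - pw + 1):
            s = ncc(patch, image[r:r + ph, c:c + pw])
            if s > best[2]:
                best = (r, c, s)
    return best

rng = np.random.default_rng(0)
img = rng.random((20, 20))       # stand-in for a preprocessed CT image
patch = img[5:10, 7:12].copy()   # stand-in for a database patch
row, col, score = best_match(img, patch)
```

An exact copy of a region scores 1 at its own location, which is what makes NCC usable for ranking the five patch classes against an input image.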
\bgroup
\fixFloatSize{images/7f4d42f2-824b-4f1d-a96a-e056c299ef17-upicture1.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/7f4d42f2-824b-4f1d-a96a-e056c299ef17-upicture1.png}{\includegraphics{images/7f4d42f2-824b-4f1d-a96a-e056c299ef17-upicture1.png}}{}
\makeatother
\caption{\boldmath {Architecture of Patch Analysis Based Lung Cancer Classification }}
\label{f-188438ee2909}
\end{figure}
\egroup
\bgroup
\fixFloatSize{images/3ba35cc6-9761-4913-909a-840525d230c7-upicture2.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/3ba35cc6-9761-4913-909a-840525d230c7-upicture2.png}{\includegraphics{images/3ba35cc6-9761-4913-909a-840525d230c7-upicture2.png}}{}
\makeatother
\caption{\boldmath {Architecture of Training Pattern Analysis}}
\label{f-05122cc91956}
\end{figure}
\egroup
\bgroup
\fixFloatSize{images/a02d739f-9b3d-4c6f-9379-71a93d5ccda0-upicture3.png}
\begin{figure}[!htbp]
\centering \makeatletter\IfFileExists{images/a02d739f-9b3d-4c6f-9379-71a93d5ccda0-upicture3.png}{\includegraphics{images/a02d739f-9b3d-4c6f-9379-71a93d5ccda0-upicture3.png}}{}
\makeatother
\caption{\boldmath {Architecture of Testing Pipeline of Patch Analysis}}
\label{f-f287f3e74dd9}
\end{figure}
\egroup
\textbf{Pre-Processing }
Pre-processing is the most common first stage in digital image processing. In this step, an enhanced image is obtained by applying pre-processing operations to the input CT scan images; the enhanced image makes it easier to extract the important features. Computed tomography images, all in DICOM format, are the input to the proposed system. Each CT image undergoes smoothing for noise removal, as shown in Figure~\ref{f-9883f63802d3}.
\textbf{Otsu Thresholding}
Based on the grayscale intensity levels, the Otsu method assigns pixel values. Thresholding is an image processing method used to convert a grayscale image (pixel values ranging from 0 to 255) into a binary image (pixel values restricted to 0 or 1).
Thresholding techniques are mainly used in segmentation. The simplest thresholding methods replace each pixel in an image with a black pixel if the pixel intensity is less than some fixed constant $T$, and with a white pixel otherwise. Otsu's method applies global thresholding for binarization of an image. It deals with a monochrome image, iterates over all possible threshold values, and calculates the spread of the pixel levels on each side of the threshold. It is a global thresholding technique that uses the histogram representation to calculate the optimal threshold; the resulting binary image is displayed in Figure~\ref{f-9883f63802d3}.
The Otsu threshold minimizes the within-class variance, defined as a weighted sum of the variances of the two classes. The weights $w_0$ and $w_1$ are the probabilities of the two classes separated by a threshold $t$, and $\sigma _0^{2}$ and $\sigma _1^{2}$ are the variances of these two classes, as given in Equation 1.
$\sigma _w^{2}(t)=\;w_0(t)\sigma _0^{2}(t)\;+\;w_1(t)\sigma _1^{2}(t) $ (1)
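Equation 1 can be evaluated directly by sweeping all candidate thresholds; the following minimal NumPy sketch (an illustration, not the authors' code; the synthetic bimodal image is invented) picks the $t$ with the smallest within-class variance:

```python
import numpy as np

def otsu_threshold(img):
    """Pick the threshold t minimizing the within-class variance
    sigma_w^2(t) = w0(t) sigma0^2(t) + w1(t) sigma1^2(t) of Equation 1."""
    levels = np.arange(256)
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, np.inf
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:          # one class empty: skip this t
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var0 = ((levels[:t] - mu0) ** 2 * prob[:t]).sum() / w0
        var1 = ((levels[t:] - mu1) ** 2 * prob[t:]).sum() / w1
        within = w0 * var0 + w1 * var1
        if within < best_var:
            best_t, best_var = t, within
    return best_t

# Synthetic bimodal "CT slice": dark background, one bright square
img = np.zeros((32, 32), dtype=np.uint8)
img[8:24, 8:24] = 200
t = otsu_threshold(img)
binary = (img >= t).astype(np.uint8)    # Otsu thresholding binary image
```

On a perfectly bimodal image any threshold between the two modes yields zero within-class variance, so the square is separated cleanly from the background.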
\textbf{Gaussian Smoothing }
Gaussian smoothing is a special filter used to remove noise from a CT scan image by blurring it. The Gaussian filter is based on the normal probability distribution; it is a symmetric function that never equals zero. The smoothing is similar to a mean filter, but uses a kernel shaped like a bell-like hump. Here a two-dimensional Gaussian is used to remove the noise appearing in the CT image. For an isotropic (circularly symmetric) Gaussian filter, $\sigma$ represents the standard deviation of the Gaussian distribution, and $x$ and $y$ are the horizontal and vertical distances from the origin, as in Equation 2.
$G(x,\;y)=\frac1{2\pi\sigma^{2}}\,e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}} $ (2)
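Because the 2-D Gaussian of Equation 2 is separable, it can be applied as two 1-D passes; the sketch below (illustrative only, with an invented impulse-noise image and kernel radius) builds the normalized kernel and applies it:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel G(x) proportional to exp(-x^2 / (2 sigma^2)),
    normalized so the weights sum to 1."""
    if radius is None:
        radius = int(3 * sigma)         # 3-sigma support, an assumed choice
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_smooth(img, sigma):
    """2-D smoothing via two 1-D convolutions (rows, then columns)."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                              img.astype(float))
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

noisy = np.zeros((21, 21))
noisy[10, 10] = 1.0                     # a single impulse of "noise"
smooth = gaussian_smooth(noisy, sigma=1.5)
```

The impulse is spread into a small bell-shaped blob whose total intensity is preserved, which is the blurring behaviour the text describes.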
\bgroup
\fixFloatSize{images/89b00dcd-f7b7-40fd-af42-a1b22b09dd0a-upicture4.png}
\begin{figure*}[!htbp]
\centering \makeatletter\IfFileExists{images/89b00dcd-f7b7-40fd-af42-a1b22b09dd0a-upicture4.png}{\includegraphics{images/89b00dcd-f7b7-40fd-af42-a1b22b09dd0a-upicture4.png}}{}
\makeatother
\caption{\boldmath {A) Original CT image, B) Smoothing for Noise Removal, C) Otsu Thresholding Binary image, D) Morphological Opening. }}
\label{f-9883f63802d3}
\end{figure*}
\egroup
\bgroup
\fixFloatSize{images/b7243154-fbfe-459c-aeab-36dea18a8ada-upicture5.png}
\begin{figure*}[!htbp]
\centering \makeatletter\IfFileExists{images/b7243154-fbfe-459c-aeab-36dea18a8ada-upicture5.png}{\includegraphics{images/b7243154-fbfe-459c-aeab-36dea18a8ada-upicture5.png}}{}
\makeatother
\caption{\boldmath {E) Segmentation of CT image, F) Small object removal based on ROI, G) Segmentation of a grayscale image, H) Binarization of the segmented image.}}
\label{f-f1886125e1f9}
\end{figure*}
\egroup
\bgroup
\fixFloatSize{images/04d2d0d9-af84-49f7-a2b4-937f4e202574-upicture6.png}
\begin{figure*}[!htbp]
\centering \makeatletter\IfFileExists{images/04d2d0d9-af84-49f7-a2b4-937f4e202574-upicture6.png}{\includegraphics{images/04d2d0d9-af84-49f7-a2b4-937f4e202574-upicture6.png}}{}
\makeatother
\caption{\boldmath {Mask Training of Segmentation}}
\label{f-925eeee55952}
\end{figure*}
\egroup
\textbf{Morphological Opening}
A binary image contains imperfections in shape: after noise removal, its shapes and structures may be distorted. To solve this problem, the proposed system applies erosion ($\ominus$) and dilation ($\oplus$) as morphological operations before segmentation, as shown in Figure~\ref{f-9883f63802d3}.
Objects with a radius below five pixels are removed by opening the image with a disk-shaped structuring element. Erosion removes structures of a certain shape and size, given by the structuring element applied to the CT scan; it can split apart joined objects and strip away extrusions. Dilation, conversely, fills holes of a certain shape and size, and can repair breaks and intrusions in the CT image of the lung cancer. The proposed method uses the opening morphology, in which erosion ($\ominus$) is followed by dilation ($\oplus$). The morphological opening of a set $A$ by a structuring element $B$ is denoted $A\circ B$, as in Equation 3.
$A\circ B=(A\ominus B)\oplus B $ (3)
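Equation 3 can be realized in a few lines of NumPy; the sketch below (illustrative, using a square structuring element rather than the disk mentioned above, on an invented binary image) shows how opening removes a single-pixel speck while preserving a larger block:

```python
import numpy as np

def erode(img, se):
    """Binary erosion: a pixel survives only if the structuring element fits."""
    h, w = se.shape
    ph, pw = h // 2, w // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), constant_values=0)
    out = np.ones_like(img)
    for dr in range(h):
        for dc in range(w):
            if se[dr, dc]:
                out &= padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out

def dilate(img, se):
    """Binary dilation: a pixel is set if the element hits any foreground."""
    h, w = se.shape
    ph, pw = h // 2, w // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), constant_values=0)
    out = np.zeros_like(img)
    for dr in range(h):
        for dc in range(w):
            if se[dr, dc]:
                out |= padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out

def opening(img, se):
    """A o B = (A erode B) dilate B: removes small specks, keeps large shapes."""
    return dilate(erode(img, se), se)

se = np.ones((3, 3), dtype=np.uint8)    # simple square structuring element
img = np.zeros((12, 12), dtype=np.uint8)
img[2:8, 2:8] = 1                        # large block: survives opening
img[10, 10] = 1                          # single-pixel speck: removed
opened = opening(img, se)
```

The erosion deletes the speck outright; the following dilation restores the large block to its original extent, which is exactly the noise-cleaning role opening plays before segmentation.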
\textbf{Segmentation}
Image segmentation is the technique of partitioning the image into components, known as segments. In the proposed system, the morphological opening is followed by a small-object removal step, which removes the unwanted areas in the lung cancer image so that the region of interest (ROI) can be extracted, as displayed in Figure~\ref{f-f1886125e1f9}.
\bgroup
\fixFloatSize{images/3d61049d-e3f9-46bf-bbe3-5b72a06b0c77-upicture7.png}
\begin{figure*}[!htbp]
\centering \makeatletter\IfFileExists{images/3d61049d-e3f9-46bf-bbe3-5b72a06b0c77-upicture7.png}{\includegraphics{images/3d61049d-e3f9-46bf-bbe3-5b72a06b0c77-upicture7.png}}{}
\makeatother
\caption{\boldmath {Mask Testing of Segmentation}}
\label{f-6b5732e0f0cb}
\end{figure*}
\egroup
\bgroup
\fixFloatSize{images/27ce161b-d43c-440e-8871-ba7c9b0c4595-upicture8.png}
\begin{figure*}[!htbp]
\centering \makeatletter\IfFileExists{images/27ce161b-d43c-440e-8871-ba7c9b0c4595-upicture8.png}{\includegraphics{images/27ce161b-d43c-440e-8871-ba7c9b0c4595-upicture8.png}}{}
\makeatother
\caption{\boldmath {Patch testing of Emphysema.}}
\label{f-77aaf9b5243d}
\end{figure*}
\egroup
\bgroup
\fixFloatSize{images/f00480f2-4654-4a0c-b1db-4239a9420635-upicture9.png}
\begin{figure*}[!htbp]
\centering \makeatletter\IfFileExists{images/f00480f2-4654-4a0c-b1db-4239a9420635-upicture9.png}{\includegraphics{images/f00480f2-4654-4a0c-b1db-4239a9420635-upicture9.png}}{}
\makeatother
\caption{\boldmath {Patch testing of Fibrosis.}}
\label{f-4c18e3b28ef6}
\end{figure*}
\egroup
\bgroup
\fixFloatSize{images/fef5448e-154c-4052-ac82-5087d6adeef6-upicture10.png}
\begin{figure*}[!htbp]
\centering \makeatletter\IfFileExists{images/fef5448e-154c-4052-ac82-5087d6adeef6-upicture10.png}{\includegraphics{images/fef5448e-154c-4052-ac82-5087d6adeef6-upicture10.png}}{}
\makeatother
\caption{\boldmath {Patch testing of Ground glass.}}
\label{f-f19483ff7431}
\end{figure*}
\egroup
\textbf{Connected Component Analysis}
Connected components labelling scans an image and groups its pixels into components based on pixel connectivity. It works by scanning the image pixel by pixel, from top to bottom and left to right, to determine the connected pixel regions in the given input image. Regions of adjacent pixels share the same set of intensity values, denoted by $V$: in a binary image $V=\{1\}$, while in the case of a gray-level image the values of $V$ are taken as a range. Labelling assigns a particular value to the pixels, or draws a box around a particular region of interest (ROI); it helps in lung cancer identification and tumour detection based on the region of interest.
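The top-to-bottom, left-to-right labelling scan described above can be sketched as a breadth-first flood fill (an illustration on an invented two-region mask, not the authors' code); the resulting component sizes are what a small-object-removal step would filter on:

```python
import numpy as np
from collections import deque

def label_components(binary):
    """Label 4-connected foreground regions, scanning top-to-bottom,
    left-to-right; returns the label image and the number of regions."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for r in range(binary.shape[0]):
        for c in range(binary.shape[1]):
            if binary[r, c] and labels[r, c] == 0:
                current += 1                      # new region found
                q = deque([(r, c)])
                labels[r, c] = current
                while q:                          # flood-fill this region
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < binary.shape[0]
                                and 0 <= nx < binary.shape[1]
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current

mask = np.zeros((10, 10), dtype=np.uint8)
mask[1:4, 1:4] = 1     # region 1 (9 pixels)
mask[6:9, 5:9] = 1     # region 2 (12 pixels)
labels, n = label_components(mask)
sizes = np.bincount(labels.ravel())   # per-label pixel counts (index 0 = background)
```

Regions whose `sizes` entry falls below an area threshold can then be erased from the label image, which is the small-object removal used to isolate the ROI.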
\textbf{Feature Extraction}
Feature extraction is the main step in image processing here: it extracts the features of the CT scan. A feature contains specific data extracted from the image in order to grasp the main details of the image. The proposed system uses GLCM feature extraction, in which 12 features are extracted from the CT scan image.
\textbf{GLCM Feature Extraction}
This second-order methodology considers the relationship between pairs of pixels in the input CT image, called the reference and neighbour pixels. The reference offset is expressed as (1, 0) in the horizontal and vertical directions, and the neighbour pixel is paired with the reference pixel through this connectivity. The GLCM always has an equal number of rows and columns. The gray-level co-occurrence matrix (GLCM) functions characterize the texture of a CT scan image by counting pairs of pixels with specific values occurring in a specified spatial relationship, and then extracting statistical measures from this matrix.
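A GLCM for a single horizontal offset can be built as below (a minimal sketch on an invented 4-level toy image; a real system would typically quantize the CT image to a fixed number of gray levels first):

```python
import numpy as np

def glcm(img, levels):
    """Gray-level co-occurrence matrix for the right-hand-neighbour offset,
    counted symmetrically and normalized so the entries P[i, j] sum to 1."""
    m = np.zeros((levels, levels))
    for r in range(img.shape[0]):
        for c in range(img.shape[1] - 1):
            i, j = img[r, c], img[r, c + 1]   # reference and neighbour pixel
            m[i, j] += 1
            m[j, i] += 1                      # symmetric counting
    return m / m.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
P = glcm(img, levels=4)
```

Each entry $P_{i,j}$ is the relative frequency with which gray levels $i$ and $j$ occur as horizontal neighbours; the texture features that follow are all sums over this matrix.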
\bgroup
\fixFloatSize{images/fd910177-79c0-41f6-a66e-ee997fd6c2b4-upicture11.png}
\begin{figure*}[!htbp]
\centering \makeatletter\IfFileExists{images/fd910177-79c0-41f6-a66e-ee997fd6c2b4-upicture11.png}{\includegraphics{images/fd910177-79c0-41f6-a66e-ee997fd6c2b4-upicture11.png}}{}
\makeatother
\caption{\boldmath {Patch testing of Healthy class.}}
\label{f-3087311d1da1}
\end{figure*}
\egroup
\bgroup
\fixFloatSize{images/fa96a7d3-9ed9-44d4-87eb-20849fe77ef6-upicture12.png}
\begin{figure*}[!htbp]
\centering \makeatletter\IfFileExists{images/fa96a7d3-9ed9-44d4-87eb-20849fe77ef6-upicture12.png}{\includegraphics{images/fa96a7d3-9ed9-44d4-87eb-20849fe77ef6-upicture12.png}}{}
\makeatother
\caption{\boldmath {Patch testing of Micro Nodules.}}
\label{f-f1aae5c8c7cf}
\end{figure*}
\egroup
Contrast: In the gray-level co-occurrence matrix, contrast (CON for short) measures the local variations: it computes the intensity contrast between each pixel and its neighbour over the whole image. Its range is $[0,\,(\mathrm{size}(\mathrm{GLCM},1)-1)^{2}]$, and the contrast is 0 for a constant image. Contrast is also known as the sum of squares variance; it follows Equation 4.
$Contrast={\textstyle\sum_{i,\;j=0}^{N-1}}P_{i,\;j}(i-j)^{2} $ (4)
Correlation: Over the specified pixel pairs, correlation measures how closely a pixel is correlated with its neighbour over the whole image, i.e., the linear dependency of gray levels on those of the neighbouring pixels. Its range is $[-1, 1]$: the correlation is 1 or $-1$ for a perfectly positively or negatively correlated image, and is not a number for a constant image. It follows Equation 5.
$Correlation={\textstyle\sum_{i,\;j=0}^{N-1}}\;P_{i,\;j}\frac{(i-\mu)(j-\mu)}{\sigma^{2}} $ (5)
Energy: Energy returns the sum of the squared elements of the GLCM; it is also called the Angular Second Moment. It measures the orderliness (uniformity) of the texture in an image or CT scan. The energy range is [0, 1], and energy is 1 for a constant image. The formula for energy is given in equation 6.
$Energy={\textstyle\sum_{i,\;j=0}^{N-1}}\;(P_{i,\;j})^{2} $ (6)
Homogeneity\textbf{:} In short form it goes by the name HOM. It returns a value that measures the closeness of the distribution of the elements in the GLCM to the GLCM diagonal. The homogeneity range is [0, 1], and homogeneity is 1 for a diagonal GLCM. Equation 7 gives the homogeneity calculation.
$Homogeneity={\textstyle\sum_{i,\;j=0}^{N-1}}\;\frac{P_{i,\;j}}{1+(i-j)^{2}} $ (7)
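As a minimal sketch, the four texture features of equations 4 through 7 can be computed directly with NumPy. This assumes a normalized, symmetric GLCM, so that the row and column means coincide in a single $\mu$ and $\sigma^{2}$, as the correlation formula implies; the 2x2 example matrix is illustrative only.

```python
import numpy as np

def glcm_features(P):
    """Compute contrast, correlation, energy and homogeneity
    (equations 4-7) from a normalized N x N GLCM `P`."""
    P = P / P.sum()                      # ensure probabilities sum to 1
    N = P.shape[0]
    i, j = np.indices((N, N))            # row/column gray levels
    mu = np.sum(i * P)                   # GLCM mean (equation 8)
    var = np.sum(P * (i - mu) ** 2)      # GLCM variance
    contrast = np.sum(P * (i - j) ** 2)
    correlation = np.sum(P * (i - mu) * (j - mu)) / var
    energy = np.sum(P ** 2)
    homogeneity = np.sum(P / (1.0 + (i - j) ** 2))
    return contrast, correlation, energy, homogeneity

# A small symmetric example GLCM
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])
print(glcm_features(P))   # (0.2, 0.6, 0.34, 0.9)
```

Note that a diagonal GLCM (all mass on $i=j$) drives contrast to 0 and homogeneity to 1, matching the ranges stated above.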
Mean: The mean, also called the average of the distribution, is the sum of a collection of values divided by the number of elements n. The mean $\mu$ of the pixel values of the selected image is estimated where the central mass of the distribution occurs. This simple mean over the GLCM feature matrix is computed using equation 8.
$\mu={\textstyle\sum_{i,\;j=0}^{N-1}}\;i\;P_{i\;j} $ (8)
Standard Deviation: The standard deviation is the square root of the variance. It is a statistical measure expressed in the same unit as the mean. It estimates the average squared deviation of the grey image pixel value P(i, j) from its mean value, and describes the dispersion within a local region using equation 9.
$\sigma _i=\sqrt{{\sigma ^{2}}_i} $ (9)
Entropy: Entropy quantifies the information content of an image and is used in image compression; it measures the loss of information as well as the information carried by the image. Entropy is a measure of randomness used to characterise the texture of an input image, and is also used to describe the variation of the distribution in a region. Entropy is computed over the GLCM using equation 10.
$Entropy={\textstyle\sum_{i,\;j=0}^{N-1}}\;-\ln(P_{i,\;j})\;P_{i,\;j} $ (10)
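A small sketch of equation 10, skipping zero entries so the logarithm stays defined (the uniform example GLCM is illustrative):

```python
import numpy as np

def glcm_entropy(P):
    """Entropy per equation 10: sum of -ln(p) * p over nonzero GLCM entries."""
    p = P[P > 0]
    return float(np.sum(-np.log(p) * p))

P = np.full((2, 2), 0.25)          # uniform 2x2 GLCM
print(round(glcm_entropy(P), 6))   # ln(4) ~ 1.386294
```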
RMS: The root mean square (RMS) is calculated as the square root of the arithmetic mean of the squares of the ordinates. Consider N ordinates ${Y^{2}}_1,\;{Y^{2}}_2,\;\dots,\;{Y^{2}}_n $. The RMS used for the GLCM matrix follows equation 11.
$RMS=\sqrt{\frac{{Y^{2}}_{1}+{Y^{2}}_{2}+\dots+{Y^{2}}_n}n} $ (11)
Variance: Variance is a statistical measure of the spread of a dataset around its mean; it is the average of the squared deviations of each value in the dataset from the mean. It contributes to the GLCM feature set via equation 12.
$\sigma ^{2}={\textstyle\sum_{i,\;j=0}^{N-1}}\;P_{i\;j}\;(i-\mu)^{2} $ (12)
Smoothness: Smoothing the image reduces the noise in the input image. It is a statistical technique for handling the data that creates approximating functions, which try to capture the required patterns in the data. It contributes to the GLCM feature set through the exponential smoothing recursion in equation 13.
$S_t=\alpha x_t+(1-\alpha)S_{t-1}=S_{t-1}+\alpha(x_t-S_{t-1}) $ (13)
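Equation 13 is the standard exponential smoothing recursion; a minimal sketch follows (the smoothing factor `alpha` and the seed value are illustrative choices, not fixed by the text):

```python
def exp_smooth(xs, alpha, s0=None):
    """Exponential smoothing per equation 13:
    S_t = alpha * x_t + (1 - alpha) * S_{t-1}."""
    s = xs[0] if s0 is None else s0      # seed with the first sample
    out = [s]
    for x in xs[1:]:
        s = s + alpha * (x - s)          # equivalent recursive form
        out.append(s)
    return out

print(exp_smooth([1.0, 2.0, 2.0], 0.5))  # [1.0, 1.5, 1.75]
```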
Kurtosis: Kurtosis measures how heavy the tails of the data are relative to a normal distribution. There are two kinds: positive kurtosis indicates a heavy-tailed distribution, and negative kurtosis a light-tailed one. It is one of the main features computed from the co-occurrence matrix, as shown in equation 14.
$Kurtosis=\frac{\sum_{i=1}^{N}(Y_i-\overline Y)^{4}/N}{s^{4}} $ (14)
Skewness: Skewness measures the symmetry of the data, i.e. whether the distribution looks the same to the left and right of its centre point. The skewness of a normal distribution is zero, and for approximately symmetric data it is near zero. For univariate data $Y_1,\;Y_2,\;\dots,\;Y_n $ the formula is given in equation 15:
$Skewness=\frac{\sum_{i=1}^{N}(Y_i-\overline Y)^{3}/N}{s^{3}} $ (15)
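Equations 14 and 15 can be checked with a short sketch (population form, dividing the moments by N and using the population standard deviation for s; the sample data is illustrative):

```python
import numpy as np

def skew_kurt(y):
    """Skewness and kurtosis per equations 15 and 14."""
    y = np.asarray(y, dtype=float)
    ybar = y.mean()
    s = np.sqrt(np.mean((y - ybar) ** 2))   # population standard deviation
    skew = np.mean((y - ybar) ** 3) / s ** 3
    kurt = np.mean((y - ybar) ** 4) / s ** 4
    return skew, kurt

# Symmetric data -> skewness 0
print(skew_kurt([1, 2, 3, 4, 5]))
```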
\textbf{Classification}
Classification is an important step in the proposed system. The classification method is split into a training phase and a testing phase: known (labelled) data is given in the training phase, and unknown data is given in the testing phase. The accuracy depends on the efficiency of the classification.
\textbf{Naive Bayes Classification}
The Na{\"{\i}}ve Bayes classifier is a probabilistic classifier based on Bayes' theorem, which expresses a conditional probability using prior knowledge. It scales to large databases and is widely used for textual data analysis. Here, Naive Bayes classification uses the grey-level co-occurrence matrix (GLCM) features as data and analyses the probability of each patch class, treating the probability contribution of each GLCM feature as independent. Five classes of lung tissue are considered: Emphysema, Fibrosis, Ground Glass Opacity (GGO), Healthy, and Micro nodules. The class with the highest probability is taken as the most likely class, the Maximum a Posteriori (MAP) estimate.
The Na{\"{\i}}ve Bayes formula contains the posterior probability $P(c\vert x) $, the likelihood $P(x\vert c) $, the class prior probability $P(c) $, and the predictor prior probability $P(x) $. Under the independence assumption, the posterior is proportional to $P(x_1\vert c)\times P(x_2\vert c)\times\dots\times P(x_n\vert c)\times P(c) $, as computed via equation 16.
$P(c\vert x)=\frac{P(x\vert c)\;P(c)}{P(x)} $ (16)
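A minimal sketch of the MAP rule in equation 16. The paper does not state the class-conditional likelihood model, so independent Gaussian per-feature likelihoods are assumed here purely for illustration, as are the toy feature vectors:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: each feature is modelled as an
    independent Gaussian per class (an assumption for this sketch)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = {c: X[y == c].mean(axis=0) for c in self.classes}
        self.var = {c: X[y == c].var(axis=0) + 1e-9 for c in self.classes}
        self.prior = {c: np.mean(y == c) for c in self.classes}
        return self

    def predict(self, X):
        def log_post(c, x):                 # log P(x|c) + log P(c)
            ll = -0.5 * np.sum(np.log(2 * np.pi * self.var[c])
                               + (x - self.mu[c]) ** 2 / self.var[c])
            return ll + np.log(self.prior[c])
        # MAP: pick the class with the highest posterior (equation 16);
        # P(x) is constant across classes and can be dropped.
        return np.array([max(self.classes, key=lambda c: log_post(c, x))
                         for x in X])

X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
print(GaussianNB().fit(X, y).predict(np.array([[0.05], [1.05]])))  # [0 1]
```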
\textbf{Linear Discriminant Analysis of Classification}
LDA is an abbreviation of Linear Discriminant Analysis. It seeks to increase the spread between one class and another. It is a supervised learning method based on Fisher's linear discriminant and, like the Na{\"{\i}}ve Bayes classifier, models each observation through continuous variables. Discriminant analysis is used when the groups are known a priori, and it classifies observations using linear functions of the following form:
$\delta_k(x)=x\frac{\mu_k}{\sigma ^{2}}-\frac{\mu_k^{2}}{2\sigma ^{2}}+\log(\pi_k) $ (17)
Let $\delta_k(x) $ be the discriminant score of observation x for class k, with $\mu_k $ the class mean, $\sigma^{2} $ the shared variance, and the final term the logarithm of the class prior probability. Taking the log of the class density gives the linear discriminant in equation 17.
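Equation 17 can be evaluated directly. This sketch assumes two classes with a shared unit variance and equal priors, which are illustrative values only:

```python
import numpy as np

def lda_discriminant(x, mu_k, sigma2, prior_k):
    """Linear discriminant score per equation 17:
    delta_k(x) = x * mu_k / sigma^2 - mu_k^2 / (2 * sigma^2) + log(prior_k)."""
    return x * mu_k / sigma2 - mu_k ** 2 / (2 * sigma2) + np.log(prior_k)

# Two classes with means 0 and 1, shared variance 1, equal priors
x = 0.9
scores = [lda_discriminant(x, mu, 1.0, 0.5) for mu in (0.0, 1.0)]
print(int(np.argmax(scores)))   # the class whose mean is nearer to x
```

The observation is assigned to the class whose discriminant score is largest, which with equal priors reduces to picking the nearest class mean.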
\textbf{Patch Analysis with Target Image}
Patch analysis is one of the important techniques in feature extraction. A patch is the low-level graphics function for creating patch graphics objects; a patch object is one or more polygons defined by the coordinates of its vertices. In the proposed system, patches are created for the five classes of lung tissue: Emphysema, as viewed in Figure~\ref{f-77aaf9b5243d} , Fibrosis in Figure~\ref{f-4c18e3b28ef6} , Ground Glass Opacity (GGO), as shown in Figure~\ref{f-f19483ff7431} , Healthy, as seen in Figure~\ref{f-3087311d1da1} , and Micro nodules, as seen in Figure~\ref{f-f1aae5c8c7cf} . Each class contains various patches of lung tissue, and the existing patches are compared against the input image patches using the normalized cross-correlation in equation 18.
$\frac1n{\textstyle\sum_{x,\;y}}\frac{(f(x,\;y)-\overline f)(t(x,\;y)-\overline t)}{\sigma_f\;\sigma_t} $ (18)
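A discrete sketch of this normalized cross-correlation score for two equal-size patches (zero-mean form; the 2x2 patch values are illustrative):

```python
import numpy as np

def ncc(f, t):
    """Normalized cross-correlation of two equal-size patches:
    (1/n) * sum of (f - mean_f)(t - mean_t) / (sigma_f * sigma_t)."""
    f = f - f.mean()
    t = t - t.mean()
    n = f.size
    return np.sum(f * t) / (n * f.std() * t.std())

a = np.array([[1.0, 2.0], [3.0, 4.0]])
print(round(ncc(a, a), 6))     # identical patches  -> 1.0
print(round(ncc(a, -a), 6))    # negated patch      -> -1.0
```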
\textbf{Algorithm For Lung Cancer Classification}
\begin{enumerate}
\item \relax Read an input image\textit{ I }of a radiological pattern
\item \relax Convert image \textit{I} to grayscale image \textit{I\_gray}
\item \relax Transform \textit{I\_gray} to \textit{I\_binary}
\item \relax Perform Morphological opening on \textit{I\_binary} to obtain \textit{I\_opening}
\item \relax Analyse connected components \textit{C\ensuremath{_{i}}} from \textit{I\_opening}, where i = 1, 2, 3, ..., n
\item \relax For each \textit{C\ensuremath{_{i}}}, perform normalized cross-correlation analysis using the patches \textit{P\ensuremath{_{j}}}, where j = 1, 2, 3, ..., 1000
\item \relax Compute the strongest correlation of patches \textit{P\ensuremath{_{j}}} with \textit{I\_opening}
\item \relax The strongest correlation pattern match is classified as the target class of the image \textit{I}
\end{enumerate}
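Steps 2 through 5 of the algorithm can be sketched with SciPy's morphology tools; the threshold value and the 3x3 structuring element are assumptions for illustration, since the text does not fix them:

```python
import numpy as np
from scipy import ndimage

def preprocess(gray, thresh=0.5):
    """Threshold to binary, apply morphological opening, then label
    connected components (steps 2-5; threshold and structuring
    element are illustrative choices)."""
    binary = gray > thresh
    opened = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
    labels, n = ndimage.label(opened)
    return labels, n

# Toy "image": one 4x4 block plus an isolated pixel
img = np.zeros((10, 10))
img[2:6, 2:6] = 1.0
img[8, 8] = 1.0
labels, n = preprocess(img)
print(n)   # the isolated pixel is opened away; one component remains
```

Each surviving labelled component would then be scored against the 1000 patches per class via normalized cross-correlation (steps 6 to 8).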
\textbf{Correlation of patterns with a target image}
It is a type of digital signal processing algorithm that measures the similarity between the input image and the patches of each class in the database. Correlation is connected to the convolution theorem, and therefore can be computed efficiently in the frequency domain using the fast Fourier transform. It collects random vectors of the segmented image and extracts pairs of homogeneous patches. Here r(x) is the test pattern and s(x) the reference pattern; the normalized correlation between r(x) and s(x) lies between -1 and +1, and reaches +1 if and only if r(x) = s(x), as shown in equation 19.
$-1\leq\frac{\int r(x)s(x)dx}{\sqrt{\int\vert r(x)\vert^{2}dx\int\vert s(x)\vert^{2}dx}}\leq1 $ (19)
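A discrete analogue of equation 19, where the bound follows from the Cauchy-Schwarz inequality (the sample vector is illustrative):

```python
import numpy as np

def norm_corr(r, s):
    """Discrete analogue of equation 19: <r, s> / (||r|| * ||s||),
    always in [-1, 1] by the Cauchy-Schwarz inequality."""
    return np.dot(r, s) / (np.linalg.norm(r) * np.linalg.norm(s))

r = np.array([1.0, 2.0, 3.0])
print(round(norm_corr(r, r), 6))        # identical patterns -> 1.0
print(round(norm_corr(r, 2 * r), 6))    # scaling preserves the value
```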
\section{Results and Discussion}
The main motivation of this analysis work is to develop a computer-aided methodology for automatic tumor detection and diagnosis in CT images of a patient's lungs. The work is very helpful for doctors and radiologists, as it automatically locates the tumor region within the CT image ahead of further surgery. The experiment is performed in $MATLAB\circledR $ on a machine with an $Intel\circledR\;Core\;TM\;i3 $ processor @ 2.00 GHz and 4 GB of RAM. The dataset contains 5 classes of benign and 5 classes of malignant nodules, each image a matrix of $512\times512 $ pixels. All CT scan images are in DICOM format.
\textbf{Dataset Description}
SPIE, the Medical Imaging Conference, conducted a ``Grand Challenge'' on quantitative image analysis strategies for the diagnostic classification of malignant and benign lung nodules. The LUNGx Challenge gave participants a novel chance to compare their algorithms with those of others from academia, industry, and government in a structured, direct manner using the same data sets. The dataset contains 5 classes of benign and 5 classes of malignant nodules, each image a matrix of $512\times512 $ pixels.
\textbf{Training Pattern Analysis}
Training Dataset: The training dataset contains CT image patches of lung tissue in five classes, named Emphysema, Fibrosis, Ground Glass Opacity (GGO), Healthy, and Micro nodules. The dataset is trained and the accuracy level recorded under every different lung condition; training the data helps the proposed model achieve good performance in machine learning.
Grayscale Image Generation: Grayscale conversion is a pre-processing technique that converts the RGB CT scan to a grayscale image, which measures the intensity of light in a monochrome image. The resulting representation ranges from black to white based on the intensity level at each pixel.
Binary Image Generation: Binarization (also called thresholding) produces a binary digital image having only two values at each pixel, zeros and ones. It gives a clear representation of the lung, is effectively a black-and-white image, and helps segmentation, as in Figure~\ref{f-925eeee55952} .
GLCM Feature Extraction: After binarization of the image, the GLCM co-occurrence matrix is generated and statistical features are computed from it. The training feature matrix has the order $456\times12 $ doubles; twelve features are extracted, as shown in Figure~\ref{f-05122cc91956} .
\textbf{Testing Pipeline of Patch Analysis}
Input Test Image: The input image is a CT image from the dataset, matched against the training dataset. This tests whether the proposed model works and responds correctly to the given training data. The test input contains the 5 lung tissue classes, and each test class contains 1000 patches as the testing dataset; a sample is shown in Figure~\ref{f-6b5732e0f0cb} .
Preprocessing: Preprocessing is used to obtain an enhanced image. Two techniques are used here: grayscale conversion and binarization. The grayscale image measures the intensity of light in the input CT scan, so the illumination at each pixel is known. The binary image is a black-and-white image used to compare the pixel intensities of each patch of the lung.
GLCM Feature Extraction: The grey-level co-occurrence matrix (GLCM) is computed on the quantized input CT scan. In total, twelve features are extracted from the input CT image, giving a testing matrix of order $120\times12 $ doubles. These extracted features then undergo Na{\"{\i}}ve Bayesian and Linear Discriminant Analysis (LDA) classification.
Classification: Classification is one of the necessary steps in the proposed system. Naive Bayesian and LDA classification are the two approaches used: Na{\"{\i}}ve Bayes works on conditional probability, whereas LDA works on Fisher's linear discriminant. The result classifies the lung tissue type as Emphysema, as viewed in Figure~\ref{f-77aaf9b5243d} , Fibrosis in Figure~\ref{f-4c18e3b28ef6} , Ground Glass Opacity (GGO), as shown in Figure~\ref{f-f19483ff7431} , Healthy, as seen in Figure~\ref{f-3087311d1da1} , and Micro nodules, as seen in Figure~\ref{f-f1aae5c8c7cf} .
Knowledge-Trained Feature Maps: From trained feature map generation, the trained dataset is stored in a database. Based on expert knowledge, a pre-trained benchmark dataset is used as the training data; it is compared with the input training feature maps to check the efficiency of the proposed system, as shown in Figure~\ref{f-f287f3e74dd9} .
\section{Conclusions}
This paper has presented a pattern analysis of lung cancer, comparing different algorithms applied to give different improvements and higher performance. GLCM yields 12 features, which are given as input to the classifier that decides whether a lung nodule is cancerous or non-cancerous. All patches extracted are compared against an existing dataset: a patch is the low-level graphics function for creating patch graphics objects, and a patch object is one or more polygons defined by the coordinates of its vertices. Patches created during testing are compared with the existing patches, and the cancer type is classified. The classification of the proposed system achieved 81.81\% accuracy. Future work will improve the accuracy level, add more possible classes of lung cancer, and add prediction of the stages of lung cancer.
\section*{Acknowledgement}The authors are grateful to Amrita Vishwa Vidyapeetham and Amrita School of Arts And Sciences for providing an opportunity for this research work.
\bibliographystyle{pharmascope_apa-custom}
\bibliography{\jobname}
\end{document}